what do you think of artificial intelligence? (not the movie)

Started by GabrielsThoughts, November 05, 2007, 08:10:59 PM

Previous topic - Next topic

GabrielsThoughts

I've been writing a paper on this subject and I'm actually disappointed with it. I've talked with the chat bots Alice and Alan, and attempted to program Hal, and I'm just disappointed that the field hasn't really developed or progressed within the last couple of years.
   clickity click click click. Quote in personal text is from Walter Bishop of Fringe.

Fuyudenki

The average human being who you talk to is developed over the course of several decades.  My chat algorithms, for example, are over twenty years in production, and I'm on the low end of the scale.

Compare that to the downright short development cycles for your average chat bot, and to just how little we still know about how the human mind works, and it's little wonder that chat bots are still easy to distinguish from the real thing.

techmaster-glitch

Unfortunately, we are nowhere near anything resembling actual artificial intelligence. And this is coming from a technological optimist. That, and when we do stumble across it, I doubt it will think anything like a human would. Thus I don't think most scientists would recognize an intelligence even if it were putting text on their screens, if it didn't make sense to them. We humans as a race understand so little.
Avatar:AMoS



superluser

Apparently, you are not familiar with AOLiza.

Seriously, a lot of this depends on what you mean by Artificial Intelligence.  Do you mean that it can pass the Turing test?  Or are you talking about a system that can learn and adapt to do things that are not geared toward human-computer interaction?

If I have a Bayesian filter for my e-mail, and it can effectively identify spam even better than I could, is that intelligent?  I think in some senses, it is.  Does that mean that it's getting us closer to passing the Turing test?  Probably not.  It's essentially a Chinese Room, and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
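The guts of such a filter fit in a few lines. Here's a toy sketch with invented word counts and Laplace smoothing, nothing like a production mail filter:

```python
import math

# Toy training data: word -> (count in spam, count in ham).
# These counts are invented purely for illustration.
counts = {
    "viagra":  (40, 1),
    "meeting": (2, 30),
    "free":    (25, 10),
    "report":  (3, 20),
}
n_spam, n_ham = 100, 100  # number of training messages of each class

def spam_probability(words):
    """Naive Bayes: combine per-word likelihoods in log space."""
    log_spam = math.log(n_spam / (n_spam + n_ham))
    log_ham = math.log(n_ham / (n_spam + n_ham))
    for w in words:
        s, h = counts.get(w, (1, 1))
        # Laplace smoothing so unseen words don't zero out a class.
        log_spam += math.log((s + 1) / (n_spam + 2))
        log_ham += math.log((h + 1) / (n_ham + 2))
    # Convert the log-odds back to a probability of spam.
    return 1 / (1 + math.exp(log_ham - log_spam))

print(spam_probability(["free", "viagra"]))    # high
print(spam_probability(["meeting", "report"]))  # low
```

Note that the filter never "understands" a word; it only tracks how often each one appears in each pile of mail.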

It may be that we will never be able to make something that will pass the Turing test.  It may be that modelling the human brain, with its massively parallel systems and ternary (or however many states our neurotransmitters allow) logic, may simply be too much to emulate, especially when you have to emulate it, bugs and all.

Or I might be a robot.

It is perhaps reassuring that when they hooked two AI chatbots together, they began to discuss Seven of Nine:

Quote
BARRY: Which Star Trek is your favorite? Next Generation, Voyager?
ALICE: Why don't they have robots in Star Trek?
BARRY: Star Trek is a cool show.
ALICE: The Borg Rule.
BARRY: That new borg chick on Voyager rules!


Would you like a googolplex (gzipped 57 times)?

techmaster-glitch

Quote
and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
That's exactly my point when I said we don't have artificial intelligence. A computer is just running a string of complex code that it was programmed to run. Biological brains arguably work similarly, but excepting instincts, we write our own 'programs', so to speak. There is no real 'thought' whatsoever in a present-day computer chip. To me, a machine would be intelligent (and also sentient, as is now implied by the current definition of artificial intelligence) when it makes its own decisions, unprogrammed by anyone. The reason I don't think we would recognize it at first glance is that the decisions it makes will most likely be utterly alien to us.

That's kind of why I think the Turing test is a load of crap. We humans should not compare machines to ourselves as much as we do. They are not like us, they will never be like us, and that's not a bad thing. We need to accept that and stop trying to force something that is likely near-impossible: making a 'human' machine.
Avatar:AMoS



superluser

Quote from: techmaster-glitch on November 05, 2007, 11:14:58 PM
Quote
and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
That's exactly my point when I said we don't have artificial intelligence. A computer is just running a string of complex code that it was programmed to run.

I'd be careful about that.  Some argue that a Chinese Room is intelligent, and that the intelligence does not reside in the man writing the cards, but rather in the instructions that he is given.  In fact, I could ask you to go out and read all the literature about that very subject, because I know that it would take you years to do so, and I'd get to argue unopposed.  :P

It's much more difficult to argue that for a Bayesian filter, because the people who program/train it may not even understand what the instructions are.  Which is why I used that example.


Would you like a googolplex (gzipped 57 times)?

techmaster-glitch

#6
I agree with the Chinese Room argument. The way I see it, simply following instructions is not intelligence. I am referring to the part that mentions that Searle did exactly the same thing his theoretical robot did: he manipulated Chinese symbols according to instructions without actually knowing what he was doing. Just about anything with any sort of processing power at all can be taught to follow instructions. I am referring to animals, of course, but that's starting to get off-topic.
My point is, if you blindly follow instructions without understanding them, without knowing what you are doing, and do not have the ability to change them, it is simply not a display of intelligence. It's just doing what you're told. Just about anything can do that.

But this is just my humble opinion. Pick apart at will.
Avatar:AMoS



Fuyudenki

actually, I'm very interested in eventually going into the field of artificial intelligence.

To me, AI is simply whatever method by which a machine may be allowed to make decisions.  A simple If...Then statement is an extremely simple AI.  The machine takes an input, and makes a decision.  Intelligence comes from making a decision based on input, not on whether or not the machine is self-aware.

And anyway, look at us.  When you talk, are you following a program?

No, really.  Look again.  It's a program you started writing the day you were born, and won't finish until the day you die.  Speech is a set of preprogrammed actions.  Walking is preprogrammed, running, jumping, driving, all preprogrammed, repetitive actions.  When you do something "Because I wanted to," your brain ran through a complex algorithm which came to the conclusion that it should do that action.  You don't actively know that you're running "programs," you simply run them.  A computer may or may not know that it is running programs.  Does Firefox know that it is running on an AMD or Intel processor?  Maybe, maybe not, but it runs, just the same.

"AI" does not refer to sentient machines.  "AI" simply refers to the methods by which machines attempt to make decisions.
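In that spirit, the smallest possible "AI" really is just a conditional: one input, one decision. A throwaway sketch (the thermostat rule here is made up):

```python
def thermostat(temperature_c):
    """The simplest decision procedure: one input, one branch, one action."""
    if temperature_c < 18:
        return "heat on"
    else:
        return "heat off"

print(thermostat(15))  # heat on
print(thermostat(22))  # heat off
```

Whether that single branch deserves the word "intelligence" is exactly what this thread is arguing about.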

techmaster-glitch

Quote from: Raist on November 05, 2007, 11:56:33 PM
And anyway, look at us.  When you talk, are you following a program?

No, really.  Look again.  It's a program you started writing the day you were born, and won't finish until the day you die. 
I don't disagree with that concept, Raist, I actually supported it. But the key thing here; did someone put that program into your brain? Or did you learn and develop it yourself?
Avatar:AMoS



superluser

Quote from: Raist on November 05, 2007, 11:56:33 PM
actually, I'm very interested in eventually going into the field of artificial intelligence.

Learned LISP yet?

Quote from: Raist on November 05, 2007, 11:56:33 PM
To me, AI is simply whatever method by which a machine may be allowed to make decisions.  A simple If...Then statement is an extremely simple AI.  The machine takes an input, and makes a decision.  Intelligence comes from making a decision based on input, not on whether or not the machine is self-aware.

Let me make a brief digression.  The AH4 defines intelligence as the capacity to acquire and apply knowledge.  So if I memorize a common logarithm table to 3 digits, am I more intelligent?  No, I've increased my knowledge, but my ability to gain new knowledge or apply old knowledge is the same.

So if a machine simply takes an input, follows instructions and gives an output without noting that information or changing its instructions based on it, that's not intelligence.
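That distinction can be made concrete: a responder with fixed instructions versus one that adjusts its own behaviour from what it observes. A toy contrast (the "learning" here is just a running average, purely illustrative):

```python
class StaticResponder:
    """Fixed instructions: the same input always gives the same output."""
    def predict(self, x):
        return 0.5  # hard-coded guess; no input ever changes it

class LearningResponder:
    """Updates its estimate with every observation it sees."""
    def __init__(self):
        self.total, self.n = 0.0, 0

    def observe(self, value):
        # Acquiring knowledge: note the information that comes in.
        self.total += value
        self.n += 1

    def predict(self, x):
        # Applying knowledge: the answer now depends on past input.
        return self.total / self.n if self.n else 0.5

learner = LearningResponder()
for v in [0.9, 0.8, 1.0]:
    learner.observe(v)
print(learner.predict(None))  # now reflects what it has seen (about 0.9)
```

By the dictionary definition above, only the second one has any claim to intelligence; the first is the log-table memorizer.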


Would you like a googolplex (gzipped 57 times)?

llearch n'n'daCorna

Quote from: superluser on November 05, 2007, 10:59:28 PM
It may be that we will never be able to make something that will pass the Turing test.  It may be that modelling the human brain, with its massively parallel systems and ternary (or however many states our neurotransmitters allow) logic, may simply be too much to emulate, especially when you have to emulate it, bugs and all.

To answer your first statement - we already have.

Eliza and her daughters passed the Turing test years ago. At least, one of her daughters, as I recall, did. I could find you details, if you're interested.

As for the second part - that's a much more complex idea. Mostly, I understand the AI builders have moved away from Neural Networks and are working on other ways around the problem...
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Alondro

Quantum computing may hold the answer to intelligent machines.

  The human brain works nothing like a computer, which has only '1' and '0' in its coding.  A single neuron in the human brain is not simply an 'on' or 'off' switch as was thought long ago.  It can take many different inputs, divert the impulses down specific synaptic pathways depending on ion channel activation, and the coding of information is exceedingly complicated, involving long-term and short-term potentiation, various proteins undergoing conformational changes, spike trains, interneurons, and a huge number of tiny dendritic spines (up to tens of thousands per neuron!).

Where quantum computing can help is to introduce a new level of 'fuzziness' to the computing, in fact allowing a single circuit to perform several computations at the same time and introducing more than only two states for that circuit.  It still won't have the level of complexity of cortical neurons, but it will at last allow '1/2' and '2' to exist in the formerly invariable strings of '1' and '0'.  Having a code that can insert 'maybe either one' or 'both' into information processing opens up the possibility that a consciousness could emerge.
Three's a crowd:  One lordly leonine of the Leyjon, one cruel and cunning cubi goddess, and one utterly doomed human stuck between them.

http://www.furfire.org/art/yapcharli2.gif

superluser

Quote from: llearch n'n'daCorna on November 06, 2007, 07:49:46 AM
Eliza and her daughters passed the Turing test years ago. At least, one of her daughters, as I recall, did. I could find you details, if you're interested.

Wait...Eliza?  Passed the Turing test?

You're going to have to provide a reference for that one.

Quote from: Alondro on November 06, 2007, 08:11:23 AM
The human brain works nothing like a computer, which has only '1' and '0' in its coding.  A single neuron in the human brain is not simply an 'on' or 'off' switch as was thought long ago.  It can take many different inputs, divert the impulses down specific synaptic pathways depending on ion channel activation

As I understand it, each neuron can take a different state of activation depending on the neurotransmitter it receives.  So it's not simply a case of on, off, or some combination of both on and off.  So while quantum mechanics might be able to help us in the 50% on, 50% off case, it would not help us in the 50+50i % on, 50+50i % off case.  And yes, those are complex percentages.


Would you like a googolplex (gzipped 57 times)?

Turnsky

The thing with artificial intelligence is that while it's entirely possible with current tech to make even a rudimentary intelligence construct (even if all it does is react to the environment around it with some basic semblance of instinct), there isn't a great need for a complex artificial intelligence system outside of what's in games.

I'm not saying there won't be, there just isn't one -yet-. Right now robotics is just entering the phase where an intelligence can react to its environment, a la ASIMO.

Dragons, it's what's for dinner... with gravy and potatoes, YUM!
Sparta? no, you should've taken that right at albuquerque..

Reese Tora

Quote from: Wikipedia http://en.wikipedia.org/wiki/Turing_test
As of 2007, no computer has passed the Turing test as such. Simple conversational programs such as ELIZA have fooled people into believing they are talking to another human being, such as in an informal experiment termed AOLiza. However, such "successes" are not the same as a Turing Test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with. Documented cases are usually in environments such as Internet Relay Chat where conversation is sometimes stilted and meaningless, and in which no understanding of a conversation is necessary. Additionally, many internet relay chat participants use English as a second or third language, thus making it even more likely that they would assume that an unintelligent comment by the conversational program is simply something they have misunderstood, and don't recognize the very non-human errors they make. See ELIZA effect.

So that would be a 'no' :B
<-Reese yaps by Silverfox and Animation by Tiger_T->
correlation =/= causation

DoctaMario

AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer. It can only choose among the things it's been programmed to do. It's like saying that a human being could surpass God.

That said, sometimes I feel like some people could use AI programming on themselves.  :P

Funny cuz I hadn't heard much about AI in recent years. I didn't know they were still doing that much with it.

Tapewolf

Quote from: DoctaMario on November 07, 2007, 04:09:42 PM
AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer.

I'm not sure I really agree with that.  Firstly, if it is a set of programmed instructions, then I would say it's not an AI at all, it's a static program.

Secondly, while one person could arguably not understand their own brain, division of labour across a team might.

Finally, don't confuse the architecture with the software.  We might eventually be able to figure out how the brain works at the physical level well enough to replicate it... but I do agree that a total understanding of how the human mind works is probably impossible for a human.

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


llearch n'n'daCorna

Oooo!

Looking into Eliza, I ran across this: http://www.computing.dcu.ie/~humphrys/eliza.html

.. which linked to this: http://virt.vgmix.com/jenny18/

Someone wrote an Eliza for the DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]

Sure, it doesn't pass a true Turing test - but it's passed as a human often enough.
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Tapewolf

One of my computing tutors in college failed the Turing test  :P

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


Dannysaysnoo

Quote from: Tapewolf on November 07, 2007, 06:10:01 PM
One of my computing tutors in college failed the Turing test  :P

I hope you didn't listen much to this teacher; he is obviously a very bad one.

superluser

Quote from: llearch n'n'daCorna on November 07, 2007, 06:07:53 PM
Someone wrote an Eliza for the DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]

Did you look at AOLiza, listed above (and duplicated here for convenience)?

Using a publicly available Perl version of ELIZA, a Mac with nothing better to do than play psychoanalyst, a few applescripts, and an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations, I put ELIZA in touch with the real world. Every few days I'll put up the latest 'patients.' Names have been changed to protect the... well, everyone.

Example (strong language and adult situations):

Quote
twentyseven: hi, do you like porno?
AOLiza: tell me more about that.
twentyseven: more about what porn?
AOLiza: have you asked anyone else?
twentyseven: yeah, they hate it
twentyseven: do you?
AOLiza: we were discussing you -- not me.
twentyseven: ok, well, i love looking at nude ladies
AOLiza: that's quite interesting.
twentyseven: yes it is
AOLiza: please go on.
twentyseven: i see more hot chicks nude online than i've had dates
twentyseven: i love pussy and tits and ass and legs
twentyseven: want to cyber girl?
AOLiza: tell me more about that.
twentyseven: r u male or female
AOLiza: does talking about this bother you?
twentyseven: only if you are a guy
AOLiza: but what are the chances that me are a guy?
twentyseven: don't know, you tell me. do you have a vagina?
AOLiza: you like to think i tell you -- don't you?
twentyseven: yes, if you are a guy, i am going to verbally kick your ass
AOLiza: really, if me are a guy?
twentyseven: damn right i will cuss your ass out if u are a guy
AOLiza: does it make you feel strong to use that kind of language?
twentyseven: damn right
twentyseven: r u male or female
AOLiza: i'm not sure i understand you fully.
twentyseven: r u man or woman bitch
AOLiza: that is interesting. please continue.

This is why I don't think that Eliza could ever come close to passing the Turing Test.


Would you like a googolplex (gzipped 57 times)?

techmaster-glitch

Quote from: superluser on November 07, 2007, 09:55:24 PM
[snip]


:lol :lol :lol Oh that is funny...
Avatar:AMoS



Tezkat

Quote from: techmaster-glitch on November 05, 2007, 11:44:02 PM
[snip]

A point that superluser brought up is that it is not the man whose intelligence is being tested but rather the Chinese Room itself. Instead of little homunculi with dictionaries, we have heads filled with neurons. These neurons are arguably just "following instructions", deciding whether or not to fire based on numbers of AMPA and NMDA receptors or whatnot. By your argument, that means they aren't intelligent. But why should it be necessary for the constituent parts of an intelligent gestalt to themselves exhibit intelligence? We're holding the Chinese Room to a standard to which we ourselves don't measure up. If we made that argument for humans, we could just keep dropping down levels until it was necessary for our subatomic particles to be "intelligent". Why, then, do Searle and the other detractors of AI focus on this argument? It's mere biological chauvinism.


Quote from: DoctaMario on November 07, 2007, 04:09:42 PM
AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer. It can only choose among the things it's been programmed to do. It's like saying that a human being could surpass God.

I'd say that this view reflects a notion of AI that went out of style decades ago. Few still believe that AIs like the hardcoded expert systems of the 1970s could ever encompass the sum of human knowledge. Modern AI systems based on neural nets, genetic algorithms, and the like constitute systems that learn the rules of their environments on their own. The programmer merely provides the framework in which they operate. The programs regularly come up with solutions that the programmers didn't intend or expect. And a lot of what they've learned is quite inscrutable under the hood: massive tables of connection weights or chromosome strings that aren't remotely human-readable.

The same thing we do every night, Pinky...

llearch n'n'daCorna

Quote from: superluser on November 07, 2007, 09:55:24 PM
Quote from: llearch n'n'daCorna on November 07, 2007, 06:07:53 PMSomeone wrote an Eliza for DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]

Did you look at AOLiza, listed above (and duplicated here for convenience)?
[snip]
This is why I don't think that Eliza could ever come close to passing the Turing Test.

Yes, as a matter of fact, I did. The default Eliza used there comes with very few sentences, and to anyone who's played with one before, is -real- obvious. jenny18 has, according to the site, over 3800 responses in its database. That makes it a -lot- more reactive, and a lot harder to spot.

Anyone who has the time, and the inclination, can add more data to the list that an Eliza has, which, in turn, makes it harder and harder to spot. Eventually you reach the point where it -will- pass a Turing test. It might take several years of concerted effort, a multi-gigabyte database, and possibly several years of training, but I can see an Eliza-type program being "good enough". The problem is, there are much better - and more useful - ways of working around the same limitations. And the other ways tend to lead to money, always a strong attractor in human behaviour. ;-]

The Turing test, as discussed on the various webpages listed, isn't really sufficient to be a description of intelligence. It's merely one in a range of tests, and is relatively easy to -fail- - even for humans. Just google for "turing test fail" and see what leaps up...
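For what it's worth, the core mechanism under every Eliza variant is tiny: match a keyword pattern, emit a canned reply, reflect pronouns. A stripped-down sketch (these patterns are invented, nothing like jenny18's actual 3800-response database):

```python
import re

# Keyword pattern -> response template; {0} is the captured fragment.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]
# Pronoun reflection so the reply points back at the speaker.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.split())

def respond(line):
    line = line.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # stock reply when no keyword matches

print(respond("I feel ignored by my brother"))
# Why do you feel ignored by your brother?
```

Adding responses, as described above, is just growing the RULES table; the program never gets any closer to understanding what it's saying.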
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

superluser

Quote from: Tezkat on November 07, 2007, 11:37:49 PM
A point that superluser brought up is that it is not the man whose intelligence is being tested but rather the Chinese Room itself.

I'd just like to point out that I don't disagree with Searle, just that the issue isn't as cut and dried as some Chinese Room advocates would have you believe.

Also, unless you're rewriting the instructions based on what you're getting slipped under the door, it's not intelligence.

Quote from: llearch n'n'daCorna on November 08, 2007, 04:20:30 AM
Yes, as a matter of fact, I did. The default Eliza used there comes with very few sentences, and to anyone who's played with one before, is -real- obvious. jenny18 has, according to the site, over 3800 responses in its database. That makes it a -lot- more reactive, and a lot harder to spot.

Doesn't really matter.  Looking at Jenny18, I get to the following line:

tell me about you go anal?  (It's in Digimortal, the first one that I clicked on)

and I instantly know that I'm dealing with an Eliza variant.

Eliza is just too poorly designed to ever work as a chatterbot AI.  No offense to Joseph Weizenbaum--Eliza was a giant leap forward in 1966.

Contrast this with bots like ALICE (Loebner winning transcript here).  I'm not sure, but I think that this is the one which searches the web based on your comments, and reacts based on the search results.  It's still recognizably a chatbot, but not nearly as obviously so.  In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.

Incidentally, if you ever wanted to know why I use Linux, it would have to be the special effects:

Quote
[12:36:29] Judge 1: What are your interests?
[12:37:03] Alice: I am interested in all kinds of things. We can talk about anything. My favorite subjects are robots and computers.
[12:37:17] Judge 1: What is your favorite operating system?
[12:37:30] Alice: Linux is the best operating system.
[12:37:38] Judge 1: What do you like about it?
[12:37:51] Alice: it? What do I like about it?
[12:37:59] Judge 1: What do you like about Linux?
[12:38:12] Alice: The special effects are the best.


Would you like a googolplex (gzipped 57 times)?

Alondro

One hallmark of human intelligence that computers don't even look capable of is simply asking themselves questions.

All the computers still require some sort of input to respond to.  Humans are the only ones who can be introspective, who can mentally ask questions and try to come up with answers.  Curiosity itself is a basic part of intelligence present in many mammal species, yet I have never heard of a computer deciding to conduct an Internet search on a whim to find out how cheese puffs are made.  I've never seen a computer even begin to wonder what it is, and why it is; which are questions humans have asked of themselves for at least thousands of years. 

A computer cannot write even a simple story completely by itself.  It must keep getting instructions externally, unlike a human, who can create his own instructions for stories.

I'll be interested when a computer realizes, 'Hey, I'm not getting paid for this!"  Which is what will actually trigger the Rise of the Machines... wage disputes.   :P

I've heard of the computer 'evolution' experiments, but those aren't modelling intelligence.  They're much closer to models of random mutation in DNA.  It'd be more like cyber bugs from what I've seen.
Three's a crowd:  One lordly leonine of the Leyjon, one cruel and cunning cubi goddess, and one utterly doomed human stuck between them.

http://www.furfire.org/art/yapcharli2.gif

llearch n'n'daCorna

Quote from: superluser on November 08, 2007, 07:53:29 AM
...In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.

... Are we getting personal here? *wink*
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Alondro

Quote from: llearch n'n'daCorna on November 08, 2007, 08:12:38 AM
Quote from: superluser on November 08, 2007, 07:53:29 AM
...In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.

... Are we getting personal here? *wink*

I'm not a pothead...  :B
Three's a crowd:  One lordly leonine of the Leyjon, one cruel and cunning cubi goddess, and one utterly doomed human stuck between them.

http://www.furfire.org/art/yapcharli2.gif

Tapewolf

Quote from: Alondro on November 08, 2007, 08:12:09 AM
One hallmark of human intelligence that computers don't even look capable of is simply asking themselves questions.

Again, you seem to be talking about intelligence in a software program.  That's probably not really the way to go, unless you're using it for physical modelling.  An advanced neural net (probably in hardware) is more likely to give you that kind of behaviour.

Quote
I've heard of the computer 'evolution' experiments, but those aren't modelling intelligence.  They're much closer to models of random mutation in DNA.  It'd be more like cyber bugs from what I've seen.
This is true.  However it has turned up some quite surprising stuff.  I can't find a nice, easy-to-follow description of Thompson's FPGA evolution, but basically he used a genetic algorithm to 'breed' circuits in a Field-Programmable Gate Array chip.  It would come up with new solutions to the problems he set that a human couldn't.

The most striking one was when he told it to build an oscillator to differentiate a 1kHz signal from a 10kHz one, but without allowing it sufficient components to do so.  The resulting circuit worked, but no-one knows quite why.  Building the designed circuit out of CMOS logic resulted in a circuit that failed to operate, as did digital simulations of the design.  If the design was downloaded into another Xilinx chip of the same type, it did not work anymore, although comparatively few generations of evolution on the new target chip would have it adapt to the new environment.
What was most freaky was that the design wired a couple of cells of the chip up but didn't connect them to the rest of the circuit - but if they were taken away or wired differently it no longer functioned.

http://ehw.jpl.nasa.gov/Documents/PDFs/thompson.pdf
(Pages 178-190 in the journal, 11-20 in the PDF)
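The evolutionary loop behind experiments like Thompson's is simple even when its products are inscrutable. A toy genetic algorithm evolving a bit string toward a target, where the fitness function stands in for "does the circuit behave correctly" (everything here is illustrative, not Thompson's actual setup):

```python
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for "a circuit that works"

def fitness(genome):
    # How many positions already match the desired behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# (1+10)-style evolutionary loop: keep the best individual seen so far,
# breed ten mutated copies of it each generation.
best = [random.randint(0, 1) for _ in TARGET]
for generation in range(500):
    offspring = [mutate(best) for _ in range(10)]
    best = max(offspring + [best], key=fitness)
    if fitness(best) == len(TARGET):
        break

print(fitness(best), "/", len(TARGET))
```

The algorithm never "knows" why a genome scores well, which is exactly why the circuits it breeds can end up working for reasons nobody can explain.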

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


superluser

Quote from: Alondro on November 08, 2007, 08:12:09 AM
One hallmark of human intelligence that computers don't even look capable of is simply asking themselves questions.

Did you see the transcript of ALICE and Barry where they started talking about Star Trek?  They were asking questions there.
Quote from: llearch n'n'daCorna on November 08, 2007, 08:12:38 AM... Are we getting personal here? *wink*

We were discussing you, not me.

I think the thing is that AI bots like Eliza can't string a sentence together (e.g. Do you think you should be able to getting personal here?), while you can't hold a conversation with a bot like ALICE.  It's a huge difference.  If you look at any one of Eliza's responses, you can tell that it's a bot.  They all fall into one of two categories, either preprogrammed responses (Are such questions on your mind often?) or grammatically awkward or syntactically incorrect sentences.

Any of ALICE's responses would look virtually indistinguishable from a human's.  Also, most of the responses are appropriate for the questions asked.  It's just that when you try to follow up, the conversation disintegrates.

P.S. I didn't know llearch was a pothead.  :B


Would you like a googolplex (gzipped 57 times)?