I've been writing a paper about this subject, and I'm actually disappointed with it. I've talked with the chat bots Alice and Alan, and attempted to program Hal, and I'm just disappointed that the field hasn't really developed or progressed within the last couple of years.
The average human being you talk to has been developed over the course of several decades. My own chat algorithms, for example, have been over twenty years in production, and I'm on the low end of the scale.
Compare that to the downright short development cycles for your average chat bot, and to just how little we still know about how the human mind works, and it's little wonder that chat bots are still easy to distinguish from the real thing.
Unfortunately, we are nowhere near anything resembling actual artificial intelligence. And this is coming from a technological optimist. When we do stumble across it, I doubt it will think anything like a human would, so I don't think most scientists would recognize an intelligence even if it were putting text on their screens, if it didn't make sense to them. We humans as a race understand so little.
Apparently, you are not familiar with AOLiza (http://fury.com/aoliza/).
Seriously, a lot of this depends on what you mean by Artificial Intelligence. Do you mean that it can pass the Turing test? Or are you talking about a system that can learn and adapt to do things that are not geared toward human-computer interaction?
If I have a Bayesian filter for my e-mail, and it can effectively identify spam even better than I could, is that intelligent? I think in some senses, it is. Does that mean that it's getting us closer to passing the Turing test? Probably not. It's essentially a Chinese Room, and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
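Just to make that concrete, here is a minimal sketch in Python of the kind of thing a naive Bayes filter does. The messages and word counts are toy data I made up; a real filter trains on thousands of messages and folds in priors, but the mechanism is the same:

# Minimal naive Bayes spam sketch -- toy training data, equal priors assumed.
from collections import Counter
import math

spam_msgs = ["buy cheap pills now", "cheap pills cheap"]
ham_msgs = ["lunch at noon tomorrow", "see you at lunch"]

def train(msgs):
    counts = Counter()
    for m in msgs:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = train(spam_msgs), train(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero everything out.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def classify(msg):
    return "spam" if log_prob(msg, spam_counts) > log_prob(msg, ham_counts) else "ham"

print(classify("cheap pills"))      # spam
print(classify("lunch tomorrow"))   # ham

Nothing in there 'understands' pills or lunch; it just compares word frequencies, which is exactly the Chinese Room point.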
It may be that we will never be able to make something that will pass the Turing test. Modelling the human brain, with its massively parallel systems and its ternary (or however-many-neurotransmitters-we-have) logic, may simply be too much to emulate, especially when you have to emulate it bugs and all.
Or I might be a robot.
It is perhaps reassuring that when they hooked two AI chatbots together, they began to discuss Seven of Nine (http://query.nytimes.com/gst/fullpage.html?res=9B04E3DD1631F93BA25750C0A96F958260&sec=&spon=&pagewanted=3):
Quote
BARRY: Which Star Trek is your favorite? Next Generation, Voyager?
ALICE: Why don't they have robots in Star Trek?
BARRY: Star Trek is a cool show.
ALICE: The Borg Rule.
BARRY: That new borg chick on Voyager rules!
Quote
and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
That's exactly my point when I said we don't have artificial intelligence. A computer is just running a string of complex code that it was programmed to run. Biological brains arguably work similarly, but excepting instincts, we run our own 'programs', so to speak. There is no real 'thought' whatsoever in a present-day computer chip. To me, a machine would be intelligent (and also sentient, as is now implied by the current definition of artificial intelligence) when it makes its own decisions, unprogrammed by anyone. The reason I don't think we would recognize it at first glance is that the decisions it makes will most likely be utterly alien to us. That's kind of why I think the Turing test is a load of crap. We humans should not compare machines to ourselves as much as we do. They are not like us, they will never be like us, and that's not a bad thing. We need to accept that and stop trying to force something that is likely near-impossible (making a 'human' machine).
Quote from: techmaster-glitch on November 05, 2007, 11:14:58 PM
Quote
and it doesn't understand what it's processing, just how to identify whether what it is dealing with is spam.
That's exactly my point when I said we don't have artificial intelligence. A computer is just running a string of complex code that it was programmed to run.
I'd be careful about that. Some argue that a Chinese Room is intelligent, and that the intelligence does not reside in the man writing the cards, but rather in the instructions that he is given. In fact, I could ask you to go out and read all the literature about that very subject, because I know that it would take you years to do so, and I'd get to argue unopposed. :P
It's much more difficult to argue that for a Bayesian filter, because the people who program/train it may not even understand what the instructions are. Which is why I used that example.
I agree with the Chinese Room experiment. The way I see it, simply following instructions is not intelligence. I am referring to the part that mentions that Searle did exactly the same thing his theoretical robot did: he manipulated Chinese symbols according to instructions without actually knowing what he was doing. Just about anything with any sort of processing power at all can be taught to follow instructions. I am referring to animals, of course, but that's starting to get off-topic.
My point is, if you blindly follow instructions without understanding them, without knowing what you are doing, and do not have the ability to change them, it is simply not a display of intelligence. It's just doing what you're told. Just about anything can do that.
But this is just my humble opinion. Pick apart at will.
actually, I'm very interested in eventually going into the field of artificial intelligence.
To me, AI is simply whatever method by which a machine may be allowed to make decisions. A simple If...Then statement is an extremely simple AI. The machine takes an input, and makes a decision. Intelligence comes from making a decision based on input, not on whether or not the machine is self-aware.
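A deliberately trivial sketch of that lowest rung, in Python (my own toy example, not from any real system):

# The simplest possible "AI": one input, one If...Then decision rule.
def thermostat(temperature_c):
    if temperature_c < 18:        # decision made from the input
        return "heater on"
    return "heater off"

print(thermostat(15))   # heater on
print(thermostat(22))   # heater off

Input goes in, a decision comes out; everything more impressive is a difference of degree, not of kind.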
And anyway, look at us. When you talk, are you following a program?
No, really. Look again. It's a program you started writing the day you were born, and won't finish until the day you die. Speech is a set of preprogrammed actions. Walking is preprogrammed, running, jumping, driving, all preprogrammed, repetitive actions. When you do something "Because I wanted to," your brain ran through a complex algorithm which came to the conclusion that it should do that action. You don't actively know that you're running "programs," you simply run them. A computer may or may not know that it is running programs. Does Firefox know that it is running on an AMD or Intel processor? Maybe, maybe not, but it runs, just the same.
"AI" does not refer to sentient machines. "AI" simply refers to the methods by which machines attempt to make decisions.
Quote from: Raist on November 05, 2007, 11:56:33 PM
And anyway, look at us. When you talk, are you following a program?
No, really. Look again. It's a program you started writing the day you were born, and won't finish until the day you die.
I don't disagree with that concept, Raist, I actually supported it. But the key thing here: did someone put that program into your brain? Or did you learn and develop it yourself?
Quote from: Raist on November 05, 2007, 11:56:33 PM
actually, I'm very interested in eventually going into the field of artificial intelligence.
Learned LISP yet?
Quote from: Raist on November 05, 2007, 11:56:33 PM
To me, AI is simply whatever method by which a machine may be allowed to make decisions. A simple If...Then statement is an extremely simple AI. The machine takes an input, and makes a decision. Intelligence comes from making a decision based on input, not on whether or not the machine is self-aware.
Let me make a brief digression. The AH4 defines intelligence as the capacity to acquire and apply knowledge. So if I memorize a common logarithm table to 3 digits, am I more intelligent? No, I've increased my knowledge, but my ability to gain new knowledge or apply old knowledge is the same.
So if a machine simply takes an input, follows instructions and gives an output without noting that information or changing its instructions based on it, that's not intelligence.
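To show what I mean by 'changing its instructions', here's a toy contrast, entirely made up: a decision rule that adjusts its own threshold whenever feedback tells it that it was wrong. I'm not claiming this is intelligent either, only that it's the minimal ingredient the static case is missing:

# A rule that notes its mistakes and nudges its own parameter.
class AdaptiveThreshold:
    def __init__(self, threshold=0.5, rate=0.1):
        self.threshold = threshold
        self.rate = rate

    def decide(self, score):
        return score > self.threshold

    def feedback(self, score, correct_answer):
        # Only adjust when the decision turned out to be wrong.
        if self.decide(score) != correct_answer:
            self.threshold += self.rate if not correct_answer else -self.rate

model = AdaptiveThreshold()
for score, truth in [(0.6, False), (0.7, False), (0.9, True)]:
    model.feedback(score, truth)
print(round(model.threshold, 2))   # 0.7 -- the rule has drifted after two mistakes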
Quote from: superluser on November 05, 2007, 10:59:28 PM
It may be that we will never be able to make something that will pass the Turing test. Modelling the human brain, with its massively parallel systems and its ternary (or however-many-neurotransmitters-we-have) logic, may simply be too much to emulate, especially when you have to emulate it bugs and all.
To answer your first statement - we already have.
Eliza and her daughters passed the Turing test years ago. At least, one of her daughters, as I recall, did. I could find you details, if you're interested.
As for the second part - that's a much more complex idea. Mostly, I understand the AI builders have moved away from Neural Networks and are working on other ways around the problem...
Quantum computing may hold the answer to intelligent machines.
The human brain works nothing like a computer, which has only '1' and '0' in its coding. A single neuron in the human brain is not simply an 'on' or 'off' switch as was thought long ago. It can take many different inputs, divert the impulses down specific synaptic pathways depending on ion channel activation, and the coding of information is exceedingly complicated involving long-term potentiated neurons, short-term, various proteins undergoing conformational changes, spike trains, interneurons, and a huge number of tiny dendritic spines (up to tens of thousands per neuron!).
Where quantum computing can help is to introduce a new level of 'fuzziness' to the computing, in fact allowing a single circuit to perform several computations at the same time and introducing more than only two states for that circuit. It still won't have the level of complexity of cortical neurons, but it will at last allow '1/2' and '2' to exist in the formerly invariable strings of '1' and '0'. Having a code that can insert 'maybe either one' or 'both' into information processing opens up the possibility that a consciousness could emerge.
Quote from: llearch n'n'daCorna on November 06, 2007, 07:49:46 AM
Eliza and her daughters passed the Turing test years ago. At least, one of her daughters, as I recall, did. I could find you details, if you're interested.
Wait...Eliza? Passed the Turing test?
You're going to have to provide a reference for that one.
Quote from: Alondro on November 06, 2007, 08:11:23 AM
The human brain works nothing like a computer, which has only '1' and '0' in its coding. A single neuron in the human brain is not simply an 'on' or 'off' switch as was thought long ago. It can take many different inputs, divert the impulses down specific synaptic pathways depending on ion channel activation
As I understand it, each neuron can take a different state of activation depending on the neurotransmitter it receives. So it's not simply a case of on, off, or some combination of both on and off. So while quantum mechanics might be able to help us in the 50% on, 50% off case, it would not help us in the 50+50i % on, 50+50i % off case. And yes, those are complex percentages.
The thing with artificial intelligence is that while it's entirely possible with current tech to make even a rudimentary intelligence construct (even if all it does is react to the environment around it with some basic semblance of instinct), there isn't a great need for a complex artificial intelligence system outside of what's in games.
I'm not saying there won't be, there just isn't any -yet-. Right now robotics is just entering the phase where an intelligence can react to its environment, a la ASIMO.
Quote from: Wikipedia http://en.wikipedia.org/wiki/Turing_test
As of 2007, no computer has passed the Turing test as such. Simple conversational programs such as ELIZA have fooled people into believing they are talking to another human being, such as in an informal experiment termed AOLiza. However, such "successes" are not the same as a Turing Test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with. Documented cases are usually in environments such as Internet Relay Chat where conversation is sometimes stilted and meaningless, and in which no understanding of a conversation is necessary. Additionally, many internet relay chat participants use English as a second or third language, thus making it even more likely that they would assume that an unintelligent comment by the conversational program is simply something they have misunderstood, and don't recognize the very non-human errors they make. See ELIZA effect.
So that would be a 'no' :B
AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer. It can only choose among the things it's been programmed to do. It's like saying that a human being could surpass God.
That said, sometimes I feel like some people could use AI programming on themselves. :P
Funny cuz I hadn't heard much about AI in recent years. I didn't know they were still doing that much with it.
Quote from: DoctaMario on November 07, 2007, 04:09:42 PM
AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer.
I'm not sure I really agree with that. Firstly, if it is a set of programmed instructions, then I would say it's not an AI at all, it's a static program.
Secondly, while one person could arguably not understand their own brain, division of labour across a team might.
Finally, don't confuse the architecture with the software. We might eventually be able to figure out how the brain works at the physical level well enough to replicate it... but I do agree that a total understanding of how the human
mind works is probably impossible for a human.
Oooo!
Looking into Eliza, I ran across this: http://www.computing.dcu.ie/~humphrys/eliza.html
.. which linked to this: http://virt.vgmix.com/jenny18/
Someone wrote an Eliza for DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]
Sure, it doesn't pass a true Turing test - but it has passed as human often enough.
One of my computing tutors in college failed the Turing test :P
Quote from: Tapewolf on November 07, 2007, 06:10:01 PM
One of my computing tutors in college failed the Turing test :P
I hope you didn't listen much to this teacher; he is obviously a very bad one.
Quote from: llearch n'n'daCorna on November 07, 2007, 06:07:53 PM
Someone wrote an Eliza for DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]
Did you look at AOLiza (http://fury.com/aoliza/), listed above (and duplicated here for convenience)?
Using a publicly available Perl version of ELIZA, a Mac with nothing better to do than play psychoanalyst, a few applescripts, and an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations, I put ELIZA in touch with the real world. Every few days I'll put up the latest 'patients.' Names have been changed to protect the... well, everyone.
Example (strong language and adult situations):
Quote
twentyseven: hi, do you like porno?
AOLiza: tell me more about that.
twentyseven: more about what porn?
AOLiza: have you asked anyone else?
twentyseven: yeah, they hate it
twentyseven: do you?
AOLiza: we were discussing you -- not me.
twentyseven: ok, well, i love looking at nude ladies
AOLiza: that's quite interesting.
twentyseven: yes it is
AOLiza: please go on.
twentyseven: i see more hot chicks nude online than i've had dates
twentyseven: i love pussy and tits and ass and legs
twentyseven: want to cyber girl?
AOLiza: tell me more about that.
twentyseven: r u male or female
AOLiza: does talking about this bother you?
twentyseven: only if you are a guy
AOLiza: but what are the chances that me are a guy?
twentyseven: don't know, you tell me. do you have a vagina?
AOLiza: you like to think i tell you -- don't you?
twentyseven: yes, if you are a guy, i am going to verbally kick your ass
AOLiza: really, if me are a guy?
twentyseven: damn right i will cuss your ass out if u are a guy
AOLiza: does it make you feel strong to use that kind of language?
twentyseven: damn right
twentyseven: r u male or female
AOLiza: i'm not sure i understand you fully.
twentyseven: r u man or woman bitch
AOLiza: that is interesting. please continue.
This is why I don't think that Eliza could ever come close to passing the Turing Test.
Quote from: superluser on November 07, 2007, 09:55:24 PM
[snip]
:lol :lol :lol Oh that is funny...
Quote from: techmaster-glitch on November 05, 2007, 11:44:02 PM
I agree with the Chinese Room experiment. The way I see it, simply following instructions is not intelligence. I am referring to the part that mentions that Searle did exactly the same thing his theoretical robot did: he manipulated Chinese symbols according to instructions without actually knowing what he was doing. Just about anything with any sort of processing power at all can be taught to follow instructions. I am referring to animals, of course, but that's starting to get off-topic.
My point is, if you blindly follow instructions without understanding them, without knowing what you are doing, and do not have the ability to change them, it is simply not a display of intelligence. It's just doing what you're told. Just about anything can do that.
But this is just my humble opinion. Pick apart at will.
A point that superluser brought up is that it is not the
man whose intelligence is being tested but rather the Chinese Room itself. Instead of little homunculi with dictionaries, we have heads filled with neurons. These neurons are arguably just "following instructions", deciding whether or not to fire based on numbers of AMPA and NMDA receptors or whatnot. By your argument, that means they aren't intelligent. But why should it be necessary for the constituent parts of an intelligent gestalt to themselves exhibit intelligence? We're holding the Chinese Room to a standard to which we ourselves don't measure up. If we made that argument for humans, we could just keep dropping down levels until it was necessary for our subatomic particles to be "intelligent". Why then do Searle and the other detractors of AI focus on this argument? It's mere biological chauvinism.
Quote from: DoctaMario on November 07, 2007, 04:09:42 PM
AI will NEVER surpass actual human thinking because there is a limit as to how much can be programmed. The AI bot could only really be as smart as whoever programmed it and because it NEEDS instructions programmed, it'll never surpass the programmer. It can only choose among the things it's been programmed to do. It's like saying that a human being could surpass God.
I'd say that this view reflects a notion of AI that went out of style decades ago. Few still believe that AIs like the hardcoded expert systems of the 1970's could ever encompass the sum of human knowledge. Modern AI systems based on neural nets, genetic algorithms, and the like constitute systems that
learn the rules of their environments on their own. The programmer merely provides the framework in which they operate. The programs regularly come up with solutions that the programmers didn't intend or expect. And a lot of what they've learned is quite inscrutable under the hood--massive tables of connection weights or chromosome strings that aren't remotely human-readable.
Quote from: superluser on November 07, 2007, 09:55:24 PM
Quote from: llearch n'n'daCorna on November 07, 2007, 06:07:53 PMSomeone wrote an Eliza for DalNET IRC network. Made to sound like a horny 18yo girl. And he did a good job on it, too. ;-]
Did you look at AOLiza (http://fury.com/aoliza/), listed above (and duplicated here for convenience)?
[snip]
This is why I don't think that Eliza could ever come close to passing the Turing Test.
Yes, as a matter of fact, I did. The default Eliza used there comes with very few sentences, and to anyone who's played with one before, is -real- obvious. jenny18 has, according to the site, over 3800 responses in its database. That makes it a -lot- more reactive, and a lot harder to spot.
Anyone who has the time, and the inclination, can add more data to the list that an Eliza has, which, in turn, makes it harder and harder to spot. Eventually you reach the point where it -will- pass a Turing test. It might take several years of concerted effort, a multi-gigabyte database, and possibly several years of training, but I can see an Eliza type program being "good enough". The problem is, there are much better - and more useful - ways of getting past the same limitations. And the other ways tend to lead to money, always a strong attractor in human behaviour. ;-]
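For anyone who hasn't poked at one, the whole trick really is just a table of patterns and canned reflections. A toy sketch in Python with a handful of made-up rules (jenny18 simply has thousands of better ones):

# Toy Eliza: match a keyword pattern, echo back a canned reflection.
import re, random

rules = [
    (r"\bi need (.*)", ["Why do you need {0}?", "Would it really help to get {0}?"]),
    (r"\bi am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\byou\b",       ["We were discussing you -- not me."]),
]
fallback = ["Please go on.", "Tell me more about that."]

def respond(text):
    for pattern, answers in rules:
        m = re.search(pattern, text.lower())
        if m:
            return random.choice(answers).format(*m.groups())
    return random.choice(fallback)

print(respond("I need a vacation"))
print(respond("are you a bot?"))

Add more rows and it gets harder to spot, but it never gets any smarter.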
The Turing test, as discussed on the various webpages listed, isn't really sufficient to be a description of intelligence. It's merely one in a range of tests, and is relatively easy to -fail- - even for humans. Just google for "turing test fail" and see what leaps up...
Quote from: Tezkat on November 07, 2007, 11:37:49 PM
A point that superluser brought up is that it is not the man whose intelligence is being tested but rather the Chinese Room itself.
I'd just like to point out that I don't disagree with Searle, just that the issue isn't as cut and dried as some Chinese Room advocates would have you believe.
Also, unless you're rewriting the instructions based on what you're getting slipped under the door, it's not intelligence.
Quote from: llearch n'n'daCorna on November 08, 2007, 04:20:30 AM
Yes, as a matter of fact, I did. The default Eliza used there comes with very few sentences, and to anyone who's played with one before, is -real- obvious. jenny18 has, according to the site, over 3800 responses in its database. That makes it a -lot- more reactive, and a lot harder to spot.
Doesn't really matter. Looking at Jenny18, I get to the following line:
tell me about you go anal? (It's in Digimortal, the first one that I clicked on)
and I instantly know that I'm dealing with an Eliza variant.
Eliza is just too poorly designed to ever work as a chatterbot AI. No offense to Joseph Weizenbaum--Eliza was a giant leap forward in 1966.
Contrast bots like ALICE (Loebner winning transcript here (http://loebner.net/Prizef/2004_Contest/Wallace.html)). I'm not sure, but I think that this is the one which searches the web based on your comments, and reacts based on the search results. It's still obviously a chatbot, but it's not nearly as obvious. In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.
Incidentally, if you ever wanted to know why I use Linux, it would have to be the special effects:
Quote
[12:36:29] Judge 1: What are your interests?
[12:37:03] Alice: I am interested in all kinds of things. We can talk about anything. My
favorite subjects are robots and computers.
[12:37:17] Judge 1: What is your favorite operating system?
[12:37:30] Alice: Linux is the best operating system.
[12:37:38] Judge 1: What do you like about it?
[12:37:51] Alice: it? What do I like about it?
[12:37:59] Judge 1: What do you like about Linux?
[12:38:12] Alice: The special effects are the best.
One hallmark of human intelligence that it doesn't look like computers are even capable of is simply asking oneself questions.
All the computers still require some sort of input to respond to. Humans are the only ones who can be introspective, who can mentally ask questions and try to come up with answers. Curiosity itself is a basic part of intelligence present in many mammal species, yet I have never heard of a computer deciding to conduct an Internet search on a whim to find out how cheese puffs are made. I've never seen a computer even begin to wonder what it is, and why it is, which are questions humans have asked of themselves for at least thousands of years.
A computer cannot write even a simple story completely by itself. It must keep getting instructions externally, unlike a human, who can create their own instructions for stories.
I'll be interested when a computer realizes, 'Hey, I'm not getting paid for this!" Which is what will actually trigger the Rise of the Machines... wage disputes. :P
I've heard of the computer 'evolution' experiments, but those aren't modelling intelligence. They're much closer to models of random mutation in DNA. It'd be more like cyber bugs from what I've seen.
Quote from: superluser on November 08, 2007, 07:53:29 AM
...In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.
... Are we getting personal here? *wink*
Quote from: llearch n'n'daCorna on November 08, 2007, 08:12:38 AM
Quote from: superluser on November 08, 2007, 07:53:29 AM
...In a chat room, it would be very hard to distinguish ALICE from a loopy semi-sarcastic pothead.
... Are we getting personal here? *wink*
I'm not a pothead... :B
Quote from: Alondro on November 08, 2007, 08:12:09 AM
One hallmark of human intelligence that it doesn't look like computers are even capable of is simply asking oneself questions.
Again, you seem to be talking about intelligence in a software program. That's probably not really the way to go, unless you're using it for physical modelling. An advanced neural net (probably in hardware) is more likely to give you that kind of behaviour.
Quote
I've heard of the computer 'evolution' experiments, but those aren't modelling intelligence. They're much closer to models of random mutation in DNA. It'd be more like cyber bugs from what I've seen.
This is true. However it has turned up some quite surprising stuff. I can't find a nice, easy-to-follow description of Thompson's FPGA evolution, but basically he used a genetic algorithm to 'breed' circuits in a Field-Programmable Gate Array chip. It would come up with new solutions to the problems he set that a human couldn't.
The most striking one was when he told it to build a circuit to differentiate a 1kHz signal from a 10kHz one, but without allowing it sufficient components to do so. The resulting circuit worked, but no-one knows quite why. Building the designed circuit out of CMOS logic resulted in a circuit that failed to operate, as did digital simulations of the design. If the design was downloaded into another Xilinx chip of the same type, it did not work anymore, although comparatively few generations of evolution on the new target chip would have it adapt to the new environment.
What was most freaky was that the design wired a couple of cells of the chip up but didn't connect them to the rest of the circuit - but if they were taken away or wired differently it no longer functioned.
http://ehw.jpl.nasa.gov/Documents/PDFs/thompson.pdf
(Pages 178-190 in the journal, 11-20 in the PDF)
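I can't reproduce the FPGA experiment here, but the loop it runs is just an ordinary genetic algorithm, which is easy to sketch. This toy version evolves a bit string toward a fixed target; in Thompson's setup the bit string encoded the chip configuration and fitness was measured on the real hardware instead:

# Minimal genetic algorithm: selection, crossover, mutation on bit strings.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP, GENS, MUT = 30, 200, 0.05

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]    # selection: keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(gen, population[0])   # usually hits TARGET within a few dozen generations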
Quote from: Alondro on November 08, 2007, 08:12:09 AM
One hallmark of human intelligence that it doesn't look like computers are even capable of is simply asking oneself questions.
Did you see the transcript of ALICE and Barry where they started talking about Star Trek? They were asking questions there.
Quote from: llearch n'n'daCorna on November 08, 2007, 08:12:38 AM
... Are we getting personal here? *wink*
We were discussing you, not me.
I think the thing is that AI bots like Eliza can't string a sentence together (e.g. Do you think you should be able to getting personal here?), while you can't hold a conversation with a bot like ALICE. It's a huge difference. If you look at any one of Eliza's responses, you can tell that it's a bot. They all fall into one of two categories, either preprogrammed responses (Are such questions on your mind often?) or grammatically awkward or syntactically incorrect sentences.
Any of ALICE's responses would look virtually indistinguishable from a human's. Also, most of the responses are appropriate for the questions asked. It's just that when you try to follow up, the conversation disintegrates.
P.S. I didn't know llearch was a pothead. :B
You see, if you really want to get an AI, just put a bunch of electrodes into the brain of a human and essentially download the entire brain onto the computer. Voila, artificial intelligence... just kidding.
If you really want artificial intelligence, then you have to start off small, say with a dog's brain, and try to replicate that. I choose a dog because it is loyal and will do anything if you can train it. Then you can move to either a wolf or a house cat. If you move to a wolf, then you are going to be able to have a bunch of computer AIs working together as a team; this is going to be harder, but it will simulate very well how humans work with government. At the same time you can create a different AI based on a house cat; this will simulate everything from pride, selfishness, and hoarding, to curiosity (all major functions that humans go through). Then we can move on to something slightly different, until the final step is putting everything together to make a true sentient artificial intelligence capable of learning and living until it either becomes outdated, kills itself, or just plain doesn't die.
Quote from: gh0st on November 08, 2007, 05:30:32 PM
If you really want artificial intelligence, then you have to start off small, say with a dog's brain
A dog's brain is probably harder to emulate than a human's. We know much more about how the human brain works than the canine brain.
Much easier said than done. :B
You know, after reviewing some of the computer conversations, I must wonder if in fact a good number of our Congress persons are AI, as their ramblings make about the same amount of sense.
:P
Quote from: Alondro on November 12, 2007, 11:06:22 AM
You know, after reviewing some of the computer conversations, I must wonder if in fact a good number of our Congress persons are AI, as their ramblings make about the same amount of sense.
:P
My gosh, has the government secretly employed AIM bots in brain-dead people and programmed them to run for Congress?? Not really news if you ask me...
Did you know that you can make an AI for every strategy game except Go? As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better? All our thoughts are defined by something that we already understand: time/space, more/less, yes/no, etc. Perhaps we need to break our own "programming" before we can start duplicating our current level.
Quote from: gh0st on November 08, 2007, 05:30:32 PM
you see if you really want to get an ai just put a bunch of electrodes into the brain of a human and essentially download the entire brain onto the computer voila artificial intelligence... just kidding
You'd need slightly higher resolution tools than "electrodes" to map a human brain, but don't think that isn't an approach that people are seriously considering--the lure of digital immortality, and all that. Of course, we won't have the tools (computing power, brain scanners, etc.) to pull that off for a human-sized brain for many decades.
Personally, I think that's a much less interesting method of creating an "artificial" intelligence; if it worked, it could produce a thinking machine (or a machine that simulates thought, depending on which side of AI philosophy you butter your bread) without having to know
how it works. Far better to decode our neural circuitry and figure out what our brain is doing when it thinks--reverse engineering the algorithms that underlie thought at the functional level. That's where a lot of the really exciting research in cognitive neuroscience is going right now.
Quote from: superluser on November 08, 2007, 06:11:27 PM
A dog's brain is probably harder to emulate than a human's. We know much more about how the human brain works than the canine brain.
Human and canine brains aren't all
that different. Indeed, while certain areas may be more developed than others, the basic neural architecture is remarkably similar across mammalian species.
I suspect it would be much easier to produce a believable robot dog than a believable android, at any rate. Our standards for "doglike" behaviour are much lower than they are for other humans. :3
Quote from: Omega on November 12, 2007, 05:31:46 PM
Did you know that you can make an AI for every strategy game except Go?
You can make an AI out of
any game with a relatively well-defined ruleset. Go (at least in its standard 19x19 form) has a very large problem space, but it isn't fundamentally different from any other game. GnuGo plays at around the 10 kyu level with fairly simple heuristics. The top commercial programs are maybe a few stones stronger. With a budget like Deep Blue's, one could probably produce a system that played at the pro level. Go represents a more interesting challenge for AI than, say, chess due to the need for more human-like pattern matching to prune the search space--pure brute force computing isn't feasible.
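To illustrate what 'brute force isn't feasible' means: the textbook approach is full-width game-tree search, which I can sketch on a toy game. This is plain minimax on Nim (my own example, nothing to do with GnuGo's actual heuristics):

# Minimax on Nim: take 1-3 stones per move, whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def outcome(stones, mover_is_max=True):
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return -1 if mover_is_max else 1
    results = [outcome(stones - take, not mover_is_max)
               for take in (1, 2, 3) if take <= stones]
    return max(results) if mover_is_max else min(results)

print(outcome(10))   # +1: the mover can win (take 2, leave a multiple of 4)
print(outcome(12))   # -1: multiples of 4 are lost positions

A search like that visits every reachable position. Chess's branching factor of roughly 35 already forces heavy pruning; with Go's roughly 250 legal moves per turn it's hopeless, which is why Go programs lean on pattern matching instead.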
Quote
As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better?
I suppose that depends on what you mean by "nothing"--every system needs an underlying framework in which to operate. We evolved within a system bound by the laws of physics, for instance, and our ecosystems and social networks have rules of their own. We can apply similar principles to computing, unleashing competing self-modifying programs on a given problem and selecting the best ones. Evolutionary computing has already demonstrated that it can outperform human designers in some areas (circuit optimization, aerodynamics, etc.).
Quote from: GabrielsThoughts on November 05, 2007, 08:10:59 PM
I've been writing a paper about this subject, and I'm actually disappointed with it. I've talked with the chat bots Alice and Alan, and attempted to program Hal, and I'm just disappointed that the field hasn't really developed or progressed within the last couple of years.
You haven't been looking in the right places. Chat bots are mostly considered toy problems that receive very little research funding. The field of AI has
exploded in the past decade. Natural language processors that can respond to spoken input, visual processors that recognize spoiled fruits or wanted criminals, data mining systems that figure out what people want to buy, traffic shaping heuristics that relieve network congestion, computer-designed circuits and vehicle components... these are all commercially viable, real-world applications of AI research.
Quote from: Tezkat on November 13, 2007, 01:19:16 AM
You can make an AI out of any game with a relatively well-defined ruleset. Go (at least in its standard 19x19 form) has a very large problem space, but it isn't fundamentally different from any other game. GnuGo plays at around the 10 kyu level with fairly simple heuristics. The top commercial programs are maybe a few stones stronger. With a budget like Deep Blue's, one could probably produce a system that played at the pro level. Go represents a more interesting challenge for AI than, say, chess due to the need for more human-like pattern matching to prune the search space--pure brute force computing isn't feasible.
Really? Where can one get this program?
Quote from: Tezkat on November 13, 2007, 01:19:16 AM
Quote
As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better?
I suppose that depends on what you mean by "nothing"--every system needs an underlying framework in which to operate. We evolved within a system bound by the laws of physics, for instance, and our ecosystems and social networks have rules of their own. We can apply similar principles to computing, unleashing competing self-modifying programs on a given problem and selecting the best ones. Evolutionary computing has already demonstrated that it can outperform human designers in some areas (circuit optimization, aerodynamics, etc.).
By nothing, I mean thinking outside the box.
Computers can outmatch humans in numbers, because they are based on numbers. "By numbers they live. By numbers they shall fall." The human mind has a rather poor understanding of numbers. We see these figures, but we cannot grasp the idea behind them without some concentration. For example: a random figure, let's say 0,56486461. How much is that? Duh! It's obviously zeropointfivesi... No. How much is it? Can you define that figure somehow other than with numbers? Is there another word for that figure? The only way (that I can think of at the moment) one can do anything with this number is to tell whether it is more or less than another number. The idea of this particular number is useless. We disregard and dismiss it (the idea, not the number). The mind works through these ideas, not numbers. Not my mind, at least. You might be able to count the ideas, but you'd be unable to define them through math. So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.
The movie sucked, btw.
Quote from: Omega on November 13, 2007, 03:50:38 AM
So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.
With the current technology, that's not happening (everything in a computer is defined in numbers), but it may be possible if we can expand our technology beyond current philosophies.
Google-fu FTW:
GnuGo (http://www.gnu.org/software/gnugo/)
Quote from: Omega on November 13, 2007, 03:50:38 AM
By nothing, I mean thinking outside the box.
Computers can outmatch humans in numbers, because they are based on numbers. "By numbers they live. By numbers they shall fall." The human mind has a rather poor understanding of numbers. We see these figures, but we cannot grasp the idea behind them without some concentration. For example: a random figure, let's say 0,56486461. How much is that? Duh! It's obviously zeropointfivesi... No. How much is it? Can you define that figure somehow other than with numbers? Is there another word for that figure? The only way (that I can think of at the moment) one can do anything with this number is to tell whether it is more or less than another number. The idea of this particular number is useless. We disregard and dismiss it (the idea, not the number). The mind works through these ideas, not numbers. Not my mind, at least. You might be able to count the ideas, but you'd be unable to define them through math. So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.
Can
you come up with alternate representations that don't involve numbers? :mowtongue
I could definitely see a modern AI being asked "How much is 56486461?" and returning weird and arguably "creative" answers like a guy in Mexico with that as his phone number or a pretty fractal image which used that as the random number seed. Evolutionary computing applications often come up with workable solutions so far outside the box that their human designers have no clue how or why they work.
Is that the same thing as knowledge or creativity?
I guess it all depends on where you place the boundaries of the box. When someone is thinking outside the box, what they're really doing is thinking inside a larger box. :3
The problem here is one of knowledge representation. Obviously you need some operational representations of your data. That said, AI tools can be
very good at discovering relationships between certain types of data. For instance, there's a fairly strong correlation between the volume of ice cream sales and the number of home burglaries. Given the right data set, a data mining tool would pick up on that fact easily. Even humans might be puzzled by that finding. But, assuming that it also had access to the right data, our AI would also pick up on the fact that both of these also correlate to temperature. Now, could it take the next step and hypothesize that burglars are more active when it's warm because it's hard to jimmy a lock when you're freezing your ass off and not because they're high on sugary snacks? That's the sort of leap that requires a very large box. At the moment, the computing paradigms that represent things like causality are generally not the same ones that discover that hijackers prefer aisle seats. We only have the technical capacity to model small subsets of human intelligence at a time.
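A quick synthetic illustration of that ice cream example, with numbers I made up: both variables are driven by temperature, a naive correlation 'finds' the link, and controlling for temperature makes it vanish.

# Spurious correlation demo: ice cream sales and burglaries both track temperature.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 8, 1000)
ice_cream = 5 * temperature + rng.normal(0, 20, 1000)
burglaries = 2 * temperature + rng.normal(0, 20, 1000)

# Naive correlation: clearly positive (around 0.5-0.6 with these parameters).
print(np.corrcoef(ice_cream, burglaries)[0, 1])

def residuals(y, x):
    # Regress out x and keep what's left over.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: once temperature is accounted for, the link is near zero.
print(np.corrcoef(residuals(ice_cream, temperature),
                  residuals(burglaries, temperature))[0, 1])

Getting from 'these three things move together' to 'warm weather causes both' is the step current tools don't take on their own.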
The development of humanlike AI (ya know, like Data or C-3P0) is still in its infancy. Quite literally, in some cases: A number of AI researchers have taken the approach that it's best to start with as blank a slate as possible and try to mirror the cognitive development of our own children. So they build baby bots. These already require small supercomputing clusters to run, so we'll need many more years of Moore's Law action before adults would be feasible.
It's important to realize that humans don't come equipped with all of our cognitive processing capabilities out of the womb. For example, infants fail at tasks requiring a notion of object permanence--that the ball still exists after it rolls behind the couch, or that mommy is still there when she plays "peekaboo" and covers her face with her hands. They usually won't pass these tests until they're more than a year old. Even dogs and chimps (and yes, some bots) outperform them in many simple cognitive tasks at that stage.
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?
A very interesting finding is that our attempts to build AIs that learn the rules of their environments (mostly from scratch) frequently exhibit limitations similar to human children below a certain developmental age. For instance, language processors that study large numbers of grammatically correct/incorrect samples in order to figure out the rules of English grammar and then use that foundation to generate their own sentences tend to make the kinds of generalization errors common among first graders. These limitations could actually be a cause for optimism. They can potentially teach us a lot about human development.
And they could also indicate that AI simply has a bit of growing up to do. :kittycool
Quote
The movie sucked, btw.
I kinda liked the first part--ya know, the bit actually based on the original story. Then it got weird for no apparent reason. Then it got
long. :animesweat
Quote from: Tezkat on November 13, 2007, 07:34:22 AM
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?
Nothing. Even -adults- will pour a bigger shot into a short fat glass than into a tall thin one, unless they're either extremely anal (oo! Pick me!) or highly experienced bartenders.
Go figure.
Quote from: llearch n'n'daCorna on November 13, 2007, 08:17:27 AM
Quote from: Tezkat on November 13, 2007, 07:34:22 AM
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?
Nothing. Even -adults- will pour a bigger shot into a short fat glass than into a tall thin one, unless they're either extremely anal (oo! Pick me!) or highly experienced bartenders.
Go figure.
They usually train bartenders to pour for 4 seconds for a full shot. But what a lot of bars will do to save money is pour shots into a taller glass, but pour less liquor into the glass, thus making it LOOK like the customer is getting more liquor. It works pretty well!
Quote from: Tezkat on November 13, 2007, 07:34:22 AM
It's important to realize that humans don't come equipped with all of our cognitive processing capabilities out of the womb. For example, infants fail at tasks requiring a notion of object permanence--that the ball still exists after it rolls behind the couch, or that mommy is still there when she plays "peekaboo" and covers her face with her hands. They usually won't pass these tests until they're more than a year old. Even dogs and chimps (and yes, some bots) outperform them in many simple cognitive tasks at that stage.
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?
Anyone who wants to know more should watch Discovering Psychology with Philip Zimbardo. Specifically, episodes 5 and 6. (It's freely available from learner.org)
It deals with childhood mental development (and the tall/wide and object permanence demonstrations in particular). There's another episode that deals with Piaget's theories, as well, but I can't recall which one.
What's really interesting is that it deals with where the theories are being proven to be wrong. They gave an example of a baby younger than 4 months who was getting on her mother's nerves, so the mother tied one end of a string to the baby's leg and the other end to a keychain so that whenever the baby moved, the keys would jangle. The mother noticed that the baby started kicking to get the keys to jangle. The mother, who was a psychologist, also knew that babies aren't supposed to be able to have secondary circular reactions (like this) before 4 months, according to Piaget, and that babies that young certainly don't have the concept of objects that would be necessary to understand how such reactions work.
So something else must be happening in the brain to allow babies to do this. Fascinating stuff.
The day I was born, I could recognize my grandmother's voice. Apparently, she said something, and I lifted my head and looked for her.(we've got a video of it somewhere, but I've never seen it.)
So much for their developmental timeline. :mwaha
We are pretty much at the infancy of artificial intelligence. Maybe in 30 years we will be able to say that we have almost mastered some aspects of artificial intelligence.
There's at least one other major hurdle to creating a machine that can pass the Turing test.
English (like all languages) is a moving target. Let me start off with an example.
Did anyone have any trouble understanding what I meant by OTG in this message (http://clockworkmansion.com/forum/index.php/topic,3733.msg160086.html#msg160086)?
That was an initialism, and such things were almost unheard of in 19th-century English. Yet today, on the fly, I can come up with a nonstandard initialism intended to be used only once, and people will understand it.
What's more, you have things like Fannie Mae and Freddie Mac, which are creative names based on the pronunciation of initialisms.
And someone just told me about his nickname, nolo, which comes from his initials (NC).
The English language has changed drastically in the last hundred years, and will certainly continue to change in future years, so perhaps trying to get a dialogue simulator is necessarily a futile (or at least Sisyphean) task--by the time you get something that simulates 2007's dialogue, 30 years will have passed, and you'll have to start all over to try to get 2037's.
OTOH (heh) by the time you figure something that can generate a 2007 dialogue, you've figured out a heck of a lot about the mechanisms behind it - which gives you a heck of a big step towards making one that deals with 2037's dialogue...
So, while the target moves, you also move, and the steps you take towards the target are generally larger than the steps the target takes away from you.
Zeno all over again.
Quote from: llearch n'n'daCorna on November 21, 2007, 05:00:19 PM
So, while the target moves, you also move, and the steps you take towards the target are generally larger than the steps the target takes away from you.
While this used to be true, I don't know if it is, anymore. Bots have made great strides since the 60s, but if you look at the most recent developments, there hasn't been much noticeable movement.
This suggests that while the English language is becoming more complex on a logarithmic scale, our development of bots is also on such a scale, and it may be a slower growth curve than our own.
Quote from: superluser on November 21, 2007, 05:17:44 PM
While this used to be true, I don't know if it is, anymore. Bots have made great strides since the 60s, but if you look at the most recent developments, there hasn't been much noticeable movement.
This suggests that while the English Language is becoming more complex on a logarithmic scale, our development of bots is also on such a scale, and it may be a slower growth curve than our own.
Actually, I think that there may have been more advances recently.
One of the things that we are learning is what is really difficult and what is really easy. If you look at the Isaac Asimov early robot stories, the first stories had robots that couldn't speak but could understand human speech. After all, if dogs could follow human speech, understanding human speech must be easier than saying it. What we learned was that the reverse was true. We have many computer programs that speak: I get undesired phone calls from them every day. Understanding spoken language is very difficult.
Many of the spectacular successes in the beginning were because the selected tasks turned out to be easy. What's being worked on now is the hard tasks.