what do you think of artificial intelligence? (not the movie)

Started by GabrielsThoughts, November 05, 2007, 08:10:59 PM


gh0st

You see, if you really want to get an AI, just put a bunch of electrodes into the brain of a human and essentially download the entire brain onto the computer. Voila, artificial intelligence... just kidding.

If you really want artificial intelligence, then you have to start off small, say maybe a dog's brain, and try to replicate that. I choose a dog because it is loyal and will do anything if you can train it. Then you can move to either a wolf or a house cat. If you move to a wolf, then you are going to be able to have a bunch of computer AIs working together as a team. This is going to be harder, but it will simulate very well how humans work with government. At the same time you can create a different AI based on a house cat; this will simulate everything from pride, selfishness, and hoarding to curiosity (all major traits that humans exhibit). Then we can move onto something slightly different, until the final step of putting everything together to make a true sentient artificial intelligence capable of learning and living until it either becomes outdated, kills itself, or just plain doesn't die.

superluser

Quote from: gh0st on November 08, 2007, 05:30:32 PM
If you really want artificial intelligence, then you have to start off small, say maybe a dog's brain

A dog's brain is probably harder to emulate than a human's.  We know much more about how the human brain works than about the canine brain.


Would you like a googolplex (gzipped 57 times)?

Alondro

You know, after reviewing some of the computer conversations, I must wonder if in fact a good number of our Congress persons are AI, as their ramblings make about the same amount of sense. 

:P
Three's a crowd:  One lordly leonine of the Leyjon, one cruel and cunning cubi goddess, and one utterly doomed human stuck between them.

http://www.furfire.org/art/yapcharli2.gif

gh0st

Quote from: Alondro on November 12, 2007, 11:06:22 AM
You know, after reviewing some of the computer conversations, I must wonder if in fact a good number of our Congress persons are AI, as their ramblings make about the same amount of sense. 

:P

My gosh, has the government secretly employed AIM bots in brain-dead people and programmed them to run for Congress?? Not really news if you ask me...

Omega

Did you know that you can make an AI for every strategy game except Go? As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better? All our thoughts are defined by things we already understand: time/space, more/less, yes/no, etc. Perhaps we need to break our own "programming" before we can start duplicating our current level.

Tezkat


Quote from: gh0st on November 08, 2007, 05:30:32 PM
You see, if you really want to get an AI, just put a bunch of electrodes into the brain of a human and essentially download the entire brain onto the computer. Voila, artificial intelligence... just kidding.

You'd need slightly higher resolution tools than "electrodes" to map a human brain, but don't think that isn't an approach that people are seriously considering--the lure of digital immortality, and all that. Of course, we won't have the tools (computing power, brain scanners, etc.) to pull that off for a human-sized brain for many decades.

Personally, I think that's a much less interesting method of creating an "artificial" intelligence; if it worked, it could produce a thinking machine (or a machine that simulates thought, depending on which side of AI philosophy you butter your bread on) without having to know how it works. Far better to decode our neural circuitry and figure out what our brain is doing when it thinks--reverse engineering the algorithms that underlie thought at the functional level. That's where a lot of the really exciting research in cognitive neuroscience is going right now.


Quote from: superluser on November 08, 2007, 06:11:27 PM
A dog's brain is probably harder to emulate than a human's.  We know much more about how the human brain works than about the canine brain.

Human and canine brains aren't all that different. Indeed, while certain areas may be more developed than others, the basic neural architecture is remarkably similar across mammalian species.

I suspect it would be much easier to produce a believable robot dog than a believable android, at any rate. Our standards for "doglike" behaviour are much lower than they are for other humans. :3


Quote from: Omega on November 12, 2007, 05:31:46 PM
Did you know that you can make an AI for every strategy game except Go?

You can make an AI for any game with a relatively well-defined ruleset. Go (at least in its standard 19x19 form) has a very large problem space, but it isn't fundamentally different from any other game. GnuGo plays at around the 10 kyu level with fairly simple heuristics. The top commercial programs are maybe a few stones stronger. With a budget like Deep Blue's, one could probably produce a system that plays at the pro level. Go represents a more interesting challenge for AI than, say, chess, due to the need for more human-like pattern matching to prune the search space--pure brute force computing isn't feasible.
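
(Aside for the curious: the workhorse pruning trick in chess-style engines is alpha-beta search. Here's a generic Python sketch; the moves/apply_move/evaluate callbacks are hypothetical placeholders that a concrete game would have to supply. Note that it only skips lines a perfect opponent would never allow--with Go's ~250 legal moves per position, you need far more aggressive, pattern-based pruning than this.)

Code:
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    # Minimax with alpha-beta pruning. moves(state) yields legal moves,
    # apply_move(state, m) returns the successor position, and
    # evaluate(state) scores a position for the maximizing player.
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # cutoff: the minimizing player would never allow this line
        return value
    else:
        value = math.inf
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, apply_move, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # cutoff: the maximizing player would never allow this line
        return value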

Quote
As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better?

I suppose that depends on what you mean by "nothing"--every system needs an underlying framework in which to operate. We evolved within a system bound by the laws of physics, for instance, and our ecosystems and social networks have rules of their own. We can apply similar principles to computing, unleashing competing self-modifying programs on a given problem and selecting the best ones. Evolutionary computing has already demonstrated that it can outperform human designers in some areas (circuit optimization, aerodynamics, etc.).
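
(To make that concrete, here's a bare-bones evolutionary loop in Python--a toy stand-in for the idea, with made-up numbers. Real applications evolve circuit layouts or antenna shapes rather than bit strings, but the mutate-and-select skeleton is the same.)

Code:
import random

TARGET = [1] * 32  # the toy "problem": evolve a bit string of all ones

def fitness(genome):
    # Count the positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Selection: keep the ten fittest, refill with their mutated offspring.
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("solved in generation", generation, "->", population[0])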


Quote from: GabrielsThoughts on November 05, 2007, 08:10:59 PM
I've been writing a paper about this subject and I'm actually disappointed with it. I've talked with the chat bots Alice and Alan, and attempted to program Hal, and I'm just disappointed that it hasn't really developed or progressed within the last couple of years.

You haven't been looking in the right places. Chat bots are mostly considered toy problems that receive very little research funding. The field of AI has exploded in the past decade. Natural language processors that can respond to spoken input, visual processors that recognize spoiled fruits or wanted criminals, data mining systems that figure out what people want to buy, traffic shaping heuristics that relieve network congestion, computer-designed circuits and vehicle components... these are all commercially viable, real-world applications of AI research.

The same thing we do every night, Pinky...

Omega

Quote from: Tezkat on November 13, 2007, 01:19:16 AM
You can make an AI for any game with a relatively well-defined ruleset. Go (at least in its standard 19x19 form) has a very large problem space, but it isn't fundamentally different from any other game. GnuGo plays at around the 10 kyu level with fairly simple heuristics. The top commercial programs are maybe a few stones stronger. With a budget like Deep Blue's, one could probably produce a system that plays at the pro level. Go represents a more interesting challenge for AI than, say, chess, due to the need for more human-like pattern matching to prune the search space--pure brute force computing isn't feasible.

Really? Where can one get this program?

Quote from: Tezkat on November 13, 2007, 01:19:16 AM

Quote
As hard as we have tried, no program is yet able to create anything from nothing. Then again, is the human mind any better?

I suppose that depends on what you mean by "nothing"--every system needs an underlying framework in which to operate. We evolved within a system bound by the laws of physics, for instance, and our ecosystems and social networks have rules of their own. We can apply similar principles to computing, unleashing competing self-modifying programs on a given problem and selecting the best ones. Evolutionary computing has already demonstrated that it can outperform human designers in some areas (circuit optimization, aerodynamics, etc.).
By nothing, I mean thinking outside the box.
Computers can outmatch humans in numbers, because they are based on numbers. "By numbers they live. By numbers they shall fall." The human mind has a rather poor understanding of numbers. We see these figures, but we cannot grasp the idea behind them without some concentration. For example, take a random figure, let's say 0.56486461. How much is that? Duh, it's obviously zeropointfivesi... No. How much is it? Can you define that figure in some way other than with numbers? Is there another word for that figure? The only thing (that I can think of at the moment) one can do with this number is tell whether it is more or less than another number. The idea of this particular number is useless. We disregard and dismiss it (the idea, not the number). The mind works through these ideas, not numbers. Not my mind, at least. You might be able to count the ideas, but you'd be unable to define them through math. So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.

The movie sucked, btw.

Reese Tora

Quote from: Omega on November 13, 2007, 03:50:38 AM
So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.

With current technology, not happening (everything in a computer is defined in numbers), but it may become possible if we can expand our technology beyond current philosophies.

Google-fu FTW: GnuGo
<-Reese yaps by Silverfox and Animation by Tiger_T->
correlation =/= causation

Tezkat


Quote from: Omega on November 13, 2007, 03:50:38 AM
By nothing, I mean thinking outside the box.
Computers can outmatch humans in numbers, because they are based on numbers. "By numbers they live. By numbers they shall fall." The human mind has a rather poor understanding of numbers. We see these figures, but we cannot grasp the idea behind them without some concentration. For example, take a random figure, let's say 0.56486461. How much is that? Duh, it's obviously zeropointfivesi... No. How much is it? Can you define that figure in some way other than with numbers? Is there another word for that figure? The only thing (that I can think of at the moment) one can do with this number is tell whether it is more or less than another number. The idea of this particular number is useless. We disregard and dismiss it (the idea, not the number). The mind works through these ideas, not numbers. Not my mind, at least. You might be able to count the ideas, but you'd be unable to define them through math. So, unless we can teach computers to do something other than count, and I mean do anything without numbers involved, I don't see a way to create a perfect AI.

Can you come up with alternate representations that don't involve numbers? :mowtongue

I could definitely see a modern AI being asked "How much is 56486461?" and returning weird and arguably "creative" answers like a guy in Mexico with that as his phone number or a pretty fractal image which used that as the random number seed. Evolutionary computing applications often come up with workable solutions so far outside the box that their human designers have no clue how or why they work.

Is that the same thing as knowledge or creativity?

I guess it all depends on where you place the boundaries of the box. When someone is thinking outside the box, what they're really doing is thinking inside a larger box. :3

The problem here is one of knowledge representation. Obviously you need some operational representations of your data. That said, AI tools can be very good at discovering relationships between certain types of data. For instance, there's a fairly strong correlation between the volume of ice cream sales and the number of home burglaries. Given the right data set, a data mining tool would pick up on that fact easily. Even humans might be puzzled by that finding. But, assuming that it also had access to the right data, our AI would also pick up on the fact that both of these also correlate to temperature. Now, could it take the next step and hypothesize that burglars are more active when it's warm because it's hard to jimmy a lock when you're freezing your ass off and not because they're high on sugary snacks? That's the sort of leap that requires a very large box. At the moment, the computing paradigms that represent things like causality are generally not the same ones that discover that hijackers prefer aisle seats. We only have the technical capacity to model small subsets of human intelligence at a time.
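
(Quick Python sketch of the ice cream example--the data is fabricated, but it shows both halves: a statistical tool "discovering" the burglary correlation, and the link vanishing once you control for temperature. Spotting the confound is the mechanical part; hypothesizing why is the part that needs the big box.)

Code:
import numpy as np

# Fake a year of daily data in which temperature drives BOTH variables.
rng = np.random.default_rng(0)
temperature = rng.uniform(0, 35, 365)                         # degrees C
ice_cream   = 10 + 3.0 * temperature + rng.normal(0, 5, 365)  # daily sales
burglaries  = 2 + 0.2 * temperature + rng.normal(0, 1, 365)   # daily count

def corr(x, y):
    # Pearson correlation coefficient.
    return np.corrcoef(x, y)[0, 1]

print(corr(ice_cream, burglaries))  # strong... and spurious

def residuals(y, x):
    # Strip the linear effect of x out of y.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: control for temperature and the "link" vanishes.
print(corr(residuals(ice_cream, temperature),
           residuals(burglaries, temperature)))  # near zero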

The development of humanlike AI (ya know, like Data or C-3PO) is still in its infancy. Quite literally, in some cases: A number of AI researchers have taken the approach that it's best to start with as blank a slate as possible and try to mirror the cognitive development of our own children. So they build baby bots. These already require small supercomputing clusters to run, so we'll need many more years of Moore's Law action before adults would be feasible.

It's important to realize that humans don't come equipped with all of our cognitive processing capabilities out of the womb. For example, infants fail at tasks requiring a notion of object permanence--that the ball still exists after it rolls behind the couch, or that mommy is still there when she plays "peekaboo" and covers her face with her hands. They usually won't pass these tests until they're more than a year old. Even dogs and chimps (and yes, some bots) outperform them in many simple cognitive tasks at that stage.

This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?

A very interesting finding is that our attempts to build AIs that learn the rules of their environments (mostly from scratch) frequently exhibit limitations similar to human children below a certain developmental age. For instance, language processors that study large numbers of grammatically correct/incorrect samples in order to figure out the rules of English grammar and then use that foundation to generate their own sentences tend to make the kinds of generalization errors common among first graders. These limitations could actually be a cause for optimism. They can potentially teach us a lot about human development.
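
(A silly sketch of the kind of error I mean--my own toy example, not anyone's actual research code. A learner that memorizes the verbs it has seen and falls back on the majority "add -ed" rule for the rest will happily say "eated" and "falled", exactly like a three-year-old.)

Code:
# Observed verb pairs; most are regular, so "add -ed" looks like a safe rule.
TRAINING = {"walk": "walked", "jump": "jumped", "play": "played", "go": "went"}

def past_tense(verb):
    # Use a memorized form if we have one; otherwise overgeneralize the rule.
    return TRAINING.get(verb, verb + "ed")

print(past_tense("go"))    # "went"  -- memorized, so it gets this right
print(past_tense("eat"))   # "eated" -- never seen, so the rule misfires
print(past_tense("fall"))  # "falled"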

And they could also indicate that AI simply has a bit of growing up to do. :kittycool


Quote
The movie sucked, btw.

I kinda liked the first part--ya know, the bit actually based on the original story. Then it got weird for no apparent reason. Then it got long. :animesweat

The same thing we do every night, Pinky...

llearch n'n'daCorna

Quote from: Tezkat on November 13, 2007, 07:34:22 AM
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?

Nothing. Even -adults- will pour a bigger shot into a short fat glass than into a tall thin one, unless they're either extremely anal (oo! Pick me!) or highly experienced bartenders.

Go figure.
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

DoctaMario

Quote from: llearch n'n'daCorna on November 13, 2007, 08:17:27 AM
Quote from: Tezkat on November 13, 2007, 07:34:22 AM
This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?

Nothing. Even -adults- will pour a bigger shot into a short fat glass than into a tall thin one, unless they're either extremely anal (oo! Pick me!) or highly experienced bartenders.

Go figure.



They usually train bartenders to pour for 4 seconds for a full shot. But what a lot of bars do to save money is serve shots in a taller glass while pouring less liquor in, thus making it LOOK like the customer is getting more liquor. It works pretty well!

superluser

Quote from: Tezkat on November 13, 2007, 07:34:22 AM
It's important to realize that humans don't come equipped with all of our cognitive processing capabilities out of the womb. For example, infants fail at tasks requiring a notion of object permanence--that the ball still exists after it rolls behind the couch, or that mommy is still there when she plays "peekaboo" and covers her face with her hands. They usually won't pass these tests until they're more than a year old. Even dogs and chimps (and yes, some bots) outperform them in many simple cognitive tasks at that stage.

This sort of cognitive development continues throughout childhood. Show preschoolers two glasses (one tall and thin, the other short and fat) each containing equal amounts of liquid. Ask them which glass contains more, and they'll always pick the tall one--even if you pour from one glass to the other right in front of them. The taller one looks bigger. Their brains just don't have the necessary cognitive infrastructure to understand the conservation of volume. Come back a few years later, and they'll get it right. What changed?

Anyone who wants to know more should watch Discovering Psychology with Philip Zimbardo.  Specifically, episodes 5 and 6.  (It's freely available from learner.org)

It deals with childhood mental development (and the tall/wide and object permanence demonstrations in particular).  There's another episode that deals with Piaget's theories, as well, but I can't recall which one.

What's really interesting is that it deals with where the theories are being proven wrong.  They gave an example of a baby younger than 4 months who was getting on her mother's nerves, so the mother tied one end of a string to the baby's leg and the other end to a keychain so that whenever the baby moved, the keys would jangle.  The mother noticed that the baby started kicking to get the keys to jangle.  The mother, who was a psychologist, also knew that, according to Piaget, babies aren't supposed to be able to have secondary circular reactions (like this) before 4 months, and that babies that young certainly don't have the concept of objects that would be necessary to understand how such reactions work.

So something else must be happening in the brain to allow babies to do this.  Fascinating stuff.


Would you like a googolplex (gzipped 57 times)?

Fuyudenki

The day I was born, I could recognize my grandmother's voice.  Apparently, she said something, and I lifted my head and looked for her.  (We've got a video of it somewhere, but I've never seen it.)

So much for their developmental timeline. :mwaha

haseeb

We are pretty much in the infancy of artificial intelligence. Maybe 30 years from now we will be able to say that we have almost mastered some aspects of it.

superluser

There's at least one other major hurdle to creating a machine that can pass the Turing test.

English (like all languages) is a moving target.  Let me start off with an example.

Did anyone have any trouble understanding what I meant by OTG in this message?

That was an initialism, and the use of initialisms was almost unheard of in 19th-century English.  Yet today, on the fly, I can come up with a nonstandard initialism intended to be used only once, and people will understand it.

What's more, you have things like Fannie Mae and Freddie Mac, which are creative names based on the pronunciation of initialisms.

And someone just told me about his nickname, nolo, which comes from his initials (NC).

The English language has changed drastically in the last hundred years, and will certainly continue to change in future years, so perhaps trying to build a dialogue simulator is necessarily a futile (or at least Sisyphean) task--by the time you get something that simulates 2007's dialogue, 30 years will have passed, and you'll have to start all over to try to get 2037's.


Would you like a googolplex (gzipped 57 times)?

llearch n'n'daCorna

OTOH (heh), by the time you build something that can generate 2007's dialogue, you've figured out a heck of a lot about the mechanisms behind it - which gives you a heck of a big step towards making one that deals with 2037's dialogue...

So, while the target moves, you also move, and the steps you take towards the target are generally larger than the steps the target takes away from you.

Zeno all over again.
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

superluser

Quote from: llearch n'n'daCorna on November 21, 2007, 05:00:19 PM
So, while the target moves, you also move, and the steps you take towards the target are generally larger than the steps the target takes away from you.

While this used to be true, I don't know if it is anymore.  Bots have made great strides since the '60s, but if you look at the most recent developments, there hasn't been much noticeable movement.

This suggests that while the English language is growing more complex on a logarithmic curve, our development of bots is on such a curve as well, and ours may be the slower of the two.


Would you like a googolplex (gzipped 57 times)?

Naldru

Quote from: superluser on November 21, 2007, 05:17:44 PM
While this used to be true, I don't know if it is anymore.  Bots have made great strides since the '60s, but if you look at the most recent developments, there hasn't been much noticeable movement.

This suggests that while the English language is growing more complex on a logarithmic curve, our development of bots is on such a curve as well, and ours may be the slower of the two.
Actually, I think that there may have been more advances recently.

One of the things that we are learning is what is really difficult and what is really easy.  If you look at Isaac Asimov's early robot stories, the first stories had robots that couldn't speak but could understand human speech.  After all, if dogs can follow human speech, understanding human speech must be easier than producing it.  What we learned was that the reverse is true.  We have many computer programs that speak: I get unwanted phone calls from them every day.  Understanding spoken language is very difficult.

Many of the spectacular successes in the beginning came because the selected tasks turned out to be easy.  What's being worked on now are the hard tasks.
Learn to laugh at yourself, and you will never be without a source of amusement.