Is Human AI Possible?

I was speaking the other day with someone I had just met about the possibilities of real artificial intelligence – the sort that is generally called “strong AI.”  “It’s not a matter of whether we can or can’t do it,” she said, “it’s just a matter of when.”

This seems to be the de facto attitude that most people have towards AI.  The line of reasoning goes something like this.  Look at how quickly technology has progressed.  Do you think that even fifteen years ago people could have imagined that we would have cell phones that double as computers, cameras, gaming devices, and everything else that they are?  Why is it so hard to imagine that, in another ten, twenty, or thirty years, technology will have progressed to the point where we can make artificial humans that think like real humans?  While this argument may seem plausible on the surface, it rests on a philosophically loaded thesis that is far from settled.

One reason to doubt this claim is laid out in John Searle’s famous “Chinese Room Argument.”  He asks you to imagine that you are in a room between two doors.  You have been given a detailed rulebook, written in English, that tells you which Chinese symbols to send back in response to whichever Chinese symbols you receive, but you have no knowledge of Chinese.  Someone outside slips pieces of paper with Chinese symbols under one of the doors.  Your job is to look those symbols up in the rulebook and slip the prescribed responses under the other door.  Certainly this is possible; it may take a long time, but you could feasibly produce sensible-looking replies and fool people outside the room who don’t know about the rulebook into thinking that the person in the room is a Chinese speaker (this is similar to the basic criterion for AI known as the Turing Test).  However, now ask yourself whether you could be said to understand Chinese in this scenario.  If you are given a program to follow and use only that, do you really have understanding of the material you are working with?  The answer, clearly, is no.
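
To make the thought experiment concrete, here is a minimal sketch in Python of such a “room.”  The rulebook entries are invented for illustration (a real one would be unimaginably larger), but the principle is the point: every step is pure symbol lookup, and nothing in the room understands Chinese.

```python
# A toy "Chinese Room": the "rulebook" is a lookup table pairing incoming
# symbol strings with canned replies. All entries are invented for
# illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def room(note: str) -> str:
    # Slip a note under one door; the prescribed reply comes out the other.
    # No step involves understanding -- only matching symbols to symbols.
    return RULEBOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(room("你好吗？"))  # a fluent-looking reply, produced with zero comprehension
```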

The point of the argument is to apply the same line of thinking to a computer.  Computers work in essentially the same way: they follow programs given to them externally and use those programs to manipulate input into output.  Think of tools like Google Translate, or even Siri.  Yes, they can (fairly) reliably turn input into useful output.  But does Siri, or your iPhone, actually consciously know English?  Clearly not.
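
The same point can be made with a toy “translator” – a hypothetical sketch, nothing like how Google Translate actually works under the hood, but alike in principle: it swaps symbols of one language for symbols of another without comprehending either.

```python
# A naive word-for-word "translator." The lexicon is a made-up toy; real
# systems are statistical and vastly larger, but they too map symbols to
# symbols without any comprehension.
LEXICON = {"the": "le", "cat": "chat", "sleeps": "dort"}

def translate(sentence: str) -> str:
    # Replace each known word; pass unknown words through unchanged.
    return " ".join(LEXICON.get(word, word) for word in sentence.lower().split())

print(translate("The cat sleeps"))  # -> "le chat dort"
```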

Now apply this to something like a humanoid robot.  Imagine a robot that looks like a human being and can act more or less like one.  Imagine even that, from the outside, it is indistinguishable from a real human being.  As long as it relies on a program, it cannot be said to actually have intelligence.  You could claim that it looks, sounds, and acts like a human being, but saying that it has the same intelligence as a human being is flat-out wrong.  As long as we think of AI in this paradigm, as something reliant on a programmed computer, it is impossible.

Another way to doubt the claims of AI is to attack its foundational presuppositions.  One such presupposition (whether held consciously or unconsciously) is that the mind is a central computer where all cognition takes place.  This is generally called the computational theory of mind.  The claim is that the brain essentially acts as a computer and that thinking is just a form of computing.  “Computer” here does not necessarily mean what we usually think of as a computer; in this formulation it means a sort of symbol manipulator that independently processes input to form output.  Only if this is the case can the sort of intelligence that we as humans have be replicated by computers in the way that artificial intelligence is generally imagined.

However, there are reasons to think that this theory is not correct.  The theory is rooted in Cartesianism and has been questioned in recent philosophy of mind.  Embodied cognition, for example, is the idea that cognition is shaped not only by the brain but by various other parts of the body.  Cognition depends not just on the brain, but on the entire body of the organism and its surroundings.  The brain needs stimuli in order for cognition to take place, and these can only come from other body parts such as the eyes, arms, feet, and so on.  Likewise, in order for there to be meaningful output, there has to be a body that reacts.  Without the whole body, cognition and self-formation can’t take place.  This is fundamentally incompatible with the idea that the brain is the central computer and the only place where cognition happens (think brain in a vat).  Making a mock brain is not enough to produce artificial intelligence – instead one would need an entire mock body and a body of experiences comparable to a human being’s.  Only then would its cognitive processes be remotely similar (any fans of Blade Runner?).  We would need biological engineering abilities far beyond our current capabilities.

But people continue to imagine and talk about AI as if it is inevitable.  Computer and cognitive scientists who have an economic interest in these theories being true go around making these sorts of claims, and their sloppy ideas trickle down to the public.  People are attracted to grand claims.  In his book What Computers Can’t Do, Hubert Dreyfus chronicles the early stages of AI and the claims made by the people working on it.  Although the book is in some ways outdated, it still makes many interesting philosophical points, and the first part, where he outlines the overly ambitious claims of AI’s early proponents, is particularly interesting.  It seems that not much has changed; judging by Hollywood science fiction, we are still captivated by the prospects of AI.

I certainly don’t doubt that technology will be able to do many of the things that we do.  I even think that robots should one day do much of the manual labor that we do now (as long as the proper social safety nets are put in place – but that’s a different question).  But we need to think of technology as what it is: a tool.  The question shouldn’t be “is AI possible?” but rather “what kind of AI is possible?”  It seems that strong human AI is not the correct answer, or at the very least won’t be for a very long time.  There are a lot of things technology can and will bring us, but we are not computers, so computers will not become us.  It might just be that simple.

Further reading/viewing:

10.1.1.83.5248.pdf

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

https://plato.stanford.edu/entries/embodied-cognition/

https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence

http://m.imdb.com/title/tt0083658/

https://en.m.wikipedia.org/wiki/Brain_in_a_vat
