Chinese Room Argument


In 1980 the philosopher John R. Searle published in the journal Behavioral and Brain Sciences a simple thought experiment that he called the "Chinese Room Argument" against "Strong Artificial Intelligence (AI)." The thesis of Strong AI has since come to be called "computationalism," according to which cognition is just computation, hence mental states are just computational states:

Computationalism

According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are: the ones that the brain performs to generate the mind and its capacities. Once we know that, every system that performs those computations will have those mental states. Every computer that runs the mind's program will have a mind, because computation is hardware-independent: any hardware that is running the right program has the right computational states.
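
The hardware-independence claim can be made concrete with a toy sketch (invented here for illustration, not drawn from the article): the same "program," a small finite-state transition table, is realized on two quite different pieces of "hardware," and both realizations pass through exactly the same sequence of computational states.

    # Toy illustration: one program, two hardwares, identical
    # computational states. All names here are invented for the sketch.

    PROGRAM = {  # (state, input symbol) -> next state
        ("s0", "a"): "s1",
        ("s0", "b"): "s0",
        ("s1", "a"): "s0",
        ("s1", "b"): "s1",
    }

    def run_on_table_hardware(inputs):
        """Hardware 1: a table-lookup interpreter of PROGRAM."""
        state, trace = "s0", ["s0"]
        for sym in inputs:
            state = PROGRAM[(state, sym)]
            trace.append(state)
        return trace

    def run_on_branch_hardware(inputs):
        """Hardware 2: the same program hard-wired as branches."""
        state, trace = "s0", ["s0"]
        for sym in inputs:
            if state == "s0":
                state = "s1" if sym == "a" else "s0"
            else:
                state = "s0" if sym == "a" else "s1"
            trace.append(state)
        return trace

    # Different physical realizations, same computational states:
    assert run_on_table_hardware("abba") == run_on_branch_hardware("abba")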

The Turing Test

How do we know which program is the right program? Although it is not strictly a tenet of computationalism, an answer that many computationalists will agree to is that the right program is the one that can pass the Turing Test (TT): a program that enables a system to interact by e-mail with real people exactly the way real people do, so exactly that no person can ever tell that the computer program is not another real person. Alan M. Turing (1950) had suggested that once a computer can do everything a real person can do, so well that we cannot even tell them apart, it would be arbitrary to deny that that computer has a mind, that it is intelligent, that it can understand just as a real person can.

This, then, is the thesis that Searle set out to show was wrong: (1) mental states are just computational states, (2) the right computational states are the ones that can pass the TT, and (3) any and every hardware on which you run those computational states will have those mental states too.

Hardware-Independence

Searle's thought experiment was extremely simple. Normally, there is no way I can tell whether anyone or anything other than myself has mental states. The only mental states we can be sure about are our own. We cannot be someone else, to check whether they have mental states too. But computationalism has an important vulnerability in this regard: hardware-independence. Because any and every dynamical system (i.e., any physical hardware) that is executing the right computer program would have to have the right mental states, Searle himself can execute the computer program, thereby himself becoming the hardware, and then check whether he has the right mental states. In particular, Searle asks whether the computer that passes the TT really understands the e-mails it is receiving and sending.

The Chinese Room

To test this, Searle obviously cannot conduct the TT in English, for he already understands English. So in his thought experiment the TT is conducted in Chinese: The (hypothetical) computer program he is testing is able to pass the TT in Chinese. That means it is able to receive and send e-mail in Chinese in such a way that none of its (real) Chinese pen-pals would ever suspect that they were not communicating with a real Chinese-speaking and Chinese-understanding person. (We are to imagine the e-mail exchanges going on as frequently as we like, with as many people as we like, for as long as we like, even for an entire lifetime. The TT is not just a short-term trick.)

Symbol Manipulation

In the original version of Searle's Chinese Room Argument he imagined himself in the Chinese Room, receiving the Chinese e-mails (a long string of Chinese symbols, completely unintelligible to Searle). He would then consult the TT-passing computer program, in the form of rules written (in English) on the wall of the room, explaining to Searle exactly how he should manipulate the symbols, based on the incoming e-mail, to generate the outgoing e-mail. It is important to understand that computation is just rule-based symbol manipulation and that the manipulation and matching is done purely on the basis of the shape of the symbols, not on the basis of their meaning.
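The point can be made vivid with a toy sketch, invented here purely for illustration; a real TT-passing program would of course be incomparably more complex than a lookup table, but the principle, manipulation by shape alone, is the same.

    # Toy sketch: "rules on the wall" that map incoming symbol strings
    # onto outgoing symbol strings purely by their shapes. The
    # rule-follower matches character patterns; the meanings of the
    # symbols play no role whatsoever in the manipulation.

    RULES_ON_THE_WALL = {
        "你好吗": "我很好",    # placeholder shapes; any symbols would do
        "你是谁": "我是笔友",
    }

    def chinese_room(incoming: str) -> str:
        """Reply by pure shape-matching, with no understanding."""
        return RULES_ON_THE_WALL.get(incoming, "请再说一遍")

    # Sensible to a reader of Chinese; opaque to the rule-follower:
    print(chinese_room("你好吗"))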

Now the gist of Searle's argument is very simple: In doing all that, he would be doing exactly the same thing any other piece of hardware executing that TT-passing program would be doing: rule-fully manipulating the input symbols on the basis of their shapes and generating output symbols that make sense to a Chinese pen-pal, the kind of e-mail reply a real pen-pal would send, a pen-pal who had understood the e-mail received as well as the e-mail sent.

Understanding

But Searle goes on to point out that in executing the program he himself would not be understanding the e-mails at all! He would just be manipulating meaningless symbols, on the basis of their shapes, according to the rules on the wall. Therefore, because of the hardware-independence of computation, if Searle would not be understanding Chinese under those conditions, neither would any other piece of hardware executing that Chinese TT-passing program. So much for computationalism and the theory that cognition is just computation.

The System Reply

Searle correctly anticipated that his computationalist critics would not be happy with the handwriting on the wall: Their "System Reply" would be that Searle was only part of the TT-passing system, and that whereas Searle would not be understanding Chinese under those conditions, the system as a whole would be!

Searle rightly replied that he found it hard to believe that he plus the walls together could constitute a mental state, but, playing the game, he added: Then forget about the walls and the room. Imagine that I have memorized all the symbol manipulation rules and can conduct them from memory. Then the whole system is me: Where's the understanding?

Desperate computationalists were still ready to argue that somewhere in there, inside Searle, under those conditions, there would lurk a Chinese understanding of which Searle himself was unaware, as in multiple personality disorder. But this seems even more far-fetched than the idea that a person plus walls has a joint mental state of which the person is unaware.

Brain Power

So the Chinese Room Argument is right, such as it is, and computationalism is wrong. But if cognition is not just computation, what is it then? Here, Searle is not much help, for he first overstates what his argument has shown, concluding that it has shown (1) that cognition is not computation at all, whereas all it has shown is that cognition is not all computation. Searle also concludes that his argument has shown (2) that the Turing Test is invalid, whereas all it has shown is that the TT would be invalid if it could be passed by a purely computational system. His only positive recommendation is to turn brain-ward, trying to understand the causal powers of the brain instead of the computational powers of computers.

But it is not yet apparent what the relevant causal powers of the brain are, nor how to discover them. The TT itself is a potential guide: Surely the relevant causal power of the brain is its power to pass the TT! We know now (thanks to the Chinese Room Argument) that if a system could pass the TT via computation alone, that would not be enough. What would be missing?

The Robot Reply

One of the attempted refutations of the Chinese Room Argument, the "Robot Reply," contained the seeds of an answer, but they were sown in the wrong soil. A robot's sensors and effectors were invoked to strengthen the System Reply: It is not Searle plus the walls of the Chinese Room that constitutes the Chinese-understanding system; it is Searle plus a robot's sensors and effectors. Searle rightly points out that it would still be him doing all the computations, and it was the computations that were on trial in the Chinese Room. But perhaps the TT itself needs to be looked at more closely here:

Behavioral Capacity

Turing's original Test was indeed the e-mail version of the TT. But there is nothing in Turing's paper or his arguments on behalf of the TT to suggest that it should be restricted to candidates that are just computers, or even that it should be restricted to e-mail! The power of the TT is the argument that if the candidate can do everything a real person can do, and do it indistinguishably from the way real people do it, as judged by real people, then it would be mere prejudice to conclude that it lacked mental states when we were told it was a machine. We don't even really know what a machine is, or isn't!

But we do know that real people can do a lot more than just send e-mail to one another. They can see, touch, name, manipulate, and describe most of the things they talk about in their e-mail. Indeed, it is hard to imagine how either a real pen-pal or any designer of a TT-passing computer program could deal intelligibly with all the symbols in an e-mail message without also being able to do at least some of the things we can all do with the objects and events in the world that those symbols stand for.

Sensorimotor Grounding of Symbols

Computation, as noted, is symbol manipulation, by rules based on the symbols' shapes, not their meanings. Computation, like language itself, is universal, and perhaps all-powerful (in that it can encode just about anything). But surely if we want the ability to understand the symbols' meanings to be among the mental states of the TT-passing system, this calls for more than just the symbols and the ability to manipulate them. Some, at least, of those symbols must be grounded in something other than just more meaningless symbols and symbol manipulations; otherwise the system is in the same situation as someone trying to look up the meaning of a word in a language (let us say, Chinese) that he does not understand, in a Chinese-Chinese dictionary! E-mailing the definitions of the words would be intelligible enough to a pen-pal who already understood Chinese, but they would be of no use to anyone or anything that did not understand Chinese. Some of the symbols must be grounded in the capacity to recognize and manipulate the things in the world that the symbols refer to.
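
This dictionary-go-round can likewise be sketched in a few lines (again a toy, with invented symbols and function names): lookup that never reaches a grounded symbol just circles among meaningless shapes, whereas grounding even one symbol lets the chain of definitions bottom out.

    # Toy sketch of the dictionary-go-round: every word is defined
    # only in terms of other words, so lookup never bottoms out in
    # meaning unless some symbol is grounded outside the dictionary.

    DICTIONARY = {  # each word defined only by other dictionary words
        "zug": ["blif", "narg"],
        "blif": ["narg", "zug"],
        "narg": ["zug", "blif"],
    }

    GROUNDED = set()  # symbols tied directly to the world; empty so far

    def bottoms_out(word, seen=None):
        """True only if lookup eventually reaches a grounded symbol."""
        seen = seen if seen is not None else set()
        if word in GROUNDED:
            return True
        if word in seen:  # back where we started: circular definition
            return False
        seen.add(word)
        return any(bottoms_out(w, seen) for w in DICTIONARY.get(word, []))

    print(bottoms_out("zug"))   # False: definitions just circle back
    GROUNDED.add("narg")        # suppose "narg" is grounded in perception
    print(bottoms_out("zug"))   # True: the chain now bottoms out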

Mind Reading

So the TT candidate must be a robot, able to interact with the world that the symbols are about (including us) directly, not just via e-mail. And it must be able to do so indistinguishably from the way any of the rest of us interact with the world or with one another. That is the gist of the TT. The reason Turing originally formulated his test in its pen-pal form was so that we would not be biased by the candidate's appearance. But in today's cinematic sci-fi world we have, if anything, been primed to be overcredulous about robots, so much more capable are our familiar fictional on-screen cyborgs than any TT candidate yet designed in a cog-sci lab. In real life our subtle and biologically based "mind reading" skills (Frith and Frith 1999) will be all we need once cognitive science starts to catch up with science fiction and we can begin T-Testing in earnest.

The Other-Minds Problem

Could the Chinese Room Argument be resurrected to debunk a TT-passing robot? Certainly not. For Searle's argument depended crucially on the hardware-independence of computation. That was what allowed Searle to "become" the candidate and then report back to us (truthfully) that we were mistaken if we thought he understood Chinese. But we cannot become the TT-passing robot, to check whether it really understands, any more than we can become another person. It is this parity (between other people and other robots) that is at the heart of the TT. And anyone who thinks this is not an exacting enough test of having a mind need only remind himself that the Blind Watchmaker (Darwinian evolution), our "natural designer," is no more capable of mind reading than any of the rest of us is. That leaves only the robot to know for sure whether or not it really understands.

See also Artificial Intelligence; Computationalism; Functionalism; Machine Intelligence.

Bibliography

Anderson, David, and B. Jack Copeland. "Artificial Life and the Chinese Room Argument." Artificial Life 8 (4) (2002): 371–378.

Brown, Steven. "Peirce, Searle, and the Chinese Room Argument." Cybernetics and Human Knowing 9 (1) (2002): 23–38.

Dyer, Michael G. "Intentionality and Computationalism: Minds, Machines, Searle, and Harnad." Journal of Experimental and Theoretical Artificial Intelligence 2 (4) (1990): 303–319.

French, Robert. "The Turing Test: The First Fifty Years." Trends in Cognitive Sciences 4 (3) (2000): 115–121.

Frith, Christopher D., and Uta Frith. "Interacting Minds: A Biological Basis." Science 286 (1999): 1692–1695.

Harnad, Stevan. "The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence." In The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer, edited by Robert Epstein and Grace Peters. Amsterdam: Kluwer Academic, 2004, http://cogprints.org/3322.

Harnad, Stevan. "Can a Machine Be Conscious? How?" Journal of Consciousness Studies 10 (4–5) (2003): 69–75, http://cogprints.org/2460/.

Harnad, Stevan. "Minds, Machines, and Searle." Journal of Theoretical and Experimental Artificial Intelligence 1 (1989): 5–25, http://cogprints.org/1573/.

Harnad, Stevan. "On Searle on the Failures of Computationalism." Psycoloquy 12 (61) (2001), http://psycprints.ecs.soton.ac.uk/archive/00000190/.

Harnad, Stevan. "The Symbol Grounding Problem." Physica D 42 (1990): 335–346, http://cogprints.org/3106/.

Harnad, Stevan. "Symbol-Grounding Problem." In Encyclopedia of Cognitive Science. London: Nature Publishing Group, 2003, http://cogprints.org/3018/.

Harnad, Stevan. "What's Wrong and Right about Searle's Chinese Room Argument?" In Essays on Searle's Chinese Room Argument, edited by M. Bishop and J. Preston. New York: Oxford University Press, 2001, http://cogprints.org/4023/.

Overill, Richard E. "Views into the Chinese Room: New Essays on Searle and Artificial Intelligence." Journal of Logic and Computation 14 (2) (2004): 325–326.

Pylyshyn, Zenon W. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, MA: MIT Press, 1984.

Searle, John R. "Explanatory Inversion and Cognitive Science." Behavioral and Brain Sciences 13 (1990): 585–595.

Searle, John R. "The Failures of Computationalism: I." Psycoloquy 12 (62) (2001), http://psycprints.ecs.soton.ac.uk/archive/00000189/.

Searle, John R. "Is the Brain's Mind a Computer Program?" Scientific American 262 (1) (January 1990): 26–31.

Searle, John R. "Minds and Brains without Programs." In Mindwaves: Thoughts on Intelligence, Identity, and Consciousness, edited by Colin Blakemore and Susan Greenfield. Oxford, U.K.: Blackwell, 1987.

Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (3) (1980): 417–457, http://www.bbsonline.org/documents/a/00/00/04/84/index.html.

Searle, John R. Minds, Brains, and Science. Cambridge, MA: Harvard University Press, 1984.

Souder, Lawrence. "What Are We to Think about Thought Experiments?" Argumentation 17 (2) (2003): 203–217.

Turing, Alan M. "Computing Machinery and Intelligence." Mind 59 (236) (1950): 433–460, http://cogprints.org/499/.

Wakefield, Jerome C. "The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI." Minds and Machines 13 (2) (2003): 285–319.

Stevan Harnad (2005)
