Machine Intelligence

Computers beat the best human chess players. Computers guide spacecraft over vast distances and direct robotic devices to explore faraway astronomical bodies. Computers outpace humans in many respects, but are they actually intelligent? Can they think? Even if one is skeptical about the mentality of today's computers, the interesting philosophical issue remains: Might computers possess significant intelligence someday? Indeed, might computers feel or even have consciousness? And, how would we know?

The Historical Debate

These issues of machine intelligence are not new to philosophy. The debate about whether a machine might think has its philosophical roots in the seventeenth and eighteenth centuries, with the development of modern science. If the universe is fundamentally materialistic and mechanistic, as the emerging scientific paradigm suggested, it would follow that humans are nothing more than machines. Possibly, other machines might be constructed that would be capable of thought as well. Thomas Hobbes (1588–1679), who advocated a materialistic, mechanistic view, argued that reasoning is reckoning and nothing more. Humans reason by calculation with signs involving addition, subtraction, and other mathematical operations. Hobbes took these signs to be material objects that have significance as linguistic symbols. Julien La Mettrie (1709–1751), another materialist and mechanist, speculated that it might be possible to teach a language to apes and to build a mechanical man that could talk.

Not every philosopher of that era agreed with such radical predictions. René Descartes (1596–1650) held that animals are indeed complex machines but, as such, necessarily lack thought and feeling. People have bodies that are in themselves nothing but complex machines, but people also have minds: nonmaterial entities that are in time but not in space and that interact with their bodies. On this dualistic conception, the intelligence and consciousness of people exist only as part of their minds, not as part of their bodies. For Descartes, constructing a nonhuman machine that by itself had intelligence or consciousness was an impossibility. He admitted that a machine could be built that might give an impression of possessing intelligence, but it would be only a simulation of real intelligence and could be unmasked as a thoughtless machine. In fact, Descartes offered two certain tests by which a machine could be distinguished from a rational human being even if the machine resembled a human in appearance. First, although a machine may utter words, a machine will never reply appropriately to everything said in its presence in the way that a human can. Second, although a machine may perform certain actions as well as or even better than a human, a machine will not have the diversity of actions that a human has.

The Conception of Computing Machines

The contemporary debate about the possibility of machine intelligence ignited with the advent of modern electronic computers that are accurate, reliable, fast, programmable, and complex. Nobody did more in the twentieth century to construct a coherent concept of computing, and to generate the contemporary debate about the intellectual possibilities of computers, than Alan Turing (1912–1954). Turing explained computability in terms of abstract mathematical machines, now called Turing machines. A Turing machine consists of a potentially infinite tape, divided into individual cells, on which a read-write head travels either left or right one cell at a time. The read-write head follows instructions found in a table of transition rules, and this table is the program that directs the Turing machine. Each instruction specifies, for a given state of the machine and a particular symbol being read on the tape, what the read-write head should do (print a symbol, erase the symbol, move right, or move left) and which state the machine should go to next.
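A Turing machine is simple enough to simulate in a few lines of code. The following Python sketch implements the tape, read-write head, and transition table just described; the example rule table, a small hypothetical machine that adds one to a binary numeral, is illustrative only and is not anything Turing specified.

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
        """Simulate a Turing machine.

        rules maps (state, symbol) -> (symbol_to_write, move, next_state),
        where move is "L" or "R". The machine halts when no rule applies.
        """
        cells = dict(enumerate(tape))       # sparse tape: cell index -> symbol
        head = 0
        for _ in range(max_steps):
            symbol = cells.get(head, blank)
            if (state, symbol) not in rules:
                break                       # no rule applies: the machine halts
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        span = range(min(cells), max(cells) + 1)
        return "".join(cells.get(i, blank) for i in span).strip(blank)

    # Hypothetical rule table: scan right to the end of a binary numeral,
    # then add one, carrying leftward as needed.
    increment = {
        ("start", "0"): ("0", "R", "start"),
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),  # 1 plus a carry: write 0, keep carrying
        ("carry", "0"): ("1", "L", "done"),   # 0 plus a carry: write 1, finished
        ("carry", "_"): ("1", "L", "done"),   # ran off the left end: prepend a 1
    }

    print(run_turing_machine(increment, "1011"))  # prints 1100 (11 + 1 = 12)

The dictionary stands in for Turing's potentially infinite tape: cells come into existence only as the head visits them.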

Turing showed how such simple, elegant machines could compute ordinary arithmetic functions, and he conjectured that anything that is effectively computable could be computed by such a machine. In addition, Turing developed the concept of a universal Turing machine that can compute what any Turing machine can compute. Turing also showed the limitations of his machines by demonstrating that some functions are not computable, even by a universal Turing machine. Turing's seminal work on computable numbers and computing machines provided much of the conceptual foundation for the development of the modern computer. During World War II Turing applied some of his theoretical insights in designing special computing equipment to decipher the German Enigma codes. After World War II Turing led efforts to design some of the earliest computers, including the Automatic Computing Engine (ACE) in 1945.
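The best-known such limit is now called the halting problem. The Python sketch below conveys the diagonal style of argument in modern terms; the decider halts is hypothetical, and the point is that assuming it exists leads to a contradiction. (Turing's original 1936 argument concerned whether a machine ever prints a given symbol, but the diagonal structure is the same.)

    def halts(program, data):
        """Hypothetical decider: True iff program(data) eventually halts.
        Assumed to exist only for the sake of the argument."""
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever halts predicts for a program
        # applied to its own source.
        if halts(program, program):
            while True:
                pass          # predicted to halt, so loop forever
        # predicted to loop forever, so halt immediately

    # diagonal(diagonal) halts exactly when halts(diagonal, diagonal)
    # says it does not, so no correct, always-terminating halts exists.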

The concept of computing developed by Turing provided not only a theoretical foundation for computer science but also a theoretical framework for much of artificial intelligence and cognitive science. A central paradigm of these fields is that mental processes and, in particular, cognitive processes are fundamentally computational. Processes that constitute and demonstrate human intelligence and general mentality, such as perception, understanding, learning, reasoning, decision making, and action, are to be explained in terms of computations. On the computational view, a mind is an information processing device. In its strongest form the computational theory of the mind holds that an entity has a mind if and only if that entity has computational processes that generate mentality.

Three important aspects of the theory of computation support the possibility of machines possessing intelligence and various aspects of minds. First, computation is understood in terms of the manipulation of symbols. Symbolic manipulation can represent information as it is input, processed, stored, and output. If human intelligence depends on the ability to represent the world and to process information, then the symbolic nature of computation offers a promising environment in which to conceive and develop intelligent machines. Much, though not all, work on machine intelligence has been conducted within this framework.

Second, if intelligence and mentality are computational in nature, then it does not matter what material carries out the computations. The computational structures and processes are multiply realizable. They might be instantiated in human brains, in computers, or even in aliens composed of a different assortment of chemicals. All may have mentality as long as they have the appropriate computational processes. Indeed, it is possible to have mixed systems composed of different materials. Cochlear implants and bionic eyes send information to human brains from external stimuli. Humans with these implants hear and see even though parts of their processing channels are inorganic.

Third, the computational model suggests an account of the connection between mind and body that other theories of the mind leave mysterious. The computational model explains intelligence and overall mental activity on the basis of decreasingly complex components. A hierarchy of computational systems is hypothesized, each of which is made up of simpler computational systems, until at bottom, as in a computer, there is nothing but elementary logical components, the operations of which can be explained and easily understood in terms of physical processes.
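This layered decomposition can be displayed directly in code. The Python sketch below is a toy illustration of the idea rather than part of the original argument: multi-bit addition is defined from one-bit adders, one-bit adders from familiar gates, and the gates from a single elementary operation (NAND, chosen here only for convenience).

    def nand(a, b):                   # the elementary logical component
        return 1 - (a & b)

    # Next layer: familiar gates, defined purely in terms of NAND.
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    # Next layer: a one-bit adder, defined in terms of the gates.
    def full_adder(a, b, carry_in):
        total = xor(xor(a, b), carry_in)
        carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
        return total, carry_out

    # Top layer: multi-bit addition, defined in terms of one-bit adders.
    def add(xs, ys):
        """Add two equal-length little-endian lists of bits."""
        carry, out = 0, []
        for a, b in zip(xs, ys):
            bit, carry = full_adder(a, b, carry)
            out.append(bit)
        return out + [carry]

    print(add([1, 0, 1], [1, 1, 0]))  # 5 + 3 = 8, i.e., [0, 0, 0, 1]

Each layer is explained entirely in terms of the layer beneath it, which is the sense in which the computational model leaves nothing mysterious at the bottom.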

The Turing Test

For many people the phrase machine intelligence is an oxymoron. Machines by their nature are typically regarded as unintelligent and unthinking. How could a mere machine demonstrate actual intelligence? Turing believed that computing machines could be intelligent but was concerned that our judgments of the intelligence of such machines would be influenced by our biases and previous experiences with the limitations of machines. In his seminal article, "Computing Machinery and Intelligence" (1950), Turing considered the question "Can machines think?" but did so by replacing that question with another. The replacement question is explained in terms of a game that he calls "the imitation game." The imitation game is played by a man (A), a woman (B), and a human interrogator (C). The interrogator is in a room apart from the other two and tries to determine through conversation which of the other two is the man and which is the woman. Turing suggested that a teleprinter be used to communicate to avoid giving the interrogator clues through tones of voice. In the game the man may engage in deception in order to encourage the interrogator to misidentify him as the woman. The man may lie about his appearance and preferences. Turing believed that the woman's best strategy in the game is to tell the truth.

After he explained how the imitation game is played in terms of a man, a woman, and a human interrogator, Turing introduced his replacement question(s). Turing said, "We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'" (Turing, 1950, p. 434). Although his proposed version of the imitation game, now called the Turing test, may seem straightforward, many questions have been raised about how to interpret it. For example, to what extent does gender play a role in the test? Some maintain that Turing intended, or should have intended, that the computer imitate a woman just as the man did in the original imitation game. The more standard interpretation of the test is that the computer takes the part of A but that the part of B is played by a human, whether a man or a woman. On the standard interpretation the point of the test is to determine how well a computer can match the verbal behavior of a human, not necessarily a woman. The examples of questions for the test that Turing suggested are not gender specific but rather more general inquiries about writing sonnets, doing arithmetic, and solving chess problems.
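On the standard interpretation the protocol itself is simple to state. The Python sketch below is hypothetical scaffolding rather than anything Turing specified: the judge and both respondents are supplied as functions from text to text, so the only evidence reaching the judge is verbal behavior.

    import random

    def run_turing_test(judge_ask, judge_guess, human, machine, rounds=5):
        """Return True if the judge misidentifies the machine as the human.

        judge_ask(transcript) produces the next question; judge_guess(transcript)
        returns "A" or "B", the label the judge believes is the human; human
        and machine each map a question string to a reply string.
        """
        labels = {"A": human, "B": machine}
        if random.random() < 0.5:             # hide who is behind which label
            labels = {"A": machine, "B": human}
        transcript = []
        for _ in range(rounds):
            question = judge_ask(transcript)
            for name, respondent in labels.items():
                transcript.append((name, question, respondent(question)))
        return labels[judge_guess(transcript)] is machine

How many rounds there should be, who the judges are, and what counts as passing are precisely the details Turing left unspecified, as discussed below.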

Turing neglected to elaborate on many details of his test. How many questions can be asked? How many judges or rounds of judging are there? Who is the average interrogator asking questions? What counts precisely as passing the test? And, importantly, what conclusion should be drawn from a Turing test if it were passed? Turing moved quickly to replace the initial question "Can machines think?" with questions about playing the imitation game. He suggested that the original question "Can machines think?" is "too meaningless to deserve discussion" (Turing, 1950, p. 442). He could not have been claiming that the question is literally meaningless, or his own replacement project would not make sense. What he was suggesting is that words like "machine" and "think" are vague in ordinary speech, and what people typically associate with a machine is not something that has or perhaps could have intelligence. What he was proposing with his test is a way to make the overall question of machine thinking more precise so that, at least in principle, an empirical test could be conducted. Still, the issue is left open as to exactly what passing the Turing test would establish. Could it ever show that a machine is intelligent or that a machine thinks or possibly even that a machine is conscious?

A widely held misconception is that Turing proposed the test as an operational definition of thinking or considered the test to give logically necessary and sufficient conditions for machine intelligence. Critics of the test frequently point out that exhibiting intelligent behavior in this test is neither a logically necessary nor logically sufficient condition for thinking. But this common objection against the test misses the mark, for Turing never said he was giving an operational definition and never argued that the test provided a logically necessary or sufficient condition for establishing machine intelligence. Indeed, Turing argued for the opposite position. He did not take his test to be a necessary condition for intelligence, for he readily admitted that a machine might have intelligence but not imitate well. He never maintained that passing the test is logically sufficient for intelligence or thinking by a machine. On the contrary, he argued that demanding certainty in knowledge of other minds would push one into solipsism, which he rejected.

A more plausible interpretation of the Turing test is to regard it as an inductive test. If a machine passed a rigorous Turing test with probing questioning on many topics, perhaps by different judges over a reasonably extended period of time, then good inductive evidence for attributing intelligence or thinking to the machine might exist. Behavioral evidence is used routinely to make inductive judgments about the intelligence of other humans and animals, and it would seem appropriate to use behavioral evidence to evaluate machines as well. In judging human-like intelligence, linguistic behavior seems particularly salient. There would be no logical certainty in such a judgment, any more than there is logical certainty in scientific testing in general, and revision of judgments in light of new evidence might be required. Regrettably, other evidence relevant to a judgment of machine intelligence, such as evidence from non-linguistic behavior and evidence about the internal operation of the machine, cannot be directly gathered within the Turing test. Turing realized this but thought it more important to eliminate bias so that a machine would not be excluded as intelligent simply because the person making the judgment knew it was a machine.

Criticisms of the Turing Test

Turing himself considered and replied to a variety of criticisms of his test, ranging from a theological objection to an extrasensory perception objection. At least two of the objections he discussed remain popular. One is the Lady Lovelace objection, based on Ada Lovelace's remark that Charles Babbage's Analytical Engine, a nineteenth-century mechanical computer, had no pretensions to originate anything. A similar point is often made by claiming that computers only do what they are programmed to do. The objection is difficult to defend in detail because computers can surprise even their programmers, are affected by their input as well as their programming, and can learn. Of course, one might argue that, at bottom, computers are merely following rules and therefore are not creative. But to firmly establish this objection, one would need to show both that, at bottom, humans are not merely following rules and that anything merely following rules cannot be creative.

Another objection that Turing considered is the mathematical objection that utilizes results in mathematical logic, such as Kurt Gödel's incompleteness theorem. This argument, later developed by J. R. Lucas (1961) and by Roger Penrose (1989), maintains that fundamental limits of logical systems are limits of computers but not of human minds. But, as Turing himself pointed out, it has not been established that these logical limits do not apply equally well to humans.

In addition to these classical criticisms, a number of contemporary objections to the Turing test have been advanced. Robert French (1990) has maintained that the test is virtually useless because there will always be subtle subcognitive behavior that allows an interrogator to distinguish humans from machines. If true, the Turing test would be more difficult to pass and possibly not very useful, but this outcome would also enhance the inductive sufficiency of the test if it were passed.

In another criticism of the Turing test, Ned Block (1981) has suggested that a computer program that worked as a conversation jukebox, giving a stored but appropriate response to every possible remark by an interrogator throughout a conversation, would pass the test. Because the test occurs during a finite period, and in that period only a finite, though very large, number of responses can be made, such a program seems logically possible. Whether such a program could exist in practice, given the complexity of semantic relations in a conversation and the changing facts of the world, is unclear. But even taken as a thought experiment, the success of the jukebox program would at most show that the Turing test does not provide a logically sufficient condition for the possession of intelligence, a position Turing himself accepted.
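A toy version of the jukebox makes the thought experiment vivid. In the Python sketch below, every reply is looked up from a table keyed on the entire conversation so far; the canned entries are merely illustrative (the second echoes a reply from Turing's own 1950 sample dialogue), and Block's point is that a vastly larger, but still finite, table could in principle cover every exchange of bounded length.

    # Stored responses, keyed on the whole conversation up to this point.
    canned = {
        ("Hello.",):
            "Hello! Nice to meet you.",
        ("Hello.", "Please write me a sonnet."):
            "Count me out on this one. I never could write poetry.",
    }

    def jukebox_reply(history):
        """Look up the stored response for exactly this conversation."""
        return canned.get(tuple(history), "I'd rather not say.")

    print(jukebox_reply(["Hello."]))
    print(jukebox_reply(["Hello.", "Please write me a sonnet."]))

The practical obstacle is sheer size: the table must grow exponentially with the length of the conversation it is meant to cover.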

John Searle (1980) developed one of the most popular contemporary objections against machine intelligence: the Chinese Room Argument. Simply put, the argument is that a computer program running on a digital machine only manipulates symbols syntactically and necessarily lacks semantics. Thus, even if a machine passed a Turing test, it would not understand anything. A digital computer might simulate intelligence, but on Searle's view it would not have a mind. Some critics of this argument have suggested that humans acquire semantics through interaction with the environment, and possibly machines equipped with sensory inputs and motor outputs could acquire semantics in this way as well. More tellingly, the Chinese Room Argument does not validly establish what it claims. Searle has maintained that a human brain has the causal powers to produce a mind; the Chinese Room Argument does not demonstrate that computer programs, once loaded and running on a physical machine, could not have similar causal powers.

The Turing test is one possible test for machine intelligence, and one that has received enormous philosophical discussion, but it is not the only test. Normally, the intelligence of animals and other humans is tested and inferred by examining an entity's relevant behavior in various situations. Similarly, a machine's intelligence can be tested by its ability to demonstrate such processes as understanding, reasoning, and learning, regardless of how well it can imitate a human. Human intelligence is not the only kind of intelligence. Along these lines Patrick Hayes and Kenneth Ford (1995) have argued that too much emphasis on passing the Turing test has actually been detrimental to progress in artificial intelligence.

The Future of Machine Intelligence

Turing believed that human language and understanding of machines and mentality would shift by the year 2000, and indeed, the notion of a machine being intelligent is not as outlandish as it once was. In his 1950 article (p. 442) Turing also made a very famous specific prediction that has not fared as well. He said: "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." No computer has come close to meeting this standard in a rigorously conducted Turing test. But it should be noted that behind Turing's prophecy was a plan that has not come to pass. He imagined that one day a computer would learn just as a child does. And like a human the computer gradually would obtain a larger and larger understanding of the world. Machine learning in specific contexts has been a reality for decades, but general learning by a machine remains an elusive goal, and without it, the intelligence of machines will be limited.

The long-term future of machine intelligence is a matter of considerable philosophical debate. Here are four visions of the future that have been suggested. On the android vision some intelligent machines of the future will look like humans or at least resemble humans in their intellectual capacities. Because humans are the most intelligent creatures known, human intelligence is taken as the obvious standard. Turing's own proposals employed much of this vision. From this viewpoint it is sensible to ask whether robots someday will be the intellectual peers of humans and might deserve rights as rational beings. But some critics argue that computers will never be much like humans without similar emotional needs and desires. On the slave vision intelligent machines of the future will give humans increasingly sophisticated assistance but, like their not-so-intelligent predecessors, they will be slaves, possibly held in check by Isaac Asimov's well-known three laws of robotics (1991). On the successor vision machine intelligence will become increasingly sophisticated and machines will evolve beyond humans. Hans Moravec (1999) has argued that humans will be surpassed by machines in terms of intelligence within a relatively short time. Such machines might evolve and progress rapidly through a Lamarckian transmission of culture to the next generation. Finally, on the cyborg vision, advanced by Rodney Brooks (2002) and others, machine intelligence will increasingly be embedded in us. Machine intelligence will be used to augment our abilities and will blend into our nature. Machine intelligence will become part of our intelligence, and we will become, at least in part, intelligent machines.

See also Artificial Intelligence; Chinese Room Argument; Computationalism; Descartes, René; Gödel's Theorem; Hobbes, Thomas; Induction; La Mettrie, Julien Offray de; Solipsism; Turing, Alan M.

Bibliography

Asimov, Isaac. Robot Visions. New York: Penguin Books, 1991.

Block, Ned. "Psychologism and Behaviorism." Philosophical Review 90 (1981): 5–43.

Boden, Margaret. The Creative Mind: Myths and Mechanisms. London: Routledge, 2004.

Brooks, Rodney. Flesh and Machines: How Robots Will Change Us. New York: Pantheon, 2002.

Copeland, Jack, ed. The Essential Turing. New York: Oxford University Press, 2004.

Dennett, Daniel. The Intentional Stance. Cambridge, MA: MIT Press, 1987.

Descartes, René. Discourse on Method. Translated by Donald Cress. Indianapolis, IN: Hackett, 1980.

Dreyfus, Hubert, and Stuart Dreyfus. Mind over Machine. New York: Free Press, 1986.

French, Robert. "Subcognition and the Limits of the Turing Test." Mind 99 (1990): 53–65.

Gunderson, Keith. Mentality and Machines. 2nd ed. Minneapolis: University of Minnesota Press, 1985.

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press, 1985.

Hayes, Patrick, and Kenneth Ford. "Turing Test Considered Harmful." Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (1995): 972–977.

Hobbes, Thomas. Leviathan: With Selected Variants from the Latin Edition of 1668, edited by Edwin Curley. Indianapolis, IN: Hackett, 1994.

Hodges, Andrew. Alan Turing: The Enigma. New York: Simon and Schuster, 1983.

La Mettrie, Julien. L'homme Machine: A Study in the Origins of an Idea. Princeton, NJ: Princeton University Press, 1960.

Lucas, J. R. "Minds, Machines, and Gödel." Philosophy 36 (1961): 112–127.

Moor, James. "The Status and Future of the Turing Test." Minds and Machines 11 (2001): 77–93.

Moravec, Hans. Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press, 1999.

Penrose, Roger. The Emperor's New Mind. Oxford: Oxford University Press, 1989.

Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980): 417–457.

Sterrett, Susan. "Turing's Two Tests for Intelligence." Minds and Machines 10 (2000): 541–559.

Turing, Alan. "Computing Machinery and Intelligence." Mind 59 (1950): 433–460.

James H. Moor (2005)