ARTIFICIAL INTELLIGENCE

ARTIFICIAL INTELLIGENCE (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include the playing of games such as checkers or chess, natural language understanding and synthesis, computer vision, problem solving, machine learning, and robotics.

The above is a general description of the field; there is no agreed-upon definition of artificial intelligence, primarily because there is little agreement as to what constitutes intelligence. Interpretations of what it means to say an agent is intelligent vary, yet most can be categorized in one of three ways. Intelligence can be thought of as a quality, an individually held property that is separable from all other properties of the human person. Intelligence is also seen in the functions one performs, in one's actions or the ability to carry out certain tasks. Finally, some researchers see intelligence as something primarily acquired and demonstrated through relationship with other intelligent beings. Each of these understandings of intelligence has been used as the basis of an approach to developing computer programs with intelligent characteristics.

First Attempts: Symbolic AI

The field of AI is considered to have its origin in the publication of Alan Turing's paper "Computing Machinery and Intelligence" (1950). John McCarthy coined the term artificial intelligence six years later at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI, which is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of rules that can be logically defined. Just as geometers begin with a finite set of axioms and primitive objects such as points, so symbolicists, following such rationalist philosophers as Ludwig Wittgenstein and Alfred North Whitehead, posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects. Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols, combined by certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought.
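
The symbolicist premise can be pictured with a small sketch. The following Python fragment is a hypothetical toy, not any historical system: facts are represented as symbols, knowledge as a fixed set of if-then rules, and "thought" as the repeated application of those rules until no new symbols can be derived.

    # Hypothetical sketch of the symbolic approach: symbols plus rules.
    facts = {"socrates_is_human"}

    rules = [
        # (premises that must all hold, conclusion to add)
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    def forward_chain(facts, rules):
        """Apply every rule whose premises are satisfied until nothing new appears."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'socrates_is_human', 'socrates_is_mortal', 'socrates_will_die'}

On the symbolicist view, scaling this same mechanism up, with vastly more symbols and rules, is in principle all that separates such a toy from genuine intelligence.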

Symbolic AI met with immediate success in areas in which problems could be easily described using a small set of objects that operate in a limited domain in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four-square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves, looking for a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains, such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving. These early successes led to a number of remarkably optimistic predictions of the prospects for symbolic AI.
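
The look-ahead described here is conventionally implemented as a minimax search over the game tree. The sketch below is schematic; legal_moves, apply_move, and evaluate are hypothetical placeholders for a real game's rules and scoring heuristic, not the code of any particular chess program.

    # Schematic minimax: examine all moves and countermoves to a fixed depth,
    # scoring each line by how advantageous the resulting position is.
    def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)      # heuristic value of this position
        scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves)
        return max(scores) if maximizing else min(scores)

    def best_move(position, depth, legal_moves, apply_move, evaluate):
        # Pick the move whose subtree leaves the current player best off.
        return max(legal_moves(position),
                   key=lambda m: minimax(apply_move(position, m), depth - 1, False,
                                         legal_moves, apply_move, evaluate))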

Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two-year-old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise since they lack knowledge outside of that domain, knowledge that most human "experts" possess in the form of what is often called common sense. Humans make use of general knowledge, millions of things we know and apply to a situation, both consciously and subconsciously. It is now clear to AI researchers that the set of primitive facts necessary for representing human knowledge, should such a set exist at all, is exceedingly large.

Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores (Understanding Computers and Cognition, 1986), is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. When a human being learns to ride a bicycle, he or she does not do so by calculating equations of trajectory or force. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a "degenerating research project," by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should other alternatives become available. His prediction has proven to be fairly accurate.

By 2000 the once dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat's Cyc. Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts. Lenat plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, through conversation, reading, and applying logical rules to detect patterns or inconsistencies in the data it is given. Initially conceived in 1984 as a ten-year initiative, Cyc has yet to show convincing evidence of extended independent learning.

Symbolic AI is not completely dead, however. The primacy of primitive objects representable by some system of encoding is a basic assumption underlying the worldview that everything can be thought of in terms of information, a view that has been advanced by several physicists, including Freeman Dyson, Frank Tipler, and Stephen Wolfram.

Functional or Weak AI

In 1980, John Searle, in the paper "Minds, Brains, and Programs," introduced a division of the field of AI into "strong" and "weak" AI. Strong AI denoted the attempt to develop a full humanlike intelligence, while weak AI denoted the use of AI techniques either to better understand human reasoning or to solve more limited problems. Although there was little progress in developing a strong AI through symbolic programming methods, the attempt to program computers to carry out limited human functions has been quite successful. Much of what is currently labeled AI research follows a functional model, applying particular programming techniques, such as knowledge engineering, fuzzy logic, genetic algorithms, neural networking, heuristic searching, and machine learning via statistical methods, to practical problems. This view sees AI as advanced computing. It produces working programs that can take over certain human tasks, especially in situations where there is limited human control, or where the knowledge needed to solve a problem cannot be fully anticipated by human programmers. Such programs are used in manufacturing operations, transportation, education, financial markets, "smart" buildings, and even household appliances.
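
As a concrete instance of the functional view, the sketch below applies one of the techniques listed above, a genetic algorithm, to a toy problem. It is a generic illustration of the method under simplified assumptions, not any deployed system: candidate solutions are bit-strings, and "fitness" is simply how closely they match a target pattern.

    # Toy genetic algorithm: evolve bit-strings toward a target pattern.
    # Illustrates the functional stance: solve the task effectively,
    # with no claim to model how a human would solve it.
    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

    def fitness(individual):
        return sum(a == b for a, b in zip(individual, TARGET))

    def mutate(individual, rate=0.1):
        return [1 - bit if random.random() < rate else bit for bit in individual]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                              # perfect match found
        parents = population[:10]              # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    print(generation, population[0])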

For a functional AI, there need be no quality labeled "intelligence" that is shared by humans and computers. All computers need do is perform a task that requires intelligence for a human to perform. It is also unnecessary, in functional AI, to model a program after the thought processes that humans use. If results are what matter, then it is possible to exploit the speed and storage capabilities of the digital computer while ignoring parts of human thought that are not understood or easily modeled, such as intuition. This is, in fact, what was done in designing the chess-playing program Deep Blue, which beat the reigning world champion, Garry Kasparov, in 1997. Deep Blue does not attempt to mimic the thought of a human chess player. Instead, it capitalizes on the strengths of the computer by examining an extremely large number of moves, more than any human could possibly examine.

There are two problems with functional AI. The first is the difficulty of determining what falls into the category of AI and what is simply a normal computer application. A definition of AI that includes any program that accomplishes some function normally done by a human being would encompass virtually all computer programs. Even among computer scientists there is little agreement as to what sorts of programs fall under the rubric of AI. Once an application is mastered, there is a tendency to no longer define that application as AI. For example, while game playing is one of the classical fields of AI, Deep Blue's design team emphatically stated that Deep Blue was not artificial intelligence, since it used standard programming and parallel processing techniques that were in no way designed to mimic human thought. The implication here is that merely programming a computer to complete a human task is not AI if the computer does not complete the task in the same way a human would.

For a functional approach to result in a full humanlike intelligence it would be necessary not only to specify which functions make up intelligence, but also to make sure those functions are suitably congruent with one another. Functional AI programs are rarely designed to be compatible with other programs; each uses different techniques and methods, the sum of which is unlikely to capture the whole of human intelligence. Many in the AI community are also dissatisfied with a collection of task-oriented programs. The building of a general, humanlike intelligence, as difficult a goal as it may seem, remains the vision.

A Relational Approach

A third approach to AI builds on the assumption that intelligence is acquired, held, and demonstrated only through relationships with other intelligent agents. In "Computing Machinery and Intelligence," Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. A human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered intelligent. The Turing Test is based, not on the completion of any particular task or the solution of any particular problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000 there would be computers that could fool an interrogator at least 30 percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.
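
The structure of the test can be sketched in code. The fragment below is only a schematic of the protocol; the interrogator, human, and machine objects (with their ask, answer, and identify_machine methods) are hypothetical stand-ins, not any real system.

    # Schematic of the imitation game / Turing Test protocol: an interrogator
    # converses by text with two unidentified respondents and must decide
    # which one is the machine.
    import random

    def run_session(interrogator, human, machine, num_questions=5):
        respondents = {"A": human, "B": machine}
        if random.random() < 0.5:              # hide which label is which
            respondents = {"A": machine, "B": human}
        transcript = []
        for _ in range(num_questions):
            question = interrogator.ask(transcript)
            answers = {label: r.answer(question) for label, r in respondents.items()}
            transcript.append((question, answers))
        guess = interrogator.identify_machine(transcript)   # "A" or "B"
        return respondents[guess] is machine                 # True if the machine was caught

    # In Turing's sense, a machine does well if, across many such sessions,
    # interrogators identify it no better than chance.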

The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that programming the background knowledge needed for a restricted form of the imitation game would take, at a minimum, three hundred person-years to complete, and this assumes that one could identify the appropriate knowledge set at the outset. Turing suggests that, rather than trying to imitate an adult mind, one construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind.

One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology (MIT), whose robotics lab has constructed several machines, the most famous of which are named Cog and Kismet, that represent a new direction in AI in that embodiedness is crucial to their design. Their programming is distributed among the various physical parts; each joint has a small processor that controls the movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. Cog and Kismet are no longer minds in a box, but embodied systems that depend on interaction within a complex environment. They are designed to learn those tasks associated with newborns, such as eye-hand coordination, object grasping, face recognition, and basic emotional responses, through social interaction with a team of researchers. Although they have developed such abilities as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project has so far been no more successful than Lenat's Cyc in producing a machine that can interact with humans on the level of the Turing Test. However, Brooks's work represents a movement toward Turing's view that intelligence is socially acquired and demonstrated.
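
The distributed control described here can be pictured, very loosely, as many small control loops running in parallel, one per joint, with a coordinating layer that turns a higher-level goal into per-joint targets. The sketch below is only an illustration of that idea under simplified assumptions; it is not Cog's or Kismet's actual architecture or code.

    # Loose illustration of distributed, embodied control: each joint runs its
    # own small control loop; a coordinating layer distributes a whole-body
    # goal (e.g., reaching toward an object) as per-joint targets.
    class JointController:
        def __init__(self, name):
            self.name = name
            self.angle = 0.0
            self.target = 0.0

        def step(self):
            # Simple proportional controller: move part way toward the target.
            self.angle += 0.5 * (self.target - self.angle)

    class Coordinator:
        def __init__(self, joints):
            self.joints = joints

        def reach_toward(self, desired_angles):
            for joint, angle in zip(self.joints, desired_angles):
                joint.target = angle

    joints = [JointController(n) for n in ("shoulder", "elbow", "wrist")]
    arm = Coordinator(joints)
    arm.reach_toward([0.8, 1.2, 0.3])
    for _ in range(10):                        # several control ticks
        for joint in joints:
            joint.step()
    print({j.name: round(j.angle, 2) for j in joints})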

The Turing Test makes no assumptions as to how the computer would arrive at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers (e.g., Hans Moravec, Raymond Kurzweil) look to neural net research as a way to reverse engineer the brain. They hope that once scientists have the capability of designing nets with a complexity equal to that of the brain, they will find that the nets have the same power as the brain and will develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hopes that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.
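
At its simplest, a neural network of the kind mentioned here is a set of weighted connections adjusted by repeated exposure to examples. The following minimal sketch, a generic two-layer network learning the XOR function by gradient descent, is only an illustration of the idea; it models no real brain circuitry and no particular research system.

    # Minimal two-layer neural network trained on XOR: weighted connections
    # ("synapses") are nudged after each pass so that the output better
    # matches the training examples.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)                         # forward pass
        output = sigmoid(hidden @ W2 + b2)
        delta_out = (output - y) * output * (1 - output)      # backward pass
        delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ delta_out
        b2 -= 0.5 * delta_out.sum(axis=0)
        W1 -= 0.5 * X.T @ delta_hid
        b1 -= 0.5 * delta_hid.sum(axis=0)

    print(np.round(output, 2))    # should approach [[0], [1], [1], [0]]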

AI in Science Fiction

While some advances have been made, a truly intelligent computer currently remains in the realm of speculation. Though researchers have continually projected that intelligent computers are imminent, progress in AI has been limited. Computers with intentionality and self-consciousness, with fully human reasoning skills or the ability to be in relationship, exist only in the realm of dreams and desires, a realm explored in fiction and fantasy.

The artificially intelligent computer in science fiction story and film is not a prop, but a character, one that has become a staple since the mid-1950s. These characters are embodied in a variety of physical forms, ranging from the wholly mechanical (computers and robots), to the partially mechanical (cyborgs), to the completely biological (androids). A general trend from the 1950s to the 1990s has been to depict intelligent computers in an increasingly anthropomorphic way. The robots and computers of early films, such as Maria in Metropolis (1927), Robby in Forbidden Planet (1956), HAL in 2001: A Space Odyssey (1968), or R2-D2 and C-3PO in Star Wars (1977), were clearly constructs of metal. On the other hand, early science fiction stories, such as Isaac Asimov's I, Robot (1950), explored the question of how one might distinguish between robots that looked human and actual human beings. Films and stories since the 1980s, such as Blade Runner (1982), the Terminator series (1984–2002), and A.I.: Artificial Intelligence (2001), depict machines with both mechanical and biological parts that are, at least superficially, practically indistinguishable from human beings.

Fiction that features AI can be classified in two general categories. The first comprises cautionary tales that explore the consequences of creating technology for the purposes of taking over human functions. In these stories the initial impulses for creating an artificial intelligence are noble: to preserve the wisdom of a race (Forbidden Planet), to avoid nuclear war (Colossus: The Forbin Project, 1970), or to advance human knowledge (2001: A Space Odyssey). The human characters suppose that they are completely in control, only to find that they have, in the end, abdicated too much responsibility to something that is ultimately "other" to the human species. The second category comprises tales of wish fulfillment (Star Wars; I, Robot) in which the robots are not noted for their superior intelligence or capabilities but for the cheerful assistance and companionship they give their human masters. The computers in these stories are rooted in a relational rather than a functional view of human intelligence.

Religious and Ethical Implications

Many researchers in AI are committed physicalists and believe that the design of a truly intelligent machine will vindicate their belief that human beings are nothing but biological machines. Few would consider religious questions to be of import to their work. (One exception to this stance has been the robotics laboratory at MIT, which included a religious adviser, Anne Foerst, as part of the research team developing the robot Cog.) However, the assumptions that human beings are merely information-processing machines and that artifacts that are nonbiological can be genuinely intelligent have both anthropological and eschatological implications.

The most important questions raised by AI research are anthropological ones. What does it mean to be human? At what point would replacing some or all of our biological parts with mechanical components violate our integrity as human beings? Is our relationship to God contingent on our biological nature? What is the relationship of the soul to consciousness or intelligence? These questions are raised by the search for an artificial intelligence, irrespective of whether or not that search is ever successful.

Should that search be successful, ethical problems arise. What rights would an intelligent robot have? Would these be the same rights as a human being? Should an artificial intelligence be held to the same standards of moral responsibility as human beings? Should a robot be baptized or take part in other sacramental or covenantal acts? How one answers such questions depends largely on what one sees as central to our nature as human beings: mind, body, function, or relationship. Once again, whether AI becomes a reality or not, the debate over questions such as these is helpful in clarifying the principles on which our view of humanity rests.

AI also raises a set of ethical issues relevant to the search itself. In a controversial article in Wired (2000) Bill Joy, chief scientist at Sun Microsystems, warns that self-replicating robots and advances in nanotechnology could result, as soon as 2030, in a computer technology that may replace our species. Moravec of the AI lab at Carnegie Mellon University pushes the time back to 2040 but agrees that robots will displace humans from essential roles and could threaten our existence as a species. Joy calls for research in the possibly convergent fields of artificial intelligence, nanotechnology, and biotechnology to be suspended until researchers have greater certainty that such research would in no way threaten future human lives. On a lesser scale, the amount of responsibility the human community wishes to invest in autonomous or semi-autonomous machines remains a question.

The view of human identity as the information in one's brain has led several researchers to posit a new cybernetic form for human immortality. In The Age of Spiritual Machines (1999), Kurzweil predicts that by the end of the twenty-first century artificial intelligence will have resulted in effective immortality for humans. He expects that the merger of human and machine-based intelligences will have progressed to the point where most conscious entities will no longer have a permanent physical presence, but will move between mechanically enhanced bodies and machines in such a way that one's life expectancy will be indefinitely extended.

Kurzweil is not the sole holder of this expectation, though he may be among the more optimistic in his timeline. Physicists Dyson and Tipler suggest a future in which human identity is located in the information that makes up the thoughts, memories, and experiences of each person. In The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead (1994), Tipler conjectures that the universe will cease to expand and at some point end in a contraction that he calls the "omega point." Tipler sees the omega point as the coalescence of all information, including the information that has made up every person who ever lived. This point can thus be seen as corresponding to the omniscient and omnipotent God referred to in many different religious traditions. At such a point, the information making up any given individual could be reinstantiated, resulting in a form of resurrection for that person, a cybernetic immortality.

Cybernetic immortality provides one avenue for belief in a manner of human continuance that does not violate the assumption of a material basis for all existence. It is thus compatible with the most rigorous scientific theories of the natural world. However, cybernetic immortality is based on the assumptions that thoughts and memories define the human person and that consciousness is an emergent property of the complexity of the human brain. In other words, human beings are basically biological machines whose unique identity is found in the patterns that arise and are stored in the neuronal structures of the brain. If these patterns could be replicated, as in sophisticated computer technology, the defining characteristics of the person would be preserved. Such a view is not necessarily compatible with the anthropologies of most religions.

See Also

Cybernetics.

Bibliography

Daniel Crevier's AI: The Tumultuous History of the Search for Artificial Intelligence (New York, 1993) provides a clear history of the first forty years of AI research. A more critical view of the field can be found in Hubert Dreyfus's Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York, 1986). Another classic critique of AI is Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (Reading, Mass., 1986). Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 2d ed. (Cambridge, Mass., 1997), is a compilation of a variety of seminal papers on AI, including Turing's 1950 paper and John Searle's famous "Chinese Room" paper. HAL's Legacy: 2001's Computer as Dream and Reality, edited by David Stork (Cambridge, Mass., 1997), includes a good variety of papers examining the state of the various subfields that made up AI at the end of the twentieth century.

Turning from the history of the field to prognostications of its future, Mind Children: The Future of Robot and Human Intelligence (Cambridge, Mass., 1988) by Hans Moravec suggests that computers will be the next stage in human evolution, while Raymond Kurzweil, in The Age of Spiritual Machines (New York, 1999), posits a future in which human beings and computers merge. A good overview of films dealing with AI can be found in J. P. Telotte's Replications: A Robotic History of the Science Fiction Film (Urbana, Ill., 1995); fictional portrayals of AI are discussed in Patricia Warrick's The Cybernetic Imagination in Science Fiction (Cambridge, Mass., 1980). For theological implications, see Noreen L. Herzfeld, In Our Image: Artificial Intelligence and the Human Spirit (Minneapolis, 2002).

Noreen L. Herzfeld (2005)
