Artificial intelligence


What is intelligence?

Overview of AI

General problem solving

Expert systems

Natural language processing

Computer vision

Robotics

Computer-assisted instruction

Resources

Certain tasks can be performed faster and more accurately by traditionally programmed computers than by human beings, particularly numerical computation and the storage, retrieval, and sorting of large quantities of information. However, the ability of computers to interact flexibly with the real world (their intelligence) remains slight. Artificial intelligence (AI) is a subfield of computer science that seeks to remedy this situation by creating software and hardware that possess some of the behavioral flexibility shown by natural intelligences, both human and animal.

In the 1940s and 1950s, the first large, electronic, digital computers were designed to perform numerical calculations set up by a human programmer. The computers did so by completing a series of clearly defined steps, or algorithms. Programmers wrote algorithmic software that precisely specified both the problem and how to solve it. AI programmers, in contrast, have sought to program computers with flexible rules for seeking solutions to problems. An AI program may be designed to modify the rules it is given or to develop entirely new rules.

What types of problem are appropriate for traditional, algorithmic computing, and what types call out for AI? Some of the tasks that are hardest for people are, fortunately, algorithmic in nature. Teachers can describe in detail the process for multiplying numbers, and accountants can state accurately the rules for completing tax forms, yet many people have difficulty performing these tasks. Straightforward, algorithmic programs can perform them easily because such tasks can be broken down into a series of precise procedures or steps that do not vary from case to case. On the other hand, tasks that require little thought for human beings can be hard to translate into algorithms and therefore difficult or (so far) impossible for computers to perform. For example, most people know that a pot of boiling water requires careful handling. We identify hot pots by flexible recognition of many possible signs: steam rising, radiant heat felt on the skin, a glimpse of blue flame or red coils under the pot, a rattling lid, and so forth. Once we know that the pot is boiling, we plan our actions accordingly. This process seems simple, yet describing exactly to a computer how to reliably conclude "this pot is hot" and take appropriate action turns out to be extremely difficult. The goal of AI is to create computers that can handle such complex, flexible situations with intelligence. One obstacle to this goal is our uncertainty or confusion about what intelligence is.

What is intelligence?

One possible definition of intelligence is the acquisition and application of knowledge. An intelligent entity, in this view, is one that learns (acquires knowledge) and is able to apply this knowledge to changing real-world situations. In this sense, a rat is intelligent, but most computers, despite their impressive number-crunching capabilities, are not. Generally, to qualify as intelligent, an AI system must use some form of information or knowledge (whether acquired from databases, sensory devices, trial and error, or all of the above) to make effective choices when confronted with data that are to some extent unpredictable. Insofar as a computer can do this, it may be said, for the purposes of AI, to display intelligence. Note that this definition is purely functional; in AI, the question of consciousness, though intriguing, need not be considered.

This limited characterization of intelligence would, perhaps, have been considered overcautious by some AI researchers in the early days, when optimism ran high. For example, U.S. economist, AI pioneer, and Nobel Prize winner Herbert Simon (1916-2001) predicted in 1965 that by 1985 "machines will be capable of doing any work a man can do." Yet decades later, despite exponential growth in memory size and processing speed, no computer even comes close to commonplace human skills like conversing, driving a car, or diapering a baby, much less doing any work a person can do. Why has progress in AI been so slow?

One answer is that while intelligence, as defined above, requires knowledge, computers are only good at handling information, which is not the same thing. Knowledge is meaningful information, and meaning is a nonmeasurable, multivalued variable arising in the real world of things and values. Bits (binary digits, 1s and 0s) have no meaning as such; they are meaningful only when people assign meanings to them. Consider a single bit, a 1: its information content is one bit regardless of what it means, yet it may mean nothing or anything, including "The circuit is connected," "We surrender," "It is more likely to rain than snow," and "I like apples." The question for AI is, how can information be made meaningful to a computer? Simply adding more bits does not work, for meaning arises not from information as such, but from relationships involving the real world. In one form or another, this basic problem has stymied AI research for decades. Nor is it the only such problem. Another problem is that computers, even those employing fuzzy logic and autonomous learning, function by processing symbols (e.g., 1s and 0s) according to rules (e.g., those of Boolean algebra), yet human beings do not usually think by processing symbols according to rules. Humans are able to think this way, as when we do arithmetic, but more commonly we interpret situations, leap to conclusions, utter sentences, and plan actions using thought processes that do not involve symbolic computation at all. What our thought processes really are (that is, what our intelligence is, precisely) and how to translate them into, or mimic them by means of, a system of computable symbols and rules is a problem that remains unsolved, in general, by AI researchers.

In 1950, British mathematician Alan Turing (1912-1954) proposed a hypothetical game to help decide the issue of whether a given machine is truly intelligent. The "imitation game," as Turing originally called it, consisted of a human questioner in a room typing questions on a keyboard. In another room, an unseen respondent, either a human or a computer, would type back answers. The questioner could pose queries to the respondent in an attempt to determine whether he or she was corresponding with a human or a computer. In Turing's opinion, if the computer could fool the questioner into believing that he or she was having a dialog with a human being, then the computer could be said to be truly intelligent.

The Turing test is obviously biased toward human language prowess, which most AI programs today do not even seek to emulate due to the extreme difficulty of the problem. It is significant that even the most advanced AI programs devoted to natural language are as far as ever from passing the Turing test. Intelligence has proved a far tougher nut to crack than the pioneers of AI believed half a century ago. Computers remain, by human standards, profoundly unintelligent.

Even so, AI has made many gains since the 1950s. AI software is now present in many devices, such as automatic teller machines (ATMs), that are part of daily life, and is finding increasing commercial application in many industries.

Overview of AI

All AI computer programs are built on two basic elements: a knowledge base and an inferencing capability. (Inferencing means drawing conclusions based on logic and prior knowledge.) A knowledge base is made up of many discrete units of information (representing facts, concepts, theories, procedures, and relationships), all relevant to a particular task or aspect of the world. Programs are written to give the computer the ability to manipulate this information and to reason, make judgments, reach conclusions, and choose solutions to the problem at hand, such as guessing whether a series of credit-card transactions involves fraud or driving an automated rover across a rocky Martian landscape. Whereas conventional, deterministic software must follow a strictly logical series of steps to reach a conclusion, AI software uses the techniques of search and pattern matching; it may also, in some cases, modify its knowledge base or its own structure (that is, learn). Pattern matching may still be algorithmic; that is, the computer must be told exactly where to look in its knowledge base and what constitutes a match. The computer searches its knowledge base for specific conditions or patterns that fit the criteria of the problem to be solved. Microchips have increased computational speed, allowing AI programs to quickly scan huge arrays of data. For example, computers can scan enough possible chess moves to provide a challenging opponent for even the best human players. Artificial intelligence has many other applications, including problem solving in mathematics and other fields, expert systems in medicine, simple natural language processing, robotics, and education.

The ability of some AI programs to solve problems based on facts rather than on a predetermined series of steps is what most closely resembles thinking and causes some in the AI field to argue that such devices are indeed intelligent.

General problem solving

Problem solving is thus something AI does very well as long as the problem is narrow in focus and clearly defined. For example, mathematicians, scientists, and engineers are often called upon to prove theorems. (A theorem is a mathematical statement that is part of a larger theory or structure of ideas.) Because the formulas involved in such tasks may be large and complex, this can take an enormous amount of time, thought, and trial and error. A specially designed AI program can reduce and simplify such formulas in a fraction of the time needed by human workers.

Artificial intelligence can also assist with problems in planning. An effective step-by-step sequence of actions that has the lowest cost and fewest steps is very important in business and manufacturing operations. An AI program can be designed that includes all possible steps and outcomes. The programmer must also set some criteria with which to judge the outcome, such as whether speed is more important than cost in accomplishing the task, or whether lowest cost is the desired result, regardless of how long it takes. Such a program can generate a plan in far less time than traditional methods require.
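
A minimal sketch in Python of such a planner, using an invented plan space (the step names, durations, and costs below are illustrative only): the same enumeration of candidate plans is scored against either criterion.

    # Hypothetical plan space: each step leads from one state to another and
    # carries a duration and a cost. All names and numbers are illustrative.
    steps = {
        ("raw", "cut"):       {"hours": 1, "cost": 20},
        ("raw", "mold"):      {"hours": 3, "cost": 5},
        ("cut", "assemble"):  {"hours": 2, "cost": 10},
        ("mold", "assemble"): {"hours": 1, "cost": 15},
    }

    def all_plans(state, goal, path=()):
        """Enumerate every sequence of steps leading from state to goal."""
        if state == goal:
            yield path
            return
        for src, dst in steps:
            if src == state:
                yield from all_plans(dst, goal, path + ((src, dst),))

    def total(plan, criterion):
        return sum(steps[step][criterion] for step in plan)

    plans = list(all_plans("raw", "assemble"))
    print(min(plans, key=lambda p: total(p, "hours")))  # fastest plan
    print(min(plans, key=lambda p: total(p, "cost")))   # cheapest plan

Under the "hours" criterion the program picks the cutting route; under "cost" it prefers molding, even though that route takes longer.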

Expert systems

The expert system is a major application of AI today. Also known as knowledge-based systems, expert systems act as intelligent assistants to human experts or serve as a resource to people who may not have access to an expert. The major difference between an expert system and a simple database containing information on a particular subject is that the database can only give the user discrete facts about the subject, whereas an expert system uses reasoning to draw conclusions from stored information. The purpose of this AI application is not to replace human experts, but to make their knowledge and experience more widely available.

An expert system has three parts: knowledge base, inference engine, and user interface. The knowledge base contains both declarative (factual) and procedural (rules-of-usage) knowledge in a very narrow field. The inference engine runs the system by determining which procedural knowledge to access in order to obtain the appropriate declarative knowledge, then draws conclusions and decides when an applicable solution is found.
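
A minimal sketch of this division of labor, with invented facts and rules (not drawn from any real expert system): the knowledge base holds declarative facts and procedural if-then rules, and a forward-chaining inference engine fires rules until no new conclusion emerges.

    # Knowledge base: declarative facts plus if-then rules (all illustrative).
    facts = {"fever", "cough"}
    rules = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "fatigue"}, "recommend_rest"),
        ({"possible_flu"}, "recommend_fluids"),
    ]

    def infer(facts, rules):
        """Inference engine: fire every rule whose conditions are all
        satisfied, repeating until no rule adds a new conclusion."""
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer(set(facts), rules))
    # {'fever', 'cough', 'possible_flu', 'recommend_fluids'}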

An interface is usually defined as the point where the machine and the human touch. An interface is usually a keyboard, mouse, or similar device. In an expert system, there are actually two different user interfaces: one is for the designer of the system (who is generally experienced with computers); the other is for the user (generally a computer novice). Because most users of an expert system will not be computer experts, it is important that the system be easy for them to use. All user interfaces are bidirectional; that is, they are able to receive information from the user and respond to the user with recommendations. The designer's user interface must also be capable of adding new information to the knowledge base.

Natural language processing

Natural language is human language. Natural-language-processing programs use artificial intelligence to allow a user to communicate with a computer in the user's natural language. The computer can both understand and respond to commands given in a natural language.

Computer languages are artificial languages, invented for the sake of communicating instructions to computers and enabling them to communicate with each other. Most computer languages consist of a combination of symbols, numbers, and some words. These languages are complex and may take years to master. By programming computers (via computer languages) to respond to our natural languages, we make them easier to use.

However, there are many problems in trying to make a computer understand people. Four problems arise that can cause misunderstanding: (1) Ambiguity: confusion over what is meant due to multiple meanings of words and phrases. (2) Imprecision: thoughts are sometimes expressed in vague and inexact terms. (3) Incompleteness: the entire idea is not presented, and the listener is expected to read between the lines. (4) Inaccuracy: spelling, punctuation, and grammar problems can obscure meaning. When we speak to one another, furthermore, we generally expect to be understood because our common language assumes all the meanings that we share as members of a specific cultural group. To a nonnative speaker, who shares less of our cultural background, our meaning may not always be clear. It is even more difficult for computers, which have no share at all in the real-world relationships that confer meaning upon information, to correctly interpret natural language.

To alleviate these problems, natural language processing programs seek to analyze syntax (the way words are put together in a sentence or phrase); semantics (the derived meaning of the phrase or sentence); and context (the meaning of distinct words within a sentence). But even this is not enough. The computer must also have access to a dictionary that contains definitions of every word and phrase it is likely to encounter, and it may also use keyword analysis, a pattern-matching technique in which the program scans the text, looking for words that it has been programmed to recognize. If a keyword is found, the program responds by manipulating the text to form a reasonable response.
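
Keyword analysis can be sketched in miniature; the patterns and templates below are invented for illustration. The program has no understanding of the text; it merely recognizes a programmed pattern and manipulates the matched words into a response.

    import re

    # Programmed keyword patterns and response templates (illustrative only).
    patterns = [
        (re.compile(r"books? (?:about|on) (.+)", re.I),
         "Searching the catalog for material on: {}"),
        (re.compile(r"\bI need (.+)", re.I),
         "Why do you need {}?"),
    ]

    def respond(sentence):
        """Scan for a known keyword pattern; echo the matched text back."""
        for pattern, template in patterns:
            match = pattern.search(sentence)
            if match:
                return template.format(match.group(1).rstrip("?.!"))
        return "Please rephrase your request."

    print(respond("Do you have any books about volcanoes?"))
    # Searching the catalog for material on: volcanoes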

In its simplest form, a natural language processing program works like this: a sentence is typed in on the keyboard; if the program can derive meaning (that is, if it has a reference in its knowledge base for every word and phrase), it will respond more or less appropriately. An example of a computer with a natural language processor is the computerized card catalog available in many public libraries. The main menu usually offers four choices for looking up information: search by author, search by title, search by subject, or search by keyword. If you want a list of books on a specific topic or subject, you type in the appropriate phrase. You are asking the computer, in English, to tell you what is available on the topic. The computer usually responds in a very short time, in English, with a list of books along with call numbers so you can find what you need.

Computer vision

Computer vision is the use of a computer to analyze and evaluate visual information. A camera is used to collect visual data. The camera translates the image into a series of electrical signals. This data is analog in nature; that is, it is directly measurable and quantifiable. A digital computer, however, operates using numbers expressed directly as digits. It cannot read analog signals, so the image must be digitized using an analog-to-digital converter. The image becomes a very long series of binary numbers that can be stored and interpreted by the computer. Just how long the series is depends on how densely packed the pixels are in the visual image. To get an idea of pixels and digitized images, take a close look at a newspaper photograph. If you move the paper very close to your eyes, you will notice that the image is a sequence of black and white dots (called pixels, for picture elements) arranged in a certain pattern. When you move the picture away from your eyes, the picture becomes clearly defined and your brain is able to recognize the image.
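
The digitizing step can be sketched as follows; the sampled intensity values, standing in for the camera's analog signal, are invented. A one-bit analog-to-digital conversion turns each sample into a black or white pixel of the kind just described.

    # Sampled light intensities, 0.0 (dark) to 1.0 (bright); values invented.
    analog_scan = [
        [0.9, 0.8, 0.2, 0.1],
        [0.7, 0.3, 0.2, 0.6],
        [0.1, 0.2, 0.8, 0.9],
    ]

    THRESHOLD = 0.5  # 1-bit analog-to-digital conversion: above = 1, below = 0

    digital_image = [[1 if sample > THRESHOLD else 0 for sample in row]
                     for row in analog_scan]

    for row in digital_image:
        print("".join("#" if bit else "." for bit in row))
    # ##..
    # #..#
    # ..##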

Computer vision works much the same way. Clues provided by the arrangement of pixels in the image give information as to the relative color and texture of an object, as well as the distance between objects. In this way, the computer can interpret and analyze visual images. In the field of robotics, visual analysis is very important.

Robotics

Robotics is the study of robots, which are machines that can be programmed to perform manual tasks. Most robots in use today perform various functions in an industrial setting. These robots typically are used in factory assembly lines, by the military and law enforcement agencies, or in hazardous waste facilities handling substances far too dangerous for humans to handle safely.

No useful, real-world robots resemble the humanoid creations of popular science fiction. Instead, they usually consist of a manipulator (arm), an end effector (hand), and some kind of control device. Industrial robots generally are programmed to perform repetitive tasks in a highly controlled environment. However, more research is being done in the field of intelligent robots that can learn from their environment and move about it autonomously. These robots use AI programming techniques to understand their environment and make appropriate decisions based on the information obtained. In order to learn about one's environment, one must have a means of sensing the environment. Artificial intelligence programs allow the robot to gather information about its surroundings by using one of the following techniques: contact sensing, in which a robot sensor physically touches another object; noncontact sensing, such as computer vision, in which the robot sensor does not physically touch the object but uses a camera to obtain and record information; and environmental sensing, in which the robot can sense external changes in the environment, such as temperature or radiation.

Much recent robotics research centers around mobile robots that can cope with environments that are hostile to humans, such as damaged nuclear-reactor cores, active volcanic craters, or the surfaces of other planets. The twin Mars Exploration Rovers that have been exploring opposite sides of Mars since 2004 can navigate autonomously for short intervals thanks to AI software.

Computer-assisted instruction

Intelligent computer-assisted instruction (ICAI) has three basic components: problem-solving expertise, a student model, and a tutoring module. The student using this type of program is presented with some information from the problem-solving expertise component, which is the knowledge base of this type of AI program. The student responds in some way to the material that was presented, either by answering questions or otherwise demonstrating his or her understanding. The student model analyzes the student's responses and decides on a course of action. Typically this involves either presenting some review material or allowing the student to advance to the next level of knowledge presentation. The tutoring module may or may not be employed at this point, depending on the student's level of mastery of the material. The system does not allow the student to advance further than his or her own level of mastery.
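
A minimal sketch of that loop, with invented lesson content: the problem-solving expertise supplies material, the student model judges each response and decides whether to advance, and the tutoring module steps in when mastery is not shown.

    # Problem-solving expertise: the knowledge to be taught (illustrative).
    lessons = [("2 + 2 = ?", "4"), ("3 * 3 = ?", "9")]

    def run_session(next_answer):
        level = 0
        while level < len(lessons):
            question, correct = lessons[level]
            if next_answer(question) == correct:  # student model: mastery shown
                level += 1                        # advance to the next level
            else:
                print("Tutor: let's review", question)  # tutoring module
        print("All levels mastered.")

    scripted = iter(["5", "4", "9"])              # a simulated student
    run_session(lambda question: next(scripted))
    # Tutor: let's review 2 + 2 = ?
    # All levels mastered.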

Most ICAI programs in use today operate in a set sequence: presentation of new material, evaluation of student response, and employment of tutorial material (if necessary). However, researchers at Yale University have created software that uses a more Socratic way of teaching. These programs encourage discovery and often will not respond directly to a student's questions about a specific topic. The basic premise of this type of computer-assisted learning is to present new material only when a student needs it, which is when the brain is most ready to accept and retain the information. This is exactly the scenario most teachers hope for: students who become adroit self-educators, enthusiastically seeking the wisdom and truth that is meaningful to them. The cost of these programs, however, can be far beyond the means of many school districts. For this reason, these types of ICAI are used mostly in corporate training settings.

See also Automation; Computer, analog; Computer, digital; Computer software; Cybernetics.

Resources

BOOKS

Brighton, Henry. Introducing Artificial Intelligence. Cambridge, MA: Totem Books, 2004.

McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Wellesley, MA: AK Peters, Ltd., 2005.

Padhy, N. P. Artificial Intelligence and Intelligent Systems. Oxford: Oxford University Press, 2005.

PERIODICALS

Aleksander, Igor. "Beyond Artificial Intelligence." Nature 429 (2004): 701-702.

Johanna Haaxma-Jurek

Artificial Intelligence


ARTIFICIAL INTELLIGENCE (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include the playing of games such as checkers or chess, natural language understanding and synthesis, computer vision, problem solving, machine learning, and robotics.

The above is a general description of the field; there is no agreed-upon definition of artificial intelligence, primarily because there is little agreement as to what constitutes intelligence. Interpretations of what it means to say an agent is intelligent vary, yet most can be categorized in one of three ways. Intelligence can be thought of as a quality, an individually held property that is separable from all other properties of the human person. Intelligence is also seen in the functions one performs, in one's actions or the ability to carry out certain tasks. Finally, some researchers see intelligence as something primarily acquired and demonstrated through relationship with other intelligent beings. Each of these understandings of intelligence has been used as the basis of an approach to developing computer programs with intelligent characteristics.

First Attempts: Symbolic AI

The field of AI is considered to have its origin in the publication of Alan Turing's paper "Computing Machinery and Intelligence" (1950). John McCarthy coined the term artificial intelligence six years later at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI, which is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of rules that can be logically defined. Just as geometers begin with a finite set of axioms and primitive objects such as points, so symbolicists, following such rationalist philosophers as Ludwig Wittgenstein and Alfred North Whitehead, posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects. Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols, combined by certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought.

Symbolic AI met with immediate success in areas in which problems could be easily described using a small set of objects that operate in a limited domain in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four-square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves, looking for a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains, such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving. These early successes led to a number of remarkably optimistic predictions of the prospects for symbolic AI.
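
The lookahead idea can be sketched in miniature. The game tree below is invented and far too small to be chess: leaves hold a position's value to the maximizing player, and the minimax procedure examines every move and countermove before choosing.

    # A tiny, invented game tree: move -> subtree; leaves are position values
    # from the maximizing player's point of view.
    game_tree = {
        "advance": {"block": 3, "counter": -2},
        "castle":  {"push": 1, "trade": 0},
    }

    def minimax(node, maximizing):
        """Examine all moves and countermoves, assuming best play by both."""
        if not isinstance(node, dict):   # a leaf: an evaluated position
            return node
        values = [minimax(child, not maximizing) for child in node.values()]
        return max(values) if maximizing else min(values)

    best = max(game_tree, key=lambda move: minimax(game_tree[move], False))
    print("best move:", best)   # castle: it guarantees a value of at least 0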

Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two-year-old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise, since they lack knowledge outside of that domain, knowledge that most human "experts" possess in the form of what is often called common sense. Humans make use of general knowledge, the millions of things we know and apply to a situation, both consciously and subconsciously. It is now clear to AI researchers that the set of primitive facts necessary for representing human knowledge, should such a set exist, is exceedingly large.

Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores (Understanding Computers and Cognition, 1986), is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. When a human being learns to ride a bicycle, he or she does not do so by calculating equations of trajectory or force. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a "degenerating research project," by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should other alternatives become available. His prediction has proven to be fairly accurate. By 2000 the once dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat's Cyc project. Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts. Lenat plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, through conversation, reading, and applying logical rules to detect patterns or inconsistencies in the data Cyc is given. Initially conceived in 1984 as a ten-year initiative, Cyc has yet to show convincing evidence of extended independent learning.

Symbolic AI is not completely dead, however. The primacy of primitive objects representable by some system of encoding is a basic assumption underlying the worldview that everything can be thought of in terms of information, a view that has been advanced by several physicists, including Freeman Dyson, Frank Tipler, and Stephen Wolfram.

Functional or Weak AI

In 1980, John Searle, in the paper "Minds, Brains, and Programs," introduced a division of the field of AI into "strong" and "weak" AI. Strong AI denoted the attempt to develop a full humanlike intelligence, while weak AI denoted the use of AI techniques to either better understand human reasoning or to solve more limited problems. Although there was little progress in developing a strong AI through symbolic programming methods, the attempt to program computers to carry out limited human functions has been quite successful. Much of what is currently labeled AI research follows a functional model, applying particular programming techniques, such as knowledge engineering, fuzzy logic, genetic algorithms, neural networking, heuristic searching, and machine learning via statistical methods, to practical problems. This view sees AI as advanced computing. It produces working programs that can take over certain human tasks, especially in situations where there is limited human control, or where the knowledge needed to solve a problem cannot be fully anticipated by human programmers. Such programs are used in manufacturing operations, transportation, education, financial markets, "smart" buildings, and even household appliances.

For a functional AI, there need be no quality labeled "intelligence" that is shared by humans and computers. All computers need do is perform a task that requires intelligence for a human to perform. It is also unnecessary, in functional AI, to model a program after the thought processes that humans use. If results are what matter, then it is possible to exploit the speed and storage capabilities of the digital computer while ignoring parts of human thought that are not understood or easily modeled, such as intuition. This is, in fact, what was done in designing the chess-playing program Deep Blue, which beat the reigning world champion, Garry Kasparov, in 1997. Deep Blue does not attempt to mimic the thought of a human chess player. Instead, it capitalizes on the strengths of the computer by examining an extremely large number of moves, more than any human could possibly examine.

There are two problems with functional AI. The first is the difficulty of determining what falls into the category of AI and what is simply a normal computer application. A definition of AI that includes any program that accomplishes some function normally done by a human being would encompass virtually all computer programs. Even among computer scientists there is little agreement as to what sorts of programs fall under the rubric of AI. Once an application is mastered, there is a tendency to no longer define that application as AI. For example, while game playing is one of the classical fields of AI, Deep Blue's design team emphatically stated that Deep Blue was not artificial intelligence, since it used standard programming and parallel processing techniques that were in no way designed to mimic human thought. The implication here is that merely programming a computer to complete a human task is not AI if the computer does not complete the task in the same way a human would.

For a functional approach to result in a full humanlike intelligence it would be necessary not only to specify which functions make up intelligence, but also to make sure those functions are suitably congruent with one another. Functional AI programs are rarely designed to be compatible with other programs; each uses different techniques and methods, the sum of which is unlikely to capture the whole of human intelligence. Many in the AI community are also dissatisfied with a collection of task-oriented programs. The building of a general, humanlike intelligence, as difficult a goal as it may seem, remains the vision.

A Relational Approach

A third approach to AI builds on the assumption that intelligence is acquired, held, and demonstrated only through relationships with other intelligent agents. In "Computing Machinery and Intelligence," Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. A human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered intelligent. The Turing Test is based, not on the completion of any particular task or the solution of any particular problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000 there would be computers that could fool an interrogator at least 30 percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.

The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that the programming of background knowledge needed for a restricted form of the game would take, at a minimum, three hundred person-years to complete. This is assuming that one could identify the appropriate knowledge set at the outset. Turing suggests, rather than trying to imitate an adult mind, that one construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind. One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology (MIT), whose robotics lab has constructed several machines, the most famous of which are named Cog and Kismet, that represent a new direction in AI in that embodiment is crucial to their design. Their programming is distributed among the various physical parts; each joint has a small processor that controls movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. Cog and Kismet are no longer minds in a box, but embodied systems that depend on interaction within a complex environment. They are designed to learn those tasks associated with newborns, such as eye-hand coordination, object grasping, face recognition, and basic emotional responses, through social interaction with a team of researchers. Although they have developed such abilities as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project has so far been no more successful than Lenat's Cyc in producing a machine that could interact with humans on the level of the Turing Test. However, Brooks's work represents a movement toward Turing's opinion that intelligence is socially acquired and demonstrated.

The Turing Test makes no assumptions as to how the computer would arrive at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers (e.g., Hans Moravec, Raymond Kurzweil) look to neural net research as a way to reverse engineer the brain. They hope that once scientists have the capability of designing nets with a complexity equal to that of the brain, they will find that the nets have the same power as the brain and will develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hopes that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.

AI in Science Fiction

While some advances have been made, a truly intelligent computer currently remains in the realm of speculation. Though researchers have continually projected that intelligent computers are imminent, progress in AI has been limited. Computers with intentionality and self-consciousness, with fully human reasoning skills or the ability to be in relationship, exist only in the realm of dreams and desires, a realm explored in fiction and fantasy.

The artificially intelligent computer in science fiction story and film is not a prop, but a character, one that has become a staple since the mid-1950s. These characters are embodied in a variety of physical forms, ranging from the wholly mechanical (computers and robots), to the partially mechanical (cyborgs), to the completely biological (androids). A general trend from the 1950s to the 1990s has been to depict intelligent computers in an increasingly anthropomorphic way. The robots and computers of early films, such as Maria in Metropolis (1926), Robby in Forbidden Planet (1956), Hal in 2001: A Space Odyssey (1968), or R2D2 and C3PO in Star Wars (1977), were clearly constructs of metal. On the other hand, early science fiction stories, such as Isaac Asimov's I, Robot (1950), explored the question of how one might distinguish between robots that looked human and actual human beings. Films and stories since the 1980s, such as Blade Runner (1982), The Terminator series (1984-2002), and A.I.: Artificial Intelligence (2001), depict machines with both mechanical and biological parts that are, at least superficially, practically indistinguishable from human beings.

Fiction that features AI can be classified in two general categories. The first comprises cautionary tales that explore the consequences of creating technology for the purposes of taking over human functions. In these stories the initial impulses for creating an artificial intelligence are noble: to preserve the wisdom of a race (Forbidden Planet), to avoid nuclear war (Colossus: The Forbin Project, 1970), or to advance human knowledge (2001: A Space Odyssey). The human characters suppose that they are completely in control, only to find that they have, in the end, abdicated too much responsibility to something that is ultimately "other" to the human species. The second category comprises tales of wish fulfillment (Star Wars; I, Robot) in which the robots are not noted for their superior intelligence or capabilities but for the cheerful assistance and companionship they give their human masters. The computers in these stories are rooted in a relational rather than a functional view of human intelligence.

Religious and Ethical Implications

Many researchers in AI are committed physicalists and believe that the design of a truly intelligent machine will vindicate their belief that human beings are nothing but biological machines. Few would consider religious questions to be of import to their work. (One exception to this stance has been the robotics laboratory at MIT, which included a religious adviser, Anne Foerst, as part of the research team developing the robot Cog.) However, the assumptions that human beings are merely information-processing machines and that artifacts that are nonbiological can be genuinely intelligent have both anthropological and eschatological implications.

The most important questions raised by AI research are anthropological ones. What does it mean to be human? At what point would replacing some or all of our biological parts with mechanical components violate our integrity as human beings? Is our relationship to God contingent on our biological nature? What is the relationship of the soul to consciousness or intelligence? These questions are raised by the search for an artificial intelligence, irrespective of whether or not that search is ever successful.

Should that search be successful, ethical problems arise. What rights would an intelligent robot have? Would these be the same rights as a human being? Should an artificial intelligence be held to the same standards of moral responsibility as human beings? Should a robot be baptized or take part in other sacramental or covenantal acts? How one answers such questions depends largely on what one sees as central to our nature as human beings: mind, body, function, or relationship. Once again, whether AI becomes a reality or not, the debate over questions such as these is helpful in clarifying the principles on which our view of humanity rests.

AI also raises a set of ethical issues relevant to the search itself. In a controversial article in Wired (2000), Bill Joy, chief scientist at Sun Microsystems, warns that self-replicating robots and advances in nanotechnology could result, as soon as 2030, in a computer technology that may replace our species. Moravec of the AI lab at Carnegie Mellon University pushes the time back to 2040 but agrees that robots will displace humans from essential roles and could threaten our existence as a species. Joy calls for research in the possibly convergent fields of artificial intelligence, nanotechnology, and biotechnology to be suspended until researchers have greater certainty that such research would in no way threaten future human lives. On a lesser scale, the amount of responsibility the human community wishes to invest in autonomous or semi-autonomous machines remains a question.

The view of human identity as the information in one's brain has led several researchers to posit a new cybernetic form for human immortality. In The Age of Spiritual Machines (1999), Kurzweil predicts that by the end of the twenty-first century artificial intelligence will have resulted in effective immortality for humans. He expects that the merger of human and machine-based intelligences will have progressed to the point where most conscious entities will no longer have a permanent physical presence, but will move between mechanically enhanced bodies and machines in such a way that one's life expectancy will be indefinitely extended. Kurzweil is not the sole holder of this expectation, though he may be among the more optimistic in his timeline. Physicists Dyson and Tipler suggest a future in which human identity is located in the information that makes up the thoughts, memories, and experiences of each person. In The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead (1994), Tipler conjectures that the universe will cease to expand and at some point end in a contraction that he calls the "omega point." Tipler sees the omega point as the coalescence of all information, including the information that has made up every person who ever lived. This point can thus be seen as corresponding to the omniscient and omnipotent God referred to in many different religious traditions. At such a point, the information making up any given individual could be reinstantiated, resulting in a form of resurrection for that person, a cybernetic immortality. Cybernetic immortality provides one avenue for belief in a manner of human continuance that does not violate the assumption of a material basis for all existence. It is thus compatible with the most rigorous scientific theories of the natural world. However, cybernetic immortality is based on the assumptions that thoughts and memories define the human person and that consciousness is an emergent property of the complexity of the human brain. In other words, human beings are basically biological machines whose unique identity is found in the patterns that arise and are stored in the neuronal structures of the brain. If these patterns could be replicated, as in sophisticated computer technology, the defining characteristics of the person would be preserved. Such a view is not necessarily compatible with the anthropologies of most religions.

See Also

Cybernetics.

Bibliography

Daniel Crevier's AI: The Tumultuous History of the Search for Artificial Intelligence (New York, 1993) provides a clear history of the first forty years of AI research. A more critical view of the field can be found in Hubert Dreyfus's Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York, 1986). Another classic critique of AI is Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (Reading, Mass., 1986). Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 2d ed. (Cambridge, Mass., 1997) is a compilation of a variety of seminal papers on AI, including Turing's 1950 paper and John Searle's famous "Chinese Room" paper. HAL's Legacy: 2001's Computer as Dream and Reality, edited by David Stork (Cambridge, Mass., 1997), includes a good variety of papers examining the state of the various subfields that made up AI at the end of the twentieth century.

Turning from the history of the field to prognostications of its future, Mind Children: The Future of Robot and Human Intelligence (Cambridge, Mass., 1988) by Hans Moravec suggests that computers will be the next stage in human evolution, while Raymond Kurzweil, in The Age of Spiritual Machines (New York, 1999), posits a future in which human beings and computers merge. A good overview of films dealing with AI can be found in J. P. Telotte's Replications: A Robotic History of the Science Fiction Film (Urbana, Ill., 1995); fictional portrayals of AI are discussed in Patricia Warrick's The Cybernetic Imagination in Science Fiction (Cambridge, Mass., 1980). For theological implications, see Noreen L. Herzfeld, In Our Image: Artificial Intelligence and the Human Spirit (Minneapolis, 2002).

Noreen L. Herzfeld (2005)

Artificial Intelligence


Artificial Intelligence (AI) tries to enable computers to do the things that minds can do. These things include seeing pathways, picking things up, learning categories from experience, and using emotions to schedule one's actions (all of which many animals can do, too). Thus, human intelligence is not the sole focus of AI. Even terrestrial psychology is not the sole focus, because some people use AI to explore the range of all possible minds.

There are four major AI methodologies: symbolic AI, connectionism, situated robotics, and evolutionary programming (Russell and Norvig 2003). AI artifacts are correspondingly varied. They include both programs (including neural networks) and robots, each of which may be either designed in detail or largely evolved. The field is closely related to artificial life (A-Life), which aims to throw light on biology much as some AI aims to throw light on psychology.

AI researchers are inspired by two different intellectual motivations, and while some people have both, most favor one over the other. On the one hand, many AI researchers seek solutions to technological problems, not caring whether these resemble human (or animal) psychology. They often make use of ideas about how people do things. Programs designed to aid/replace human experts, for example, have been hugely influenced by knowledge engineering, in which programmers try to discover what, and how, human experts are thinking when they do the tasks being modeled. But if these technological AI workers can find a nonhuman method, or even a mere trick (a kludge) to increase the power of their program, they will gladly use it.

Technological AI has been hugely successful. It has entered administrative, financial, medical, and manufacturing practice at countless different points. It is largely invisible to the ordinary person, lying behind some deceptively simple human-computer interface or being hidden away inside a car or refrigerator. Many procedures taken for granted within current computer science were originated within AI (pattern-recognition and image-processing, for example).

On the other hand, AI researchers may have a scientific aim. They may want their programs or robots to help people understand how human (or animal) minds work. They may even ask how intelligence in general is possible, exploring the space of possible minds. The scientific approach, psychological AI, is the more relevant for philosophers (Boden 1990, Copeland 1993, Sloman 2002). It is also central to cognitive science, and to computationalism.

Considered as a whole, psychological AI has been less obviously successful than technological AI. This is partly because the tasks it tries to achieve are often more difficult. In addition, it is less clear, for philosophical as well as empirical reasons, what should be counted as success.

Symbolic AI

Symbolic AI is also known as classical AI and as GOFAI, short for John Haugeland's label "Good Old-Fashioned AI" (1985). It models mental processes as the step-by-step information processing of digital computers. Thinking is seen as symbol-manipulation, as (formal) computation over (formal) representations. Some GOFAI programs are explicitly hierarchical, consisting of procedures and subroutines specified at different levels. These define a hierarchically structured search-space, which may be astronomical in size. Rules of thumb, or heuristics, are typically provided to guide the search, by excluding certain areas of possibility and leading the program to focus on others. The earliest AI programs were like this, but the later methodology of object-oriented programming is similar.

Certain symbolic programs, namely production systems, are implicitly hierarchical. These consist of sets of logically separate if-then (condition-action) rules, or productions, defining what actions should be taken in response to specific conditions. An action or condition may be unitary or complex, in the latter case being defined by a conjunction of several mini-actions or mini-conditions. And a production may function wholly within computer memory (to set a goal, for instance, or to record a partial parsing) or outside it (via input/output devices such as cameras or keyboards).
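
A production system can be sketched as a recognize-act cycle; the working memory and rules below are invented. Each production pairs a condition on memory with an action, which here operates wholly within memory.

    # Working memory and productions are illustrative; each production pairs
    # a condition on memory with an action that updates it.
    memory = {"goal": "boil water", "burner": "off", "steam": False}

    productions = [
        (lambda m: m["goal"] == "boil water" and m["burner"] == "off",
         lambda m: m.update(burner="on")),
        (lambda m: m["burner"] == "on" and m["steam"],
         lambda m: m.update(goal="done")),
        (lambda m: m["burner"] == "on" and not m["steam"],
         lambda m: m.update(steam=True)),  # stands in for the water heating up
    ]

    while memory["goal"] != "done":
        for condition, action in productions:   # the recognize-act cycle
            if condition(memory):
                action(memory)
                break

    print(memory)   # {'goal': 'done', 'burner': 'on', 'steam': True}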

Another symbolic technique, widely used in natural language processing (NLP) programs, involves augmented transition networks, or ATNs. These avoid explicit backtracking by using guidance at each decision-point to decide which question to ask and/or which path to take.

GOFAI methodology is used for developing a wide variety of language-using programs and problem-solvers. The more precisely and explicitly a problem-domain can be defined, the more likely it is that a symbolic program can be used to good effect. Often, folk-psychological categories and/or specific propositions are explicitly represented in the system. This type of AI, and the forms of computational psychology based on it, is defended by the philosopher Jerry Fodor (1988).

GOFAI models (whether technological or scientific) include robots, planning programs, theorem-provers, learning programs, question-answerers, data-mining systems, machine translators, expert systems of many different kinds, chess players, semantic networks, and analogy machines. In addition, a host of software agents (specialist mini-programs that can aid a human being to solve a problem) are implemented in this way. And an increasingly important area of research is distributed AI, in which cooperation occurs between many relatively simple individuals, which may be GOFAI agents (or neural-network units, or situated robots).

The symbolic approach is used also in modeling creativity in various domains (Boden 2004, Holland et al. 1986). These include musical composition and expressive performance, analogical thinking, line-drawing, painting, architectural design, storytelling (rhetoric as well as plot), mathematics, and scientific discovery. In general, the relevant aesthetic/theoretical style must be specified clearly, so as to define a space of possibilities that can be fruitfully explored by the computer. To what extent the exploratory procedures can plausibly be seen as similar to those used by people varies from case to case.

Connectionist AI

Connectionist systems, which became widely visible in the mid-1980s, are different. They compute not by following step-by-step programs but by using large numbers of locally connected (associative) computational units, each one of which is simple. The processing is bottom-up rather than top-down.

Connectionism is sometimes said to be opposed to AI, although it has been part of AI since its beginnings in the 1940s (McCulloch and Pitts 1943, Pitts and McCulloch 1947). What connectionism is opposed to, rather, is symbolic AI. Yet even here, "opposed" is not quite the right word, since hybrid systems exist that combine both methodologies. Moreover, GOFAI devotees such as Fodor see connectionism as compatible with GOFAI, claiming that it concerns how symbolic computation can be implemented (Fodor and Pylyshyn 1988).

Two largely separate AI communities began to emerge in the late 1950s (Boden forthcoming). The symbolic school focused on logic and Turing-computation, whereas the connectionist school focused on associative, and often probabilistic, neural networks. (Most connectionist systems are connectionist virtual machines, implemented in von Neumann computers; only a few are built in dedicated connectionist hardware.) Many people remained sympathetic to both schools. But the two methodologies are so different in practice that most hands-on AI researchers use either one or the other.

There are different types of connectionist systems. Most philosophical interest, however, has focused on networks that do parallel distributed processing, or PDP (Clark 1989, Rumelhart and McClelland 1986). In essence, PDP systems are pattern recognizers. Unlike brittle GOFAI programs, which often produce nonsense if provided with incomplete or part-contradictory information, they show graceful degradation. That is, the input patterns can be recognized (up to a point) even if they are imperfect.

A PDP network is made up of subsymbolic units, whose semantic significance cannot easily be expressed in terms of familiar semantic content, still less propositions. (Some GOFAI programs employ subsymbolic units, but most do not.) That is, no single unit codes for a recognizable concept, such as dog or cat. These concepts are represented, rather, by the pattern of activity distributed over the entire network.

Because the representation is not stored in a single unit but is distributed over the whole network, PDP systems can tolerate imperfect data. (Some GOFAI systems can do so too, but only if the imperfections are specifically foreseen and provided for by the programmer.) Moreover, a single subsymbolic unit may mean one thing in one input-context and another in another. What the network as a whole can represent depends on what significance the designer has decided to assign to the input-units. For instance, some input-units are sensitive to light (or to coded information about light), others to sound, others to triads of phonological categories and so on.

Most PDP systems can learn. In such cases, the weights on the links of PDP units in the hidden layer (between the input-layer and the output-layer) can be altered by experience, so that the network can learn a pattern merely by being shown many examples of it. (A GOFAI learning-program, in effect, has to be told what to look for beforehand, and how.) Broadly, the weight on an excitatory link is increased by every coactivation of the two units concerned: cells that fire together, wire together.
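
The weight-update idea can be sketched directly. The network below is invented and far smaller than any practical PDP system; it does nothing but strengthen the link between any two units that are active together in a training pattern.

    import random

    random.seed(0)
    N, RATE = 4, 0.1
    weights = [[0.0] * N for _ in range(N)]   # links between the N units

    patterns = [[1, 1, 0, 0],   # units 0 and 1 fire together
                [0, 0, 1, 1]]   # units 2 and 3 fire together

    for _ in range(100):
        p = random.choice(patterns)
        for i in range(N):
            for j in range(N):
                if i != j:      # cells that fire together, wire together
                    weights[i][j] += RATE * p[i] * p[j]

    print(round(weights[0][1], 1), weights[0][2])
    # a strong 0-1 link has grown; the 0-2 link stays at 0.0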

These two AI approaches have complementary strengths and weaknesses. For instance, symbolic AI is better at modeling hierarchy and strong constraints, whereas connectionism copes better with pattern recognition, especially if many conflicting (and perhaps incomplete) constraints are relevant. Despite having fervent philosophical champions on both sides, neither methodology is adequate for all of the tasks dealt with by AI scientists. Indeed, much research in connectionism has aimed to restore the lost logical strengths of GOFAI to neural networks, with only limited success by the beginning of the twenty-first century.

Situated Robotics

Another, and more recently popular, AI methodology is situated robotics (Brooks 1991). Like connectionism, this was first explored in the 1950s. Situated robots are described by their designers as autonomous systems embedded in their environment (Heidegger is sometimes cited). Instead of planning their actions, as classical robots do, situated robots react directly to environmental cues. One might say that they are embodied production systems, whose if-then rules are engineered rather than programmed, and whose conditions lie in the external environment, not inside computer memory. Although, unlike GOFAI robots, they contain no objective representations of the world, some of them do construct temporary, subject-centered (deictic) representations.

The main aim of situated roboticists in the mid-1980s, such as Rodney Brooks, was to solve, or rather avoid, the frame problem that had bedeviled GOFAI (Pylyshyn 1987). GOFAI planners and robots had to anticipate all possible contingencies, including the side effects of actions taken by the system itself, if they were not to be defeated by unexpected, perhaps seemingly irrelevant, events. This was one of the reasons given by Hubert Dreyfus (1992) in arguing that GOFAI could not possibly succeed: Intelligence, he said, is unformalizable. Several ways of implementing nonmonotonic logics in GOFAI were suggested, allowing a conclusion previously drawn by faultless reasoning to be negated by new evidence. But because the general nature of that new evidence had to be foreseen, the frame problem persisted.

Brooks argued that reasoning should not be employed at all: the system should simply react appropriately, in a reflex fashion, to specific environmental cues. This, he said, is what insects do, and they are highly successful creatures. (Soon, situated robotics was being used, for instance, to model the six-legged movement of cockroaches.) Some people joked that AI stood for artificial insects, not artificial intelligence. But the joke carried a sting: Many argued that much human thinking needs objective representations, so the scope for situated robotics was strictly limited.

Evolutionary Programming

In evolutionary programming, genetic algorithms (GAs) are used by a program to make random variations in its own rules. The initial rules, before evolution begins, either do not achieve the task in question or do so only inefficiently; sometimes, they are even chosen at random.

The variations allowed are broadly modeled on biological mutations and crossovers, although types of variation with no biological analogue are sometimes employed as well. The most successful rules are automatically selected, and then varied again. This is more easily said than done: The breakthrough in GA methodology occurred when John Holland (1992) defined an automatic procedure for recognizing which rules, out of a large and simultaneously active set, were those most responsible for whatever level of success the evolving system had just achieved.

Selection is done by some specific fitness criterion, predefined in light of the task the programmer has in mind. Unlike GOFAI systems, a GA program contains no explicit representation of what it is required to do: its task is implicit in the fitness criterion. (Similarly, living things have evolved to do what they do without knowing what that is.) After many generations, the GA system may be well-adapted to its task. For certain types of tasks, it can even find the optimal solution.
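
The variation-selection cycle is easy to sketch. In the toy genetic algorithm below, the fitness criterion is deliberately trivial (maximize the number of 1s in a bit string); a real application would substitute a task-specific criterion, and practical GAs add refinements such as Holland's credit-assignment procedure:

    import random

    def fitness(rule):
        # The predefined fitness criterion; the task is implicit in it.
        return sum(rule)

    def mutate(rule, p=0.05):
        # Random variation broadly modeled on biological mutation.
        return [1 - bit if random.random() < p else bit for bit in rule]

    def crossover(a, b):
        # Recombination broadly modeled on biological crossover.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                 # automatic selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(20)]
        population = parents + children           # vary the survivors again
    print(max(fitness(rule) for rule in population))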

This AI method is used to develop both symbolic and connectionist AI systems. And it is applied both to abstract problem-solving (mathematical optimization, for instance, or the synthesis of new pharmaceutical molecules) and to evolutionary robotics, wherein the brain and/or sensorimotor anatomy of robots evolve within a specific task-environment.

It is also used for artistic purposes, in the composition of music or the generation of new visual forms. In these cases, evolution is usually interactive. That is, the variation is done automatically but the selection is done by a human being, who does not need to (and usually could not) define, or even name, the aesthetic fitness criteria being applied.

Artificial Life

AI is a close cousin of A-Life (Boden 1996). This is a form of mathematical biology, which employs computer simulation and situated robotics to study the emergence of complexity in self-organizing, self-reproducing, adaptive systems. (A caveat: much as some AI is purely technological in aim, so is some A-Life; the research of most interest to philosophers is the scientifically oriented type.)

The key concepts of A-Life date back to the early 1950s. They originated in theoretical work on self-organizing systems of various kinds, including diffusion equations and cellular automata (by Alan Turing and John von Neumann respectively), and in early self-equilibrating machines and situated robots (built by W. Ross Ashby and W. Grey Walter). But A-Life did not flourish until the late 1980s, when computing power at last sufficed to explore these theoretical ideas in practice.

Much A-Life work focuses on specific biological phenomena, such as flocking, cooperation in ant colonies, or morphogenesis, from cell-differentiation to the formation of leopard spots or tiger stripes. But A-Life also studies general principles of self-organization in biology: evolution and coevolution, reproduction, and metabolism. In addition, it explores the nature of life as such: life as it could be, not merely life as it is.

A-Life workers do not all use the same methodology, but they do eschew the top-down methods of GOFAI. Situated and evolutionary robotics, and GA-generated neural networks, too, are prominent approaches within the field. But not all A-Life systems are evolutionary. Some demonstrate how a small number of fixed, and simple, rules can lead to self-organization of an apparently complex kind.
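
A classic demonstration is the cellular automaton, in which every cell obeys the same fixed local rule. The sketch below implements the rules of Conway's Game of Life, a standard textbook example rather than any particular A-Life project; a "glider" pattern travels across the grid even though the rules say nothing about movement:

    def life_step(grid):
        # One application of Conway's fixed rules: a live cell survives with
        # two or three live neighbors; a dead cell comes alive with exactly three.
        rows, cols = len(grid), len(grid[0])
        def neighbors(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
        return [[1 if neighbors(r, c) == 3
                 or (grid[r][c] and neighbors(r, c) == 2) else 0
                 for c in range(cols)]
                for r in range(rows)]

    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1                # a five-cell "glider"
    for _ in range(4):
        grid = life_step(grid)
    print(sum(map(sum, grid)))        # still five live cells, shifted diagonally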

Many A-Lifers take pains to distance themselves from AI. But besides their close historical connections, AI and A-Life are philosophically related in virtue of the linkage between life and mind. It is known that psychological properties arise in living things, and some people argue (or assume) that they can arise only in living things. Accordingly, the whole of AI could be regarded as a subarea of A-Life. Indeed, some people argue that success in AI (even in technological AI) must await, and build on, success in A-Life.

Why AI Is a Misleading Label

Whichever of the two AI motivations, technological or psychological, is in question, the name of the field is misleading in three ways. First, the term intelligence is normally understood to cover only a subset of what AI workers are trying to do. Second, intelligence is often supposed to be distinct from emotion, so that AI is assumed to exclude work on that. And third, the name implies that a successful AI system would really be intelligent, a philosophically controversial claim that AI researchers do not have to endorse (though some do).

As for the first point, people do not normally regard vision or locomotion as examples of intelligence. Many people would say that speaking one's native language is not a case of intelligence either, except in comparison with nonhuman species; and common sense is sometimes contrasted with intelligence. The term is usually reserved for special cases of human thought that show exceptional creativity and subtlety, or which require many years of formal education. Medical diagnosis, scientific or legal reasoning, playing chess, and translating from one language to another are typically regarded as difficult, thus requiring intelligence. And these tasks were the main focus of research when AI began. Vision, for example, was assumed to be relatively straightforward, not least because many nonhuman animals have it too. It gradually became clear, however, that everyday capacities such as vision and locomotion are vastly more complex than had been supposed. The early definition of AI as programming computers to do things that involve intelligence when done by people was recognized as misleading, and eventually dropped.

Similarly, intelligence is often opposed to emotion. Many people assume that AI could never model that. However, crude examples of such models existed in the early 1960s, and emotion was recognized by a high priest of AI, Herbert Simon, as being essential to any complex intelligence. Later, research in the computational philosophy (and modeling) of affect showed that emotions have evolved as scheduling mechanisms for systems with many different, and potentially conflicting, purposes (Minsky 1985, and Web site). When AI began, it was difficult enough to get a program to follow one goal (with its subgoals) intelligently; any more than that was essentially impossible. For this reason, among others, AI modeling of emotion was put on the back burner for about thirty years. By the 1990s, however, it had become a popular focus of AI research, and of neuroscience and philosophy too.

The third point raises the difficult question, which many AI practitioners leave open or even ignore, of whether intentionality can properly be ascribed to any conceivable program/robot (Newell 1980, Dennett 1987, Harnad 1991).

AI and Intentionality

Could some NLP programs really understand the sentences they parse and the words they translate? Or can a visuo-motor circuit evolved within a robot's neural-network brain truly be said to represent the environmental feature to which it responds? If a program, in practice, could pass the Turing Test, could it truly be said to think? More generally, does it even make sense to say that AI may one day achieve artificially produced (but nonetheless genuine) intelligence?

For the many people in the field who adopt some form of functionalism, the answer in each case is: In principle, yes. This applies for those who favor the physical symbol system hypothesis or intentional systems theory. Others adopt connectionist analyses of concepts, and of their development from nonconceptual content. Functionalism is criticized by many writers expert in neuroscience, who claim that its core thesis of multiple realizability is mistaken. Others criticize it at an even deeper level: a growing minority (especially in A-Life) reject neo-Cartesian approaches in favor of philosophies of embodiment, such as phenomenology or autopoiesis.

Part of the reason why such questions are so difficult is that philosophers disagree about what intentionality is, even in the human case. Practitioners of psychological AI generally believe that semantic content, or intentionality, can be naturalized. But they differ about how this can be done.

For instance, a few practitioners of AI regard computation and intentionality as metaphysically inseparable (Smith 1996). Others ascribe meaning only to computations with certain causal consequences and provenance, or grounding. John Searle argues that AI cannot capture intentionality, because, at base, it is concerned with the formal manipulation of formal symbols (Searle 1980). And for those who accept some form of evolutionary semantics, only evolutionary robots could embody meaning.

See also Computationalism; Machine Intelligence.

Bibliography

Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge, 2004.

Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Oxford: Oxford University Press, forthcoming. See especially chapters 4, 7.i, 10–13, and 14.

Boden, Margaret A., ed. The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990.

Boden, Margaret A., ed. The Philosophy of Artificial Life. Oxford: Oxford University Press, 1996.

Brooks, Rodney A. "Intelligence without Representation." Artificial Intelligence 47 (1991): 139–159.

Clark, Andy J. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, MA: MIT Press, 1989.

Copeland, B. Jack. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell, 1993.

Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT Press, 1987.

Dreyfus, Hubert L. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.

Fodor, Jerry A., and Zenon W. Pylyshyn. "Connectionism and Cognitive Architecture: A Critical Analysis." Cognition 28 (1988): 3–71.

Harnad, Stevan. "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem." Minds and Machines 1 (1991): 43–54.

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press, 1985.

Holland, John H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: MIT Press, 1992.

Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: MIT Press, 1986.

McCulloch, Warren S., and Walter H. Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden. Oxford: Oxford University Press, 1990. First published in 1943.

Minsky, Marvin L. The Emotion Machine. Available from http://web.media.mit.edu/~minsky/E1/eb1.html. Web site only.

Minsky, Marvin L. The Society of Mind. New York: Simon & Schuster, 1985.

Newell, Allen. "Physical Symbol Systems." Cognitive Science 4 (1980): 135–183.

Pitts, Walter H., and Warren S. McCulloch. "How We Know Universals: The Perception of Auditory and Visual Forms." In Embodiments of Mind, edited by Warren S. McCulloch. Cambridge, MA: MIT Press, 1965. First published in 1947.

Pylyshyn, Zenon W. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex, 1987.

Rumelhart, David E., and James L. McClelland, eds. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 2 vols. Cambridge, MA: MIT Press, 1986.

Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2003.

Searle, John R. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3 (1980): 417–424. Reprinted in M. A. Boden, ed., The Philosophy of Artificial Intelligence (Oxford: Oxford University Press, 1990), pp. 67–88.

Sloman, Aaron. "The Irrelevance of Turing Machines to Artificial Intelligence." In Computationalism: New Directions, edited by Matthias Scheutz. Cambridge, MA: MIT Press, 2002.

Smith, Brian C. On the Origin of Objects. Cambridge, MA: MIT Press, 1996.

Margaret A. Boden (1996, 2005)

Artificial Intelligence

Artificial intelligence

Certain tasks can be performed faster and more accurately by traditionally programmed computers than by human beings, particularly numerical computation and the storage, retrieval, and sorting of large quantities of information. However, the ability of computers to interact flexibly with the real world—their "intelligence"—remains slight. Artificial intelligence (AI) is a subfield of computer science that seeks to remedy this situation by creating software and hardware that possess some of the behavioral flexibility shown by natural intelligences (people and animals).

In the 1940s and 1950s, the first large, electronic, digital computers were designed to accomplish specific tasks (e.g., a numerical calculation set up by a human programmer) by completing a series of clearly defined steps, an algorithm. Programmers wrote algorithmic software that precisely specified both the problem and how to solve it. AI programmers, in contrast, seek to program computers not with rigid algorithms but with flexible rules for seeking solutions. An AI program may even be designed to modify the rules it is given or to develop entirely new rules.

What types of problem are appropriate for traditional, algorithmic computing and what types call out for AI? Some of the tasks that are hardest for people are, fortunately, algorithmic in nature. Teachers can describe in detail the process for multiplying numbers, and accountants can state accurately the rules for completing tax forms, yet many people have difficulty performing such tasks. Straightforward, algorithmic programs can perform them easily because they can be broken down into a series of precise procedures or steps that do not vary from case to case. On the other hand, tasks that require little thought for human beings can be hard to translate into algorithms and therefore difficult for computers to perform. For example, most people know that a pot of boiling water requires careful handling. We identify hot pots by flexible recognition of many possible signs: steam rising, radiant heat felt on the skin, a glimpse of blue flame or red coils under the pot, a rattling lid, and so forth. Once we know that the pot is boiling, we plan our actions accordingly. This process seems simple, yet describing exactly to a computer how to reliably conclude "this pot is hot" and to take appropriate action turns out to be extremely difficult. The goal of AI is to create computers that can handle such complex, flexible situations. One obstacle to this goal is uncertainty or confusion about what is intelligence.


What is intelligence?

One possible definition of intelligence is the acquisition and application of knowledge. An intelligent entity, on this view, is one that learns—acquires knowledge—and is able to apply this knowledge to changing real-world situations. In this sense, a rat is intelligent, but most computers, despite their impressive number-crunching capabilities, are not. To qualify as intelligent, an AI system must use knowledge (whether acquired from databases, sensory devices, trial and error, or all of the above) to make effective choices when confronted with data that are to some extent unpredictable. Insofar as a computer can do this, it may be said, for the purposes of AI, to display intelligence. Note that this definition is purely functional, and that in AI the question of consciousness, though intriguing, need not be considered.

This limited characterization of intelligence would, perhaps, have been considered overcautious by some AI researchers in the early days, when optimism ran high. For example, U.S. economist, AI pioneer, and Nobel Prize winner Herbert Simon (1916–2001) predicted in 1965 that by 1985 "machines will be capable of doing any work man can do." Yet over 35 years later, despite exponential growth in memory size and processing speed, no computer even comes close to commonplace human skills like conversing, driving a car, or diapering a baby, much less "doing any work" a human being can do. Why has progress in AI been so slow?

One answer is that while intelligence, as defined above, requires knowledge, computers are only good at handling information, which is not the same thing. Knowledge is meaningful information, and "meaning" is a nonmeasurable, multivalued variable arising in the real world of things and values. Bits—"binary digits," 1s and 0s—have no meaning, as such; they are meaningful only when people assign meanings to them. Consider a single bit, a "1": its information content is one bit regardless of what it means, yet it may mean nothing or anything, including "The circuit is connected," "We surrender," "It is more likely to rain than snow," and "I like apples." The question for AI is, how can information be made meaningful to a computer? Simply adding more bits does not work, for meaning arises not from information as such, but from relationships involving the real world. In one form or another, this basic problem has been stymieing AI research for decades. Nor is it the only such problem. Another problem is that computers, even those employing "fuzzy logic" and autonomous learning, function by processing symbols (e.g., 1s and 0s) according to rules (e.g., those of Boolean algebra)—yet human beings do not usually think by processing symbols according to rules. Humans are able to think this way, as when we do arithmetic, but more commonly we interpret situations, leap to conclusions, utter sentences, and plan actions using thought processes that do not involve symbolic computation at all. What our thought processes really are—that is, what our intelligence is, precisely—and how to translate it into (or mimic it by means of) a system of computable symbols and rules is a problem that remains unsolved, in general, by AI researchers.

In 1950, British mathematician Alan Turing (1912–1954) proposed a hypothetical game to help decide the issue of whether a given machine is truly intelligent. The "imitation game," as Turing originally called it, consisted of a human questioner in a room typing questions on a keyboard. In another room, an unseen respondent—either a human or a computer—would type back answers. The questioner could pose queries to the respondent in an attempt to determine if he or she was corresponding with a human or a computer. In Turing's opinion, if the computer could fool the questioner into believing that he or she was having a dialog with a human being, then the computer could be said to be truly intelligent.

The Turing test is obviously biased toward human language prowess, which most AI programs do not even seek to emulate. Nevertheless, it is significant that even the most advanced AI programs devoted to natural language are as far as ever from passing the Turing test. "Intelligence" has proved a far tougher nut to crack than the pioneers of AI believed, half a century ago. Computers remain unintelligent.

Even so, AI has made many gains since the 1950s. AI software is now present in many devices, such as automatic teller machines (ATMs), that are part of daily life, and is finding increasing commercial application in many industries.


Overview of AI

All AI computer programs are built on two basic elements: a knowledge base and an inferencing capability. (Inferencing means drawing conclusions based on logic and prior knowledge.) A knowledge base is made up of many discrete units of information—representing facts, concepts, theories, procedures, and relationships—all relevant to a particular task or aspect of the world. Programs are written to give the computer the ability to manipulate this information and to reason, make judgments, reach conclusions, and choose solutions to the problem at hand, such as guessing whether a series of credit-card transactions involves fraud or driving an automated rover across a rocky Martian landscape. Whereas conventional, deterministic software must follow a strictly logical series of steps to reach a conclusion, AI software uses the techniques of search and pattern matching; it may also, in some cases, modify its knowledge base or its own structure ("learn"). Pattern matching may still be algorithmic; that is, the computer must be told exactly where to look in its knowledge base and what constitutes a match. The computer searches its knowledge base for specific conditions or patterns that fit the criteria of the problem to be solved. Microchip technology has increased computational speed, allowing AI programs to quickly scan huge arrays of data. For example, computers can scan enough possible chess moves to provide a challenging opponent for even the best human players. Artificial intelligence has many other applications, including problem solving in mathematics and other fields, expert systems in medicine, natural language processing, robotics, and education.
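
The pairing of a knowledge base with pattern matching can be made concrete. The sketch below is hypothetical (the transaction records and the fraud criteria are invented for the example, echoing the credit-card case mentioned above); the program scans its stored facts for entries fitting the pattern it has been told constitutes a match:

    # A tiny knowledge base: discrete units of information about transactions.
    knowledge_base = [
        {"id": 1, "amount": 12.50, "country": "US", "hour": 14},
        {"id": 2, "amount": 980.00, "country": "RO", "hour": 3},
        {"id": 3, "amount": 45.99, "country": "US", "hour": 20},
    ]

    def looks_fraudulent(tx):
        # The pattern criteria: what the program is told counts as a match.
        return tx["amount"] > 500 and tx["hour"] < 6

    # The search: scan the knowledge base for facts that fit the pattern.
    matches = [tx for tx in knowledge_base if looks_fraudulent(tx)]
    print(matches)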

The ability of some AI programs to solve problems based on facts rather than on a predetermined series of steps is what most closely resembles "thinking" and causes some in the AI field to argue that such devices are indeed intelligent.


General problem solving

Problem solving is thus something AI does very well, as long as the problem is narrow in focus and clearly defined. For example, mathematicians, scientists, and engineers are often called upon to prove theorems. (A theorem is a mathematical statement that is part of a larger theory or structure of ideas.) Because the formulas involved in such tasks may be large and complex, this can take an enormous amount of time, thought, and trial and error. A specially designed AI program can reduce and simplify such formulas in a fraction of the time needed by human workers.

Artificial intelligence can also assist with problems in planning. Finding an effective step-by-step sequence of actions, one with the lowest cost and fewest steps, is very important in business and manufacturing operations. An AI program can be designed that includes all possible steps and outcomes. The programmer must also set some criteria with which to judge the outcome, such as whether speed is more important than cost in accomplishing the task, or if lowest cost is the desired result, regardless of how long it takes. This type of AI program can generate a plan in far less time than traditional methods require.


Expert systems

The expert system is a major application of AI today. Also known as knowledge-based systems, expert systems act as intelligent assistants to human experts or serve as a resource to people who may not have access to an expert. The major difference between an expert system and a simple database containing information on a particular subject is that the database can only give the user discrete facts about the subject, whereas an expert system uses reasoning to draw conclusions from stored information. The purpose of this AI application is not to replace our human experts, but to make their knowledge and experience more widely available.

An expert system has three parts: knowledge base, inference engine, and user interface. The knowledge base contains both declarative (factual) and procedural (rules-of-usage) knowledge in a very narrow field. The inference engine runs the system by determining which procedural knowledge to access in order to obtain the appropriate declarative knowledge, then draws conclusions and decides when an applicable solution is found.
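
A minimal sketch of the first two parts might look as follows; the rules here are invented toy examples, and a deployed system would hold hundreds of rules plus the user interface described below. The inference engine forward-chains, applying procedural rules to the declarative facts until no new conclusion can be drawn:

    # Declarative (factual) knowledge supplied by the user.
    facts = {"fever", "rash"}

    # Procedural (rules-of-usage) knowledge in a narrow field.
    rules = [
        ({"fever", "rash"}, "possible_measles"),
        ({"possible_measles"}, "recommend_specialist"),
    ]

    # Inference engine: keep applying rules until nothing new is concluded.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # now contains drawn conclusions, not just stored facts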

An interface is usually defined as the point where the machine and the human "touch." It is usually a keyboard, mouse, or similar device. In an expert system, there are actually two different user interfaces: One is for the designer of the system (who is generally experienced with computers); the other is for the user (generally a computer novice). Because most users of an expert system will not be computer experts, it is important that the system be easy for them to use. All user interfaces are bi-directional; that is, they are able to receive information from the user and respond to the user with recommendations. The designer's user interface must also be capable of adding new information to the knowledge base.


Natural language processing

Natural language is human language. Natural-language-processing programs use artificial intelligence to allow a user to communicate with a computer in the user's natural language. The computer can both understand and respond to commands given in a natural language.

Computer languages are artificial languages, invented for the sake of communicating instructions to computers and enabling them to communicate with each other. Most computer languages consist of a combination of symbols, numbers, and some words. These languages are complex and may take years to master. By programming computers (via computer languages) to respond to our natural languages, we make them easier to use.

However, there are many problems in trying to make a computer understand people. Four problems arise that can cause misunderstanding: (1) Ambiguity—confusion over what is meant due to multiple meanings of words and phrases. (2) Imprecision—thoughts are sometimes expressed in vague and inexact terms. (3) Incompleteness—the entire idea is not presented, and the listener is expected to "read between the lines." (4) Inaccuracy—spelling, punctuation, and grammar problems can obscure meaning. When we speak to one another, furthermore, we generally expect to be understood because our common language assumes all the meanings that we share as members of a specific cultural group. To a nonnative speaker, who shares less of our cultural background, our meaning may not always be clear. It is even more difficult for computers, which have no share at all in the real-world relationships that confer meaning upon information, to correctly interpret natural language.

To alleviate these problems, natural language processing programs seek to analyze syntax—the way words are put together in a sentence or phrase; semantics—the derived meaning of the phrase or sentence; and context—the meaning of distinct words within a sentence. But even this is not enough. The computer must also have access to a dictionary which contains definitions of every word and phrase it is likely to encounter, and may also use keyword analysis—a pattern-matching technique in which the program scans the text, looking for words that it has been programmed to recognize. If a keyword is found, the program responds by manipulating the text to form a reasonable response.
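
Keyword analysis is simple to illustrate. The toy responder below (its patterns are invented for the example, in the spirit of early keyword-matching programs) scans the input for words it has been programmed to recognize and returns a canned, but apparently sensible, reply:

    responses = {
        "book": "Which author or subject are you interested in?",
        "hours": "The library is open 9 a.m. to 9 p.m.",
        "renew": "You can renew items with your card number.",
    }

    def reply(sentence):
        # Scan the text for known keywords; respond to the first one found.
        for keyword, answer in responses.items():
            if keyword in sentence.lower():
                return answer
        return "I'm sorry, could you rephrase that?"

    print(reply("Do you have any books on robotics?"))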

In its simplest form, a natural language processing program works like this: a sentence is typed in on the keyboard; if the program can derive meaning—that is, if it has a reference in its knowledge base for every word and phrase—it will respond, more or less appropriately. An example of a computer with a natural language processor is the computerized card catalog available in many public libraries. The main menu usually offers four choices for looking up information: search by author, search by title, search by subject, or search by keyword. If you want a list of books on a specific topic or subject you type in the appropriate phrase. You are asking the computer—in English—to tell you what is available on the topic. The computer usually responds in a very short time—in English—with a list of books along with call numbers so you can find what you need.


Computer vision

Computer vision is the use of a computer to analyze and evaluate visual information. A camera is used to collect visual data. The camera translates the image into a series of electrical signals. This data is analog in nature—that is, it is directly measurable and quantifiable. A digital computer, however, operates using numbers expressed directly as digits. It cannot read analog signals, so the image must be digitized using an analog-to-digital converter. The image becomes a very long series of binary numbers that can be stored and interpreted by the computer. Just how long the series is depends on how densely packed the pixels are in the visual image. To get an idea of pixels and digitized images, take a close look at a newspaper photograph. If you move the paper very close to your eyes, you will notice that the image is a sequence of black and white dots—called pixels, for picture elements—arranged in a certain pattern. When you move the picture away from your eyes, the picture becomes clearly defined and your brain is able to recognize the image.
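
Digitization itself can be shown in a few lines. In this sketch, a small grid of brightness readings stands in for the camera's analog signal (real converters quantize to many more levels than the two used here); thresholding turns each sample into a binary pixel of the kind the computer stores and interprets:

    # Brightness samples from a camera, row by row (0.0 dark to 1.0 bright).
    samples = [
        [0.10, 0.80, 0.75, 0.05],
        [0.20, 0.90, 0.85, 0.15],
        [0.05, 0.10, 0.12, 0.08],
    ]

    # Analog-to-digital conversion: threshold each sample into a binary pixel.
    digitized = [[1 if s > 0.5 else 0 for s in row] for row in samples]

    for row in digitized:
        print(row)   # the image as binary numbers: a bright region, top center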

Artificial intelligence works much the same way. Clues provided by the arrangement of pixels in the image give information as to the relative color and texture of an object, as well as the distance between objects. In this way, the computer can interpret and analyze visual images. In the field of robotics, visual analysis is very important.


Robotics

Robotics is the study of robots, which are machines that can be programmed to perform manual tasks. Most robots in use today perform various functions in an industrial setting. These robots typically are used on factory assembly lines, by the military and law enforcement agencies, or in hazardous-waste facilities, working with substances far too dangerous for humans to handle safely.

Most robots do not resemble the humanoid creations of popular science fiction. Instead, they usually consist of a manipulator (arm), an end effector (hand), and some kind of control device. Industrial-use robots generally are programmed to perform repetitive tasks in a highly controlled environment. However, more research is being done in the field of intelligent robots that can learn from their environment and move about it autonomously. These robots use AI programming techniques to understand their environment and make appropriate decisions based on the information obtained. In order to learn about one's environment, one must have a means of sensing the environment. Artificial intelligence programs allow the robot to gather information about its surroundings by using one of the following techniques: contact sensing, in which a robot sensor physically touches another object; noncontact sensing, such as computer vision, in which the robot sensor does not physically touch the object but uses a camera to obtain and record information; and environmental sensing, in which the robot can sense external changes in the environment, such as temperature or radiation.

The most recent robotics research centers on mobile robots that can cope with environments that are hostile to humans, such as damaged nuclear-reactor cores, active volcanic craters, or the surfaces of other planets.


Computer-assisted instruction

Intelligent computer-assisted instruction (ICAI) has three basic components: problem-solving expertise, student model, and tutoring module. The student using this type of program is presented with some information from the problem-solving expertise component. This is the knowledge base of this type of AI program. The student responds in some way to the material that was presented, either by answering questions or otherwise demonstrating his or her understanding. The student model analyzes the student's responses and decides on a course of action. Typically this involves either presenting some review material or allowing the student to advance to the next level of knowledge presentation. The tutoring module may or may not be employed at this point, depending on the student's level of mastery of the material. The system does not allow the student to advance further than his or her own level of mastery.
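
The flow among the three components can be suggested schematically. In the sketch below, the lesson names, scores, and thresholds are invented; the student model inspects each graded response and decides whether to advance the student, re-present material, or invoke the tutoring module:

    lessons = ["fractions", "decimals", "percentages"]   # problem-solving expertise

    def student_model(score):
        # Decide on a course of action from the student's responses.
        if score >= 0.8:
            return "advance"
        if score >= 0.5:
            return "review"
        return "tutor"

    level = 0
    for score in [0.9, 0.6, 0.4]:                 # graded responses, in order
        action = student_model(score)
        if action == "advance" and level < len(lessons) - 1:
            level += 1                            # next level of presentation
        elif action == "tutor":
            print("Tutoring module invoked for:", lessons[level])
        # on "review" the student stays at the current level
    print("Current lesson:", lessons[level])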

Most ICAI programs in use today operate in a set sequence of presentation of new material, evaluation of student response, and employment of tutorial (if necessary). However, researchers at Yale University have created software that uses a more Socratic way of teaching. These programs encourage discovery and often will not respond directly to a student's questions about a specific topic. The basic premise of this type of computer-assisted learning is to present new material only when a student needs it. This is when the brain is most ready to accept and retain the information. This is exactly the scenario most teachers hope for: students who become adroit self-educators, enthusiastically seeking the wisdom and truth that is meaningful to them. The cost of these programs, however, can be far beyond the means of many school districts. For this reason, these types of ICAI are used mostly in corporate training settings.

See also Automation; Computer, analog; Computer, digital; Computer software; Cybernetics.


Resources

Books

Caudill, Maureen. In Our Own Image: Building an Artificial Person. New York: Oxford University Press, 1992.

Kelly, Derek. A Layman's Introduction to Robotics. Princeton: Petrocelli Books, 1986.

Periodicals

Feder, Barnaby J. "Artificial Intelligence for the New Millennium; A Revolution More Bland Than Kubrick's '2001'." New York Times, June 30, 2001.

Travis, John. "Building a Baby Brain in a Robot." Science (20 May 1994): 1080–1082.

"An Encounter with A.I." Popular Science (June 1994).

Weng, Juyang, et al. "Autonomous Mental Development by Robots and Animals." Science (January 26, 2001): 599–600.


Johanna Haaxma-Jurek

Artificial Intelligence

Artificial Intelligence


Artificial intelligence (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include game playing, natural language understanding and synthesis, computer vision, problem solving, learning, and robotics.

The above is a general description of the field; there is no agreed-upon definition of artificial intelligence, primarily because there is little agreement as to what constitutes intelligence. Interpretations of what it means to be intelligent vary, yet most can be categorized in one of three ways. Intelligence can be thought of as a quality, an individually held property that is separable from all other properties of the human person. Intelligence is also seen in the functions one performs, in actions or the ability to carry out certain tasks. Finally, some researchers see intelligence as a quality that can only be acquired and demonstrated through relationship with other intelligent beings. Each of these understandings of intelligence has been used as the basis of an approach to developing computer programs with intelligent characteristics.


First attempts: symbolic AI

The field of AI is considered to have its origin in the publication of British mathematician Alan Turing's (1912–1954) paper "Computing Machinery and Intelligence" (1950). The term itself was coined six years later by mathematician and computer scientist John McCarthy (b. 1927) at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI and is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of rules that can be logically defined. Just as geometry can be built from a finite set of axioms and primitive objects such as points and lines, so symbolicists, following rationalist philosophers such as Ludwig Wittgenstein (1889–1951) and Alfred North Whitehead (1861–1947), posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects. Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols, combined by certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought.

Symbolic AI met with immediate success in areas in which problems could be easily described using a limited domain of objects that operate in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves, looking for a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving.
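
Chess trees are far too large to reproduce, but the look-ahead logic scales down to a toy take-away game; this is a minimal sketch of the idea only, since a system like Deep Blue adds pruning, evaluation functions, and specialized hardware. Here each player removes one or two stones, the player who takes the last stone wins, and the program examines every sequence of moves and countermoves:

    def minimax(stones, maximizing):
        # Look ahead through all possible moves and countermoves.
        if stones == 0:
            return -1 if maximizing else 1   # the previous player just won
        scores = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        return max(scores) if maximizing else min(scores)

    def best_move(stones):
        # Choose the move that leaves the opponent worst off.
        return max((take for take in (1, 2) if take <= stones),
                   key=lambda take: minimax(stones - take, False))

    print(best_move(7))   # taking 1 stone leaves the opponent a lost position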

Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two year old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise since they lack knowledge outside of that domain, knowledge that most human "experts" possess in the form of what is known as common sense. Humans make use of general knowledge, the millions of things that are known and applied to a situation, both consciously and subconsciously. It is now clear to AI researchers that, should it exist, the set of primitive facts necessary for representing human knowledge is exceedingly large.

Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition, is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that human experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a "degenerating research project," by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should other alternatives become available. This prediction has proven fairly accurate. By 2000 the once dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat's Cyc (pronounced "psych"). Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts. Lenat plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, through conversation, reading, and applying logical rules to detect patterns or inconsistencies in the data Cyc is given. Initially conceived in 1984 as a ten-year initiative, Cyc has not yet shown convincing evidence of extended independent learning.

Functional or weak AI

In 1980, John Searle, in the paper "Minds, Brains, and Programs," introduced a division of the field of AI into "strong" and "weak" AI. Strong AI denoted the attempt to develop a full human-like intelligence, while weak AI denoted the use of AI techniques to either better understand human reasoning or to solve more limited problems. Although there was little progress in developing a strong AI through symbolic programming methods, the attempt to program computers to carry out limited human functions has been quite successful. Much of what is currently labeled AI research follows a functional model, applying particular programming techniques, such as knowledge engineering, fuzzy logic, genetic algorithms, neural networking, heuristic searching, and machine learning via statistical methods, to practical problems. This view sees AI as advanced computing. It produces working programs that can take over certain human tasks. Such programs are used in manufacturing operations, transportation, education, financial markets, "smart" buildings, and even household appliances.

For a functional AI, there need be no quality labeled "intelligence" that is shared by humans and computers. All computers need do is perform a task that requires intelligence for a human to perform. It is also unnecessary, in functional AI, to model a program after the thought processes that humans use. If results are what matters, then it is possible to exploit the speed and storage capabilities of the digital computer while ignoring parts of human thought that are not understood or easily modeled, such as intuition. This is, in fact, what was done in designing the chess-playing program Deep Blue, which in 1997 beat the reigning world chess champion, Garry Kasparov. Deep Blue does not attempt to mimic the thought of a human chess player. Instead, it capitalizes on the strengths of the computer by examining an extremely large number of moves, more moves than any human player could possibly examine.

There are two problems with functional AI. The first is the difficulty of determining what falls into the category of AI and what is simply a normal computer application. A definition of AI that includes any program that accomplishes some function normally done by a human being would encompass virtually all computer programs. Nor is there agreement among computer scientists as to what sorts of programs should fall under the rubric of AI. Once an application is mastered, there is a tendency to no longer define that application as AI. For example, while game playing is one of the classical fields of AI, Deep Blue's design team emphatically states that Deep Blue is not artificial intelligence, since it uses standard programming and parallel processing techniques that are in no way designed to mimic human thought. The implication here is that merely programming a computer to complete a human task is not AI if the computer does not complete the task in the same way a human would.

For a functional approach to result in a full human-like intelligence, it would be necessary not only to specify which functions make up intelligence, but also to make sure those functions are suitably congruent with one another. Functional AI programs are rarely designed to be compatible with other programs; each uses different techniques and methods, the sum of which is unlikely to capture the whole of human intelligence. Many in the AI community are also dissatisfied with a collection of task-oriented programs. The building of a general human-like intelligence, as difficult a goal as it may seem, remains the vision.


A relational approach

A third approach is to consider intelligence as acquired, held, and demonstrated only through relationships with other intelligent agents. In "Computing Machinery and Intelligence" (1997), Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. A human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered as having intelligence. The Turing Test is not based on the completion of tasks or the solution of problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000, there would be computers that could fool an interrogator at least thirty percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.

The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that the programming of the background knowledge needed for a restricted form of the game would take at a minimum three hundred person-years to complete. This is assuming that the appropriate knowledge set could be identified at the outset. Turing suggests that rather than trying to imitate an adult mind, computer scientists should attempt to construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind. One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology, whose lab has constructed several robots, including Cog and Kismet, that represent a new direction in AI, one in which embodiment is crucial to the robot's design. Their programming is distributed among the various physical parts; each joint has a small processor that controls movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. These robots are designed to learn tasks associated with human infants, such as eye-hand coordination, grasping an object, and face recognition through social interaction with a team of researchers. Although the robots have developed abilities such as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project is too new to be assessed. It may be no more successful than Lenat's Cyc in producing a machine that could interact with humans on the level of the Turing Test. However, Brooks's work represents a movement toward Turing's opinion that intelligence is socially acquired and demonstrated.

The Turing Test makes no assumptions as to how the computer arrives at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers, including Hans Moravec and Raymond Kurzweil, see neural net research as a way to reverse engineer the brain. They hope that once scientists can design nets with a complexity equal to the human brain, the nets will have the same power as the brain and develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hope that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.


AI in science fiction

A truly intelligent computer remains in the realm of speculation. Though researchers have continually projected that intelligent computers are imminent, progress in AI has been limited. Computers with intentionality and self-consciousness, with fully human reasoning skills, or the ability to be in relationship, exist only in the realm of dreams and desires, a realm explored in fiction and fantasy.

The artificially intelligent computer in science fiction story and film is not a prop, but a character, one that has become a staple since the mid-1950s. These characters are embodied in a variety of physical forms, ranging from the wholly mechanical (computers and robots) to the partially mechanical (cyborgs) and the completely biological (androids). A general trend from the 1950s to the 1990s has been to depict intelligent computers in an increasingly anthropomorphic way. The robots and computers of early films, such as Maria in Fritz Lang's Metropolis (1926), Robby in Fred Wilcox's Forbidden Planet (1956), Hal in Stanley Kubrick's 2001: A Space Odyssey (1968), or R2D2 and C3PO in George Lucas's Star Wars (1977), were clearly constructs of metal. On the other hand, early science fiction stories, such as Isaac Asimov's I, Robot (1950), explored the question of how one might distinguish between robots that looked human and actual human beings. Films and stories from the 1980s through the early 2000s, including Ridley Scott's Blade Runner (1982) and Steven Spielberg's A.I. (2001), pick up this question, depicting machines with both mechanical and biological parts that are far less easily distinguished from human beings.

Fiction that features AI can be classified in two general categories: cautionary tales (A.I., 2001) or tales of wish fulfillment (Star Wars; I, Robot). These present two differing visions of the artificially intelligent being, as a rival to be feared or as a friendly and helpful companion.


Philosophical and theological questions

What rights would an intelligent robot have? Will artificially intelligent computers eventually replace human beings? Should scientists discontinue research in fields such as artificial intelligence or nanotechnology in order to safeguard future lives? When a computer malfunctions, who is responsible? These are only some of the ethical and theological questions that arise when one considers the possibility of success in the development of an artificial intelligence. The prospect of an artificially intelligent computer also raises questions about the nature of human beings. Are humans simply machines themselves? At what point would replacing some or all human biological parts with mechanical components violate one's integrity as a human being? Is a human being's relationship to God at all contingent on human biological nature? If humans are not the end point of evolution, what does this say about human nature? What is the relationship of the soul to consciousness or intelligence? While most of these questions are speculative in nature, regarding a future that may or may not come to be, they remain relevant, for the way people live and the ways in which they view their lives stand to be critically altered by technology. The quest for artificial intelligence reveals much about how people view themselves as human beings and the spiritual values they hold.

See also Algorithm; Artificial Life; Cybernetics; Cyborg; Imago Dei; Thinking Machines; Turing Test

Bibliography

Asimov, Isaac. I, Robot. New York: Doubleday, 1950.

Brooks, Rodney. "Intelligence without Representation." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, edited by John Haugeland. Cambridge, Mass.: MIT Press, 1997.

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.

Dreyfus, Hubert. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press, 1986.

Kurzweil, Raymond. The Age of Spiritual Machines. New York: Viking, 1999.

Lenat, Douglas. "Cyc: A Large-Scale Investment in Knowledge Infrastructure." Communications of the ACM 38 (1995): 33–38.

Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, Mass.: Harvard University Press, 1988.

Searle, John. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3 (1980): 417–424.

Stork, David, ed. HAL's Legacy: 2001's Computer as Dream and Reality. Cambridge, Mass.: MIT Press, 1997.

Telotte, J. P. Replications: A Robotic History of the Science Fiction Film. Urbana: University of Illinois Press, 1995.

Turing, Alan. "Computing Machinery and Intelligence." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, edited by John Haugeland. Cambridge, Mass.: MIT Press, 1997.

Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.

Warrick, Patricia. The Cybernetic Imagination in Science Fiction. Cambridge, Mass.: MIT Press, 1980.

Winograd, Terry, and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, N.J.: Ablex, 1986. Reprint, Reading, Mass.: Addison-Wesley, 1991.

Other Resources

2001: A Space Odyssey. Directed by Stanley Kubrick. Metro-Goldwyn-Mayer; Polaris, 1968.

A.I. Directed by Steven Spielberg. Amblin Entertainment; DreamWorks SKG; Stanley Kubrick Productions; Warner Bros., 2001.

Blade Runner. Directed by Ridley Scott. Blade Runner Partnership; The Ladd Company, 1982.

Forbidden Planet. Directed by Fred Wilcox. Metro-Goldwyn-Mayer, 1956.

Metropolis. Directed by Fritz Lang. Universum Film A.G., 1926.

Star Wars. Directed by George Lucas. Lucasfilm Ltd., 1977.

Noreen L. Herzfeld

Artificial Intelligence

ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is the science and technology that seeks to create intelligent computational systems. Researchers in AI use advanced techniques in computer science, logic, and mathematics to build computers and robots that can mimic or duplicate the intelligent behavior found in humans and other thinking things. The desire to construct thinking artifacts is very old and is reflected in myths and legends as well as in the creation of lifelike art and clockwork automatons during the Renaissance. But it was not until the invention of programmable computers in the mid-twentieth century that serious work in this field could begin.


AI Research Programs

The computer scientist John McCarthy organized a conference at Dartmouth College in 1956 where the field of AI was first defined as a research program. Since that time a large number of successful AI programs and robots have been built. Robots routinely explore the depths of the ocean and distant planets, and the AI program built by International Business Machines (IBM) called Deep Blue was able to defeat the grand master chess champion Garry Kasparov after a series of highly publicized matches. As impressive as these accomplishments are, critics still maintain that AI has yet to achieve the goal of creating a program or robot that can truly operate on its own (autonomously) for any significant length of time.

AI programs and autonomous robots are not yet advanced enough to survive on their own, or interact with the world in the same way that a natural creature might. So far AI programs have not been able to succeed in solving problems outside of narrowly defined domains. For instance, Deep Blue can play chess with the greatest players on the planet but it cannot do anything else. The dream of AI is to create programs that not only play world-class chess but also hold conversations with people, interact with the outside world, plan and coordinate goals and projects, have independent personalities, and perhaps exhibit some form of consciousness.

Critics claim that AI will not achieve these latter goals. One major criticism is that traditional AI focused too much on intelligence as a process that can be completely replicated in software, and ignored the role played by the lived body that all natural intelligent beings possess (Dreyfus 1994). Alternative fields such as Embodied Cognition and Dynamic Systems Theory formed in reply to this criticism (Winograd and Flores 1987). Yet researchers in traditional AI maintain that all their field needs to succeed is more time and increased computing power.

While AI researchers have not yet created machines with human intelligence, there are many lesser AI applications in daily use in industry, the military, and even in home electronics. In this entry, the use of AI to replicate human intelligence in a machine will be called strong AI, and any other use of AI will be referred to as weak AI.


Ethical Issues of Strong AI

AI has posed and will continue to pose a number of ethical issues that researchers in the field and society at large must confront. The word computer predates computer technology and originally referred to a person employed to do routine mathematical calculations. People no longer do these jobs because computing technology is so much better at routine calculations in both speed and accuracy (Moravec 1999). Over time this trend continued, and automation by robotic and AI technologies has caused more and more jobs to disappear. One might argue, however, that many other important jobs have been created by AI technology, and that the jobs lost were not fulfilling to the workers who had them.

This is true enough, but assuming strong AI is possible, not only would manufacturing and assembly line jobs become fully automated, but upper management and strategic planning positions may be computerized as well. Just as the greatest human chess masters cannot compete with AI, so too might it become impossible for human CEOs to compete with their AI counterparts. If AI becomes sufficiently advanced, it might then radically alter the kinds of jobs available, with the potential to permanently remove a large segment of the population from the job market. In a fully automated world people would have to make decisions about the elimination of entire categories of human work and find ways of supporting the people who were employed in those industries.

Other ethical implications of AI technology also exist. From the beginning AI raised questions about what it means to be human. In 1950 the mathematician and cryptographer Alan Turing (1912–1954) proposed a test to determine whether an intelligent machine had indeed been created. If a person can have a normal conversation with a machine without being able to identify the interlocutor as a machine, then according to the Turing test the machine is intelligent (Boden 1990). In the early twenty-first century people regularly communicate with machines over the phone, and Turing tests are regularly held with successful results—as long as the topic of discussion is limited. In the past, a special status as expert thinkers has been proposed as the quality that distinguishes humans from other creatures, but with robust AI that would no longer be the case. One positive effect might be that this technology could help to better explain the place of humans in nature and what it means for something to be considered a person (Foerst 1999).

The ethical responsibility that people have toward any strong AI application is a matter that must be taken into consideration. It does not seem moral to create thinking minds and then force them to do work humans do not want to do themselves.

Finally, because AI technology deals directly with human operators, people must make decisions about what kind of ethics and morality will be programmed into these thinking machines. The scientist and science fiction writer Isaac Asimov proposed in his writings three moral imperatives that should be programmed into robots and other AI creations:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second law.

These imperatives make for good reading but are sadly lacking as a solution to the problems presented by fully autonomous robotic technologies. Asimov wrote many stories and novels (the Robot series [1940–1976] and I, Robot [1950]) that exploited unforeseen loopholes in the logic of these laws, which occasionally allowed for fatal encounters between humans and robots. For instance, what should a robot do if, in order to protect a large number of people, it must harm one human who is threatening others? It can also be argued that AI technologies have already begun to harm people in various ways and that these laws are hopelessly naïve (Idier 2000). Other researchers in the field nevertheless argue that Asimov's laws are actually relevant and at least suggest a direction to explore while designing a computational morality (Thompson 1999).

This problem is more pressing than it may seem, because many industrial countries are working to create autonomous fighting vehicles to augment the capabilities of their armed forces. Such machines will have to be programmed so that they make appropriate life and death choices. More subtle and nuanced solutions are needed, and this topic remains wide-open—widely discussed in fiction but not adequately addressed by AI and robotics researchers.


Ethical Issues of Weak AI

Even if robust AI is not possible, or the technology turns out to be a long way off, a number of vexing ethical problems remain to be confronted by researchers and technologists in the weak AI field. Instead of trying to create a machine that mimics or exactly replicates human-like intelligence, scientists may instead try to embed smaller, subtler levels of intelligence and automation into all day-to-day technologies. In 1991 Mark Weiser (1952–1999) coined the term ubiquitous computing to refer to this form of AI, but it is also sometimes called the digital revolution (Gershenfeld 1999).

Ubiquitous computing and the digital revolution involve adding computational power to everyday objects that, when working together with other semismart objects, help to automate human surroundings and hopefully make life easier (Gershenfeld 1999). For instance, scientists could embed very small computers into the packaging of food items that would network with a computer in the house and, perhaps through the Internet, remind people that they need to restock the refrigerator even when they are away from home. The system could be further automated so that it might even order the items so that one was never without them. In this way the world would be literally at the service of human beings, and the everyday items with which they interact would react intelligently to assist in their endeavors. Some form of this more modest style of AI is very likely to come into existence. Technologies are already moving in these directions through the merger of such things as mobile phones and personal digital assistants.

Again this trend is not without ethical implications. In order for everyday technologies to operate in an intelligent manner they must take note of the behaviors, wants, and desires of their owners. This means they will collect a large amount of data about each individual who interacts with them. This data might include sensitive or embarrassing information about the user that could become known to anyone with the skill to access such information. Additionally these smart technologies will help increase the trend in direct marketing that is already taking over much of the bandwidth of the Internet. Aggressive advertisement software, spying software, and computer viruses would almost certainly find their way to this new network. These issues must be thoroughly considered and public policy enacted before such technology becomes widespread.

In addition, Weiser (1999) argues that in the design of ubiquitous computing people should work with a sense of humility and reverence to make sure these devices enhance the humanness of the world, advancing fundamental values and even spirituality, rather than just focusing on efficiency. Simply put, people should make their machines more human rather than letting the technology transform human beings into something more machine-like.

A last ethical consideration is the possibility that AI may strengthen some forms of gender bias. Women in general, and women's ways of knowing in particular, have not played a large role in the development of AI technology, and it has been argued that AI is the fruit of a number of social and philosophical movements that have not been friendly to the interests of women (Adam 1998). Women are not equally represented as researchers in the field of AI, and finding a way to reverse this trend is a pressing concern. The claim that AI advances the interests of males over those of females is more radical, yet intriguing, and deserves further study.

AI continues to grow in importance. Even though researchers have not yet been able to create a mechanical intelligence rivaling or exceeding that of human beings, AI has provided an impressive array of technologies in the fields of robotics and automation. Computers are becoming more powerful in both the speed and number of operations they can achieve in any given amount of time. If humans can solve the problem of how to program machines and other devices to display advanced levels of intelligence, as well as address the many ethical issues raised by this technology, then AI may yet expand in astonishing new directions.


JOHN P. SULLINS III

SEE ALSO Artificial Morality; Artificiality; Asimov, Isaac; Automation; Computer Ethics; Robots and Robotics; Turing, Alan; Turing Tests.

BIBLIOGRAPHY

Adam, Alison. (1998). Artificial Knowing: Gender and the Thinking Machine. New York: Routledge. Adam is an AI researcher who writes from experience about the difficulties encountered by women working in AI. She also argues that if women were more involved in AI it would significantly alter and improve the results of this science. Accessible, informative, and includes a select bibliography.

Asimov, Isaac. (2004). I, Robot. New York: Bantam Books. Classic science fiction account of the social impact of robotics. The stories are very entertaining while at the same time exploring deeply philosophical and ethical questions.

Boden, Margaret A., ed. (1990). The Philosophy of Artificial Intelligence. New York: Oxford University Press. A collection of the best papers written on the philosophical debate regarding the possibility of creating AI. The coverage stops at the early 1990s, but this is a good place to start one's study of the subject. Includes a select bibliography.

Dreyfus, Hubert L. (1994). What Computers Still Can't Do. Cambridge, MA: MIT Press. A famous counterargument to the very possibility of achieving any results in strong AI. Largely dismissed by the AI community, it is still useful as an example of how one might argue that AI misses the important insights into human intelligence and consciousness that phenomenological psychology and Heideggerian philosophy might add.

Dreyfus, Hubert L., and Stuart E. Dreyfus. (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press. Written with the author's brother, who worked with the AI technology questioned in the book. The authors claim that human intelligence is not completely captured by AI technology and give numerous examples to illustrate their claims. Some of their claims, such as the suggestion that AI could never produce a computer able to beat a chess master, proved incorrect, but the book is an important first step in Dreyfus's critique of AI.

Foerst, Anne. (1999). "Artificial Sociability: From Embodied AI toward New Understandings of Personhood." Technology in Society 21(4): 373–386. Good discussion of the impact AI and robotics have on complicating the notion of personhood, that is, the conditions under which something is understood to be a person.

Gershenfeld, Neil. (1999). When Things Start to Think. New York: Henry Holt. Good general overview of the attempt to add intelligence to everyday technology. Gershenfeld is involved in this research and speaks with authority on the subject without being overly technical.

Idier, Dominic. (2000). "Science Fiction and Technology Scenarios: Comparing Asimov's Robots and Gibson's Cyberspace." Technology in Society 22: 255–272. A more up-to-date analysis of Asimov's prognostications about robotics in light of the more dystopian ideas of the science fiction author William Gibson.

Luger, George F., ed. (1995). Computation and Intelligence. Cambridge, MA: AAAI Press/MIT Press. A collection of many of the best early papers written on AI by the pioneers of the field. These papers have shaped the field of AI and continue to be important. Includes an extensive bibliography.

Moravec, Hans. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press. One of the most successful AI researchers unabashedly predicts the future of humanity and the emergence of its robotic "mind children." Deftly presents very technical arguments for the general reader.

Nilsson, Nils J. (1986). Principles of Artificial Intelligence. Los Altos, CA: Morgan Kaufmann. Good introduction to the more technical issues of AI as a science.

Searle, John R. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press. The American philosopher John Searle argues for the inability of AI to capture semantics (the meaning of words), with machines that can only manipulate syntax (the grammar and symbols of a language). If there is no mechanical way to create semantic understanding then there is no way to create thinking machines. This argument has not gone unchallenged by AI researchers and many of the entries in this bibliography attempt to refute Searle.

Thompson, Henry S. (1999). "Computational Systems, Responsibility, and Moral Sensibility." Technology in Society 21(4): 409–415. A discussion of how humans should be more sensitive to the moral implications of the computational systems they build.

Weiser, Mark. (1993). "Hot Topics: Ubiquitous Computing." IEEE Computer 26(10): 71–72. Also available from http://www.ubiq.com/hypertext/weiser/UbiCompHotTopics.html. Weiser introduces the concept of ubiquitous computing. This remains a very important defining moment for the field.

Weiser, Mark. (1999). "The Spirit of the Engineering Quest." Technology in Society 21: 355–361. Weiser argues that those who engineer devices that mediate the human-to-world interface should be especially careful that their work enhances human life rather than detracts from it.

Winograd, Terry, and Fernando Flores. (1987). Understanding Computers and Cognition: A New Foundation for Design. Boston: Addison-Wesley Publishing. Winograd and Flores argue for a form of AI that takes into account the important counterarguments to traditional AI, and instead of ignoring them, incorporates them into a better science that takes the embodiment of cognitive agents seriously.

Artificial Intelligence

views updated May 18 2018

Artificial Intelligence

Artificial intelligence (AI) refers to computer software that exhibits intelligent behavior. The term intelligence is difficult to define and has been the subject of heated debate by philosophers, educators, and psychologists for ages. Nevertheless, it is possible to enumerate many important characteristics of intelligent behavior. Intelligence includes the capacity to learn, maintain a large storehouse of knowledge, utilize commonsense reasoning, apply analytical abilities, discern relationships between facts, communicate ideas to others and understand communications from others, and perceive and make sense of the world around us. Thus, artificial intelligence systems are computer programs that exhibit one or more of these behaviors.

AI systems can be divided into two broad categories: knowledge representation systems and machine learning systems. Knowledge representation systems, also known as expert systems, provide a structure for capturing and encoding the knowledge of a human expert in a particular domain. For example, the knowledge of medical doctors might be captured in a computerized model that can be used to help diagnose patient illnesses.

MACHINE LEARNING SYSTEMS

The second category of AI, machine learning systems, creates new knowledge by finding previously unknown patterns in data. In contrast to knowledge representation approaches, which model the problem-solving structure of human experts, machine learning systems derive solutions by learning patterns in data, with little or no intervention by an expert. There are three main machine learning techniques: neural networks, induction algorithms, and genetic algorithms.

Neural Networks. Neural networks simulate the human nervous system. The concepts that guide neural network research and practice stem from studies of biological systems. These systems model the interaction between nerve cells. Components of a neural network include neurons (sometimes called processing elements), input lines to the neurons (called dendrites), and output lines from the neurons (called axons).

Neural networks are composed of richly connected sets of neurons forming layers. The neural network architecture consists of an input layer, which feeds data into the network; an output layer, which produces the network's resulting guess; and a series of one or more hidden layers, which assist in propagating signals from the input layer to the output layer.

During processing, each neuron performs a weighted sum of inputs from the neurons connecting to it; this is called activation. The neuron chooses to fire if the sum of inputs exceeds some previously set threshold value; this is called transfer.

Inputs with high weights tend to give greater activation to a neuron than inputs with low weights. The weight of an input is analogous to the strength of a synapse in a biological system. In biological systems, learning occurs by strengthening or weakening the synaptic connections between nerve cells. An artificial neural network simulates synaptic connection strength by increasing or decreasing the weight of input lines into neurons.

Neural networks are trained with a series of data points. The networks guess which response should be given, and the guess is compared against the correct answer for each data point. If errors occur, the weights into the neurons are adjusted and the process repeats itself. This learning approach is called backpropagation and is similar to statistical regression.
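
The weighted-sum-and-threshold mechanism just described can be sketched in a few lines of code. The following illustration (the function names and training data are invented for this example) trains a single neuron with a perceptron-style weight update, the simplest relative of the backpropagation approach described above:

    # A single artificial neuron: a weighted sum of inputs (activation)
    # followed by a threshold test (transfer), trained with a simple
    # perceptron-style update rule.

    def fire(weights, threshold, inputs):
        """Return 1 if the weighted sum of the inputs exceeds the threshold."""
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation > threshold else 0

    def train(samples, n_inputs, rate=0.1, epochs=50):
        """Nudge the input weights whenever the neuron's guess is wrong."""
        weights, threshold = [0.0] * n_inputs, 0.5
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - fire(weights, threshold, inputs)
                # Strengthening or weakening an input line plays the role
                # of changing a synapse's strength in a biological system.
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        return weights, threshold

    # Toy task: learn to fire only when both inputs are on (logical AND).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, threshold = train(data, n_inputs=2)
    print([fire(weights, threshold, x) for x, _ in data])  # prints [0, 0, 0, 1]

A real network chains many such neurons into layers and adjusts all of their weights at once, which is where the backpropagation procedure comes in.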

Neural networks are used in a wide variety of business problems, including optical character recognition, financial forecasting, market demographics trend assessment, and various robotics applications.

Induction Algorithms. Induction algorithms form another approach to machine learning. In contrast to neural networks (which are highly mathematical in nature), induction approaches tend to involve symbolic data. As the name implies, these algorithms work by implementing inductive reasoning approaches. Induction is a reasoning method that can be characterized as learning by example. Unlike rule-based deduction, induction begins with a set of observations and constructs rules to account for these observations. Inductive reasoning attempts to find general patterns that can fully explain the observations. The system is presented with a large set of data consisting of several input variables and one decision variable. The system constructs a decision tree by recursively partitioning the data set based on the variables that best distinguish between the data elements. That is, it attempts to partition the data so that each partition contains data with the same value for the decision variable. It does this by selecting the input variables that do the best job of dividing the data set into homogeneous partitions. For example, consider Figure 2, which contains a data set pertaining to decisions that were made on credit loan applications.

Figure 2

         Salary    Credit History    Current Assets    Loan Decision
    a)   High      Poor              High              Accept
    b)   High      Poor              Low               Reject
    c)   Low       Poor              Low               Reject
    d)   Low       Good              Low               Accept
    e)   Low       Good              High              Accept
    f)   High      Good              Low               Accept

An induction algorithm would infer the rules in Figure 3 to explain this data.

Figure 3

If the credit history is good, then accept the loan application
If the credit history is poor and current assets are high, then accept the loan application
If the credit history is poor and current assets are low, then reject the loan application

As this example illustrates, an induction algorithm is able to induce rules that identify the general patterns in data. In doing so, these algorithms can prune out irrelevant or unnecessary attributes. In the example above, salary was irrelevant in terms of explaining the loan decision of the data set.

Induction algorithms are often used for data mining applications, such as marketing problems that help companies decide on the best market strategies for new product lines. Data mining is a common service included in data warehouses, which are frequently used as decision support tools.
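
The partitioning idea can be demonstrated with a small sketch. The program below is illustrative only (the attribute names are invented, and a production system such as ID3 would use an entropy measure rather than this simple error count): it recursively splits the credit records from Figure 2 on whichever attribute yields the most homogeneous partitions, then prints the resulting rules.

    # Toy rule induction over the credit data in Figure 2.
    RECORDS = [
        {"salary": "High", "history": "Poor", "assets": "High", "decision": "Accept"},
        {"salary": "High", "history": "Poor", "assets": "Low",  "decision": "Reject"},
        {"salary": "Low",  "history": "Poor", "assets": "Low",  "decision": "Reject"},
        {"salary": "Low",  "history": "Good", "assets": "Low",  "decision": "Accept"},
        {"salary": "Low",  "history": "Good", "assets": "High", "decision": "Accept"},
        {"salary": "High", "history": "Good", "assets": "Low",  "decision": "Accept"},
    ]

    def impurity(records, attr):
        """Errors made if each partition predicted its majority decision."""
        errors = 0
        for value in {r[attr] for r in records}:
            part = [r["decision"] for r in records if r[attr] == value]
            errors += len(part) - max(part.count(d) for d in set(part))
        return errors

    def induce(records, attrs, conditions=()):
        decisions = {r["decision"] for r in records}
        if len(decisions) == 1 or not attrs:  # homogeneous, or nothing left to split on
            print("If", " and ".join(conditions) or "(always)",
                  "then", decisions.pop().lower(), "the loan application")
            return
        best = min(attrs, key=lambda a: impurity(records, a))  # best splitter
        for value in sorted({r[best] for r in records}):
            subset = [r for r in records if r[best] == value]
            induce(subset, [a for a in attrs if a != best],
                   conditions + (f"{best} is {value.lower()}",))

    induce(RECORDS, ["salary", "history", "assets"])

Run on the six records above, the sketch selects credit history first and reproduces the three rules of Figure 3, ignoring salary entirely.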

Genetic Algorithms. Genetic algorithms use an evolutionary approach to solve optimization problems. These are based on Darwin's theory of evolution, and in particular the notion of survival of the fittest. Concepts such as reproduction, natural selection, mutation, chromosome, and gene are all included in the genetic algorithm approach.

Genetic algorithms are useful in optimization problems that must select from a very large number of possible solutions to a problem. A classic example of this is the traveling salesperson problem: a salesperson must visit each of n cities exactly once and return to the point of origin, and the problem is to find the shortest route for the complete tour. For such a problem there are (n − 1)! possible solutions, or (n − 1) factorial. For six cities, this would mean 5 × 4 × 3 × 2 × 1 = 120 possible solutions. For 100 cities, there would be 99! possible solutions, an astronomically high number.

Obviously, for this type of problem, a brute-force method of exhaustively comparing all possible solutions will not work. It requires the use of heuristic methods, of which the genetic algorithm is a prime example. For the traveling salesperson problem, a chromosome would be one possible route through the cities, and a gene would be a city at a particular position on the chromosome. The genetic algorithm would start with an initial population of chromosomes (routes) and measure each according to a fitness function (the total distance traveled in the route). Those with the best fitness values would be selected, and those with the worst would be discarded. Then random pairs of surviving chromosomes would mate, a process called crossover. This involves swapping city positions between the pair of chromosomes, resulting in a pair of child chromosomes. In addition, some random subset of the population would be mutated, such that some portion of the sequence of cities would be altered. The process of selection, crossover, and mutation results in a new population for the next generation. This procedure is repeated through as many generations as necessary to obtain a good solution.

Genetic algorithms are very effective at finding good solutions to optimization problems. Scheduling, configuration, and routing problems are good candidates for a genetic algorithm approach. Although genetic algorithms do not guarantee the absolute best solution, they do consistently arrive at very good solutions in a relatively short period of time.
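
The selection-crossover-mutation cycle is easy to demonstrate in miniature. The sketch below (the city coordinates, population sizes, and rates are all invented for illustration) evolves routes for a small traveling salesperson problem; it uses an order-preserving crossover so that each child remains a valid permutation of the cities, which is one common way of implementing the mating step described above.

    import random

    # Toy traveling salesperson problem: evolve a short round trip.
    CITIES = [(0, 0), (2, 7), (5, 1), (6, 6), (8, 3), (1, 4)]  # invented coordinates

    def tour_length(route):
        """Fitness measure: total distance, including the return to the start."""
        return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                    (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                   for a, b in zip(route, route[1:] + route[:1]))

    def crossover(mom, dad):
        """Copy a slice of one parent and fill in the remaining cities in the
        order they appear in the other parent, keeping the child a valid tour."""
        i, j = sorted(random.sample(range(len(mom)), 2))
        middle = mom[i:j]
        rest = [city for city in dad if city not in middle]
        return rest[:i] + middle + rest[i:]

    def mutate(route, rate=0.2):
        """Occasionally swap two cities in the sequence."""
        if random.random() < rate:
            a, b = random.sample(range(len(route)), 2)
            route[a], route[b] = route[b], route[a]
        return route

    population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(30)]
    for generation in range(100):
        population.sort(key=tour_length)       # selection: fittest routes first
        survivors = population[:10]            # discard the worst routes
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(20)]
        population = survivors + children

    best = min(population, key=tour_length)
    print(best, round(tour_length(best), 2))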

AI IN THE TWENTY-FIRST CENTURY

Artificial intelligence systems provide a key component in many computer applications that serve the world of business. In fact, AI is so prevalent that many people encounter such applications on a daily basis without even being aware of it.

One of the most ubiquitous uses of AI can be found in network servers that route electronic mail and in email spam-filtering devices. Expert systems are routinely utilized in the medical field, where they take the place of doctors to assess the results of tests like mammograms or electrocardiograms; credit card companies, banks, and insurance firms commonly use neural networks to help detect fraud. These AI systems can, for example, monitor consumer-spending habits, detect patterns in the data, and alert the company when uncharacteristic patterns arise. Genetic algorithms serve logistics planning functions in airports, factories, and even military operations, where they are used to help solve incredibly complex resource-allocation problems. And perhaps most familiar, many companies employ AI systems to help monitor calls in their customer service call centers. These systems can analyze the emotional tones of callers' voices or listen for specific words, and route those calls to human supervisors for follow-up attention.

Artificial intelligence is routinely used by enterprises in supply chain management through the use of a set of intelligent software agents that are responsible for one or more aspects of the supply chain. These agents interact with one another in the planning and execution of their tasks. For instance, a logistics agent is responsible for coordinating the factories, suppliers, and distribution centers. This agent provides inputs to the transportation agent, which is responsible for assigning and scheduling transportation resources. The agents coordinate their activities with the optimization of the supply chain as the common goal.

Customer Relationship Management uses artificial intelligence to connect product offers and promotions with consumer desires. AI software profiles customer behavior by finding patterns in transaction data. The software generates algorithms for evaluating different data characteristics, such as what products are frequently bought together or the time of year a product sells the most. Thus, the software is able to use historical data to predict customer behavior in the future.
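
A toy version of the "bought together" pattern finding might look like the following sketch (the transactions are invented; real systems mine millions of records with more sophisticated association-rule algorithms):

    from collections import Counter
    from itertools import combinations

    # Invented transaction data: each set is one customer's purchase.
    baskets = [
        {"bread", "milk", "eggs"},
        {"bread", "milk"},
        {"milk", "eggs", "coffee"},
        {"bread", "milk", "coffee"},
    ]

    # Count how often each pair of products appears in the same basket.
    pair_counts = Counter(
        pair for basket in baskets for pair in combinations(sorted(basket), 2)
    )

    # Pairs found in at least half of all transactions are candidates
    # for joint promotions or recommendations.
    for pair, count in pair_counts.most_common():
        if count / len(baskets) >= 0.5:
            print(pair, f"appears in {count} of {len(baskets)} baskets")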

Artificial intelligence is also used on Wall Street in the selection of stocks. Analysts use AI software to discover trading patterns. For instance, an algorithm could find that the price movements of two stocks are similar; when the stocks diverge a trader might buy one stock and sell the other on the assumption that their prices will return to the historical norm. As the use of trading algorithms becomes more commonplace, there is less potential for profit.

Although computer scientists have thus far failed to create machines that can function with the complex intelligence of human beings, they have succeeded in creating a wide range of AI applications that make people's lives simpler and more convenient.

SEE ALSO Expert Systems

BIBLIOGRAPHY

Chokshi, Kaustubh. "Artificial Intelligence Enters the Mainstream." Domain-b, April 2007. Available from: http://www.domainb.com/infotech/itfeature/20070430_Intelligence.htm.

Dhar, V., and R. Stein. Seven Methods for Transforming Corporate Data into Business Intelligence. Upper Saddle River, NJ: Prentice Hall, 1997.

Duhigg, Charles. "Artificial Intelligence Applied Heavily to Picking Stocks." International Herald Tribune, 23 November 2006. Available from: www.iht.com/articles/2006/11/23/business/trading.php.

"Hot Topics: Artificial Intelligence." BBC Online. Available from: http://www.bbc.co.uk/science/hottopics/ai/.

Kahn, Jennifer. "It's Alive! From Airport Tarmacs to Online Job Banks to Medical Labs, Artificial Intelligence Is Everywhere." Wired, March 2002. Available from: http://www.wired.com/wired/archive/10.03/everywhere.html.

Menzies, Tim. "21st Century AI: Proud, Not Smug." IEEE Intelligent Systems, May/June 2003.

Rigby, Darrell. Management Tools and Trends. Boston: Bain & Company, 2007.

Russell, S., and P. Norvig. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall, 2002.

Sabariraian, A. "Scope of Artificial Intelligence in Business." International Herald Tribune, September 2008. Available from: http://www.articlesbase.com/management-articles/scope-of-artificial-intelligence-in-business328608.html.

Van, Jon. "Computers Gain Power, But It's Not What You Think." Chicago Tribune, 20 March 2005.

Artificial Intelligence

views updated Jun 11 2018

ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is a scientific field whose goal is to understand intelligent thought processes and behavior and to develop methods for building computer systems that act as if they are "thinking" and can learn for themselves. Although the study of intelligence is the subject of other disciplines such as philosophy, physiology, psychology, and neuroscience, people in those disciplines have begun to work with computational scientists to build intelligent machines. The computers offer a vehicle for testing theories of intelligence, which in turn enable further exploration and understanding of the concept of intelligence.

The growing information needs of the electronic age require sophisticated mechanisms for information processing. As Richard Forsyth and Roy Rada (1986) point out, AI can enhance information processing applications by enabling the computer systems to store and represent knowledge, to apply that knowledge in problem solving through reasoning mechanisms, and finally to acquire new knowledge through learning.

History

The origin of AI can be traced to the end of World War II, when people started using computers to solve nonnumerical problems. The first attempt to create intelligent machines was made by Warren McCulloch and Walter Pitts in 1943, when they proposed a model of artificial networked neurons and claimed that properly defined networks could learn, thus laying the foundation for neural networks.

In 1950, Alan Turing published "Computing Machinery and Intelligence," where he explored the question of whether machines can think. He also proposed the Turing Test as an operational measure of intelligence for computers. The test requires that a human observer interrogates (i.e., interacts with) a computer and a human through a Teletype. Both the computer and the human try to persuade the observer that she or he is interacting with a human at the other end of the line. The computer is considered intelligent if the observer cannot tell the difference between the computer responses and the human responses.

In 1956, John McCarthy coined the term "artificial intelligence" at a conference where the participants were researchers interested in machine intelligence. The goal of the conference was to explore whether intelligence can be precisely defined and specified in order for a computer system to simulate it. In 1958, McCarthy also invented LISP, a high-level AI programming language that continues to be used in AI programs. Other languages used for writing AI programs include Prolog, C, and Java.

Approaches

Stuart Russell and Peter Norvig (1995) have identified the following four approaches to the goals of AI: (1) computer systems that act like humans, (2) programs that simulate the human mind, (3) knowledge representation and mechanistic reasoning, and (4) intelligent or rational agent design. The first two approaches focus on studying humans and how they solve problems, while the latter two approaches focus on studying real-world problems and developing rational solutions regardless of how a human would solve the same problems.

Programming a computer to act like a human is a difficult task and requires that the computer system be able to understand and process commands in natural language, store knowledge, retrieve and process that knowledge in order to derive conclusions and make decisions, learn to adapt to new situations, perceive objects through computer vision, and have robotic capabilities to move and manipulate objects. Although this approach was inspired by the Turing Test, most programs have been developed with the goal of enabling computers to interact with humans in a natural way rather than passing the Turing Test.

Some researchers focus instead on developing programs that simulate the way in which the human mind works on problem-solving tasks. The first attempts to imitate human thinking were the Logic Theorist and General Problem Solver programs developed by Allen Newell and Herbert Simon. Their main interest was in simulating human thinking rather than solving problems correctly. Cognitive science is the interdisciplinary field that studies the human mind and intelligence. The basic premise of cognitive science is that the mind uses representations that are similar to computer data structures and computational procedures that are similar to computer algorithms that operate on those structures.

Other researchers focus on developing programs that use logical notation to represent a problem and use formal reasoning to solve a problem. This is called the "logicist approach" to developing intelligent systems. Such programs require huge computational resources to create vast knowledge bases and to perform complex reasoning algorithms. Researchers continue to debate whether this strategy will lead to computer problem solving at the level of human intelligence.

Still other researchers focus on the development of "intelligent agents" within computer systems. Russell and Norvig (1995, p. 31) define these agents as "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." The goal for computer scientists working in this area is to create agents that incorporate information about the users and the use of their systems into the agents' operations.

Fundamental System Issues

A robust AI system must be able to store knowledge, apply that knowledge to the solution of problems, and acquire new knowledge through experience. Among the challenges that face researchers in building AI systems, there are three that are fundamental: knowledge representation, reasoning and searching, and learning.

Knowledge Representation

What AI researchers call "knowledge" appears as data at the level of programming. Data becomes knowledge when a computer program represents and uses the meaning of some data. Many knowledge-based programs are written in the LISP programming language, which is designed to manipulate data as symbols.

Knowledge may be declarative or procedural. Declarative knowledge is represented as a static collection of facts with a set of procedures for manipulating the facts. Procedural knowledge is described by executable code that performs some action; it refers to knowing how to do something. Usually, both kinds of knowledge representation are needed to capture and represent knowledge in a particular domain.

First-order predicate calculus (FOPC) is the best-understood scheme for knowledge representation and reasoning. In FOPC, knowledge about the world is represented as objects and relations between objects. Objects are real-world things that have individual identities and properties, which are used to distinguish the things from other objects. In a first-order predicate language, knowledge about the world is expressed in terms of sentences that are subject to the language's syntax and semantics.
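
For example (an illustration of the notation, not a sentence from any particular system), the facts "Fido is a dog" and "every dog is a mammal" could be written as Dog(Fido) and ∀x (Dog(x) → Mammal(x)); a reasoning procedure could then derive the new sentence Mammal(Fido). Here Fido is an object, while Dog and Mammal are predicates expressing properties of objects.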

Reasoning and Searching

Problem solving can be viewed as searching. One common way to deal with searching is to develop a production-rule system. Such systems use rules that tell the computer how to operate on data and control mechanisms that tell the computer how to follow the rules. For example, a very simple production-rule system has two rules: "if A then B" and "if B then C." Given the fact (data) A, an algorithm can chain forward to B and then to C. If C is the solution, the algorithm halts.

Matching techniques are frequently an important part of a problem-solving strategy. In the above example, the rules are activated only if A and B exist in the data. The match between the A and B in the data and the A and B in the rule may not have to be exact, and various deductive and inductive methods may be used to try to ascertain whether or not an adequate match exists.
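
Such a production-rule system can be sketched very compactly. The toy forward-chainer below is illustrative only; its facts and rules are the A, B, and C of the example above, and it repeatedly fires any rule whose conditions match the data until the goal appears or nothing new can be concluded:

    # Toy production-rule system: forward chaining from facts to a goal.
    RULES = [({"A"}, "B"),   # if A then B
             ({"B"}, "C")]   # if B then C

    def forward_chain(facts, rules, goal):
        """Fire rules until the goal is derived or no rule adds a new fact."""
        facts = set(facts)
        changed = True
        while changed and goal not in facts:
            changed = False
            for conditions, conclusion in rules:
                # Control mechanism: a rule fires only when all of its
                # conditions exactly match facts already in working memory.
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return goal in facts, facts

    print(forward_chain({"A"}, RULES, goal="C"))  # C is derived from A via B

The matching in this sketch is exact; as noted above, practical systems often allow approximate matches established by deductive or inductive methods.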

Generate-and-test is another approach to searching for a solution. The user's problem is represented as a set of states, including a start state and a goal state. The problem solver generates a state and then tests whether it is the goal state. Based on the results of the test, another state is generated and then tested. In practice, heuristics, or problem-specific rules of thumb, must be found to expedite and reduce the cost of the search process.
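
The following sketch shows one way to realize generate-and-test with a heuristic. The puzzle (reaching a goal number from a start number using two operators) is invented for illustration, and the heuristic is simply the distance from the goal:

    import heapq

    # Generate-and-test: from a start state, repeatedly generate successor
    # states and test each against the goal, preferring states that the
    # heuristic judges to be closer to the goal.
    def solve(start, goal):
        frontier = [(abs(goal - start), start, [start])]  # (heuristic, state, path)
        seen = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)      # pick most promising state
            if state == goal:                             # test: is this the goal?
                return path
            if state in seen or state > goal * 2:         # prune hopeless states
                continue
            seen.add(state)
            for successor in (state * 2, state + 3):      # generate new states
                heapq.heappush(frontier,
                               (abs(goal - successor), successor, path + [successor]))
        return None

    print(solve(1, 11))  # e.g., [1, 4, 8, 11]: add 3, double, add 3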

Learning

The advent of highly parallel computers in the late 1980s enabled machine learning through neural networks and connectionist systems, which simulate the structure and operation of the brain. Parallel computers can operate together on a task, with each computer doing only part of the task. Such systems use a network of interconnected processing elements called "units." Each unit corresponds to a neuron in the human brain and can be in an "on" or "off" state. In such a network, the input to one unit is the output of another unit. Such networks of units can be programmed to represent short-term and long-term working memory and also to represent and perform logical operations (e.g., comparisons between numbers and between words).

A simple model of a learning system consists of four components: the physical environment where the learning system operates, the learning element, the knowledge base, and the performance element. The environment supplies some information to the learning element, the learning element uses this information to make improvements in an explicit knowledge base, and the performance element uses the knowledge base to perform its task (e.g., play chess, prove a theorem). The learning element is a mechanism that attempts to discover correct generalizations from raw data or to determine specific facts using general rules. It processes information using induction and deduction. In inductive information processing, the system determines general rules and patterns from repeated exposure to raw data or experiences. In deductive information processing, the system determines specific facts from general rules (e.g., theorem proving using axioms and other proven theorems). The knowledge base is a set of facts about the world, and these facts are expressed and stored in a computer system using a special knowledge representation language.

Applications

There are two types of AI applications: stand-alone AI programs and programs that are embedded in larger systems, where they add capabilities for knowledge representation, reasoning, and learning. Some examples of AI applications include robotics, computer vision, natural-language processing, and expert systems.

Robotics

Robotics is the intelligent connection of perception by the computer to its actions. Programs written for robots perform functions such as trajectory calculation, interpretation of sensor data, executions of adaptive control, and access to databases of geometric models. Robotics is a challenging AI application because the software has to deal with real objects in real time. An example of a robot guided by humans is the Sojourner surface rover that explored the area of the Red Planet where the Mars Pathfinder landed in 1997. It was guided in real time by NASA controllers. Larry Long and Nancy Long (2000) suggest that other robots can act autonomously, reacting to changes in their environment without human intervention. Military cruise missiles are an example of autonomous robots that have intelligent navigational capabilities.

Computer Vision

The goal of a computer vision system is to interpret visual data so that meaningful action can be based on that interpretation. The problem, as John McCarthy points out (2000), is that the real world has three dimensions while the input to cameras on which computer action is based represents only two dimensions. The three-dimensional characteristics of the image must be determined from various two-dimensional manifestations. To detect motion, a chronological sequence of images is studied, and the image is interpreted in terms of high-level semantic and pragmatic units. More work is needed in order to be able to represent three-dimensional data (easily perceived by the human eye) to the computer. Advancements in computer vision technology will have a great effect on creating mobile robots. While most robots are stationary, some mobile robots with primitive vision capability can detect objects on their path but cannot recognize them.

Natural-Language Processing

Language understanding is a complex problem because it requires programming to extract meaning from sequences of words and sentences. At the lexical level, the program uses words, prefixes, suffixes, and other morphological forms and inflections. At the syntactic level, it uses a grammar to parse a sentence. Semantic interpretation (i.e., deriving meaning from a group of words) depends on domain knowledge to assess what an utterance means. For example, "Let's meet by the bank to get a few bucks" means one thing to bank robbers and another to weekend hunters. Finally, to interpret the pragmatic significance of a conversation, the computer needs a detailed understanding of the goals of the participants in the conversation and the context of the conversation.

Expert Systems

Expert systems consist of a knowledge base and mechanisms/programs to infer meaning about how to act using that knowledge. Knowledge engineers and domain experts often create the knowledge base. One of the first expert systems, MYCIN, was developed in the mid-1970s. MYCIN employed a few hundred if-then rules about meningitis and bacteremia in order to deduce the proper treatment for a patient who showed signs of either of those diseases. Although MYCIN did better than students or practicing doctors, it did not contain as much knowledge as physicians routinely need to diagnose the disease.

Although Alan Turing's prediction that computers would be able to pass the Turing Test by the year 2000 was not realized, much progress has been made, and novel AI applications have been developed, such as industrial robots, medical diagnostic systems, speech recognition in telephone systems, and chess playing (where IBM's Deep Blue supercomputer defeated world champion Garry Kasparov).

Conclusion

The success of any computer system depends on its being integrated into the workflow of those who are to use it and on its meeting of user needs. A major future direction for AI concerns the integration of AI with other systems (e.g., database management, real-time control, or user interface management) in order to make those systems more usable and adaptive to changes in user behavior and in the environment where they operate.

See also: Computer Software; Computing; Human-Computer Interaction; Language Acquisition; Language Structure; Symbols.

Bibliography

Forsyth, Richard, and Rada, Roy. (1986). Machine Learning: Expert Systems and Information Retrieval. London: Horwood.

Gardner, Howard. (1993). Multiple Intelligences: The Theory in Practice. New York: Basic Books.

Long, Larry, and Long, Nancy. (2000). Computers. Upper Saddle River, NJ: Prentice-Hall.

McCarthy, John. (2000). "What Is Artificial Intelligence?" <http://www-formal.stanford.edu/jmc/whatisai/whatisai.html>.

Russell, Stuart, and Norvig, Peter. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.

Turing, Alan M. (1950). "Computing Machinery and Intelligence." Mind 59: 433–460.

Antonios Michailidis

Roy Rada

Artificial Intelligence

views updated May 14 2018

ARTIFICIAL INTELLIGENCE

In simplest terms, artificial intelligence (AI) is manufactured thinking: a machine's ability to think. This process is deemed "artificial" because once it is programmed it occurs without human intervention. AI is generally applied to the theory and practical application of a computer's ability to think like humans do. AI capability is designated as either strong AI or weak AI. Strong AI is a computer system that actively employs consciousness, a machine that can truly reason and solve problems independently. Critics of AI systems argue that such a machine is unrealistic, and that even if it were possible, a truly artificially intelligent machine would be unwanted.

Popular perceptions of AI have been dramatized in movies such as 2001: A Space Odyssey (1968), in which a starship computer named HAL 9000 is capable of speech and facial recognition, natural language processing, interpreting emotions, and expressing reason. Another famous make-believe computer was the star of WarGames (1983). In this movie, the line "Shall we play a game?" allows the teenage hero to persuade the computer to play a game rather than start World War III. In both examples, the computers undertook independent actions that were potentially harmful to their human creators. This is the reason most often given for not creating strong AI machines.

Modern working applications of AI are examples of weak AI. Current AI research focuses on developing computers that use intelligent programming to automate routine human tasks. For example, many customer service telephone banks are automated by AI. When a recorded voice asks for a "yes" or "no" response or for the caller to choose a menu item by saying specific words, the computer on the other end of the telephone is using weak AI to make a decision and select the appropriate response based on caller input. These computers are trained to recognize speech patterns, dialects, accents, and replacement words such as "oh" rather than "zero" for the number 0.

Long before the development of computers, the notion that thinking was a form of computation motivated the formalization of logic as a type of rational thought. These efforts continue today. Graph theory provided the architecture for searching a solution space for a problem. Operations research, with its focus on optimization algorithms, uses graph theory to solve complex decision-making problems.

PIONEERS OF AI

AI uses syllogistic logic, which was first postulated by Aristotle. This logic is based on deductive reasoning. For example, if A equals B, and B equals C, then A must also equal C. Throughout history, the nature of syllogistic logic and deductive reasoning was shaped by grammarians, mathematicians, and philosophers. When computers were developed, programming languages used similar logical patterns to support software applications. Terms such as cybernetics and robotics were used to describe collective intelligence approaches and led to the development of AI as an experimental field in the 1950s.

Allen Newell and Herbert Simon pioneered the first AI laboratory at Carnegie Mellon University in the 1950s. John McCarthy and Marvin Minsky of the Massachusetts Institute of Technology opened their original AI lab in 1959 to write AI decision-making software. The best-known name in the AI community, however, is Alan Turing (1912–1954). Turing was a mathematician, philosopher, and cryptographer and is often credited as the founder of computer science as a discipline separate from mathematics. He contributed to the debate over whether a machine could think by developing the Turing test. The Turing test uses a human judge engaged in remote conversation with two parties: another human and a machine. If the judge cannot tell which party is the human, the machine passes the test.

Originally, teletype machines were used to maintain the anonymity of the parties; today, IRC (Internet relay chat) is used to test the linguistic capability of AI engines. Linguistic robots called Chatterbots (such as Jabberwacky) are very popular programs that allow an individual to converse with a machine and demonstrate machine intelligence and reasoning.

The Defense Advanced Research Projects Agency, which played a significant role in the birth of the Internet by funding ARPANET, also funded AI research in the early 1980s. Nevertheless, when results were not immediately useful for military application, funding was cut. Since then, AI research has moved to other areas, including robotics, computer vision, and other practical engineering tasks.

AN EVOLUTION OF APPLICATIONS

One of the early milestones in AI was Newell and Simon's General Problem Solver (GPS). The program was designed to imitate human problem-solving methods. This and other developments such as Logic Theorist and the Geometry Theorem Prover generated enthusiasm for the future of AI. Simon went so far as to assert that in the near-term future the problems that computers could solve would be coextensive with the range of problems to which the human mind has been applied.

Difficulties in achieving this objective soon began to manifest themselves. New research based on earlier successes encountered problems of intractability. A search for alternative approaches led to attempts to solve typically occurring cases in narrow areas of expertise. This prompted the development of expert systems, which reach conclusions by applying reasoning techniques based on sets of rules. A seminal model was MYCIN, developed to diagnose blood infections. Having about 450 rules, MYCIN was able to outperform many experts. This and other expert systems research led to the first commercial expert system, R1, implemented at Digital Equipment Corporation (DEC) to help configure client orders for new mainframe and minicomputer systems. R1's implementation was estimated to save DEC about $40 million per year.

Other classic systems include the PROSPECTOR program for determining the probable location and type of ore deposits and the INTERNIST program for performing patient diagnosis in internal medicine.

THE ROLE OF AI IN COMPUTER SCIENCE

While precise definitions are still the subject of debate, AI may be usefully thought of as the branch of computer science that is concerned with the automation of intelligent behavior. The intent of AI is to develop systems that have the ability to perceive and to learn, to accomplish physical tasks, and to emulate human decision making. AI seeks to design and develop intelligent agents as well as to understand them.

AI research has proven to be the breeding ground for computer science subdisciplines such as pattern recognition, image processing, neural networks, natural language processing, and game theory. For example, optical character recognition software that transcribes handwritten characters into typed text (notably with tablet personal computers and personal digital assistants) was initially a focus of AI research.

Additionally, expert systems used in business applications owe their existence to AI. Manufacturing companies use inventory applications that track both production levels and sales to determine when and how much of specific supplies are needed to produce orders in the pipeline. Genetic algorithms are employed by financial planners to assess the best combination of investment opportunities for their clients. Other examples include data mining applications, surveillance programs, and facial recognition applications.

Multiagent systems are also based on AI research. Use of these systems has been driven by the recognition that intelligence may be reflected by the collective behaviors of large numbers of very simple interacting members of a community of agents. These agents can be computers, software modules, or virtually any object that can perceive aspects of its environment and proceed in a rational way toward accomplishing a goal.

Four types of systems will have a substantial impact on applications: intelligent simulation, information-resource specialists, intelligent project coaches, and robot teams.

Intelligent simulations generate realistic simulated worlds that enable extensive, affordable training and education that can be made available any time and anywhere. Examples might be hurricane crisis management, exploration of the impacts of different economic theories, tests of products on simulated customers, and testing of technological design features through simulation that would cost millions of dollars to test with an actual prototype.

Information-resource specialist systems (IRSS) will enable easy access to information related to a specific problem. For instance, a rural doctor whose patient presents with a rare condition might use IRSS to assess competing treatments or identify new ones. An educator might find relevant background materials, including information about similar courses taught elsewhere.

Intelligent project coaches (IPCs) could function as coworkers, assisting and collaborating with design or operations teams for complex systems. Such systems could recall the rationale of previous decisions and, in times of crisis, explain the methods and reasoning previously used to handle that situation. An IPC for aircraft design could enhance collaboration by keeping communication flowing among the large, distributed design staff, the program managers, the customer, and the subcontractors.

Robot teams could contribute to manufacturing by operating in a dynamic environment with minimal instrumentation, thus providing the benefits of economies of scale. They could also participate in automating sophisticated laboratory procedures that require sensing, manipulation, planning, and transport. The AI robots could work in dangerous environments with no threat to their human builders.

SUMMARY

A variety of disciplines have influenced the development of AI. These include philosophy (logic), mathematics (computability, algorithms), psychology (cognition), engineering (computer hardware and software), and linguistics (knowledge representation and natural-language processing). As AI continues to redefine itself, the practical application of the field will change.

AI supports national competitiveness as it depends increasingly on capacities for accessing, processing, and analyzing information. The computer systems used for such purposes must also be intelligent. Health-care providers require easy access to information systems so they can track health-care delivery and identify the most effective medical treatments for their patients' conditions. Crisis management teams must be able to explore alternative courses of action and make critical decisions. Educators need systems that adapt to a student's individual needs and abilities. Businesses require flexible manufacturing and software design aids to maintain their leadership position in information technology, and to regain it in manufacturing. AI will continue to evolve toward a rational, logical machine presence that will support and enhance human endeavors.

see also Information Processing; Interactive Technology

Mark J. Snyder

Lisa E. Gueldenzoph

Artificial Intelligence

views updated May 14 2018

Artificial intelligence

Artificial intelligence (AI) is a subfield of computer science that focuses on creating computer software that imitates human learning and reasoning. Computers can out-perform people when it comes to storing information, solving numerical problems, and doing repetitive tasks. Computer programmers originally designed software that accomplished these tasks by completing algorithms, or clearly defined sets of instructions. In contrast, programmers design AI software to give the computer only the problem, not the steps necessary to solve it.

Overview of artificial intelligence

All AI programs are built on two foundations: a knowledge base and an inferencing capability (inferencing means to draw a conclusion based on facts and prior knowledge). A knowledge base is made up of many different pieces of information: facts, concepts, theories, procedures, and relationships. Where conventional computer software must follow a strictly logical series of steps to reach a conclusion (an algorithm), AI software uses the techniques of search and pattern matching. The computer is given some initial information and then searches the knowledge base for specific conditions or patterns that fit the problem to be solved. This special ability of AI programs, reaching a solution based on facts rather than on a preset series of steps, is what most closely resembles the thinking function of the human brain. In addition to problem solving, AI has many applications, including expert systems, natural language processing, and robotics.

Expert systems. The expert system is an AI program that contains the essential knowledge of a particular specialty or field, such as medicine, law, or finance. A simple database containing information on a particular subject can only give the user independent facts about the subject. An expert system, on the other hand, uses reasoning to draw conclusions from stored information. Expert systems are intended to act as intelligent assistants to human experts.
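A common way such reasoning is implemented is forward chaining: if-then rules are applied to the stored facts over and over until new conclusions stop appearing. The Python sketch below shows only the shape of the idea; the rules and facts are invented for illustration and are not medical advice.

    # Hypothetical if-then rules: (set of required facts, fact to conclude).
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    facts = {"fever", "cough", "short_of_breath"}

    # Forward chaining: keep firing rules until no new conclusions appear.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # includes the derived conclusions, not just the raw inputs

Note how the second rule fires only because the first rule's conclusion was added to the facts; this chaining of intermediate conclusions is what separates an expert system from a simple database lookup.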

Natural language processing. Most conventional computer languages consist of a combination of symbols, numbers, and some words. These complex languages may take several years for a computer user to master. Computers programmed to respond to our natural language (our everyday speech) are easier and more effective to use. In its simplest form, a natural language processing program works like this: a computer user types a sentence, phrase, or words on the keyboard. After searching its knowledge base for references to every word, the program then responds appropriately.

An example of a computer with a natural language processor is the computerized card catalog available in many public libraries. If you want a list of books on a specific topic or subject, you type in the appropriate phrase. You are asking the computer, in English, to tell you what is available on the topic. The computer usually responds in a very short time, in English, with a list of books along with call numbers so you can find what you need.
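A bare-bones sketch of the keyword lookup such a catalog program might perform is shown below. It is a simplification, assuming a tiny invented catalog; the titles, call numbers, and the respond function are all made up for this example.

    # Invented catalog entries: topic keyword -> list of (title, call number).
    catalog = {
        "robots": [("Introduction to Robotics", "629.892 INT")],
        "chess":  [("Computer Chess Methods", "794.172 COM")],
    }

    def respond(sentence):
        """Look up every word of the request and reply in plain English."""
        words = sentence.lower().split()
        hits = [entry for word in words for entry in catalog.get(word, [])]
        if not hits:
            return "Sorry, I found nothing on that topic."
        return "\n".join(f"{title} (call number {number})"
                         for title, number in hits)

    print(respond("Show me books about robots"))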

Words to Know

Algorithm: Clearly defined set of instructions for solving a problem in a fixed number of steps.

Expert system: AI program that contains the essential knowledge of a particular specialty or field such as medicine or law.

Natural language: Language first learned as a child; native tongue.

Robotics: Study of robots, machines that can be programmed to perform manual duties.

Software: Set of programs or instructions controlling a computer's functions.

Robotics. Robotics is the study of robots, which are machines that can be programmed to perform manual duties. Most robots in use today perform various repetitive tasks in an industrial setting. These robots typically are used in factory assembly lines or in hazardous waste facilities, handling substances far too dangerous for people to work with safely.

Research is being conducted in the field of intelligent robots, those that can understand their environment and respond to any changes. AI programs allow a robot to gather information about its surroundings by using a contact sensor to physically touch an object, a camera to record visual observations, or an environmental sensor to note changes in temperature or radiation (energy in the form of waves or particles).
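The control loop of such a robot can be sketched as "sense, then act." The Python fragment below is a rough illustration under invented assumptions: read_contact and read_temperature stand in for whatever sensor interface a real robot would provide, and here they simply return random values so the sketch runs on its own.

    import random

    # Stand-ins for real sensor drivers; a real robot would query hardware.
    def read_contact():
        return random.choice([True, False])   # did the bumper touch something?

    def read_temperature():
        return random.uniform(15.0, 45.0)     # degrees Celsius

    def control_step():
        """One pass of a sense-then-act loop."""
        if read_contact():
            return "stop_and_back_up"
        if read_temperature() > 40.0:
            return "retreat_from_heat"
        return "continue_forward"

    for _ in range(5):
        print(control_step())

The essential feature is that the robot's next action is chosen from what its sensors report at that moment, not from a script fixed in advance.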

Intelligent machines?

The question of whether computers can really think is still being debated. Some machines seem to mirror human intelligence, such as IBM's chess-playing computer, Deep Blue, or the robotic artist named Aaron, which produces paintings that could easily pass for human work. But most researchers in the field of artificial intelligence admit that at the beginning of the twenty-first century, machines do not have the subtlety, depth, richness, and range of human intelligence. Even with the most sophisticated software, a computer can only use the information it is given in the way it is told to use it. The real question is how this technology can best serve the interests of people.

Algorithms

An algorithm is a set of instructions that indicate a method for accomplishing a task in mathematics or some other field. People use algorithms every day, usually without even thinking about it. When you multiply two numbers with a hand calculator, for example, the first step is to enter one number on the keyboard. The next step is to press the multiplication sign (×) on the keyboard. Then you enter the second number on the keyboard. Finally you press the equals sign (=) to obtain the answer. This series of four steps constitutes an algorithm for multiplying two numbers. Many algorithms are much more complicated than this one. They may involve dozens or even hundreds of steps.
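To make the idea concrete, the same notion can be written as a short Python function: multiplication expressed as a fixed, clearly defined series of steps (repeated addition). This is only an illustration of what "algorithm" means, not how calculators actually multiply.

    def multiply(a, b):
        """Multiply two non-negative integers by repeated addition.

        Each pass of the loop is one precisely defined step; the procedure
        never varies from case to case, which is what makes it an algorithm.
        """
        total = 0
        for _ in range(b):
            total += a
        return total

    print(multiply(6, 7))  # 42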

[See also Automation; Computer, analog; Computer, digital; Computer software; Cybernetics; Robotics]
