Artificial Intelligence


Artificial intelligence (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include game playing, natural language understanding and synthesis, computer vision, problem solving, learning, and robotics.

The above is a general description of the field; there is no agreed-upon definition of artificial intelligence, primarily because there is little agreement as to what constitutes intelligence. Interpretations of what it means to be intelligent vary, yet most can be categorized in one of three ways. Intelligence can be thought of as a quality, an individually held property that is separable from all other properties of the human person. Intelligence is also seen in the functions one performs, in actions or the ability to carry out certain tasks. Finally, some researchers see intelligence as a quality that can only be acquired and demonstrated through relationship with other intelligent beings. Each of these understandings of intelligence has been used as the basis of an approach to developing computer programs with intelligent characteristics.


First attempts: symbolic AI

The field of AI is considered to have its origin in the publication of British mathematician Alan Turing's (1912–1954) paper "Computing Machinery and Intelligence" (1950). The term itself was coined six years later by mathematician and computer scientist John McCarthy (b. 1927) at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI and is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of rules that can be logically defined. Just as geometry can be built from a finite set of axioms and primitive objects such as points and lines, so symbolicists, following rationalist philosophers such as Ludwig Wittgenstein (1889–1951) and Alfred North Whitehead (1861–1947), posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects. Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols, combined by certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought.

Symbolic AI met with immediate success in areas in which problems could be easily described using a limited domain of objects that operate in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves, looking for a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving.
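
This kind of lookahead can be made concrete in a few lines of code. The sketch below is a minimal, hypothetical illustration of minimax search, the standard formalization of examining moves and countermoves; a toy counting game stands in for chess, and none of this reflects how any particular chess program was written.

  # Toy game: a pile of counters; players alternate removing 1 or 2,
  # and whoever takes the last counter wins.
  def minimax(counters, maximizing):
      """Score a position by examining every move and countermove."""
      if counters == 0:  # the previous player took the last counter and won
          return -1 if maximizing else 1
      scores = [minimax(counters - take, not maximizing)
                for take in (1, 2) if take <= counters]
      # The machine picks its best move; it assumes the opponent replies
      # with the move that is worst for the machine.
      return max(scores) if maximizing else min(scores)

  # From a pile of 4 counters, look ahead exhaustively: taking 1 wins.
  best = max((1, 2), key=lambda take: minimax(4 - take, maximizing=False))
  print(best)  # -> 1

In chess the same scheme applies, but the number of positions grows so quickly that even a computer can only look a limited number of moves ahead.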

Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two-year-old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise, since they lack knowledge outside of that domain, knowledge that most human "experts" possess in the form of what is known as common sense. Humans make use of general knowledge, the millions of things that are known and applied to a situation, both consciously and subconsciously. It is now clear to AI researchers that the set of primitive facts necessary for representing human knowledge, should such a set exist, is exceedingly large.

Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition, is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that human experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a "degenerating research project," by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should other alternatives become available. This prediction has proven fairly accurate. By 2000 the once dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat's Cyc (pronounced "psych"). Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts. He plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, through conversation, reading, and applying logical rules to detect patterns or inconsistencies in the data it is given. Initially conceived in 1984 as a ten-year initiative, Cyc has not yet shown convincing evidence of extended independent learning.

Functional or weak AI

In 1980, John Searle, in the paper "Minds, Brains, and Programs," introduced a division of the field of AI into "strong" and "weak" AI. Strong AI denoted the attempt to develop a full human-like intelligence, while weak AI denoted the use of AI techniques to either better understand human reasoning or to solve more limited problems. Although there was little progress in developing a strong AI through symbolic programming methods, the attempt to program computers to carry out limited human functions has been quite successful. Much of what is currently labeled AI research follows a functional model, applying particular programming techniques, such as knowledge engineering, fuzzy logic, genetic algorithms, neural networking, heuristic searching, and machine learning via statistical methods, to practical problems. This view sees AI as advanced computing. It produces working programs that can take over certain human tasks. Such programs are used in manufacturing operations, transportation, education, financial markets, "smart" buildings, and even household appliances.

For a functional AI, there need be no quality labeled "intelligence" that is shared by humans and computers. All computers need do is perform a task that requires intelligence for a human to perform. It is also unnecessary, in functional AI, to model a program after the thought processes that humans use. If results are what matters, then it is possible to exploit the speed and storage capabilities of the digital computer while ignoring parts of human thought that are not understood or easily modeled, such as intuition. This is, in fact, what was done in designing the chess-playing program Deep Blue, which in 1997 beat the reigning world chess champion, Garry Kasparov. Deep Blue does not attempt to mimic the thought of a human chess player. Instead, it capitalizes on the strengths of the computer by examining an extremely large number of moves, more moves than any human player could possibly examine.

There are two problems with functional AI. The first is the difficulty of determining what falls into the category of AI and what is simply a normal computer application. A definition of AI that includes any program that accomplishes some function normally done by a human being would encompass virtually all computer programs. Nor is there agreement among computer scientists as to what sorts of programs should fall under the rubric of AI. Once an application is mastered, there is a tendency to no longer define that application as AI. For example, while game playing is one of the classical fields of AI, Deep Blue's design team emphatically states that Deep Blue is not artificial intelligence, since it uses standard programming and parallel processing techniques that are in no way designed to mimic human thought. The implication here is that merely programming a computer to complete a human task is not AI if the computer does not complete the task in the same way a human would.

For a functional approach to result in a full human-like intelligence it would be necessary not only to specify which functions make up intelligence, but also to make sure those functions are suitably congruent with one another. Functional AI programs are rarely designed to be compatible with other programs; each uses different techniques and methods, the sum of which is unlikely to capture the whole of human intelligence. Many in the AI community are also dissatisfied with a collection of task-oriented programs. The building of a general human-like intelligence, as difficult a goal as it may seem, remains the vision.


A relational approach

A third approach is to consider intelligence as acquired, held, and demonstrated only through relationships with other intelligent agents. In "Computing Machinery and Intelligence" (1950), Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. A human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered as having intelligence. The Turing Test is not based on the completion of tasks or the solution of problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000, there would be computers that could fool an interrogator at least thirty percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.

The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that the programming of the background knowledge needed for a restricted form of the game would take, at a minimum, three hundred person-years to complete, and this assumes that the appropriate knowledge set could be identified at the outset. Turing suggests that rather than trying to imitate an adult mind, computer scientists should attempt to construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind. One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology, whose lab has constructed several robots, including Cog and Kismet, that represent a new direction in AI in which embodiment is crucial to the robot's design. Their programming is distributed among the various physical parts; each joint has a small processor that controls movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. These robots are designed to learn tasks associated with human infants, such as eye-hand coordination, grasping an object, and face recognition, through social interaction with a team of researchers. Although the robots have developed abilities such as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project is too new to be assessed. It may be no more successful than Lenat's Cyc in producing a machine that could interact with humans on the level of the Turing Test. However, Brooks's work represents a movement toward Turing's opinion that intelligence is socially acquired and demonstrated.

The Turing Test makes no assumptions as to how the computer arrives at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons, and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers, including Hans Moravec and Raymond Kurzweil, see neural net research as a way to reverse-engineer the brain. They hope that once scientists can design nets with a complexity equal to the human brain, the nets will have the same power as the brain and develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hope that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.


AI in science fiction

A truly intelligent computer remains in the realm of speculation. Though researchers have continually projected that intelligent computers are imminent, progress in AI has been limited. Computers with intentionality and self-consciousness, with fully human reasoning skills, or the ability to be in relationship, exist only in the realm of dreams and desires, a realm explored in fiction and fantasy.

The artificially intelligent computer in science fiction story and film is not a prop, but a character, one that has become a staple since the mid-1950s. These characters are embodied in a variety of physical forms, ranging from the wholly mechanical (computers and robots) to the partially mechanical (cyborgs) and the completely biological (androids). A general trend from the 1950s to the 1990s has been to depict intelligent computers in an increasingly anthropomorphic way. The robots and computers of early films, such as Maria in Fritz Lang's Metropolis (1926), Robby in Fred Wilcox's Forbidden Planet (1956), HAL in Stanley Kubrick's 2001: A Space Odyssey (1968), or R2-D2 and C-3PO in George Lucas's Star Wars (1977), were clearly constructs of metal. On the other hand, early science fiction stories, such as Isaac Asimov's I, Robot (1950), explored the question of how one might distinguish between robots that looked human and actual human beings. Films and stories from the 1980s through the early 2000s, including Ridley Scott's Blade Runner (1982) and Steven Spielberg's A.I. (2001), pick up this question, depicting machines with both mechanical and biological parts that are far less easily distinguished from human beings.

Fiction that features AI can be classified in two general categories: cautionary tales (A.I.; 2001: A Space Odyssey) or tales of wish fulfillment (Star Wars; I, Robot). These present two differing visions of the artificially intelligent being: as a rival to be feared or as a friendly and helpful companion.


Philosophical and theological questions

What rights would an intelligent robot have? Will artificially intelligent computers eventually replace human beings? Should scientists discontinue research in fields such as artificial intelligence or nanotechnology in order to safeguard future lives? When a computer malfunctions, who is responsible? These are only some of the ethical and theological questions that arise when one considers the possibility of success in the development of an artificial intelligence. The prospect of an artificially intelligent computer also raises questions about the nature of human beings. Are humans simply machines themselves? At what point would replacing some or all human biological parts with mechanical components violate one's integrity as a human being? Is a human being's relationship to God at all contingent on human biological nature? If humans are not the end point of evolution, what does this say about human nature? What is the relationship of the soul to consciousness or intelligence? While most of these questions are speculative in nature, regarding a future that may or may not come to be, they remain relevant, for the way people live and the ways in which they view their lives stand to be critically altered by technology. The quest for artificial intelligence reveals much about how people view themselves as human beings and the spiritual values they hold.

See also Algorithm; Artificial Life; Cybernetics; Cyborg; Imago Dei; Thinking Machines; Turing Test

Bibliography

Asimov, Isaac. I, Robot. New York: Doubleday, 1950.

Brooks, Rodney. "Intelligence Without Representation." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, ed. John Haugeland. Cambridge, Mass.: MIT Press, 1997.

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.

Dreyfus, Hubert. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press, 1986.

Kurzweil, Raymond. The Age of Spiritual Machines. New York: Viking, 1999.

Lenat, Douglas. "Cyc: A Large-Scale Investment in Knowledge Infrastructure." Communications of the ACM 38 (1995): 33–38.

Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, Mass.: Harvard University Press, 1988.

Searle, John. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3 (1980): 417–424.

Stork, David, ed. HAL's Legacy: 2001's Computer as Dream and Reality. Cambridge, Mass.: MIT Press, 1997.

Telotte, J. P. Replications: A Robotic History of the Science Fiction Film. Urbana: University of Illinois Press, 1995.

Turing, Alan. "Computing Machinery and Intelligence." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, ed. John Haugeland. Cambridge, Mass.: MIT Press, 1997.

Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.

Warrick, Patricia. The Cybernetic Imagination in Science Fiction. Cambridge, Mass.: MIT Press, 1980.

Winograd, Terry, and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, N.J.: Ablex, 1986. Reprint, Reading, Mass.: Addison-Wesley, 1991.


Other Resources

2001: A Space Odyssey. Directed by Stanley Kubrick. Metro-Goldwyn-Mayer; Polaris, 1968.

A.I. Directed by Steven Spielberg. Amblin Entertainment; DreamWorks SKG; Stanley Kubrick Productions; Warner Bros., 2001.

Blade Runner. Directed by Ridley Scott. Blade Runner Partnership; The Ladd Company, 1982.

Forbidden Planet. Directed by Fred Wilcox. Metro-Goldwyn-Mayer, 1956.

Metropolis. Directed by Fritz Lang. Universum Film A.G., 1926.

Star Wars. Directed by George Lucas. Lucasfilm Ltd., 1977.

Noreen L. Herzfeld

Herzfeld, Noreen L. "Artificial Intelligence." Encyclopedia of Science and Religion. 2003. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3404200033.html

Artificial Intelligence

Artificial intelligence (AI) refers to computer software that exhibits intelligent behavior. The term intelligence is difficult to define and has been the subject of heated debate by philosophers, educators, and psychologists for ages. Nevertheless, it is possible to enumerate many important characteristics of intelligent behavior. Intelligence includes the capacity to learn, maintain a large storehouse of knowledge, utilize commonsense reasoning, apply analytical abilities, discern relationships between facts, communicate ideas to others and understand communications from others, and perceive and make sense of the world around us. Thus, artificial intelligence systems are computer programs that exhibit one or more of these behaviors.

AI systems can be divided into two broad categories: knowledge representation systems and machine learning systems. Knowledge representation systems, also known as expert systems, provide a structure for capturing and encoding the knowledge of a human expert in a particular domain. For example, the knowledge of medical doctors might be captured in a computerized model that can be used to help diagnose patient illnesses.

MACHINE LEARNING SYSTEMS

The second category of AI, machine learning systems, creates new knowledge by finding previously unknown patterns in data. In contrast to knowledge representation approaches, which model the problem-solving structure of human experts, machine learning systems derive solutions by learning patterns in data, with little or no intervention by an expert. There are three main machine learning techniques: neural networks, induction algorithms, and genetic algorithms.

Neural Networks. Neural networks simulate the human nervous system. The concepts that guide neural network research and practice stem from studies of biological systems. These systems model the interaction between nerve cells. Components of a neural network include neurons (sometimes called processing elements), input lines to the neurons (called dendrites), and output lines from the neurons (called axons).

Neural networks are composed of richly connected sets of neurons forming layers. The neural network architecture consists of an input layer, which inputs data to the network; an output layer, which produces the resulting guess of the network; and a series of one or more hidden layers, which assist in propagating signals from the input layer to the output layer.

During processing, each neuron performs a weighted sum of inputs from the neurons connecting to it; this is called activation. The neuron chooses to fire if the sum of inputs exceeds some previously set threshold value; this is called transfer.

Inputs with high weights tend to give greater activation to a neuron than inputs with low weights. The weight of an input is analogous to the strength of a synapse in a biological system. In biological systems, learning occurs by strengthening or weakening the synaptic connections between nerve cells. An artificial neural network simulates synaptic connection strength by increasing or decreasing the weight of input lines into neurons.

Neural networks are trained with a series of data points. The networks guess which response should be given, and the guess is compared against the correct answer for each data point. If errors occur, the weights into the neurons are adjusted and the process repeats itself. This learning approach is called backpropagation and is similar to statistical regression.
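
As a rough sketch of the ideas just described, the following Python fragment trains a single artificial neuron on an invented four-example data set. It uses the classic perceptron update rather than full backpropagation, which extends the same error-driven weight adjustment back through the hidden layers; the weights, threshold, and learning rate here are arbitrary choices for illustration.

  # One neuron learning the logical AND of its two input lines.
  def fire(weights, threshold, inputs):
      # Activation: the weighted sum of the inputs.
      activation = sum(w * x for w, x in zip(weights, inputs))
      # Transfer: fire (1) only if the activation exceeds the threshold.
      return 1 if activation > threshold else 0

  data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # invented examples
  weights, threshold, rate = [0.0, 0.0], 0.5, 0.1

  for epoch in range(20):  # present the training data repeatedly
      for inputs, target in data:
          error = target - fire(weights, threshold, inputs)
          # Learning: strengthen or weaken each input line in proportion
          # to its part in the error, like a synapse changing strength.
          weights = [w + rate * error * x for w, x in zip(weights, inputs)]

  print(weights)                                         # e.g. [0.3, 0.3]
  print([fire(weights, threshold, x) for x, _ in data])  # -> [0, 0, 0, 1]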

Neural networks are used in a wide variety of business problems, including optical character recognition, financial forecasting, market demographics trend assessment, and various robotics applications.

Induction Algorithms. Induction algorithms form another approach to machine learning. In contrast to neural networks (which are highly mathematical in nature), induction approaches tend to involve symbolic data. As the name implies, these algorithms work by implementing inductive reasoning approaches. Induction is a reasoning method that can be characterized as learning by example. Unlike rule-based deduction, induction begins with a set of observations and constructs rules to account for these observations. Inductive reasoning attempts to find general patterns that can fully explain the observations. The system is presented with a large set of data consisting of several input variables and one decision variable. The system constructs a decision tree by recursively partitioning data sets based on the variables that best distinguish between the data elements. That is, it attempts to partition the data so that each partition contains data with the same value for a decision variable. It does this by selecting the input variables that do the best job of dividing the data set into homogeneous partitions. For example, consider Figure 2, which contains a data set pertaining to decisions that were made on credit loan applications.

Figure 2

     Salary   Credit History   Current Assets   Loan Decision
  a) High     Poor             High             Accept
  b) High     Poor             Low              Reject
  c) Low      Poor             Low              Reject
  d) Low      Good             Low              Accept
  e) Low      Good             High             Accept
  f) High     Good             Low              Accept

An induction algorithm would infer the rules in Figure 3 to explain this data.

Figure 3

If the credit history is good, then accept the loan application.

If the credit history is poor and current assets are high, then accept the loan application.

If the credit history is poor and current assets are low, then reject the loan application.

As this example illustrates, an induction algorithm is able to induce rules that identify the general patterns in data. In doing so, these algorithms can prune out irrelevant or unnecessary attributes. In the example above, salary was irrelevant in terms of explaining the loan decision of the data set.
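
The attribute-selection step can be sketched briefly. The fragment below, written against the Figure 2 data, scores each input variable by how homogeneous its partitions are; credit history wins, which is why it heads the rules of Figure 3. A production induction algorithm such as ID3 would use an information-theoretic score and recurse into each partition; this simplified purity measure is only for illustration.

  from collections import defaultdict

  # Loan records from Figure 2: input attributes plus the decision.
  records = [
      {"salary": "High", "credit": "Poor", "assets": "High", "decision": "Accept"},
      {"salary": "High", "credit": "Poor", "assets": "Low",  "decision": "Reject"},
      {"salary": "Low",  "credit": "Poor", "assets": "Low",  "decision": "Reject"},
      {"salary": "Low",  "credit": "Good", "assets": "Low",  "decision": "Accept"},
      {"salary": "Low",  "credit": "Good", "assets": "High", "decision": "Accept"},
      {"salary": "High", "credit": "Good", "assets": "Low",  "decision": "Accept"},
  ]

  def purity(attribute):
      """Fraction of records whose decision matches the majority decision of
      their partition; 1.0 means the attribute yields homogeneous groups."""
      partitions = defaultdict(list)
      for r in records:
          partitions[r[attribute]].append(r["decision"])
      matched = sum(max(d.count(v) for v in set(d)) for d in partitions.values())
      return matched / len(records)

  for attr in ("salary", "credit", "assets"):
      print(attr, round(purity(attr), 2))
  # credit scores highest: its "Good" partition is all Accept, so it becomes
  # the root of the tree; the "Poor" branch is then split again on assets,
  # which yields exactly the rules of Figure 3.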

Induction algorithms are often used for data mining applications, such as marketing problems that help companies decide on the best market strategies for new product lines. Data mining is a common service included in data warehouses, which are frequently used as decision support tools.

Genetic Algorithms. Genetic algorithms use an evolutionary approach to solve optimization problems. These are based on Darwin's theory of evolution, and in particular the notion of survival of the fittest. Concepts such as reproduction, natural selection, mutation, chromosome, and gene are all included in the genetic algorithm approach.

Genetic algorithms are useful in optimization problems that must select from a very large number of possible solutions. A classic example of this is the traveling salesperson problem. Consider a salesperson who must visit n cities. The problem is to find the shortest route by which to visit each of these n cities exactly once, so that the salesperson tours all the cities and returns to the origin. For such a problem there are (n - 1)! possible solutions, or (n - 1) factorial. For six cities, this would mean 5 × 4 × 3 × 2 × 1 = 120 possible solutions. Suppose that the salesperson must travel to 100 cities. This would involve 99! possible solutions, an astronomically high number.

Obviously, for this type of problem, a brute-force method of exhaustively comparing all possible solutions will not work. This requires the use of heuristic methods, of which the genetic algorithm is a prime example. For the traveling salesperson problem, a chromosome would be one possible route through the cities, and a gene would be a city in a particular sequence on the chromosome. The genetic algorithm would start with an initial population of chromosomes (routes) and measure each according to a fitness function (the total distance traveled in the route). Those with the best fitness functions would be selected and those with the worst would be discarded. Then random pairs of surviving chromosomes would mate, a process called crossover. This involves swapping city positions between the pair of chromosomes, resulting in a pair of child chromosomes. In addition, some random subset of the population would be mutated, such that some portion of the sequence of cities would be altered. The process of selection, crossover, and mutation results in a new population for the next generation. This procedure is repeated through as many generations as necessary in order to obtain an optimal solution.

Genetic algorithms are very effective at finding good solutions to optimization problems. Scheduling, configuration, and routing problems are good candidates for a genetic algorithm approach. Although genetic algorithms do not guarantee the absolute best solution, they do consistently arrive at very good solutions in a relatively short period of time.
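
A compact sketch of such a genetic algorithm, applied to the traveling salesperson problem, appears below. The city coordinates, population size, and mutation rate are invented for the example, and the crossover shown is one simple order-preserving variant among many.

  import math
  import random

  # Invented city coordinates for a small traveling-salesperson instance.
  cities = [(0, 0), (3, 1), (6, 0), (7, 5), (3, 7), (0, 5)]

  def tour_length(route):
      """Fitness measure: total distance of the tour, returning to the start."""
      return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
                 for i in range(len(route)))

  def crossover(a, b):
      """Copy a slice of parent a; fill the remaining cities in b's order."""
      i, j = sorted(random.sample(range(len(a)), 2))
      middle = a[i:j]
      rest = [c for c in b if c not in middle]
      return rest[:i] + middle + rest[i:]

  def mutate(route, chance=0.1):
      """Occasionally swap two cities (genes) in the chromosome."""
      if random.random() < chance:
          i, j = random.sample(range(len(route)), 2)
          route[i], route[j] = route[j], route[i]
      return route

  population = [random.sample(range(len(cities)), len(cities)) for _ in range(30)]
  for generation in range(100):
      # Selection: keep the fittest half (shortest tours), discard the rest.
      population.sort(key=tour_length)
      survivors = population[:15]
      # Crossover and mutation produce the children of the next generation.
      children = [mutate(crossover(*random.sample(survivors, 2))) for _ in range(15)]
      population = survivors + children

  best = min(population, key=tour_length)
  print(best, round(tour_length(best), 2))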

AI IN THE TWENTY-FIRST CENTURY

Artificial intelligence systems provide a key component in many computer applications that serve the world of business. In fact, AI is so prevalent that many people encounter such applications on a daily basis without even being aware of it.

One of the most ubiquitous uses of AI can be found in network servers that route electronic mail and in email spam-filtering devices. Expert systems are routinely utilized in the medical field, where they take the place of doctors to assess the results of tests like mammograms or electrocardiograms; credit card companies, banks, and insurance firms commonly use neural networks to help detect fraud. These AI systems can, for example, monitor consumer-spending habits, detect patterns in the data, and alert the company when uncharacteristic patterns arise. Genetic algorithms serve logistics planning functions in airports, factories, and even military operations, where they are used to help solve incredibly complex resource-allocation problems. And perhaps most familiar, many companies employ AI systems to help monitor calls in their customer service call centers. These systems can analyze the emotional tones of callers' voices or listen for specific words, and route those calls to human supervisors for follow-up attention.

Artificial intelligence is routinely used by enterprises in supply chain management through sets of intelligent software agents that are each responsible for one or more aspects of the supply chain. These agents interact with one another in the planning and execution of their tasks. For instance, a logistics agent is responsible for coordinating the factories, suppliers, and distribution centers. This agent provides inputs to the transportation agent, which is responsible for assigning and scheduling transportation resources. The agents coordinate their activities with the optimization of the supply chain as their common goal.

Customer Relationship Management uses artificial intelligence to connect product offers and promotions with consumer desires. AI software profiles customer behavior by finding patterns in transaction data. The software generates algorithms for evaluating different data characteristics, such as what products are frequently bought together or the time of year a product sells the most. Thus, the software is able to use historical data to predict customer behavior in the future.

Artificial intelligence is also used on Wall Street in the selection of stocks. Analysts use AI software to discover trading patterns. For instance, an algorithm could find that the price movements of two stocks are similar; when the stocks diverge a trader might buy one stock and sell the other on the assumption that their prices will return to the historical norm. As the use of trading algorithms becomes more commonplace, there is less potential for profit.
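
As a sketch of the divergence rule described above, the fragment below compares today's spread between two stocks with its historical norm; the prices are invented and the two-standard-deviation trigger is an arbitrary choice, so this illustrates the pattern only, not a real trading algorithm.

  # Invented closing prices for two historically similar stocks.
  prices_a = [10.0, 10.2, 10.1, 10.4, 10.3, 10.5, 12.0]
  prices_b = [10.1, 10.2, 10.0, 10.3, 10.4, 10.4, 10.5]

  spreads = [a - b for a, b in zip(prices_a, prices_b)]
  history, today = spreads[:-1], spreads[-1]

  mean = sum(history) / len(history)
  std = (sum((s - mean) ** 2 for s in history) / len(history)) ** 0.5

  # Signal when today's spread strays far from its historical norm.
  if abs(today - mean) > 2 * std:
      side = "sell A, buy B" if today > mean else "buy A, sell B"
      print("divergence detected:", side)  # bet the prices converge again
  else:
      print("spread within its usual range")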

Although computer scientists have thus far failed to create machines that can function with the complex intelligence of human beings, they have succeeded in creating a wide range of AI applications that make people's lives simpler and more convenient.

SEE ALSO Expert Systems


"Artificial Intelligence." Encyclopedia of Management. 2009. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3273100015.html

Artificial Intelligence

In simplest terms, artificial intelligence (AI) is manufactured thinking: a machine's ability to think. This process is deemed "artificial" because once it is programmed it occurs without human intervention. AI is generally applied to the theory and practical application of a computer's ability to think as humans do. AI capability is designated as either strong AI or weak AI. Strong AI is a computer system that actively employs consciousness, a machine that can truly reason and solve problems independently. Critics of AI systems argue that such a machine is unrealistic, and that even if it were possible, a truly artificially intelligent machine would be unwanted.

Popular perceptions of AI have been dramatized in movies such as 2001: A Space Odyssey (1968), in which a shipboard computer named HAL 9000 is capable of speech and facial recognition, natural language processing, interpreting emotions, and expressing reason. Another famous make-believe computer was the star of WarGames (1983). In this movie, the line "Shall we play a game?" allows the teenage hero to persuade the computer to play a game rather than start World War III. In both examples, the computers undertook independent actions that were potentially harmful to their human creators. This is the reason most often given for not creating strong AI machines.

Modern working applications of AI are examples of weak AI. Current AI research focuses on developing computers that use intelligent programming to automate routine human tasks. For example, many customer service telephone banks are automated by AI. When a recorded voice asks for a "yes" or "no" response, or for the caller to choose a menu item by saying specific words, the computer on the other end of the telephone is using weak AI to make a decision and select the appropriate response based on caller input. These computers are trained to recognize speech patterns, dialects, accents, and replacement words, such as "oh" rather than "zero" for the number 0.

Long before the development of computers, the notion that thinking was a form of computation motivated the formalization of logic as a type of rational thought. These efforts continue today. Graph theory provided the architecture for searching a solution space for a problem. Operations research, with its focus on optimization algorithms, uses graph theory to solve complex decision-making problems.

PIONEERS OF AI

AI uses syllogistic logic, which was first postulated by Aristotle. This logic is based on deductive reasoning. For example, if A equals B, and B equals C, then A must also equal C. Throughout history, the nature of syllogistic logic and deductive reasoning was shaped by grammarians, mathematicians, and philosophers. When computers were developed, programming languages used similar logical patterns to support software applications. Terms such as cybernetics and robotics were used to describe collective intelligence approaches and led to the development of AI as an experimental field in the 1950s.

Allen Newell and Herbert Simon pioneered the first AI laboratory at Carnegie Mellon University in the 1950s. John McCarthy and Marvin Minsky of the Massachusetts Institute of Technology opened their original AI lab in 1959 to write AI decision-making software. The best-known name in the AI community, however, is Alan Turing (1912–1954). Turing was a mathematician, philosopher, and cryptographer, and is often credited as the founder of computer science as a discipline separate from mathematics. He contributed to the debate over whether a machine could think by developing the Turing test. The Turing test uses a human judge engaged in remote conversation with two parties: another human and a machine. If the judge cannot tell which party is the human, the machine passes the test.

Originally, teletype machines were used to maintain the anonymity of the parties; today, IRC (Internet relay chat) is used to test the linguistic capability of AI engines. Linguistic robots called Chatterbots (such as Jabberwacky) are very popular programs that allow an individual to converse with a machine and demonstrate machine intelligence and reasoning.

The Defense Advanced Research Projects Agency, which played a significant role in the birth of the Internet by funding ARPANET, also funded AI research in the early 1980s. Nevertheless, when results were not immediately useful for military application, funding was cut.


Since then, AI research has moved to other areas including robotics, computer vision, and other practical engineering tasks.

AN EVOLUTION OF APPLICATIONS

One of the early milestones in AI was Newell and Simon's General Problem Solver (GPS). The program was designed to imitate human problem-solving methods. This and other developments such as Logic Theorist and the Geometry Theorem Prover generated enthusiasm for the future of AI. Simon went so far as to assert that in the near-term future the problems that computers could solve would be coextensive with the range of problems to which the human mind has been applied.

Difficulties in achieving this objective soon began to manifest themselves. New research based on earlier successes encountered problems of intractability. A search for alternative approaches led to attempts to solve typically occurring cases in narrow areas of expertise. This prompted the development of expert systems, which reach conclusions by applying reasoning techniques based on sets of rules. A seminal model was MYCIN, developed to diagnose blood infections. Having about 450 rules, MYCIN was able to outperform many experts. This and other expert systems research led to the first commercial expert system, R1, implemented at Digital Equipment Corporation (DEC) to help configure client orders for new mainframe and minicomputer systems. R1's implementation was estimated to save DEC about $40 million per year.

Other classic systems include the PROSPECTOR program for determining the probable location and type of ore deposits and the INTERNIST program for performing patient diagnosis in internal medicine.

THE ROLE OF AI IN COMPUTER SCIENCE

While precise definitions are still the subject of debate, AI may be usefully thought of as the branch of computer science that is concerned with the automation of intelligent behavior. The intent of AI is to develop systems that have the ability to perceive and to learn, to accomplish physical tasks, and to emulate human decision making. AI seeks to design and develop intelligent agents as well as to understand them.

AI research has proven to be the breeding ground for computer science subdisciplines such as pattern recognition, image processing, neural networks, natural language processing, and game theory. For example, optical character recognition software that transcribes handwritten characters into typed text (notably with tablet personal computers and personal digital assistants) was initially a focus of AI research.

Additionally, expert systems used in business applications owe their existence to AI. Manufacturing companies use inventory applications that track both production levels and sales to determine when and how much of specific supplies are needed to produce orders in the pipeline. Genetic algorithms are employed by financial planners to assess the best combination of investment opportunities for their clients. Other examples include data mining applications, surveillance programs, and facial recognition applications.

Multiagent systems are also based on AI research. Use of these systems has been driven by the recognition that intelligence may be reflected by the collective behaviors of large numbers of very simple interacting members of a community of agents. These agents can be computers, software modules, or virtually any object that can perceive aspects of its environment and proceed in a rational way toward accomplishing a goal.

Four types of systems will have a substantial impact on applications: intelligent simulation, information-resource specialists, intelligent project coaches, and robot teams.

Intelligent simulations generate realistic simulated worlds that enable extensive, affordable training and education that can be made available any time and anywhere. Examples might be hurricane crisis management, exploration of the impacts of different economic theories, tests of products on simulated customers, and testing of technological design features through simulation that would cost millions of dollars to evaluate using an actual prototype.

Information-resource specialist systems (IRSS) will enable easy access to information related to a specific problem. For instance, a rural doctor whose patient presents with a rare condition might use IRSS to assess competing treatments or identify new ones. An educator might find relevant background materials, including information about similar courses taught elsewhere.

Intelligent project coaches (IPCs) could function as coworkers, assisting and collaborating with design or operations teams for complex systems. Such systems could recall the rationale of previous decisions and, in times of crisis, explain the methods and reasoning previously used to handle that situation. An IPC for aircraft design could enhance collaboration by keeping communication flowing among the large, distributed design staff, the program managers, the customer, and the subcontractors.

Robot teams could contribute to manufacturing by operating in a dynamic environment with minimal instrumentation, thus providing the benefits of economies of scale. They could also participate in automating sophisticated laboratory procedures that require sensing, manipulation, planning, and transport. The AI robots could work in dangerous environments with no threat to their human builders.

SUMMARY

A variety of disciplines have influenced the development of AI. These include philosophy (logic), mathematics (computability, algorithms), psychology (cognition), engineering (computer hardware and software), and linguistics (knowledge representation and natural-language processing). As AI continues to redefine itself, the practical application of the field will change.

AI supports national competitiveness, which depends increasingly on capacities for accessing, processing, and analyzing information. The computer systems used for such purposes must also be intelligent. Health-care providers require easy access to information systems so they can track health-care delivery and identify the most effective medical treatments for their patients' conditions. Crisis management teams must be able to explore alternative courses of action and make critical decisions. Educators need systems that adapt to a student's individual needs and abilities. Businesses require flexible manufacturing and software design aids to maintain their leadership position in information technology, and to regain it in manufacturing. AI will continue to evolve toward a rational, logical machine presence that will support and enhance human endeavors.

see also Information Processing; Interactive Technology

Mark J. Snyder

Lisa E. Gueldenzoph

Snyder, Mark, and Lisa Gueldenzoph. "Artificial Intelligence." Encyclopedia of Business and Finance, 2nd ed. 2007. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-1552100025.html

Artificial Intelligence

Artificial Intelligence (AI) is a field of study based on the premise that intelligent thought can be regarded as a form of computation, one that can be formalized and ultimately mechanized. To achieve this, however, two major issues need to be addressed. The first issue is knowledge representation, and the second is knowledge manipulation. Within the intersection of these two issues lies mechanized intelligence.

History

The study of artificial intelligence has a long history, dating back to the work of British mathematician Charles Babbage (1791–1871), who developed a special-purpose "Difference Engine" for mechanically computing the values of certain polynomial functions. Similar work was also done by German mathematician Gottfried Wilhelm von Leibniz (1646–1716), who introduced the first system of formal logic and constructed machines for automating calculation. George Boole, Ada Byron King (Countess of Lovelace), Gottlob Frege, and Alfred Tarski have all significantly contributed to the advancement of the field of artificial intelligence.

Knowledge Representation

It has long been recognized that the language and models used to represent reality profoundly impact one's understanding of reality itself. When humans think about a particular system, they form a mental model of that system and then proceed to discover truths about the system. These truths lead to the ability to make predictions or general statements about the system. However, when a model does not sufficiently match the actual problem, the discovery of truths and the ability to make predictions becomes exceedingly difficult.

A classic example of this is the pre-Copernican model in which the Sun and planets revolved around the Earth. In such a model, it was prohibitively difficult to predict the position of planets. However, in the Copernican revolution this Earth-centric model was replaced with a model where the Earth and other planets revolved around the Sun. This new model dramatically increased the ability of astronomers to predict celestial events.

Arithmetic with Roman numerals provides a second example of how knowledge representation can severely limit the ability to manipulate that knowledge. Both of these examples stress the important relationship between knowledge representation and thought.

In AI, a significant effort has gone into the development of languages that can be used to represent knowledge appropriately. Languages such as LISP, which is based on the lambda calculus, and Prolog, which is based on formal logic, are widely used for knowledge representation. Variations of predicate calculus are also common languages used by automated reasoning systems. These languages have well-defined semantics and provide a very general framework for representing and manipulating knowledge.

Knowledge Manipulation

Many problems that humans are confronted with are not fully understood. This partial understanding is reflected in the fact that a rigid algorithmic solution, a routine and predetermined number of computational steps, cannot be applied. Rather, the concept of search is used to solve such problems. When search is used to explore the entire solution space, it is said to be exhaustive. Exhaustive search is not typically a successful approach to problem solving because most interesting problems have search spaces that are simply too large to be dealt with in this manner, even by the fastest computers. Therefore, if one hopes to find a solution (or a reasonably good approximation of a solution) to such a problem, one must selectively explore the problem's search space.

The difficulty here is that if part of the search space is not explored, one runs the risk that the solution one seeks will be missed. Thus, in order to ignore a portion of a search space, some guiding knowledge or insight must exist so that the solution will not be overlooked. Heuristics is a major area of AI that concerns itself with how to limit effectively the exploration of a search space. Chess is a classic example where humans routinely employ sophisticated heuristics in a search space. A chess player will typically search through a small number of possible moves before selecting a move to play. Not every possible move and countermove sequence is explored. Only reasonable sequences are examined. A large part of the intelligence of chess players resides in the heuristics they employ.

A heuristic-based search results from the application of domain or problem-specific knowledge to a universal search function. The success of heuristics has led to focusing the application of general AI techniques to specific problem domains. This has led to the development of expert systems capable of sophisticated reasoning in narrowly defined domains within fields such as medicine, mathematics, chemistry, robotics, and aviation.
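
To make the idea concrete, here is a minimal greedy best-first search over a small invented graph, where the heuristic is a rough, hand-supplied estimate of the distance still to travel; always expanding the most promising node is exactly the guiding knowledge that lets the rest of the search space be ignored. More refined algorithms such as A* also account for the cost already incurred.

  import heapq

  # Invented road map: node -> list of neighboring nodes.
  graph = {
      "A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
      "D": ["goal"], "E": ["goal"],
  }
  # Heuristic: an optimistic guess at the distance remaining to the goal.
  estimate = {"A": 5, "B": 4, "C": 2, "D": 1, "E": 1, "goal": 0}

  def best_first(start, goal):
      """Always expand the node the heuristic rates most promising,
      pruning everything else on the frontier for the moment."""
      frontier = [(estimate[start], start, [start])]
      visited = set()
      while frontier:
          _, node, path = heapq.heappop(frontier)
          if node == goal:
              return path
          if node in visited:
              continue
          visited.add(node)
          for nxt in graph.get(node, []):
              heapq.heappush(frontier, (estimate[nxt], nxt, path + [nxt]))
      return None

  print(best_first("A", "goal"))  # -> ['A', 'C', 'D', 'goal']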

Another area that is profoundly dependent on domain-specific knowledge is natural language processing. The ability to understand a natural language such as English is one of the most fundamental aspects of human intelligence, and presents one of the core challenges for the AI community. Small children routinely engage in natural language processing, yet it appears to be almost beyond the reach of mechanized computation. Over the years, significant progress has been made in the ability to parse text to discover its syntactic structure. However, much of the meaning in natural language is context-dependent as well as culture-dependent, and capturing such dependencies has proved highly resistant to automation.

The Turing Test

At what point does the behavior of a machine display intelligence? The answer to this question has raised considerable debate over the definition of intelligence itself. Is a computer capable of beating the world chess champion considered intelligent? Fifty years ago, the answer to this question would most likely have been yes. Today, it is disputed whether or not the behavior of such a machine is intelligent. One reason for this shift in the definition of intelligence is the massive increase in computational power that has occurred over the past fifty years, allowing the chess problem space to be searched in an almost exhaustive manner.

Two key ingredients are seen as essential to intelligent behavior: the ability to learn and thereby change one's behavior over time, and synergy, or the idea that the whole is somehow greater than the sum of its parts.

In 1950 British mathematician Alan Turing proposed a test for intelligence that has, to some extent, withstood the test of time and still serves as a litmus test for intelligent behavior. Turing proposed that the behavior of a machine could be considered intelligent if it was indistinguishable from the behavior of a human. In this imitation game, a human interrogator would hold a dialogue via a terminal with both a human and a computer. If, based solely on the content of the dialogue, the interrogator could not distinguish between the human and the computer, Turing argued that the behavior of the computer could be assumed to be intelligent.

Opponents of this definition of intelligence argue that the Turing Test defines intelligence solely in terms of human intelligence. For example, the ability to carry out complex numerical computation correctly and quickly is something that a computer can do easily but a human cannot. Given that, is it reasonable to use this ability to distinguish between the behavior of a human and a computer and conclude that the computer is not intelligent?

see also Assistive Computer Technology for Persons with Disabilities; Lisp; Optical Character Recognition; Robotics; Robots.

Victor L. Winter


Winter, Victor L. "Artificial Intelligence." Computer Sciences. 2002. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3401200022.html

Artificial Intelligence

Artificial intelligence (AI) is a subfield of computer science that focuses on creating computer software that imitates human learning and reasoning. Computers can outperform people when it comes to storing information, solving numerical problems, and doing repetitive tasks. Computer programmers originally designed software that accomplished these tasks by executing algorithms, or clearly defined sets of instructions. In contrast, programmers design AI software to give the computer only the problem, not the steps necessary to solve it.

Overview of artificial intelligence

All AI programs are built on two foundations: a knowledge base and an inferencing capability (inferencing means drawing a conclusion based on facts and prior knowledge). A knowledge base is made up of many different pieces of information: facts, concepts, theories, procedures, and relationships. Where conventional computer software must follow a strictly logical series of steps to reach a conclusion (an algorithm), AI software uses the techniques of search and pattern matching. The computer is given some initial information and then searches the knowledge base for specific conditions or patterns that fit the problem to be solved. This special ability of AI programs, to reach a solution based on facts rather than on a preset series of steps, is what most closely resembles the thinking function of the human brain. In addition to problem solving, AI has many applications, including expert systems, natural language processing, and robotics.

Expert systems. The expert system is an AI program that contains the essential knowledge of a particular specialty or field, such as medicine, law, or finance. A simple database containing information on a particular subject can only give the user independent facts about the subject. An expert system, on the other hand, uses reasoning to draw conclusions from stored information. Expert systems are intended to act as intelligent assistants to human experts.
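
The contrast with a plain database can be sketched in a few lines of Python. The invented facts and if-then rules below are forward-chained: rules fire whenever their conditions are satisfied, and their conclusions are added to the knowledge base until nothing new can be derived, which is how conclusions emerge that were never stored directly.

  # A toy knowledge base: known facts plus if-then rules (all invented).
  facts = {"fever", "cough"}
  rules = [
      ({"fever", "cough"}, "flu_suspected"),
      ({"flu_suspected"}, "recommend_rest"),
  ]

  # Forward chaining: apply every rule whose conditions hold, add its
  # conclusion as a new fact, and repeat until nothing changes.
  changed = True
  while changed:
      changed = False
      for conditions, conclusion in rules:
          if conditions <= facts and conclusion not in facts:
              facts.add(conclusion)
              changed = True

  print(facts)  # now includes 'flu_suspected' and 'recommend_rest'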

Natural language processing. Most conventional computer languages consist of a combination of symbols, numbers, and some words. These complex languages may take several years for a computer user to master. Computers programmed to respond to our natural language, our everyday speech, are easier and more effective to use. In its simplest form, a natural language processing program works like this: a computer user types a sentence, phrase, or words on the keyboard. After searching its knowledge base for references to every word, the program then responds appropriately.

An example of a computer with a natural language processor is the computerized card catalog available in many public libraries. If you want a list of books on a specific topic or subject, you type in the appropriate phrase. You are asking the computer, in English, to tell you what is available on the topic. The computer usually responds in a very short time, in English, with a list of books along with call numbers so you can find what you need.
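
A toy version of that interaction, assuming nothing about any real library system, might match each word of the request against a keyword index. The catalog entries and call numbers below are invented.

    # A toy card-catalog lookup: the knowledge base maps keywords to
    # book records, and every word of the request is searched for.
    catalog = {
        "astronomy": [("Stars and Planets", "QB 44.2"), ("The Night Sky", "QB 63")],
        "robots": [("Introducing Robotics", "TJ 211")],
    }

    def answer(request):
        hits = []
        for word in request.lower().split():
            # strip punctuation so "astronomy?" still matches
            hits.extend(catalog.get(word.strip("?.,!"), []))
        return hits

    print(answer("What books do you have on astronomy?"))
    # [('Stars and Planets', 'QB 44.2'), ('The Night Sky', 'QB 63')]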

Words to Know

Algorithm: Clearly defined set of instructions for solving a problem in a fixed number of steps.

Expert system: AI program that contains the essential knowledge of a particular specialty or field such as medicine or law.

Natural language: Language first learned as a child; native tongue.

Robotics: Study of robots, machines that can be programmed to perform manual duties.

Software: Set of programs or instructions controlling a computer's functions.

Robotics. Robotics is the study of robots, which are machines that can be programmed to perform manual duties. Most robots in use today perform various repetitive tasks in an industrial setting. These robots typically are used in factory assembly lines or in hazardous waste facilities to handle substances far too dangerous for humans to handle safely.

Research is being conducted in the field of intelligent robots, those that can understand their environment and respond to any changes. AI programs allow a robot to gather information about its surroundings by using a contact sensor to physically touch an object, a camera to record visual observations, or an environmental sensor to note changes in temperature or radiation (energy in the form of waves or particles).
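
The sense-and-respond cycle such a robot runs can be sketched as a simple control loop. Everything below, from the sensor stubs to the thresholds, is hypothetical; real robot software would read actual hardware.

    import random

    # Stand-ins for real sensors: a contact (touch) sensor and an
    # environmental (temperature) sensor, faked with random values.
    def read_contact():
        return random.random() < 0.1

    def read_temperature():
        return 20.0 + random.uniform(-5.0, 5.0)

    # The control loop: sense the surroundings, then respond to change.
    for step in range(10):
        if read_contact():
            print(step, "contact detected: stop and back away")
        elif read_temperature() > 23.0:
            print(step, "temperature rising: log a warning")
        else:
            print(step, "all clear: continue the task")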

Intelligent machines?

The question of whether computers can really think is still being debated. Some machines seem to mirror human intelligence, like I.B.M.'s chess-playing computer, Deep Blue, or the robotic artist named Aaron that produces paintings that could easily pass for human work. But most researchers in the field of artificial intelligence admit that at the beginning of the twenty-first century, machines do not have the subtlety, depth, richness, and range of human intelligence. Even with the most sophisticated software, a computer can only use the information it is given in the way it is told to use it. The real question is how this technology can best serve the interests of people.

Algorithms

An algorithm is a set of instructions that indicate a method for accomplishing a task in mathematics or some other field. People use algorithms every day, usually without even thinking about it. When you multiply two numbers with a hand calculator, for example, the first step is to enter one number on the keyboard. The next step is to press the multiplication sign (×) on the keyboard. Then you enter the second number on the keyboard. Finally you press the equals sign (=) to obtain the answer. This series of four steps constitutes an algorithm for multiplying two numbers. Many algorithms are much more complicated than this one. They may involve dozens or even hundreds of steps.
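
Written as code, the same four keystrokes collapse into a single well-defined procedure; the function below is only a restatement of the calculator example, not anything new.

    # The calculator algorithm from the text as a function. Each comment
    # marks one of the four steps in the description.
    def multiply_two_numbers(first, second):
        # Step 1: enter the first number   (parameter: first)
        # Step 2: press the x key          (the * operator below)
        # Step 3: enter the second number  (parameter: second)
        # Step 4: press the = key          (returning the answer)
        return first * second

    print(multiply_two_numbers(6, 7))  # 42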

[See also Automation; Computer, analog; Computer, digital; Computer software; Cybernetics; Robotics ]

"Artificial Intelligence." UXL Encyclopedia of Science. 2002. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

Artificial Intelligence

Artificial intelligence

Computer-based technology intended to replicate the complicated processes of human cognition, including such complex tasks as reasoning and machine learning, whereby a man-made device incorporates its experiences into new endeavors, learning from its mistakes and engaging in creative problem solving.

The study of artificial intelligence, referred to as AI, has accelerated in recent years as advancements in computer technology have made it possible to create more and more sophisticated machines and software programs. The field of AI is dominated by computer scientists, but it has important ramifications for psychologists as well because in creating machines that replicate human thought, much is learned about the processes the human brain uses to "think."

Creating a machine to think highlights the complexities and subtleties of the human mind. For instance, creating a machine to recognize objects in photographs would seem, at first thought, rather simple. Yet, when humans look at a photograph, they do so with expectations about the limitations of the media. We fill in the missing third dimension and account for other missing or inconsistent images with our sense of what the real world looks like. To program a computer to make those kinds of assumptions would be a gargantuan task. Consider, for instance, all the information such a computer would need to understand that the array of images all pressed up against a flat surface actually represents the three-dimensional world. The human mind is capable of decoding such an image almost instantaneously.

This process of simulating human thought has led to the development of new ideas in information processing. Among these new concepts are fuzzy logic, whereby a computer is programmed to think in broader terms than either/or and yes/no; expert systems, groups of programming rules that describe a reasoning process, allowing computers to adapt and learn; data mining, detecting patterns in stimuli and drawing conclusions from them; genetic algorithms, programs that use random mutation so that the machine can improve itself; and several others.
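
As an illustration of the genetic-algorithm idea, here is a bare-bones Python sketch in which a candidate string is randomly mutated and a change is kept only when it scores at least as well; the target string and scoring rule are invented for the example.

    import random

    TARGET = "artificial intelligence"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def score(candidate):
        # Fitness: how many characters already match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate):
        # Random mutation: replace one character at random.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while score(best) < len(TARGET):
        child = mutate(best)
        if score(child) >= score(best):   # keep changes that do no harm
            best = child

    print(best)  # converges on "artificial intelligence"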

Recent applications of AI technology include machines that track financial investments, assist doctors in diagnoses and in looking for adverse interactions in patients on multiple medications, and spot credit card fraud. An Australian scientist working in Japan is attempting to create a silicon brain using newly developed quantum resistors. Reported in a 1995 article in Business Week, Hugo de Garis is leading a team of scientists to create a computing system capable of reproducing itself. As Business Week reports, the project will attempt to "not only coax silicon circuits into giving birth to innate intelligence but imbue them with the power to design themselves, to control their own destiny by spawning new generations of ever improving brains at electronic speeds." This type of technology is called evolvable hardware.

Another recent advance in AI has been the creation of artificial neural systems (ANS), which have been described as "an artificial-intelligence tool that attempts to simulate the physical process upon which intuition is based, that is, by simulating the process of adaptive biological learning." ANS, essentially, is a network of computers that are grouped together in ways similar to the brain's configuration of biological processing lobes.
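
A single artificial neuron gives the flavor of such adaptive learning. The sketch below, a standard perceptron trained on the logical AND function, is a generic textbook exercise rather than a description of any ANS product; the learning rate and epoch count are arbitrary.

    # One artificial "neuron" that learns AND from examples by nudging
    # its weights after every mistake (adaptive learning in miniature).
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1 = w2 = bias = 0.0
    rate = 0.1

    for _ in range(20):                       # passes over the training data
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output           # zero when the neuron is right
            w1 += rate * error * x1
            w2 += rate * error * x2
            bias += rate * error

    for (x1, x2), _ in samples:
        print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)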

Even considering all of these advancements, many people are skeptical that a machine will ever replicate human cognition. Marvin Minsky, a scientist at the Massachusetts Institute of Technology, states that the hardest thing of all in the creation of artificial intelligence is building a machine with common sense.

Further Reading

Anthes, Gary H. "Great Expectations: Award-Winning AI Scientist Raj Reddy." Computer World (3 April 1995): 82.

Chartrand, Sabra. "A Split in Thinking among Keepers of Artificial Intelligence." New York Times (18 July 1993).

Port, Otis. "Computers That Think Are Almost Here." Business Week (17 July 1995): 68-73.

Wright, Robert. "Can Machines Think?" Time (25 March 1996): 50-58.

"Artificial Intelligence." Gale Encyclopedia of Psychology. 2001. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

artificial intelligence

artificial intelligence (AI), the use of computers to model the behavioral aspects of human reasoning and learning. Research in AI is concentrated in some half-dozen areas. In problem solving, one must proceed from a beginning (the initial state) to the end (the goal state) via a limited number of steps; AI here involves an attempt to model the reasoning process in solving a problem, such as the proof of a theorem in Euclidean geometry.

In game theory (see games, theory of), the computer must choose among a number of possible "next" moves to select the one that optimizes its probability of winning; this type of choice is analogous to that of a chess player selecting the next move in response to an opponent's move. In pattern recognition, shapes, forms, or configurations of data must be identified and isolated from a larger group; the process here is similar to that used by a doctor in classifying medical problems on the basis of symptoms. Natural language processing is an analysis of current or colloquial language usage without the sometimes misleading effect of formal grammars; it is an attempt to model the learning process of a translator faced with the phrase "throw mama from the train a kiss." Cybernetics is the analysis of the communication and control processes of biological organisms and their relationship to mechanical and electrical systems; this study could ultimately lead to the development of "thinking" robots (see robotics). Machine learning occurs when a computer improves its performance of a task on the basis of its programmed application of AI principles to its past performance of that task.
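
To illustrate the initial-state-to-goal-state framing, here is a small Python sketch of a breadth-first state-space search. The puzzle (reach one number from another using two allowed moves) and both moves are invented; the point is the search pattern, not the puzzle.

    from collections import deque

    # Breadth-first search from an initial state to a goal state.
    # States are numbers; the legal "moves" are add-3 and double.
    def solve(start, goal):
        frontier = deque([(start, [])])       # (state, moves taken so far)
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == goal:
                return path                   # shortest move sequence
            for name, nxt in (("add 3", state + 3), ("double", state * 2)):
                # both moves only increase the state, so prune past the goal
                if nxt not in seen and nxt <= goal:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None                           # goal unreachable

    print(solve(2, 22))  # ['add 3', 'add 3', 'add 3', 'double']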

In the public eye, advances in chess-playing computer programs were symbolic of early progress in AI. In 1948 British mathematician Alan Turing developed a chess algorithm for use with calculating machines; it lost to an amateur player in the one game that it played. Two years later American mathematician Claude Shannon articulated two chess-playing algorithms: brute force, in which all possible moves and their consequences are calculated as far into the future as possible; and selective mode, in which only the most promising moves and their more immediate consequences are evaluated.
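
The brute-force approach can be shown end to end on a game small enough to search completely. The sketch below exhaustively examines all moves and countermoves in a toy take-away game (players alternately remove 1 to 3 stones; whoever takes the last stone wins); the game is chosen only because, unlike chess, it fits in a few lines.

    # Exhaustive lookahead ("brute force") for a toy take-away game.
    # Returns +1 if the first player can force a win, -1 otherwise.
    def game_value(stones, first_to_move=True):
        if stones == 0:
            # Whoever just moved took the last stone and won.
            return -1 if first_to_move else 1
        results = [game_value(stones - take, not first_to_move)
                   for take in (1, 2, 3) if take <= stones]
        # Each side picks the continuation best for itself.
        return max(results) if first_to_move else min(results)

    print(game_value(4))   # -1: 4 stones loses for the player to move
    print(game_value(5))   # +1: take 1 stone, leaving the opponent 4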

In 1988 Hitech, a program developed at Carnegie-Mellon Univ., defeated former U.S. champion Arnold Denker in a four-game match, becoming the first computer to defeat a grandmaster. A year later, Garry Kasparov, the reigning world champion, bested Deep Thought, a program developed by the IBM Corp., in a two-game exhibition. In 1990 the German computer Mephisto Portorose became the first program to defeat a former world champion; while playing an exhibition of 24 simultaneous games, Anatoly Karpov bested 23 human opponents but lost to the computer.

Kasparov in 1996 became the first reigning world champion to lose to a computer in a game played with regulation time controls; the Deep Blue computer, developed by the IBM Corp., won the first game of the match, lost the second, drew the third and fourth, and lost the fifth and sixth. Deep Blue used the brute force approach, evaluating more than 100 billion chess positions each turn while looking six moves ahead; it coupled this with the most efficient chess evaluation software yet developed and an extensive library of chess games it could analyze as part of the decision process.

Subsequent matches between Vladimir Kramnik and Deep Fritz (2002, 2006) and Kasparov and Deep Junior (2003) resulted in two ties and a win for the programs. Unlike Deep Blue, which was a specially designed computer, these more recent computer challengers were chess programs running on powerful personal computers. Such programs have become an important tool in chess, and are used by chess masters to analyze games and experiment with new moves.

Another notable IBM AI computer, Watson, competed in 2011 on the "Jeopardy!" television quiz show, defeating two human champions. Watson, about 100 times faster than Deep Blue, was designed to process questions in natural human language (as opposed to simple commands), making sense of the quirky questions' complexity and ambiguity, and to search an extensive database to quickly provide the correct answers. Watson is a prototype for programs or services that can act as knowledgeable assistants, or even human substitutes, in such different fields as medicine, catalog sales, and computer technical support.

See also expert system.

See D. Freedman, Brainmakers: How Scientists Are Moving Beyond Computers to Create a Rival to the Human Brain (1994); D. Gelernter, The Muse in the Machine: Computerizing the Poetry of Human Thought (1994); D. Rasskin-Gutman, Chess Metaphors: Artificial Intelligence and the Human Mind (2009).

"artificial intelligence." The Columbia Encyclopedia, 6th ed. 2016. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

Artificial Intelligence

ARTIFICIAL INTELLIGENCE

ARTIFICIAL INTELLIGENCE, a branch of computer science that seeks to create a computer system capable of sensing the world around it, understanding conversations, learning, reasoning, and reaching decisions, just as would a human. In 1950 the pioneering British mathematician Alan Turing proposed a test for artificial intelligence in which a human subject tries to talk with an unseen conversant. The tester sends questions to the machine via teletype and reads its answers; if the subject cannot discern whether the conversation is being held with another person or a machine, then the machine is deemed to have artificial intelligence. No machine has come close to passing this test, and it is unlikely that one will in the near future. Researchers, however, have made progress on specific pieces of the artificial intelligence puzzle, and some of their work has had tangible benefits.

One area of progress is the field of expert systems, or computer systems designed to reproduce the knowledge base and decision-making techniques used by experts in a given field. Such a system can train workers and assist in decision making. MYCIN, a program developed in 1976 at Stanford University, suggests possible diagnoses for patients with infectious blood diseases, proposes treatments, and explains its "reasoning" in English. Corporations have used such systems to reduce the labor costs involved in repetitive calculations. A system used by American Express since November 1988 to advise when to deny credit to a customer saves the company millions of dollars annually.

A second area of artificial intelligence research is the field of artificial perception, or computer vision. Computer vision is the ability to recognize patterns in an image and to separate objects from background as quickly as the human brain. In the 1990s military technology initially developed to analyze spy-satellite images found its way into commercial applications, including monitors for assembly lines, digital cameras, and automotive imaging systems. Another pursuit in artificial intelligence research is natural language processing, the ability to interpret and generate human languages. In this area, as in others related to artificial intelligence research, commercial applications have been delayed as improvements in hardware—the computing power of the machines themselves—have not kept pace with the increasing complexity of software.

The field of neural networks seeks to reproduce the architecture of the brain—billions of connected nerve cells—by joining a large number of computer processors through a technique known as parallel processing. Fuzzy systems is a subfield of artificial intelligence research based on the assumption that the world encountered by humans is fraught with approximate, rather than precise, information. Interest in the field has been particularly strong in Japan, where fuzzy systems have been used in disparate applications, from operating subway cars to guiding the sale of securities. Some theorists argue that the technical obstacles to artificial intelligence, while large, are not insurmountable. A number of computer experts, philosophers, and futurists have speculated on the ethical and spiritual challenges facing society when artificially intelligent machines begin to mimic human personality traits, including memory, emotion, and consciousness.
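
The fuzzy-systems premise, degrees of truth instead of yes/no, can be shown in a few lines. The membership function and thresholds below are invented for illustration.

    # A fuzzy membership function: "how hot is it?" answered with a
    # degree between 0 (not hot at all) and 1 (fully hot).
    def hot(temp_c):
        if temp_c <= 20.0:
            return 0.0
        if temp_c >= 35.0:
            return 1.0
        return (temp_c - 20.0) / 15.0     # linear ramp between thresholds

    # A fuzzy rule: fan speed follows the degree of hotness.
    for t in (18, 25, 30, 36):
        print(t, "degrees -> hot:", round(hot(t), 2),
              " fan:", round(hot(t) * 100), "%")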

BIBLIOGRAPHY

Kurzweil, Ray. The Age of Spiritual Machines. New York: Viking, 1999.

Partridge, Derek. A New Guide to Artificial Intelligence. Norwood, N.J.: Ablex, 1991.

Shapiro, Stuart C., ed. Encyclopedia of Artificial Intelligence. 2d ed. New York: Wiley, 1992.

Turban, Efraim. Expert Systems and Applied Artificial Intelligence. New York: Macmillan, 1992.

Vincent Kiernan / a. r.

See also Computers and Computer Industry; Cybernetics; Cyborgs; Robotics; Virtual Reality.

"Artificial Intelligence." Dictionary of American History. 2003. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

artificial intelligence

artificial intelligence (AI) A discipline concerned with the building of computer programs that perform tasks requiring intelligence when done by humans. However, intelligent tasks for which a decision procedure is known (e.g. inverting matrices) are generally excluded, whereas perceptual tasks that might seem not to involve intelligence (e.g. seeing) are generally included. For this reason, AI is better defined by indicating its range. Examples of tasks tackled within AI are: game playing, automated reasoning, learning, natural-language understanding, planning, speech understanding, theorem proving, and computer vision.

Perceptual tasks (e.g. seeing and hearing) have been found to involve much more computation than is apparent from introspection. This computation is unconscious in humans, which has made it hard to simulate. AI has had relatively more success at intellectual tasks (e.g. game playing and theorem proving) than perceptual tasks. Sometimes these computer programs are intended to simulate human behavior to assist psychologists and neuroscientists (see cognitive modeling). Sometimes they are built to solve problems for technological application (see expert systems, robotics).

Both theoretical and applied AI research have made very significant contributions to computer science. Computational techniques that originated from AI include augmented transition networks, means/ends analysis, rule-based systems, resolution, semantic networks, and heuristic search.

Philosophers have long been interested in the question, “can a computer think?” There are two schools of thought: weak AI, which is the proposition that computers can at least simulate thought and intelligence; and strong AI, which argues that a machine that can perform cognitive tasks is actually thinking. This is a complex topic that has received new interest with a focus on consciousness.

Daintith, John. "artificial intelligence." A Dictionary of Computing. 2004. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

artificial intelligence

artificial intelligence (AI) Science concerned with developing computers and computer programs that model human intelligence. The most common form of AI involves programming a computer to answer questions on a specialized subject. Such ‘expert systems’ are said to display the human ability to perform expert analytical tasks. A similar system in a word processor may highlight incorrect spellings, and be ‘taught’ new words. A closely related science, sometimes known as ‘artificial life’, is concerned with more low-level intelligence. For example, a robot may be programmed to find its way around a maze, displaying the basic ability to physically interact with its surroundings.

"artificial intelligence." World Encyclopedia. 2005. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

Artificial Intelligence: AI

ARTIFICIAL INTELLIGENCE: AI

As technology becomes more common in today's society, many questions are raised regarding its social and moral consequences. Often filmmakers tap into the high-tech market for ideas about family values and virtues. One such effort is Steven Spielberg's 2001 movie Artificial Intelligence: AI. The story revolves around David, a human-like robot, and his desire to experience real love and emotions, like other humans. The real question the film poses to its viewers is: Can humans love an artificial being?

"Artificial Intelligence: AI." Computer Sciences. 2002. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.

artificial intelligence

artificial intelligence the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Knowles, Elizabeth. "artificial intelligence." The Oxford Dictionary of Phrase and Fable. 2006. Encyclopedia.com. 27 Sep. 2016 <http://www.encyclopedia.com>.
