Computer Science: Artificial Intelligence and Economics: The Rise of Formalism and Behaviorism

Introduction

Starting in the 1930s, the American scientist Herbert Simon (1916–2001) exerted a major influence on political science, economics, management science, cognitive psychology, the philosophy of science, and artificial intelligence (AI). Simon won the 1978 Nobel Prize in economics for his theory of bounded rationality, which challenged the traditional assumption in economics that consumers and producers always make perfectly rational choices to maximize utility and profits. Simon helped transform the social sciences into the behavioral sciences and was a leading advocate of behavioralism, the movement in political science to study political behavior with empirical and quantitative methods. He also sought to extend his idea of the mind as a physical symbol system to the process of scientific discovery, thus contributing, albeit controversially, to the philosophy of science.

Historical Background and Scientific Foundations

Herbert Simon was born and grew up in Milwaukee, Wisconsin. His father, Arthur Carl Simon, an electrical engineer and patent attorney, immigrated to the United States in 1903; his mother, Edna Simon (born Merkel), was an American pianist of third-generation German and Czech descent. Simon had one brother, Clarence, five years older. Simon attributed his early interest in books and music to the influence of his mother. He camped, canoed, and hiked extensively in Wisconsin and formed lifelong interests in chess, classical piano, and beetles. Conversation around the family dinner table often involved politics and science. He was educated in the public school system of Milwaukee.

His mother's younger brother, Harold Merkel, who had studied economics at the University of Wisconsin, lived briefly with the family. He died young, but left his personal library of economics and psychology texts in the home. It was through these books that Simon became aware of the social sciences. Simon joined his high-school debate club, where he found himself defending unpopular causes such as free trade and disarmament. This moved him to read several textbooks on economics.

Simon entered the University of Chicago in 1933, determined to bring mathematical rigor to the social sciences. The university was loosely organized by today's standards; Simon's graduate transcript records that he formally attended only one course, boxing, in which he earned a “B.” His studies in symbolic logic, statistics, advanced mathematics, physics, economics, and political science were self-guided. In 1936, a term-paper project stimulated his interest in decision-making in organizations. The paper helped him obtain a research assistantship studying the internal operations of municipal administrations, which in turn led to his appointment as director of a research group at the University of California, Berkeley, from 1939 to 1942, performing research on the administration of state relief programs. While in California he remained enrolled as a doctoral student at the University of Chicago, taking his exams by mail. His work on state relief programs provided the material for his doctoral dissertation, and he received his Ph.D. in political science in 1942.

In 1947, he published a revised form of his Ph.D. dissertation as Administrative Behavior, a landmark work in administration theory. By the late 1950s, Administrative Behavior had become standard reading in college courses on public administration, organizational sociology, and business education.

Early and Middle Career

After receiving his doctorate in 1942, Simon joined the political science faculty of the Illinois Institute of Technology in Chicago. Living in Chicago enabled Simon to participate regularly in the staff seminars of the Cowles Commission for Research in Economics at the University of Chicago. The Cowles Commission was a private foundation devoted to integrating mathematical techniques with economics. The Cowles seminars introduced Simon to mathematical economic theories, including new econometric (economy-measurement) techniques and John Maynard Keynes's general theory.

In 1949, the Carnegie Institute of Technology (which merged with Mellon Institute of Industrial Research in 1967 to form Carnegie Mellon University) was given a $6 million endowment by William Larimer Mellon to found its Graduate School of Industrial Administration (GSIA, renamed the David A. Tepper School of Business in 2004). Simon left the Illinois Institute of Technology to become the GSIA's first faculty member.

For the first five years of the GSIA's existence, Simon was a direct participant in all team research projects at the GSIA. Funding came mostly from contracts with the RAND Corporation, the Ford Foundation, the U.S. Air Force, and the U.S. Navy. RAND (an acronym derived from “Research and Development”) had been set up in 1946 to provide scientific expertise to the U.S. military, but was spun off as an independent, nonprofit think-tank in 1948. It remained influential in setting defense policy, including nuclear deterrence policy. In 1952, Simon was asked by several RAND scientists to participate in a social-psychology study of a simulated air-defense direction center. At RAND's facility in Santa Monica, Simon met Allen Newell (1927–1992). Newell and Simon found they had much in common, and together sought to use information-processing concepts to analyze the way air-defense personnel operated their machines. RAND exposed Simon to the most advanced computer technology of the day, which was used in its air-defense study for printing out maps—a startling accomplishment at the time. Computer-generated maps alerted Simon to the fact that computers can be used to handle non-numerical information.

His work with Newell moved Simon to consider more carefully the analogy between the human brain and the digital computer. He already conceived of the human mind (in its problem-solving capacity) primarily as a logic machine, a device that takes premises (claims of fact) and processes them to reach conclusions. This idea now began to take the more specific form that the mind is like a computer, which takes in a program and data and processes them to produce output.

In November 1954, Newell heard a talk on a computerized pattern-recognition system. In the following months, Newell wrote a paper about the prospects for computer chess and left RAND briefly for Pittsburgh, where he intended to collaborate with Simon on a computer program that would play chess. In the summer of 1955, Newell moved back to RAND but continued commuting to Pittsburgh to work with Simon. The two eventually wrote a program called Logic Theorist, designed to prove mathematical theorems.

Logic Theorist was a stunning success, finding proofs for thirty-eight of the first fifty-two theorems of Alfred North Whitehead and Bertrand Russell's magisterial Principia Mathematica (1910–1913). Newell and Simon presented their results at the pivotal Dartmouth Summer Research Conference on Artificial Intelligence of 1956, where the term “artificial intelligence” was first used. Wishing to generalize the success of Logic Theorist, the two men next produced a program called General Problem Solver. The program never achieved its most ambitious goals, but the effort to produce it was formative of Simon and Newell's ideas about artificial intelligence.

Later Career

Given Simon and Newell's formalism, that is, their belief that all thought is symbol manipulation, it followed for them that the institutional boundary between computation and psychology reflects nothing real: both are ultimately ways of studying physical symbol systems. The digital computer might therefore not only model or mimic the human mind but work just as the human mind does. Psychologists, for their part, were deeply impressed by Logic Theorist and General Problem Solver, the apparent advent of the “thinking machines” long forecast by science fiction. Several landmark books in the late 1950s and 1960s signaled the cognitive revolution in psychology that overthrew S-R (stimulus-response) behaviorism, then the prevailing psychological orthodoxy. The use of computational models became commonplace—although never uncontroversial—in the study of cognitive psychology, and remains so today. Simon and Newell continued to contribute to this field, stimulating and incorporating the results of painstaking experimental work with human volunteers. For example, in the 1970s and 1980s their computer models of mental processes achieved close matches between program performance and human eye and finger movements and certain other behaviors.

Simon extended his ideas about intelligence as a formalistic problem-solving procedure guided by heuristic learning rules to the philosophy of science. Scientific discovery, he maintained, being a cognitive problem-solving process, could be modeled by computer programs. Beginning in the 1960s he repeatedly published on this theme.

Modern Cultural Connections

Administrative Theory

Prior to the 1930s, economics concerned itself mostly with the behavior of total markets rather than the behavior of individuals or organizations. However, interest in organization theory was stimulated by the growth of large companies through merger and monopoly and, within companies, the growth of managerial structures distinct from executive and productive operations. Simon was influenced by and contributed to this new area of study. His Administrative Behavior (1947) became one of the century's most important works in public administration and political science.

His approach to the study of administrative behavior had philosophical roots. While studying at the University of Chicago in the late 1930s, he adopted logical positivism under the influence of philosopher Rudolf Carnap and A.J. Ayer's Language, Truth and Logic (1936). Central to logical positivism was the verification theory of meaning, which asserts that a statement only has meaning if it is empirically verifiable. One version or offshoot of verification theory is operationalism, the claim that to understand something, one must understand the operations or procedures by which that something is brought into being. Simon brought his operationalist viewpoint to the study of organizational behaviors. In Administrative Behavior, he promoted an operational model of human decision-making in organizations. Simon's model assumed essentially rational behavior (Simon, like most other economists, excluded the possibility of truly irrational behavior as a significant factor), but only within limits. The organizational decision maker, Simon said, is constantly faced with multiple means-and-ends strategies associated with various costs and consequences. Achievement of ends with minimal cost is the definition of “rationality.” Simon's most important innovation was his observation that decision makers never approach perfect rationality in real life because of (1) inherently limited knowledge and cognitive ability and (2) contextual constraints such as limited resources. This is the principle Simon called “subjective rationality” in Administrative Behavior but later referred to as “bounded rationality.”

Simon modeled the company or organization itself as a network of cooperating, boundedly rational decision makers. He rejected the classical view that a company can be effectively treated as omniscient, perfectly rational, and profit-maximizing.

Economics and Political Science

In the 1940s, Simon extended his theory of the boundedly rational organizational decision maker to microeconomics. The prevalent view among economic theorists, then as now, was the neoclassical rationality postulate, namely, the assertion that consumers rationally maximize their utility (the benefit they experience from their expenditures) while producers rationally maximize their profits. Simon viewed these postulates as unrealistic: all economic actors, like the intracorporate decision makers modeled in Administrative Behavior, apply limited powers of reasoning to limited information in constraining contexts, so optimization—always achieving the best of all possible outcomes—cannot be assumed. In a 1956 paper in Psychological Review, he coined the term satisficing, a combination of satisfying and sufficing. Satisficing is the selection and pursuit of a course of action that, the actor hopes, will be “good enough” to achieve a specific goal but which may not be optimal.
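The distinction can be made concrete with a short sketch, written here in Python purely for illustration; the suppliers, payoffs, and aspiration level below are invented and are not drawn from Simon's own models. A maximizer inspects every alternative before choosing the best one, while a satisficer stops at the first alternative that clears an aspiration level.

# A minimal sketch of maximizing versus satisficing choice, assuming a fixed
# list of (name, payoff) alternatives examined in order. Illustrative only.

def maximize(alternatives):
    """Examine every alternative and return the one with the highest payoff."""
    return max(alternatives, key=lambda option: option[1])

def satisfice(alternatives, aspiration_level):
    """Return the first alternative whose payoff is good enough; if none
    clears the threshold, settle for the last one examined."""
    chosen = None
    for option in alternatives:
        chosen = option
        if option[1] >= aspiration_level:
            break  # stop searching: this option suffices
    return chosen

if __name__ == "__main__":
    # Hypothetical suppliers and their estimated value to a firm.
    options = [("supplier A", 62), ("supplier B", 75),
               ("supplier C", 91), ("supplier D", 88)]
    print(maximize(options))                        # ('supplier C', 91): best, but every option examined
    print(satisfice(options, aspiration_level=70))  # ('supplier B', 75): good enough, search stops early

The maximizer's answer is better, but only because it paid the cost of examining every option; the satisficer trades some payoff for a much cheaper search, which is the trade-off bounded rationality describes.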

Simon proposed that the neoclassical maximizing rationality postulate be replaced with the postulate that decision makers engage in satisficing behavior: consumers and entrepreneurs or corporations are satisficers, not optimizers. Simon did not succeed in dislodging rational choice theory from economic theory. Rather, a mixture of competing rational-choice theories has remained dominant, while theories emphasizing bounded rationality have grown up in competition with them. Simon's bounded-rationality thesis contributed to the institutionalist school of economics, which focuses on bounded rationality and adaptation in economic behavior.

Contributions to AI and Cognitive Psychology

In 1955, Simon and Newell realized that before they could build an intelligent machine they would need to invent an appropriate programming language. Working with RAND programmer Cliff Shaw, they developed a list-processing programming language called Information Processing Language (an important conceptual precursor of the computer language LISP, LIst Processor, formally specified in 1958 and still extensively used today in AI research). Information Processing Language was essentially a tool for managing the allocation of limited memory space. In December 1955, Simon tested his program concept by assigning logical tasks written on cards to human volunteers (including his three children, then ages 13, 11, and 9). Obeying the instructions on the cards, the volunteers enacted the program, solving a test problem. On the strength of this test, Simon announced to his mathematical modeling class in January 1956 that he, Newell, and Shaw had invented a “thinking machine.”
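The central idea of list processing can be suggested with a brief sketch, given here in Python for illustration; IPL's actual notation and memory model were quite different. Symbols are stored in linked cells, so lists of symbols (and lists of lists) can be built, traversed, and spliced by manipulating pointers rather than fixed blocks of memory.

# A minimal sketch of the list-structure idea behind a list-processing language:
# each cell holds a symbol and a link to the next cell. Illustrative Python,
# not IPL's actual notation.

class Cell:
    def __init__(self, symbol, link=None):
        self.symbol = symbol   # the symbol stored in this cell
        self.link = link       # the next cell, or None at the end of the list

def make_list(symbols):
    """Build a linked list of cells from a Python sequence of symbols."""
    head = None
    for symbol in reversed(symbols):
        head = Cell(symbol, head)
    return head

def to_python_list(cell):
    """Walk the links and collect the symbols, for display."""
    out = []
    while cell is not None:
        out.append(cell.symbol)
        cell = cell.link
    return out

if __name__ == "__main__":
    premises = make_list(["P", "P->Q", "Q->R"])
    # Splicing a new symbol onto the front is a pointer operation, not a copy:
    premises = Cell("axiom-1", premises)
    print(to_python_list(premises))   # ['axiom-1', 'P', 'P->Q', 'Q->R']

It was this flexibility in representing arbitrary symbolic structures, rather than fixed numerical arrays, that made list processing attractive for programs such as Logic Theorist.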

In 1956, using their new programming tools, Simon and Newell wrote Logic Theorist (also called Logic Theory Machine), a computer program designed to discover proofs for theorems in symbolic logic. The program stored axioms and already-proved theorems (including those proved by itself) in symbolic form. Supplied with a logical expression, it sought to prove the expression by combining its stored axioms and theorems. This is the program that was able to prove thirty-eight of the first fifty-two theorems of the Principia Mathematica.
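The general strategy of searching for a proof by combining stored axioms and previously proved results can be suggested by a toy sketch, written in Python for illustration. Logic Theorist's actual methods, which involved substitution, detachment, and chaining over the Principia's propositional calculus, were considerably richer; the sketch below merely applies modus ponens repeatedly until the goal expression appears.

# A toy illustration of proving a goal by combining stored facts and
# implications. Only modus ponens is applied; Logic Theorist's real repertoire
# was richer.

def prove(facts, implications, goal):
    """facts: set of known propositions; implications: list of
    (premise, conclusion) pairs. Returns a list of proof steps if the goal
    becomes derivable, otherwise None."""
    known = set(facts)
    steps = []
    changed = True
    while changed and goal not in known:
        changed = False
        for premise, conclusion in implications:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                steps.append(f"{premise} and {premise}->{conclusion} give {conclusion}")
                changed = True
    return steps if goal in known else None

if __name__ == "__main__":
    axioms = {"P"}
    theorems = [("P", "Q"), ("Q", "R"), ("R", "S")]
    print(prove(axioms, theorems, "S"))
    # ['P and P->Q give Q', 'Q and Q->R give R', 'R and R->S give S']

Because each newly derived expression is added to the store and reused, the sketch shares one feature of Logic Theorist: later results are built on results the program has already established.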

Next, Newell and Simon moved on to a new, more ambitious project, General Problem Solver (GPS). GPS, they hoped, would show that a heuristic (trial-and-error, rule-of-thumb) rather than a purely algorithmic (fixed, step-by-step) approach to problem solving was not only at the core of human reasoning but could be replicated in machines. Algorithms are deterministic recipes or rule systems for carrying out tasks, while heuristic rules, or heuristics, are loosely defined rules that allow solutions to be approached through trial and error. Working on GPS allowed Newell and Simon to develop their view of physical symbol systems. They claimed that hierarchic symbolic manipulation was the process used by humans, consciously or otherwise, in all problem-solving, and argued that human thought could be abstracted from neurology and that thought, being symbolic, could therefore be ported from one calculating mechanism to another.

All intelligent behavior, according to this central doctrine, is the behavior of sufficiently complex “physical symbol systems.” Symbolic structures are realized as physical patterns: “thinking” occurs when symbols are modified, duplicated, created, or otherwise transformed by physical processes corresponding to logical rules. The particular physical medium, on this view—brains, computers, or cards carried about a classroom by volunteers—is irrelevant: distinct physical systems are equivalent if they instantiate equivalent symbol systems. Not only can an appropriate physical symbol system reproduce any intelligent behavior, but, conversely, Simon and Newell claimed, any intelligent behavior will be found, on analysis, to be the product of some physical symbol system.

Newell and Simon's formalist view has shaped one of the two basic approaches in AI research for over half a century. The main alternative to the formalist approach has sought to realize intelligent behaviors in neural networks—devices employing numerous interconnected simple units (analogous to neurons) rather than physical symbol systems. Due to the slow progress of symbolic AI in the many years since Logic Theorist's success, the AI trend in the 1990s and 2000s has been toward the neural-network or connectionist approach.
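The flavor of the connectionist alternative can be conveyed by a very small sketch, given here in Python for illustration and not tied to any particular research system. Instead of manipulating explicit symbols with rules, a single unit adjusts numeric connection weights from examples until its behavior is correct, here learning the logical AND of two inputs.

# A minimal connectionist-style sketch: one perceptron-like unit learns the
# logical AND function by adjusting numeric weights from examples.
# Modern neural networks use many such units arranged in layers.

def step(x):
    return 1 if x >= 0 else 0

def train_unit(examples, epochs=20, learning_rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - output
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_unit(and_examples)
    for (x1, x2), target in and_examples:
        prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
        print((x1, x2), "->", prediction, "expected", target)

Nothing in the trained unit corresponds to a stored rule or symbol for AND; its competence resides entirely in the learned weights, which is exactly the contrast with the physical-symbol-system approach.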

Following Logic Theorist and General Problem Solver, a field of computational psychology sprang up and has continued in the decades since, though never without controversy. This field and AI overlap in personnel, techniques, and disputes; the main difference is that AI is oriented toward the manufacture of intelligent systems, while computational psychology is oriented toward understanding the human mind as an intelligent machine.

Newell and Simon's “thinking machine” programs were intended as psychological simulations from the beginning—realizations in software of exactly what goes on in the human mind (allegedly) during problem-solving. Later cognitive-simulation work by D. Marr, J.R. Anderson, Z.W. Pylyshyn, and others has derived directly from Newell and Simon's cognitive-simulation approach. Although not an unqualified success, GPS established theoretical ideas that have proved useful in analyzing some forms of problem-solving in the half-century since: specifically, structuring one's search of a problem-space using goals and sub-goals, conducting one's search using heuristic (flexible) rules or operators, and controlling processing using information about preconditions and results of operators.
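Those ideas (goals and sub-goals, heuristic operators, and preconditions and results) can be illustrated with a compact sketch of means-ends analysis, written in Python for illustration; the states and operators are invented, STRIPS-style toys, not taken from Newell and Simon's actual GPS tables. The procedure finds a difference between the current state and the goal, selects an operator whose results reduce that difference, and recursively treats the operator's preconditions as sub-goals.

# A minimal sketch of means-ends analysis in the spirit of GPS, using invented
# toy operators. Each operator has preconditions, facts it adds, and facts it
# deletes.

OPERATORS = [
    {"name": "drive-to-shop",  "pre": {"car works"},  "add": {"at shop"},    "delete": set()},
    {"name": "repair-car",     "pre": {"have money"}, "add": {"car works"},  "delete": {"have money"}},
    {"name": "withdraw-money", "pre": set(),          "add": {"have money"}, "delete": set()},
]

def achieve(state, goals, plan, depth=0):
    """Return (final_state, plan) achieving all goals, or None if no plan is found."""
    if depth > 10:                      # crude guard against endless sub-goaling
        return None
    missing = set(goals) - set(state)   # the difference between state and goal
    if not missing:
        return state, plan
    goal = next(iter(missing))
    for op in OPERATORS:
        if goal in op["add"]:
            # Sub-goal: satisfy this operator's preconditions first.
            result = achieve(state, op["pre"], plan, depth + 1)
            if result is None:
                continue
            sub_state, sub_plan = result
            new_state = (set(sub_state) - op["delete"]) | op["add"]
            return achieve(new_state, goals, sub_plan + [op["name"]], depth + 1)
    return None

if __name__ == "__main__":
    final_state, plan = achieve({"at home"}, {"at shop"}, [])
    print(plan)   # ['withdraw-money', 'repair-car', 'drive-to-shop']

The plan comes out in execution order, but it was discovered in the reverse direction: reaching the shop raised the sub-goal of making the car work, which in turn raised the sub-goal of having money.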

Although cognitive models of the Newell-Simon type have produced impressive matches to empirical observations of human problem-solving behavior in some experiments, they have also drawn criticism from other psychologists. Some critics argue that Simon and Newell went too far in assuming that all problem solving can be treated as sequential search in a state-space. Others reject the premise that the human mind is fundamentally a symbol-processing system at all, although it may engage in symbol processing as a special-case behavior (e.g., when doing mental arithmetic, proving symbolic logic theorems, or counting out chess positions). Even some who accept the assumptions of formalist computational psychology contend that the seriality (one-step-at-a-time nature) of Simon and Newell's approach is a drawback, arguing that human cognition is inherently parallel, with many things happening at once rather than one thing after another.

IN CONTEXT: THE DREYFUS DISPUTE

Hubert Dreyfus (1929–), a philosophy professor at the University of California, Berkeley, for most of his career, has been a vocal critic of formalist AI and particularly of Simon and Newell. In 1965, while a consultant for RAND, Dreyfus wrote a critical analysis of AI that was published in 1972 as the book What Computers Can't Do: A Critique of Artificial Reason. (An updated version was released in 1992.)

Dreyfus acknowledged the value of the work done by Simon, Newell, Marvin Minsky, and their AI colleagues on list structures, database organization, and the like, and affirmed the theoretical possibility of simulating human thought in a supercomputer that models the physics of the entire human brain at the molecular level. However, he criticized Simon, Newell, and others for making certain assumptions and accused them of chronic exaggeration and overpromising about the achievements and future of AI. For example, in a talk given in 1957, Simon predicted that within a decade a computer would be the world chess champion, that a computer would discover an important new theorem in mathematics, and that most new psychological theories would take the form of computer programs. Dreyfus noted in 1972 that, well over a decade after being made, Simon's first two predictions had not been fulfilled and were in fact not even close to fulfillment. He identified this failure as a special case of what he said was an overall pattern in AI research: “early, dramatic success based on the easy performance of simple tasks, or low-quality work on complex tasks, and then diminishing returns, disenchantment, and, in some cases, pessimism” (What Computers Can't Do).

Simon and Newell, Dreyfus argued, made bad predictions because they made four faulty assumptions: (1) the biological assumption that the brain processes information by discrete operations at some level, (2) the psychological assumption that the mind works primarily by applying formal rules to bits of symbolic information, (3) the epistemological assumption that all knowledge can take the form of logical statements, and (4) the ontological assumption that the world can be completely modeled as a collection of context-free bits of information. Since all four assumptions were (Dreyfus argued) false, promises such as Simon's 1965 claim that “machines will be capable, within twenty years, of doing any work that a man can do” (The Shape of Automation for Men and Management) were doomed. Tempers flared on both sides of the debate: Dreyfus was reviled by much of the AI community and responded heatedly.

In the long run, some of Dreyfus's pessimistic forecasts have proved correct. A chess computer (IBM's custom-built Deep Blue) did not beat the human world chess champion, Garry Kasparov, until 1997, 40 years after Simon's original prediction. Moreover, it did not do so by mimicking human chess-playing strategies, but partly by evaluating about 18 billion board positions per move, far more than the 100 to 200 positions a top human player explicitly considers per move. Yet Simon would not concede that Deep Blue had beaten Kasparov primarily by counting out billions of boards per move rather than by applying heuristic rules of the kind he had insisted, since the 1950s, were the basis of human chess-playing. As of 2008, programs using more advanced heuristics (e.g., Deep Junior, Deep Fritz) and running on less powerful hardware were still evaluating on the order of 1.5 billion positions per move; despite 50 years of research, it had still not been possible to model human chess-playing directly in software. Computers play excellent chess, but they do not play it like human beings.
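The sheer scale of exhaustive search, and the way pruning rules cut it down, can be illustrated with a generic sketch in Python. The example uses a uniform toy game tree with invented random leaf values; it is not Deep Blue's actual search, which combined special-purpose hardware with extensive chess knowledge. Plain minimax visits every position in the tree, while alpha-beta pruning skips branches that cannot change the final choice.

# Counting positions examined by plain minimax versus alpha-beta pruning on a
# uniform toy game tree. Leaf values are independent random numbers, so only
# the node counts (not the returned values) are meaningful to compare.

import random

BRANCHING = 5   # moves available at each position (toy value)
DEPTH = 6       # how many plies deep the search looks

def minimax(depth, maximizing, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()      # stand-in for a leaf evaluation
    values = [minimax(depth - 1, not maximizing, counter) for _ in range(BRANCHING)]
    return max(values) if maximizing else min(values)

def alphabeta(depth, alpha, beta, maximizing, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()
    best = float("-inf") if maximizing else float("inf")
    for _ in range(BRANCHING):
        value = alphabeta(depth - 1, alpha, beta, not maximizing, counter)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:
            break                   # prune: the opponent will avoid this line
    return best

if __name__ == "__main__":
    random.seed(0)
    full, pruned = [0], [0]
    minimax(DEPTH, True, full)
    alphabeta(DEPTH, float("-inf"), float("inf"), True, pruned)
    print("positions examined without pruning:", full[0])   # all 19,531 nodes of the tree
    print("positions examined with pruning:   ", pruned[0]) # typically a fraction of that

At chess-like branching factors the same arithmetic yields the billions of positions per move cited above, and the gap between blind enumeration and selective, knowledge-guided pruning is what the debate over heuristics versus brute force is about.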

By the early 2000s, Simon's 1967 prediction of fully human-equivalent AI capability by 1986 looked particularly quaint: AI had still not produced computers, for example, with any significant ability to converse in natural language. Computers still could not do most of the things that human beings can do. AI's most impressive demonstrations so far have been in chess-playing and autonomous vehicle navigation.

However, the importance of Simon's contributions to AI and numerous other fields is indisputable. His insistence on mathematical rigor and empirical verification, combined with his passion for considering problem-solving and decision-making in all its forms, has helped shape and reshape several fields of thought.

See Also Computer Science: Artificial Intelligence; Computer Science: The Computer.

bibliography

Books

Boden, Margaret. Computer Models of Mind: Computational Approaches in Theoretical Psychology. New York: Cambridge University Press, 1988.

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.

Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.

Gardner, Howard. The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books, 1985.

McCorduck, Pamela. Machines Who Think. Natick, MA: A.K. Peters, Ltd., 2004.

Simon, Herbert. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. 3rd ed. New York: Macmillan, 1976.

Simon, Herbert. Models of My Life. New York: Basic Books, 1991.

Simon, Herbert. The Sciences of the Artificial. 1st paperback ed. Cambridge, MA: MIT Press, 1970.

Simon, Herbert. The Shape of Automation for Men and Management. New York: Harper & Row, 1965.

Periodicals

Bendor, Jonathan. “Herbert A. Simon: Political Scientist.” Annual Review of Political Science 6 (2003): 433–471.

Feigenbaum, Edward A. “Herbert A. Simon, 1916–2001.” Science 291 (2001): 2107.

Simon, Herbert. “The Information-Processing Theory of Mind.” American Psychologist 50 (1995): 507–508.

Simon, Herbert. “Rational Choice and the Structure of the Environment.” Psychological Review 63 (1956): 129–138.

Simon, Herbert. “Theories of Decision Making in Economics.” American Economic Review 49 (1954): 223–283.

Web Sites

Simon, Herbert. “Autobiography.” 1978. http://www.nobelprize.org/nobel_prizes/economics/laureates/1978 (accessed January 5, 2008).

Larry Gilman
