TVERSKY, AMOS

(b. Haifa, Israel, 16 March 1937; d. Stanford, California, 2 June 1996), judgment and decision making, mathematical psychology, cognitive science.

Tversky made profound and influential contributions to the study of human judgment and decision making and to the foundations of measurement. He published more articles in Psychological Review, the premier journal for theoretical psychology, than anyone else in the journal’s more than century-long history. In collaboration with his friend and colleague Daniel Kahneman, he laid the foundations of new fields of research into heuristics and biases, behavioral decision theory, and judgment under uncertainty. Kahneman was awarded a Nobel Prize in Economics for this work in 2002 (jointly with the economist Vernon Smith), and he began his prize lecture by pointing out that the work for which he had earned the prize had been done in collaboration with Tversky, who had died several years earlier and was therefore ineligible to share it. The work of Tversky and Kahneman inspired subsequent generations of researchers in judgment and decision making and contributed significantly to the emergence of behavioral economics as a new field of research.

Early Life and Education. Tversky was born in Haifa, Israel, to parents who had emigrated from Poland via Russia to Israel, and he died of metastatic melanoma at age fifty-nine at his home in Stanford, California. His father, Yosef, originally trained in medicine, worked as a veterinarian, and his mother, Genia, was a social worker who also served in the Israeli parliament, the Knesset, from its establishment until her death in 1964. Tversky served in the Israeli Defense Forces (IDF), rising to the rank of captain in a paratrooper unit, and he saw active service in three wars. He was wounded in 1956, not in combat but during a military exercise in front of the IDF general staff. As platoon commander, Tversky sent one of his soldiers to place an explosive charge under a barbed wire fence in order to blast a hole in it. The soldier placed the charge, lit the fuse, and then lost his nerve, freezing to the spot. Tversky leapt from behind the rock where he was sheltering and managed to pull the panic-stricken soldier away from the charge just before it exploded, being wounded himself in the process. For this act of bravery, Tversky was awarded Israel’s highest military decoration.

Tversky graduated from the Hebrew University of Jerusalem with a bachelor of arts degree in 1961, majoring in philosophy and psychology, and he received a PhD from the University of Michigan in 1965. While studying for his doctorate, he met and married Barbara Gans, a fellow graduate student who later became a professor of cognitive psychology at Stanford University and with whom he had three children, Oren, Tal, and Dona. He taught at the Hebrew University of Jerusalem from 1966 to 1978 and then at Stanford University, where he was the first Davis-Brack Professor of Behavioral Sciences and a principal investigator at the Stanford Center on Conflict and Negotiation. He spent leave periods at Harvard University, the Center for Advanced Study in the Behavioral Sciences at Stanford, and the Oregon Research Institute. Tversky made frequent trips to Israel throughout his years in the United States, and from 1992 he was senior visiting professor of economics and psychology and a permanent fellow of the Sackler Institute of Advanced Studies at Tel Aviv University. Although he tended to shun administrative work, he served in Stanford University’s Faculty Senate from 1990 until his death and sat on the Academic Council’s advisory board to the president and provost.

Early Work. Tversky’s early research was devoted to the study of individual choice behavior and the foundations of psychological measurement, preoccupations that were combined in his doctoral dissertation. His dissertation comprised a mathematical analysis of the necessary and sufficient conditions for the satisfaction of certain requirements of psychological measurement and an experimental test of expected utility theory. Expected utility theory is the theoretical foundation of orthodox decision theory, according to which decision makers choose options that maximize expected utility, a measure of the subjective desirability of outcomes or events. Tversky’s doctoral dissertation, supervised by Clyde Coombs, earned him the Marquis Award from the University of Michigan.

Conjoint Measurement Theory. While working on his dissertation, Tversky met David H. Krantz, who joined the University of Michigan’s faculty in 1964; this led to a fruitful collaboration on aspects of mathematical psychology, including conjoint measurement theory and multidimensional scaling. Conjoint measurement theory, introduced in 1964 by R. Duncan Luce and John W. Tukey, provides a mathematical method of constructing measurement scales for objects with multiple attributes—for example, houses that vary according to price, location, and number of rooms, or job applicants whose application forms reveal different strengths and weaknesses—in such a way that attributes are traded off against one another and the resultant value of each object is a suitable function of the scale values of its component attributes. In 1965 Luce invited Tversky and Krantz to join him and Patrick Suppes in coauthoring Foundations of Measurement, which grew into a major three-volume work, completed in 1990. Among other things, this project presents a fully developed analysis of conjoint measurement that, because it provides a much-needed method of measuring and interpreting responses to multiattribute alternatives, has had an impact on commercial applications in market research and other areas.

Transitivity. Expected utility theory and other normative theories of choice require preferences to satisfy basic axioms, one of the most fundamental and compelling of which is transitivity. A decision maker who prefers a to b and b to c cannot, according to normative accounts, also prefer c to a—for example, a person who prefers tea to coffee and coffee to cocoa cannot prefer cocoa to tea without violating the axiom. Intransitive preferences are therefore considered a hallmark of irrational choice. In “Intransitivity of Preferences” (1969), one of the most frequently cited of Tversky’s early publications, he reported an experimental procedure that reliably induces people to violate the transitivity axiom. According to Tversky’s interpretation of this phenomenon, people, when making decisions, often use approximation methods that work quite well but sometimes generate predictable errors. Such errors, in turn, can help us to understand how the decisions were made.

Theory of Similarity. The theory of similarity known as the contrast model was presented by Tversky in “Features of Similarity” (1977) and elaborated by Tversky and Itamar Gati in “Studies of Similarity” (1978). The theory provided an explanation for a number of judgmental anomalies that had been observed by other researchers. In particular, it explained a remarkable asymmetry that had been noted in similarity judgments, wherein a may be judged as more similar to b than b is to a. For example, it seems more natural to claim that a son resembles his father than that a father resembles his son, and Tel Aviv is generally considered more similar to New York City than New York City is to Tel Aviv. These observations are inconsistent with the usual representation of similarity in terms of proximity of points in coordinate Euclidean space.

Tversky pointed out that our representations of stimuli are rich and complex, including attributes associated with appearance, function, relation to other objects, and further attributes inferred from general world knowledge. Objects or stimuli, he proposed, are represented by collections of features, with some features necessarily attended to more than others, depending on the nature of the task. In his theory, each object a is denoted by a set A of features, and the similarity of a to b, denoted by s(a, b), is a weighted linear function of three arguments: the features that a and b have in common, denoted by A ∩ B; the distinctive features of a, denoted by A − B; and the distinctive features of b, denoted by B − A. Similarity increases with the measures of the common features and decreases with the measures of the distinctive features. The theory also includes a scale f, reflecting the salience or prominence of the various features, and parameters allowing common or distinctive features to be weighted more or less heavily. Asymmetry of similarity can arise from one of the objects or stimuli (e.g., New York City) having more distinctive features than the other (Tel Aviv) or from a shift of attention, causing the person to focus on one of the stimuli as compared to the other, thus weighting its distinctive features more heavily than those of the referent. Hence, a toy train is quite similar to a real train, because most features of the toy train are included in the real train, but a real train is not as similar to a toy train, because many of its features are absent from the toy. Tversky proved that, if the distinctive features of the subject are weighted more heavily than those of the referent, then s(a, b) > s(b, a) if and only if f(B − A) > f(A − B), which means that a appears more similar to b than b does to a if and only if the distinctive features of b are more salient than those of a, and this is generally the case when b is more prominent than a.
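
The asymmetry prediction can be sketched in a few lines of Python; the feature sets, the weights, and the use of a simple feature count for the scale f are illustrative assumptions, not values from Tversky’s paper.

```python
# Contrast model: s(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A),
# with f taken here simply as the number of features (an assumption).
def similarity(A, B, theta=1.0, alpha=0.7, beta=0.3):
    """Similarity of subject a (feature set A) to referent b (feature set B)."""
    A, B = set(A), set(B)
    return (theta * len(A & B)    # common features increase similarity
            - alpha * len(A - B)  # distinctive features of the subject
            - beta * len(B - A))  # distinctive features of the referent

# Invented feature sets: the real train includes every feature of the toy.
toy_train = {"wheels", "engine_shape", "carriages"}
real_train = {"wheels", "engine_shape", "carriages", "size", "motion", "noise"}

# With alpha > beta (the subject's distinctive features weighted more
# heavily), the toy seems more similar to the real train than vice versa.
print(similarity(toy_train, real_train))  # 2.1: s(toy, real)
print(similarity(real_train, toy_train))  # 0.9: s(real, toy)
```

Because every feature of the toy train belongs to the real train, f(A − B) = 0 when the toy is the subject, so the similarity judgment is penalized only lightly for the real train’s extra features.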

Elimination by Aspects. Another of Tversky’s highly influential contributions was his article “Elimination by Aspects” (1972), which describes a new theory of multiattribute decision making. According to the standard value-maximization model of multiattribute choice, a decision maker choosing between alternatives with multiple attributes forms a weighted average of each alternative’s attribute values, the weights corresponding to perceived attribute importance, and then chooses the alternative with the highest weighted average value. But there are reasons to doubt that this model accurately reflects the behavior of decision makers with bounded rationality facing complex choices. Suppose a decision maker has to choose between two travel agencies a1 and a2 offering tours to two different destinations d1 (Greece) and d2 (South Africa). Agency a1 offers only d1, whereas agency a2 offers both d1 and d2. There are three feasible combinations of travel agency and destination, a1d1, a2d1, and a2d2, and if the decision maker is equally attracted by Greece and South Africa and is also indifferent between the travel agencies, then each combination should have the same probability of being chosen, according to the standard value-maximization model. But it seems intuitively obvious that most people would choose first between the destinations and only then between the travel agencies, because destination is a more important attribute of a trip than travel agency. With this stipulation, although the decision maker is equally attracted to Greece and South Africa and is therefore equally likely to choose d1 or d2, the probabilities of choosing each of the three feasible combinations are p(a1d1) = p(d1)p(a1) = 1/2 × 1/2 = 1/4, p(a2d1) = p(d1)p(a2) = 1/4, and p(a2d2) = p(d2) = 1/2. Thus, one of the agency-destination combinations is twice as likely to be chosen as either of the others.

This intuition is captured in Tversky’s elimination by aspects (EBA) theory, according to which choice is reached through an iterated series of eliminations. At each iteration the decision maker selects an aspect (or attribute) with probability proportional to its perceived importance and eliminates all alternatives that lack that aspect. The decision maker then selects the next aspect and proceeds in this way until all but one of the alternatives have been eliminated. This is a stochastic version of lexicographic choice; that is, it introduces a probabilistic element into a procedure in which the decision maker first compares the alternatives on the most important attribute, then on the next most important attribute, and so on until one alternative emerges as best (or else the set of attributes is exhausted without a definite preference emerging). In “Preference Trees” (1979), Tversky and Shmuel Sattath presented a revised version of EBA. Pretree, as the theory is also called, is more parsimonious than the general EBA model because it has fewer parameters. In essence, it is a restricted version of EBA that arranges subsets of similar alternatives in a hierarchical structure. Each alternative is represented as a collection of aspects, and the entire ensemble of aspects is assumed to have a tree structure. At each stage, the decision maker selects an aspect (corresponding to a branch of the tree) and eliminates all the alternatives that do not belong to the selected branch; the process continues until a single alternative remains. Although lexicographic choice is not guaranteed to produce an optimal decision, experimental tests have tended to confirm the hypothesis that a process resembling EBA is characteristic of human multiattribute decision making.
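
A minimal stochastic simulation conveys the flavor of EBA; the aspect weights and the random tie-break used when no aspect discriminates are illustrative assumptions. With destination aspects weighted far more heavily than agency aspects, the a2d2 combination from the travel example is chosen roughly half the time, as the intuition suggests.

```python
import random

def eba(alternatives, weights, rng):
    """Elimination by aspects: repeatedly pick a discriminating aspect with
    probability proportional to its weight; drop alternatives lacking it."""
    remaining = dict(alternatives)  # name -> set of aspects
    while len(remaining) > 1:
        # aspects possessed by some, but not all, remaining alternatives
        live = {a: w for a, w in weights.items()
                if 0 < sum(a in s for s in remaining.values()) < len(remaining)}
        if not live:  # nothing discriminates: fall back on a random choice
            return rng.choice(sorted(remaining))
        aspects, ws = zip(*live.items())
        picked = rng.choices(aspects, weights=ws)[0]
        remaining = {n: s for n, s in remaining.items() if picked in s}
    return next(iter(remaining))

# Travel example: destination weighted much more heavily than agency.
alts = {"a1d1": {"greece", "agency1"},
        "a2d1": {"greece", "agency2"},
        "a2d2": {"s_africa", "agency2"}}
w = {"greece": 50, "s_africa": 50, "agency1": 1, "agency2": 1}

rng = random.Random(0)
counts = {name: 0 for name in alts}
for _ in range(10_000):
    counts[eba(alts, w, rng)] += 1
print(counts)  # a2d2 is chosen roughly twice as often as each rival
```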

Heuristics and Biases. From 1969 until the early 1980s, Tversky enjoyed an exceptionally fruitful collaboration with Daniel Kahneman, during which they initiated their influential research into heuristics and biases, prospect theory, and framing effects. Tversky was an owl, often working through the night, and Kahneman was a lark (a morning person), so their normal pattern of collaboration involved having lunch and working together through the afternoon. Heuristics, which Tversky and Kahneman illustrated through many clever vignettes, are rough-and-ready judgmental procedures or rules of thumb that are quick and useful but sometimes lead to systematic biases and errors. In 1971 Tversky and Kahneman published their first joint article, “Belief in the Law of Small Numbers.” They showed that people tend to believe that the law of large numbers applies to small numbers as well or, in other words, that people expect even small samples to be representative of the populations from which they are drawn. Even researchers trained in statistics were shown to grossly overestimate the representativeness of small samples and to make other unwarranted inferences from data. This undermined the view widely accepted at the time that people are rather good intuitive statisticians.

A classic article, “Subjective Probability” (1972), followed. In it, Kahneman and Tversky provided a theoretical interpretation of what looked like belief in the law of small numbers and laid the foundations of the heuristics and biases research program. For example, participants were asked to estimate probabilities, such as the probability that the average height of a group of people is over six feet. Participants produced almost identical estimates for group sizes of ten, one hundred, and one thousand, whereas in reality the probability that a sample average will be substantially higher than the population average is much greater for a small sample than for a large one. This “sample size fallacy” is explained by the representativeness heuristic, according to which people estimate the probability that something belongs to, or originates from, a particular class by the extent to which it is representative or typical of the class. This can lead to judgmental errors, because perceived representativeness is insensitive to base rates and sample sizes.
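
The sample-size point is easy to verify by simulation; the population parameters below (mean height 70 inches, standard deviation 3) are assumptions chosen purely for illustration.

```python
import random
import statistics

rng = random.Random(0)

def p_mean_over(threshold, n, trials=3000):
    """Estimate the probability that the average height of a random
    sample of n people exceeds the threshold (in inches)."""
    hits = sum(
        statistics.fmean(rng.gauss(70, 3) for _ in range(n)) > threshold
        for _ in range(trials)
    )
    return hits / trials

# The chance that a sample averages over six feet (72 in) shrinks sharply
# with sample size, contrary to the near-identical estimates people give.
for n in (10, 100, 1000):
    print(n, p_mean_over(72, n))
```

The sampling distribution of the mean narrows as n grows, so an average two inches above the population mean is plausible for ten people but vanishingly unlikely for a thousand.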

Next came the availability heuristic, according to which people judge the frequency or probability of an event by the ease with which instances of it come to mind. For example, one may assess the risk of heart attack among middle-aged people by recalling specific instances among one’s acquaintances or through media exposure. Like other heuristics, availability provides a useful clue for assessing frequency or probability, because it is usually easier to recall instances of classes that are large or frequent than instances of classes that are small or rare. Because availability is affected by factors other than frequency and probability, however, this heuristic, like the others, can generate biased or incorrect judgments. For example, Tversky and Kahneman showed that when people are asked whether the English language contains more words beginning with the letter k or more words having the letter k in the third position, most find it easier to think of instances of the former than of the latter, whereas in fact a typical long text contains twice as many words with k in the third position. In 1974 Tversky and Kahneman published a classic article, “Judgment under Uncertainty,” that reviewed the basic findings and brought heuristics and biases to the attention of a wide readership outside psychology, within which the ideas were already well known. In 1982 a book edited by Kahneman, Paul Slovic, and Tversky, Judgment under Uncertainty: Heuristics and Biases, collected a large number of relevant papers and brought the heuristics and biases research program to an even wider readership.

Prospect Theory. Kahneman and Tversky’s “Prospect Theory,” arguably their most important and influential joint work, appeared in 1979. It is an explicitly descriptive theory of risky choice, built on a number of fundamental principles, and it has become the prominent alternative to expected utility theory in accounting for decision making in the real world. Prospect theory highlights several ways in which preferences tend to violate expected utility theory. The weighting of probabilities, according to prospect theory, is nonlinear: very small probabilities are often overweighted, and moderate to high probabilities underweighted. Furthermore, people tend to evaluate outcomes as gains or losses relative to a current reference point or the status quo rather than in terms of final wealth, in contrast to expected utility theory, in which outcomes are final states, irrespective of whether they were reached by gaining or losing. The theory is typically depicted by an S-shaped value function that assigns a subjective value to amounts gained or lost. The upper part of the function, representing gains, is concave, whereas the lower part, representing losses, is convex. This yields conflicting risk attitudes: risk aversion in the context of choices involving gains but risk seeking for losses.

Furthermore, the slope of the value function is steeper for losses than for corresponding gains, capturing the observation that a loss has greater subjective impact than an equivalent gain, a phenomenon known as “loss aversion.” For example, most people will reject a fifty-fifty gamble in which they might lose twenty dollars unless they stand to win more than forty dollars. This discovery has profound implications not only for choice but also for negotiation and the power of the status quo. One remarkable implication is the endowment effect, a phenomenon discovered by the economist Richard Thaler. The effect is illustrated by the owner of a bottle of vintage wine who refuses to sell it for two hundred dollars but would not pay more than one hundred dollars to replace it. The effect is explained by two principles of prospect theory: first, that the carriers of utility are not states (owning or not owning the wine) but gains and losses; and second, loss aversion, the fact that losing an item in one’s possession hurts more than obtaining that item is deemed worthwhile. In “Experiences of Collaborative Research” (2003), Kahneman expressed the view that the concept of loss aversion was the most useful contribution to the study of decision making made by his work with Tversky.
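
These regularities can be illustrated with a value function of the form v(x) = x^0.88 for gains and v(x) = −2.25(−x)^0.88 for losses, using the parameter estimates Tversky and Kahneman reported in the 1992 cumulative version of the theory; probability weighting is ignored here for simplicity.

```python
# Prospect-theory value function with the parameter estimates reported
# by Tversky and Kahneman (1992): alpha = beta = 0.88, lambda = 2.25.
def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex for losses, and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Loss aversion: a fifty-fifty gamble to win $40 or lose $20 is still
# unattractive, because the loss looms larger than the gain.
gamble_value = 0.5 * v(40) + 0.5 * v(-20)
print(gamble_value < 0)  # True, despite a positive expected dollar value

# Mirror-image risk attitudes: a sure $500 beats a coin flip for $1,000
# (risk aversion for gains), while a sure loss of $500 is worse than a
# coin flip risking $1,000 (risk seeking for losses).
print(v(500) > 0.5 * v(1000))    # True
print(v(-500) < 0.5 * v(-1000))  # True
```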

In “The Framing of Decisions and the Psychology of Choice” (1981), Tversky and Kahneman presented experimental data showing that 84 percent of a group of undergraduate students preferred a sure gain of $240 to a gamble involving a 25 percent chance of gaining $1,000 and a 75 percent chance of gaining nothing (risk aversion for gains), but 87 percent preferred a gamble involving a 75 percent chance of losing $1,000 and a 25 percent chance of losing nothing to a sure loss of $750 (risk seeking for losses). When these prospects are combined, the majority choices amount to a preference for a portfolio containing a 25 percent chance of winning $240 and a 75 percent chance of losing $760 over a portfolio containing a 25 percent chance of winning $250 and a 75 percent chance of losing $750, contrary to the requirement of dominance. In “Advances in Prospect Theory” (1992), Tversky and Kahneman published an updated version of prospect theory called cumulative prospect theory, which was predicated on similar psychological principles but could be applied to more varied sets of alternatives.
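
The dominance violation is a matter of simple arithmetic, which the following check makes explicit:

```python
# Combining the majority choices: the sure +$240 added to the loss gamble
# (75% lose $1,000), versus the gain gamble (25% win $1,000) added to the
# sure -$750.  Each outcome is a (probability, dollar amount) pair.
chosen   = [(0.25, 240 + 0),    (0.75, 240 - 1000)]   # +240 or -760
rejected = [(0.25, 1000 - 750), (0.75, 0 - 750)]      # +250 or -750

# At every probability level the chosen portfolio pays strictly less,
# so it is dominated by the rejected one.
for (p, x_chosen), (q, x_rejected) in zip(chosen, rejected):
    assert p == q and x_chosen < x_rejected

def ev(portfolio):
    """Expected dollar value of a list of (probability, amount) pairs."""
    return sum(p * x for p, x in portfolio)

print(ev(chosen), ev(rejected))  # -510.0 -500.0
```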

Framing Effects. A framing effect occurs when people make different choices as a result of changes in the description, labeling, or presentation of options that do not logically alter the information available. In 1981 Tversky and Kahneman provided a classic example in “The Framing of Decisions and the Psychology of Choice.” They invited participants to choose between two programs for combating a disease that was expected to kill 600 people. Participants in one group were told that program A would save 200 lives, whereas program B had a one-third probability of saving 600 lives and a two-thirds probability of saving no one; in this “gain” frame, 72 percent preferred A to B. Participants in a second group were told that under program C, 400 people would die, whereas under program D there was a one-third probability that no one would die and a two-thirds probability that all 600 would die; in this “loss” frame, 78 percent preferred D to C. As predicted, the majority of participants exhibited a risk-averse preference for A over B in the “gain” frame, but a risk-seeking preference for D over C in the “loss” frame. This is a framing effect, because the two frames presented different but logically equivalent descriptions of the same choice problem but elicited highly discrepant preferences. As suggested earlier, this effect is well explained by prospect theory, as are framing effects in the context of riskless choice.

Later Work. In the early 1990s, Tversky collaborated with Eldar Shafir and others in investigations of reason-based choice. People, they proposed, tend to look for compelling reasons when there are no obvious rules or evaluations to guide their decisions. This may be the standard method of decision making in legal contexts, for example. In “The Disjunction Effect in Choice under Uncertainty” (1992), Tversky and Shafir described an experiment in which they asked participants to assume they had just taken a tough qualifying examination. Members of one group were told they had passed the exam, others were told that they had failed, and those in a third group were told that they would learn the results the following day. Members of each group were invited to choose between buying a vacation in Hawaii immediately, not buying the vacation, or paying five dollars to retain the right to buy the vacation the following day. The majority of those who had purportedly passed or failed the examination chose to buy the vacation, but the majority of those who did not know the examination’s outcome chose to retain the right to buy the vacation the following day, presumably because they did not have a compelling reason to buy it while the exam’s outcome was unknown.

Tversky’s last major contribution was support theory, developed in collaboration with Derek Koehler in an article titled “Support Theory” (1994). “Unpacking, Repacking, and Anchoring” (1997), an article by Yuval Rottenstreich and Tversky that elaborated support theory, was in press when Tversky died. Support theory was inspired by an observation reported in the literature that the independently judged probabilities of an event and its complement generally sum to approximately one, but the judged probabilities of separate constituents of an inclusive event usually sum to more than the judged probability of that inclusive event. Tversky and Koehler showed that many descriptions of events are implicit disjunctions whose judged probability is less than the sum of the judged probabilities of their components once the disjunction is unpacked. For example, when people are asked to judge the probability that a random person will die from an accident, their judgments tend to be less than the sum of judgments of the separate possible causes—road traffic accidents, plane crashes, fire, drowning, and so on—that form part of the notion of an accident.

The basic elements in support theory are descriptions of events, called hypotheses. Descriptions are presented to participants for probabilistic judgment, and it is assumed that they are evaluated in terms of a mathematically defined “support function,” s, which yields the judged support value s(A) for a description A. The theory assumes that different descriptions of the same event often produce different subjective probability estimates, and it explains this phenomenon in terms of subjective judgments of supporting evidence, which are eventually combined to yield judged support according to an equation specified in the theory. The process of evaluation is assumed to incorporate standard heuristics and therefore to be subject to the familiar biases. The theory explains the conjunction fallacy, a judgmental error identified and named by Tversky and Kahneman in “Extensional versus Intuitive Reasoning” (1983), according to which a conjunction of two or more attributes is judged to be more probable or likely than either attribute on its own. For example, Tversky and Kahneman presented undergraduate students with personality sketches of a hypothetical person called Linda (young, single, deeply concerned about social issues, and involved in antinuclear activity) and asked whether it was more probable that (a) Linda is a bank teller or that (b) Linda is a bank teller who is active in the feminist movement. Eighty-six percent of the students judged (b) to be more probable than (a). This is a fallacy, because the probability of a conjunction can never exceed the probability of either of its conjuncts. It is consistent with a reliance on the representativeness heuristic, because Linda appears more typical of a feminist bank teller than of a bank teller. The example also captures the elegant combination of compelling psychological intuition and insightful normative critique that characterized so much of Tversky’s work.
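
The theory’s core equation can be sketched as follows; the support values are invented purely for illustration, and summing the supports of unpacked components is a deliberate simplification of the theory’s full treatment of separate judgments.

```python
# Support theory: the judged probability of hypothesis A rather than its
# alternative B is P(A, B) = s(A) / (s(A) + s(B)), where s assigns each
# description a nonnegative support value.  All numbers here are invented.
def judged_probability(s_focal, s_alternative):
    return s_focal / (s_focal + s_alternative)

s_disease = 50                   # support for "death from disease"
s_accident_packed = 10           # support for the packed description "accident"
s_accident_unpacked = 6 + 4 + 3  # traffic, drowning, other causes, judged
                                 # separately and summed (a simplification)

# Unpacking an implicit disjunction into components raises total support,
# so the unpacked judgment exceeds the packed one.
print(judged_probability(s_accident_packed, s_disease))
print(judged_probability(s_accident_unpacked, s_disease))
```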

Early in 1996, Amos Tversky was told that he had only months to live. Kahneman and Tversky began editing a book on decision making that would draw together the progress that had been made since they began working together on the topic more than twenty years earlier. That book, Choices, Values, and Frames, appeared in 2000, four years after Tversky’s death. Tversky kept working until a few weeks before he died, and when he died he had twelve articles in press. An astonishing number of his papers continue to be considered both seminal and definitive.

Honors and Awards. Tversky’s academic accomplishments were recognized with many honors and awards. Tversky was elected to the American Academy of Arts and Sciences in 1980, to the National Academy of Sciences in 1985, and to the Econometric Society in 1993. He shared with Daniel Kahneman the American Psychological Association’s award for distinguished scientific contribution in 1982 and posthumously, in 2003, the University of Louisville Grawemeyer Award for Psychology. In 1984 he was awarded both a MacArthur Foundation Fellowship, given to “talented individuals who have shown extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction,” and a Guggenheim Fellowship, awarded to “men and women who have already demonstrated exceptional capacity for productive scholarship or exceptional creative ability in the arts.” He received the Warren Medal of the Society of Experimental Psychologists in 1995 and was awarded honorary doctorates by the University of Chicago, Yale University, the State University of New York at Buffalo, and the University of Göteborg in Sweden. The University of Chicago honorary degree citation in 1988 stated: “Through your extraordinary blend of inventive theory and creative experimentation, you have illuminated complex behavioral phenomena and influenced work by many other social scientists, who have been inspired by the combination of your substantive insights, rigorous modeling, and exacting use of experimental methods.”

BIBLIOGRAPHY

WORKS BY TVERSKY

“Intransitivity of Preferences.” Psychological Review 76 (1969): 31–48.

With Daniel Kahneman. “Belief in the Law of Small Numbers.” Psychological Bulletin 76 (1971): 105–110.

With David H. Krantz, R. Duncan Luce, and Patrick Suppes. Foundations of Measurement. 3 vols. New York: Academic Press, 1971–1990.

“Elimination by Aspects: A Theory of Choice.” Psychological Review 79 (1972): 281–299.

With Daniel Kahneman. “Subjective Probability: A Judgment of Representativeness.” Cognitive Psychology 3 (1972): 430–454.

With Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (1974): 1124–1131.

“Features of Similarity.” Psychological Review 84 (1977): 327–352.

With Itamar Gati. “Studies of Similarity.” In Cognition and Categorization, edited by Eleanor Rosch and Barbara B. Lloyd. Hillsdale, NJ: Erlbaum, 1978.

With Shmuel Sattath. “Preference Trees.” Psychological Review 86 (1979): 542–573. An updated version of the theory of elimination by aspects.

With Daniel Kahneman. “Prospect Theory: An Analysis of Decision under Risk.” Econometrica 47 (1979): 263–291.

With Daniel Kahneman. “The Framing of Decisions and the Psychology of Choice.” Science 211 (1981): 453–458.

With Daniel Kahneman and Paul Slovic, eds. Judgment under Uncertainty: Heuristics and Biases. Cambridge, U.K., and New York: Cambridge University Press, 1982.

With Daniel Kahneman. “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological Review 90 (1983): 293–315.

With Daniel Kahneman. “Advances in Prospect Theory: Cumulative Representation of Uncertainty.” Journal of Risk and Uncertainty 5 (1992): 297–323.

With Eldar Shafir. “The Disjunction Effect in Choice under Uncertainty.” Psychological Science 3 (1992): 305–309.

With Derek J. Koehler. “Support Theory: A Nonextensional Representation of Subjective Probability.” Psychological Review 101 (1994): 547–567.

With Yuval S. Rottenstreich. “Unpacking, Repacking, and Anchoring: Advances in Support Theory.” Psychological Review 104 (1997): 406–415.

With Daniel Kahneman, eds. Choices, Values, and Frames. New York: Russell Sage Foundation, 2000; Cambridge, U.K.: Cambridge University Press, 2000.

Preference, Belief, and Similarity: Selected Writings. Edited by Eldar Shafir. Cambridge, MA: MIT Press, 2003.

OTHER SOURCES

Evans, Jonathan St. B. T., and David E. Over. “The Contribution of Amos Tversky.” Thinking and Reasoning 3 (1997): 1–8.

Gilovich, Thomas, Dale Griffin, and Daniel Kahneman, eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, U.K., and New York: Cambridge University Press, 2002.

Kahneman, Daniel. “Experiences of Collaborative Research.” American Psychologist 58 (2003): 723–730.

———. “Maps of Bounded Rationality: A Perspective on Intuitive Judgment and Choice.” In Les Prix Nobel 2002: The Nobel Prizes 2002, edited by Tore Frängsmyr. Stockholm: Nobel Foundation, 2003.

———, and Eldar Shafir. “Amos Tversky (1937–1996).” American Psychologist 53 (1998): 793–794.

Laibson, David, and Richard Zeckhauser. “Amos Tversky and the Ascent of Behavioral Economics.” Journal of Risk and Uncertainty 16 (1998): 7–47.

McDermott, Rose. “The Psychological Ideas of Amos Tversky and Their Relevance for Political Science.” Journal of Theoretical Politics 13 (2001): 5–33.

Shafir, Eldar, ed. “Belief and Decision: The Continuing Legacy of Amos Tversky.” Special issue, Cognitive Psychology 38, no. 1 (1999).


Andrew M. Colman

Eldar Shafir