Decision-Making Theory and Research


Decision making must be considered in any explanation of individual behavior, because behaviors are based on decisions or judgments people have made. Thus, decision-making theory and research is of interest in many fields that examine behavior, including cognitive psychology (e.g., Busemeyer, Medin, and Hastie 1995), social psychology (e.g., Ajzen 1996), industrial and organizational psychology (e.g., Stevenson, Busemeyer, and Naylor 1990), economics (e.g., Lopes 1994), management (e.g., Shapira 1995), and philosophy (e.g., Manktelow and Over 1993), as well as sociology. This article provides an overview of decision-making theory and research. Several excellent sources of further information include Baron (1994), Dawes (1998), Gilovich (1993), and Hammond (1998).


DECISION-MAKING THEORIES

Most decision-making theory has been developed in the twentieth century. The recency of this development is surprising considering that gambling has existed for millennia, so humans have a long history of making judgments about probabilistic events. Indeed, insurance, which is in effect a form of gambling (it involves betting on the likelihood of an event happening or, more often, not happening), was sold as early as the fifteenth and sixteenth centuries. Selling insurance prior to the development of probability theory, and in many early cases without any statistics on, or even frequencies of, the events being insured, led to bankruptcy for many of the first insurance sellers (for more information about the history of probability and decision making see Hacking 1975, 1990; Gigerenzer et al. 1989).

Bayes's Theorem. One of the earliest theories about probability was Bayes's Theorem (1764/1958). The theorem relates the probability of one event to another; specifically, it gives the probability of one event occurring given that another event has occurred. These events are sometimes called the cause and the effect, or the hypothesis and the data. Using H and D (hypothesis and data) as the two events, Bayes's Theorem is:

P(H|D) = P(H)P(D|H)/P(D) [1]

or

P(H|D) = P(D|H)P(H)/[P(D|H)P(H) + P(D|−H)P(−H)] [2]

That is, the probability of H given that D has occurred equals [1] the probability of H multiplied by the probability of D given H, divided by the probability of D; or, equivalently, [2] the probability of D given H multiplied by the probability of H, divided by the sum of that same product and the probability of D given not-H multiplied by the probability of not-H.

The cab problem (introduced in Kahneman and Tversky 1972) has been used in several studies as a measure of whether people's judgments are consistent with Bayes's Theorem. The problem is as follows: A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data: (a) 85 percent of the cabs in the city are Green and 15 percent are Blue; (b) a witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80 percent of the time and failed 20 percent of the time. What is the probability that the cab involved in the accident was Blue rather than Green? Using the provided information and formula [2] above:

P(Blue Cab|Witness says "Blue") = P(Witness says "Blue"|Blue Cab)P(Blue Cab)/[P(Witness says "Blue"|Blue Cab)P(Blue Cab) + P(Witness says "Blue"|Green Cab)P(Green Cab)]

= (.80)(.15)/[(.80)(.15) + (.20)(.85)] = (.12)/[(.12) + (.17)] ≈ .41

Thus, according to Bayes's Theorem, the probability that the cab involved in the accident was Blue, given the witness testifying it was Blue, is 0.41. So, despite the witness's testimony that the cab was Blue, it is more likely that the cab was Green (0.59 probability), because the probabilities for the base rates (85 percent of cabs are Green and 15 percent Blue) are more extreme than those for the witness's accuracy (80 percent accuracy). Generally, people will rate the likelihood that the cab was Blue to be much higher than .41, and often the response will be .80—the witness's accuracy rate (Tversky and Kahneman 1982).
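
The arithmetic above can be checked directly in code. The following Python sketch is purely illustrative (the function name and structure are not from the cited sources); it simply applies formula [2] to the cab problem.

```python
def bayes_posterior(p_h, p_d_given_h, p_d_given_not_h):
    """Return P(H|D) via Bayes's Theorem (formula [2] above)."""
    p_not_h = 1.0 - p_h
    numerator = p_d_given_h * p_h
    denominator = numerator + p_d_given_not_h * p_not_h
    return numerator / denominator

# Cab problem: H = "the cab was Blue", D = "the witness says Blue"
p_blue = 0.15                  # base rate of Blue cabs
p_say_blue_given_blue = 0.80   # witness accuracy when the cab is Blue
p_say_blue_given_green = 0.20  # witness error rate when the cab is Green

posterior = bayes_posterior(p_blue, p_say_blue_given_blue, p_say_blue_given_green)
print(round(posterior, 2))  # 0.41
```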

That finding has been used to argue that people often ignore base rate information (the proportions of each type of cab, in this case; Tversky and Kahneman 1982), which is irrational. However, other analyses of this situation are possible (cf. Birnbaum 1983; Gigerenzer and Hoffrage 1995), which suggest that people are not irrationally ignoring base rate information. The issue of rationality will be discussed further below.

Expected Utility (EU) Theory. Bayes's Theorem is useful, but often we are faced with decisions to choose one of several alternatives that have uncertain outcomes. The best choice would be the one that maximizes the outcome, as measured by utility (i.e., how useful something is to a person). Utility is not equal to money (high-priced goods may be less useful than lower-priced goods), although money may be used as a substitute measure of utility. EU Theory (von Neumann and Morgenstern 1947) states that people should maximize their EU when choosing among a set of alternatives, as in: EU = Σ(Ui × Pi), where Ui is the utility of each possible outcome, i, of an alternative and Pi is the probability associated with that outcome.

The earlier version of this theory (Expected Value Theory, or EV) used money to measure the worth of the alternatives, whereas utility recognizes that people may use more than money to evaluate the alternatives. Regardless, in both EU and EV the probabilities are treated the same way, so people need only consider the total EU or EV of each alternative, not the individual probabilities involved in arriving at that total.

However, research suggests that people consider certain probabilities to be special, as their judgments involving these probabilities are often inconsistent with EU predictions. That is, people seem to treat events that have probabilities of 1.0 or 0.0 differently than events that are uncertain (probabilities other than 1.0 or 0.0). The special consideration given to certain probabilities is called the certainty effect (Kahneman and Tversky 1979). To illustrate this effect, which of these two options do you prefer?

A. Winning $50 with probability .5
B. Winning $30 with probability .7

Now which of these next two options do you prefer?

C. Winning $50 with probability .8
D. Winning $30 with probability 1.0

Perhaps you preferred A and D, as many people do. However, according to EU, those choices are inconsistent: EU ranks A above B and C above D (for A, EU = $25 = (.5 * $50); for B, EU = $21 = (.7 * $30); for C, EU = $40 = (.8 * $50); for D, EU = $30 = (1.0 * $30)). Note that C and D are simply A and B with each probability increased by .3, and EU prescribes selecting the option with the highest EU, so the preference should not reverse. The certainty effect may also be seen in the following pair of options.

E. Winning $1,000,000 with probability 1.0
F. Winning $2,000,000 with probability .5
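
These expected utilities are easy to verify in code. The short Python sketch below is an illustration only; it treats each option as a single outcome and, as an assumption of the example, uses the dollar amounts as stand-in utilities.

```python
def expected_utility(outcomes):
    """Expected utility of a gamble given (utility, probability) pairs."""
    return sum(utility * probability for utility, probability in outcomes)

options = {
    "A": [(50, 0.5)],        # win $50 with probability .5
    "B": [(30, 0.7)],        # win $30 with probability .7
    "C": [(50, 0.8)],        # win $50 with probability .8
    "D": [(30, 1.0)],        # win $30 for certain
    "E": [(1_000_000, 1.0)],
    "F": [(2_000_000, 0.5)],
}

for name, gamble in options.items():
    print(name, expected_utility(gamble))
# A=25, B=21, C=40, D=30: EU ranks A over B and C over D,
# so preferring A and D together is inconsistent with EU.
# E and F both equal 1,000,000, yet most people prefer E.
```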

According to EU, people should be indifferent between E and F, because they have the same EU ($1,000,000 * 1.0 = $2,000,000 * .5). However, people tend to prefer E to F. As the cliché goes, a bird in the hand is worth two in the bush. These results (choosing D and E) suggest that people are risk averse, because those are the certain options, and choosing them avoids risk or uncertainty. But risk aversion does not completely capture the issue. Consider this pair of options:

G. Losing $50 with probability .8
H. Losing $30 with probability 1.0

If people were risk averse, then most would choose H, which has no risk; $30 will be lost for sure. However, most people choose G, because they want to avoid a certain loss, even though it means risking a greater loss. In this case, people are risk seeking.

The tendency to treat certain probabilities differently from uncertain probabilities led to the development of decision-making theories that focused on explaining how people make choices, rather than how they should make choices.

Prospect Theory and Rank-Dependent Theories. Changing from EV to EU acknowledged that people do not simply assess the worth of alternatives on the basis of money. The certainty effect illustrates that people do not simply assess the likelihood of alternatives, so decision theories must take that into account. The first theory to do so was prospect theory (Kahneman and Tversky 1979).

Prospect theory proposes that people choose among prospects (alternatives) by assigning each prospect a subjective value and a decision weight (a value between 0.0 and 1.0), which may be functionally equal to monetary value and probability, respectively, but need not be actually equal to them. The prospect with the highest value as calculated by multiplying the subjective value and the decision weight is chosen. Prospect theory assumes that losses have greater weight than gains, which explains why people tend to be risk seeking for losses but not for gains. Also, prospect theory assumes that people make judgments from a subjective reference point rather than an objective position of gaining or losing.
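
To give a concrete sense of how subjective values and decision weights can depart from objective money and probability, the toy Python sketch below uses the power value function and inverse-S weighting function, with parameter estimates commonly reported in the cumulative prospect theory literature (Tversky and Kahneman 1992). The functional forms and numbers are illustrative assumptions, not part of the original 1979 formulation.

```python
def subjective_value(x, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** beta)

def decision_weight(p, gamma=0.61):
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(outcomes):
    """Sum of weighted subjective values for (amount, probability) pairs."""
    return sum(decision_weight(p) * subjective_value(x) for x, p in outcomes)

# Losses loom larger than equivalent gains:
print(prospect_value([(100, 1.0)]))   # about 57.5
print(prospect_value([(-100, 1.0)]))  # about -129.4
```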

Prospect theory is similar to EU in that the decision weight is independent of the context. However, recent decision theories suggest weights are created within the context of the available alternatives based on a ranking of the alternatives (see Birnbaum, Coffey, Mellers, and Weiss 1992; Luce and Fishburn 1991; Tversky and Kahneman 1992). The need for a rank-dependent mechanism within decision theories is generally accepted (Mellers, Schwartz, and Cooke 1998), but the specifics of the mechanism are still debated (see Birnbaum and McIntosh 1996).

Improper Linear Models. Distinguishing between alternatives based on some factor (e.g., value, importance, etc.) and weighting the alternatives based on those distinctions has been suggested as a method for decision making (Dawes 1979). The idea is to create a linear model for the decision situation. Linear models are statistically derived weighted averages of the relevant predictors. For example: L(lung cancer) = w1*age + w2*smoking + w3*family history, where L(lung cancer) is the likelihood of getting lung cancer, and wx is the weight for each factor. Any number of factors could be included in the model, although only factors that are relevant to the decision should be included. Optimally, the weights for each factor should be constructed from examining relevant data for the decision.

However, Dawes (1979) has demonstrated that linear models using equal weighting are almost as good as models with optimal weights, although they require less work, because no weight calculations need be made; factors that make the event more likely are weighted +1, and those that make it less likely are weighted −1.
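
The spirit of Dawes's point can be illustrated on synthetic data. In the hypothetical Python sketch below, the criterion is generated from known weights; a unit-weight ("improper") model that simply adds the predictors with +1/−1 signs is compared with a model that uses the true generating weights as a stand-in for optimally fitted ones. All names and numbers are invented for the illustration.

```python
import random

random.seed(1)

# Synthetic data: the criterion depends positively on x1 and x2, negatively on x3.
n = 200
data = []
for _ in range(n):
    x1, x2, x3 = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    y = 0.6 * x1 + 0.3 * x2 - 0.5 * x3 + random.gauss(0, 1)
    data.append((x1, x2, x3, y))

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

ys = [row[3] for row in data]

# Unit-weight model: +1 for factors that raise y, -1 for those that lower it.
unit_pred = [x1 + x2 - x3 for x1, x2, x3, _ in data]

# "Optimal" weights here are the true coefficients, standing in for a fitted model.
opt_pred = [0.6 * x1 + 0.3 * x2 - 0.5 * x3 for x1, x2, x3, _ in data]

print("unit-weight model r:", round(correlation(unit_pred, ys), 2))
print("optimal-weight model r:", round(correlation(opt_pred, ys), 2))
# The two correlations are typically close, echoing Dawes's point.
```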

Furthermore, linear models are often better than a person's intuition, even when the person is an expert. Several studies of clinical judgment (including medical doctors and clinical psychologists) have found that linear models always do as well as, if not better than, the clinical experts (see Einhorn 1972). Similarly, bank loan officers asked to judge which businesses would go bankrupt within three years of opening were about 75 percent accurate, whereas a statistical model was 82 percent accurate (Libby 1976).

Arkes, Dawes, and Christensen (1986) demonstrated this point with people knowledgeable about baseball. Participants were asked to identify "which of. . . three players won the MVP [most valuable player] award for that year." Each player's season statistics were provided. One of the three players was from the World Series winning team, and subjects were told that 70 percent of the time the MVP came from the World Series winning team, so if they were uncertain, they could use that decision rule.

Participants moderately knowledgeable about baseball did better than highly knowledgeable participants, although the highly knowledgeable participants were more confident. The moderately knowledgeable group did better because they used the decision rule more. Yet neither group did as well as it could have if it had used the decision rule for every judgment. How a little knowledge can influence judgment will be further explained in the next section on how people make decisions.


DECISION PROCESSING

Decision theories changed because studies revealed that people often do not make judgments that are consistent with how the theories said they should be making judgments. This section describes evidence about how people make judgments. Specifically, several heuristics will be discussed: shortcuts that people may use to process information when making a judgment (Kahneman, Slovic, and Tversky 1982, is a classic collection of papers on this topic).

Availability. Consider the following questions: "What is the first digit that comes to mind?"; "What is the first one-digit number that comes to mind?"; and "What is the first digit, such as one, that comes to mind?" Kubovy (1977) found that the second question, which mentions "1" in passing, resulted in more "1" responses than either of the other two questions. The explanation is that mentioning "1" made it more available in memory. People are using availability when they make a judgment on the basis of what first comes to mind.

Interestingly, the third question, which mentions "1" explicitly, resulted in fewer "1" responses than the first question, which does not mention "1" at all. Kubovy suggests there are fewer "1" responses because people have an explanation of why they are thinking of "1" after that question (it mentions "1"), so they do not choose it; "1" has not come to mind at random. Thus, information must not only be available, it must also be perceived as relevant.

Mood. One piece of information that is available and may seem relevant to a judgment is a person's present mood (for a review of mood and judgment research see Clore, Schwarz, and Conway 1994). Schwarz and Clore (1983) suggest that people will use their mood state in the judgment process if it seems relevant to that judgment. They contacted people by phone on cloudy or sunny days and predicted that people would report being happier on the sunny days than on the cloudy days. That prediction was verified. However, if people were first asked what the weather was like, then there was no difference in reported happiness on sunny and cloudy days. Schwarz and Clore suggested that asking people about the weather gave them a reason for their mood, so they did not use their mood in making the happiness judgment, because it no longer seemed relevant. Thus, according to Schwarz and Clore, people will use their mood as a heuristic for judgment when the situation leads them to act as if they ask themselves, "How do I feel about this?"

Quantity and Numerosity. An effect similar to availability was demonstrated in a series of experiments by Josephs, Giesler, and Silvera (1994). They found that subjects relied on observable quantity information when making personal performance judgments. They called this effect the quantity principle, because people seemed to use the size or quantity of the material available to make their judgment. The experimenters had participants proofread text that either remained attached to the source from which it came (e.g., a book or journal) or did not; proofreading was an unfamiliar task for the participants. Performance estimates on this unfamiliar task were affected by the quantity of completed work that was visible: participants whose work resulted in a large pile of material rated their performance higher than participants whose work resulted in a small pile, even though the amount of actual work done was the same. Moreover, if the pile was not in sight at the time of judgment, then performance estimates did not differ.

Pelham, Sumarta, and Myaskovsky (1994) demonstrated an effect similar to the quantity principle, which they called the numerosity heuristic. This heuristic refers to the use of the number of units as a cue for judgment. For example, the researchers found that participants rated the area of a circle as larger when the circle was presented as pieces that were difficult to imagine as a whole circle than when the pieces were presented so that the whole was easy to imagine, and larger than the undivided circle. This result suggests that dividing a whole into pieces will lead those pieces to be thought of as larger than the whole, especially when it is hard to imagine the pieces as the whole. That is, the numerosity of the stimulus to be rated can affect the rating of that stimulus.

Anchoring. Anchoring is similar to availability and quantity heuristics, because it involves making a judgment using some stimulus as a starting point, and adjusting from that point. For example, Sherif, Taub, and Hovland (1958) had people make estimates of weight on a six-point scale. Prior to each estimation, subjects held an "anchor" weight that was said to represent "6" on the scale. When the anchor's weight was close to the other weights, then subjects' judgments were also quite close to the anchor, with a modal response of 6 on the six-point scale. However, heavier anchors resulted in lower responses. The heaviest anchor produced a modal response of 2.

However, anchoring need not involve the direct experience of a stimulus. Simply mentioning a stimulus can lead to an anchoring effect. Kahneman and Tversky (1972) assigned people the number 10 or 65 by means of a seemingly random process (a wheel of fortune). These people then estimated the percentage of African countries in the United Nations. Those assigned the number 10 estimated 25 percent, and those assigned the number 65 estimated 45 percent (at the time the actual percentage was 35). This indicates the psychological impact of one's starting point (or anchor), when making a judgment.

An anecdotal example of someone trying to fight the anchoring phenomenon is a writer who tears up a draft of what she is working on, and throws it away, rather than trying to work with that draft. It may seem wasteful to have created something, only to throw it out. However, at times, writers may feel that what they have produced is holding them in a place that they do not want to be. Thus, it could be better to throw out that draft, rather than trying to rework it, which is akin to pulling up anchor.

Endowment Effect. Related to anchoring is the tendency people have to stay where they are. This tendency is known as the endowment effect (Thaler 1980) or the status quo bias (Samuelson and Zeckhauser 1988). This tendency results in people wanting more for something they already have than they would be willing to pay to acquire that same thing. In economic terms, this would be described as a discrepancy between what people are willing to pay (WTP) and what they are willing to accept (WTA). Kahneman, Knetsch, and Thaler (1990) found WTA amounts much higher for an item already possessed than WTP amounts to obtain that item. In their study, half of the people in a university class were given a university coffee mug, and half were not. A short time later, the people with the mugs were asked how much money they wanted for their mug (their WTA amount), and those without the mug were asked how much they would pay to get a mug (their WTP amount). The median WTA amount was about twice the median WTP amount: WTA = $7.12, WTP = $2.87. Thus, few trades were made, and this is consistent with the idea that people often seem to prefer what they have to what they could have.

Representativeness. People may make judgments using representativeness, which is the tendency to judge events as more likely if they represent the typical or expected features for that class of events. Thus, representativeness occurs when people judge an event using an impression of the event rather than a systematic analysis of it. Two examples of representativeness misleading people are the gambler's fallacy and the conjunction fallacy.

The gambler's fallacy is the confusion of independent and dependent events. Independent events are not causally related to each other (e.g., coin flips, spins of a roulette wheel, etc.). Dependent events are causally related; what happened in the past has some bearing on what happens in the present (e.g., the amount of practice has some bearing on how well someone will perform in a competition). The confusion arises when people's expectations for independent events are violated. For example, if a roulette wheel comes up black eighteen times in a row, some people might think that red must be the result of the next spin. However, the likelihood of red on the next spin is the same as it is each and every spin, and the same as it would be if red, instead of black, had resulted on each of the previous eighteen spins. Each and every roulette wheel spin is an independent event.
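
The independence of spins can be demonstrated by simulation. The Python sketch below is a toy illustration (not drawn from any cited study): it compares the relative frequency of red on an ordinary spin with the frequency of red on spins that follow a short run of non-red results.

```python
import random

random.seed(0)

RED_PROB = 18 / 37  # European wheel: 18 red, 18 black, 1 green zero

def spin_is_red():
    return random.random() < RED_PROB

# Relative frequency of red on any spin, and on spins that follow three
# consecutive non-red results (a shorter streak than eighteen, just so the
# simulation gathers enough cases).
trials = 200_000
reds = 0
reds_after_streak = 0
spins_after_streak = 0
non_red_run = 0

for _ in range(trials):
    red = spin_is_red()
    reds += red
    if non_red_run >= 3:
        spins_after_streak += 1
        reds_after_streak += red
    non_red_run = 0 if red else non_red_run + 1

print(round(reds / trials, 3))
print(round(reds_after_streak / spins_after_streak, 3))
# Both frequencies sit near 18/37 (about 0.486): the streak changes nothing.
```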

The conjunction fallacy (Tversky and Kahneman 1983) occurs when people judge the conjunction of two events as more likely than (at least) one of the two events. The "Linda scenario" has been frequently studied: Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Following that description, subjects are asked to rank order in terms of probability several statements, including the following:

Linda is active in the feminist movement. [F]
Linda is a bank teller. [B]
Linda is a bank teller and is active in the feminist movement. [B&F]

The conjunction fallacy is committed if people rank the B&F statement higher (i.e., as more likely) than either the B or the F statement alone, because that is logically impossible. The likelihood of B&F may equal the likelihood of B or of F, but it cannot be greater than either, because the B&F events are a subset of both the B events and the F events.
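
The set relationship behind this point can be made concrete with a small enumeration. The Python sketch below builds a hypothetical population (the attribute frequencies are invented for the illustration) and confirms that the count of people who are both bank tellers and feminists can never exceed the count for either attribute alone.

```python
import random

random.seed(42)

# A hypothetical population in which each person either is or is not
# a bank teller (B) and either is or is not a feminist (F).
population = [
    {"bank_teller": random.random() < 0.05, "feminist": random.random() < 0.30}
    for _ in range(10_000)
]

b = sum(p["bank_teller"] for p in population)
f = sum(p["feminist"] for p in population)
b_and_f = sum(p["bank_teller"] and p["feminist"] for p in population)

print(b, f, b_and_f)
assert b_and_f <= b and b_and_f <= f  # B&F is a subset of both B and F
```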

People find events with multiple parts (such as B&F) more plausible than separate events (such as B or F alone), but plausibility is not equal to likelihood. Making an event more plausible might make it a better story, which could be misleading and result in erroneous inferences (Markovits and Nantel 1989). Indeed, some have suggested that people act as if they are constructing stories in their minds and then make judgments based on the stories they construct (Pennington and Hastie 1993). But of course good stories are not always true, or even likely.

INDIVIDUAL DIFFERENCES

Another question about decision processing is whether there are individual differences between people in their susceptibility to erroneous decision making. For example, do some people tend to inappropriately use the heuristics outlined above, and if so, is there a factor that accounts for that inappropriate use?

Stanovich and West (1998) had participants do several judgment tasks and related the performance on those tasks to assessments of cognitive ability and thinking styles. They found that cognitive capacity does account for some performance on some judgment tasks, which suggests that computational limitations could be a partial explanation of non-normative responding (i.e., judgment errors). Also, independent of cognitive ability, thinking styles accounted for some of the participants' performance on some judgment tasks.

A similar suggestion is that some erroneous judgments are the result of participants' conversational ability. For example, Slugoski and Wilson (1998) show that six errors in social judgment are related to people's conversational skills. They suggest that judgment errors may not be errors, because participants may be interpreting the information presented to them differently than the researcher intends (see also Hilton and Slugoski 1999).

Finally, experience affects decision-making ability. Nisbett, Krantz, Jepson, and Kunda (1983) found that participants with experience in the domain in question preferred explanations that reflected statistical inferences. Similarly, Fong, Krantz, and Nisbett (1986) found that statistical explanations were used more often by people with more statistical training. These results suggest that decision-making ability can improve through relevant domain experience, as well as through statistical training that is not domain specific.


GROUP DECISION MAKING

Social Dilemmas. Social dilemmas occur when the goals of individuals conflict with the goals of their group; individuals face the dilemma of choosing between doing what is best for them personally and what is best for the group as a whole (Lopes 1994). Hardin (1968) was one of the first to write about these dilemmas in describing the "tragedy of the commons." The tragedy was that individuals tried to maximize what they could get from the common land, or "commons," which resulted in the commons being overused and thereby depleted. If each individual had used only his or her allotted share of the commons, then it would have continued to be available to everyone.

Prisoner's Dilemma. The best-known social dilemma is the prisoner's dilemma (PD), which involves two individuals (most often, although formulations with more than two people are possible). The original PD involved two convicts' decision whether or not to confess to a crime (Rapoport and Chammah 1965). But the following example is functionally equivalent.

Imagine you are selling an item to another person, but you cannot meet to make the exchange. You agree to make the exchange by post: you will send the other person the item and receive the money in return. If you both do so, then you each get 3 units (the amounts are arbitrary, but the amounts received for each combination of choices are what matter). However, you realize that you could simply not put the item in the post, yet still receive the money. Imagine doing so results in you getting 5 units and the other person −1 units. However, the other person similarly realizes that not posting the money would mean getting the item for free, which would result in 5 units for the other person and −1 for you. If you both send nothing, although you agreed to do so, you remain at the status quo (0 units each).

Do you post the item (i.e., cooperate) or not? Regardless of what the other person does, you will get more out of not cooperating (5 v. 3 when the other person cooperates, and 0 v. −1 when the other person does not). However, if you both fail to cooperate, that produces an inferior group outcome compared to both cooperating (0 [0 + 0] v. 6 [3 + 3], respectively). Thus, the dilemma is that each individual has an incentive not to cooperate, but the best outcome for the group is obtained when each person cooperates. Can cooperation develop from such a situation?

Axelrod (1984; cf. Hofstadter 1985) investigated that question by soliciting people to participate in a series of PD games (social dilemmas are often referred to as games). Each person submitted a strategy for choosing whether to cooperate over a series of interactions with the other strategies. Each interaction resulted in points being awarded to each strategy, and the strategy that accumulated the most points won. The winning strategy was Tit for Tat. It was also the simplest strategy. The Tit for Tat strategy is to cooperate on the first turn and then do whatever the other player just did (i.e., on turn x, Tit for Tat does whatever its opponent did on turn x−1).
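
A miniature version of such a tournament can be written in a few dozen lines. The Python sketch below is an illustration, not a reconstruction of Axelrod's actual tournament: the payoffs mirror the postal example above (3 each for mutual cooperation, 5 for defecting against a cooperator, −1 for being exploited, 0 for mutual defection), and the strategy pool contains only three simple entries, with each strategy also playing a copy of itself.

```python
# Payoffs: (my payoff, opponent's payoff) indexed by (my move, opponent's move),
# where True means cooperate. Values mirror the postal-exchange example above.
PAYOFFS = {
    (True, True): (3, 3),
    (True, False): (-1, 5),
    (False, True): (5, -1),
    (False, False): (0, 0),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return True if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return False

def always_cooperate(my_history, their_history):
    return True

def play(strategy_a, strategy_b, rounds=200):
    """Return total points for each strategy over repeated play."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"Tit for Tat": tit_for_tat,
              "Always Defect": always_defect,
              "Always Cooperate": always_cooperate}

# Round robin: every strategy plays every strategy, including a copy of itself.
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play(strat_a, strat_b)
        totals[name_a] += score_a

print(totals)
```

Whether Tit for Tat finishes first depends on the composition of the strategy pool; with this small pool and these payoffs it does, even though it never outscores its partner within any single game.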

Axelrod suggested that four qualities led to Tit for Tat's success. First, it is a nice strategy: it cooperates on the first turn and will keep cooperating as long as the other player cooperates. Second, it is provocable: when the other player does not cooperate, it immediately responds with noncooperation, rather than waiting to see what will happen next or ignoring the noncooperation. Third, it is forgiving: if the opponent returns to cooperating, Tit for Tat also returns to cooperating, and it does not continue punishing the other player for previous noncooperation. All that counts for Tit for Tat is what just happened, not the total amount of noncooperation that has occurred. Finally, Tit for Tat has clarity, because it is simple to understand. A complex strategy can be confusing and may be misunderstood by opponents, and if an opponent's intentions are unclear, noncooperation is the safer response, because one cannot predict if or when a complex strategy will cooperate.

Thus, a cooperative strategy can be effective even when there are clear incentives for noncooperation. Furthermore, Axelrod did another computer simulation in which strategies were rewarded by reproducing themselves, rather than simply accumulating points. Thus, success meant that the strategy had more of its kind to interact with. Again, Tit for Tat was best, which further suggests that a cooperative strategy can be effective and can flourish in situations that seem to be designed for noncooperation.

Bargaining and Fairness. Bargaining and negotiation have received increasing attention in decision theory and research (e.g., Pruitt and Carnevale 1993), as has the issue of justice or fairness (e.g., Mellers and Baron 1993). These issues are involved in the "ultimatum game," which involves two people and a resource (often a sum of money). The rules of the game are that one person proposes a division of the resource between them (a bargain), and the other person accepts or rejects the proposal. If the proposal is rejected, then both people get nothing, so the bargain is an ultimatum: this or nothing.

Expected utility (EU) theory suggests that the person dividing the resource should offer the other person just enough to get him or her to accept the bargain, but no more. Furthermore, EU suggests that the person receiving the offer should accept any division, because any division will be more than zero, which is what that person will receive if the bargain is refused. However, the typical bargain proposed is a fifty-fifty split (half of the resource to each person). Indeed, if people are offered anything less than a fifty-fifty split, they will often reject the offer because it seems unfair, even though rejecting means they get nothing rather than what they were offered.

In studies, people have evaluated these bargains in two ways. People can rate how attractive a bargain is (e.g., on a 1–7 scale). Possible divisions to be rated might be: $40 for you, $40 for the other person; $50 for you, $70 for the other person, and so on. Thus, bargains are presented in isolation, one after another, as if each was an individual case unrelated to anything else. This type of presentation is generally referred to as "absolute judgment" (Wever and Zener 1928).

Alternatively, people may evaluate bargains in pairs, and choose one. For example, do you prefer a bargain where you get $40, and the other person gets $40, or a bargain where you get $50, and the other person gets $70? Thus, the bargains are presented such that they can be compared, so people can see the relative outcomes. This type of presentation is generally referred to as "comparative judgment."

Absolute and comparative judgment produce different results in the ultimatum bargaining game. Blount and Bazerman (1996) gave pairs of participants $10 to be divided between them. In an absolute judgment format (i.e., is this division acceptable?), participants accepted a minimum division of $4 for themselves and $6 for the other person. But asked in a comparative judgment format (i.e., do you prefer this division or nothing?), participants were willing to accept less (a minimum division of $2.33 for themselves and $7.67 for the other person). This result suggests that considering situations involving the division or distribution of resources on a case-by-case basis (absolute judgment) may result in sub-optimal choices (relative to those resulting from comparative judgment) for each person involved, as well as for the group as a whole.

Comparative and absolute judgment can be applied to social issues such as adoption. There has been controversy about adoption when the adopting parents have a different cultural heritage than the child being adopted. Some argue that a child should be adopted only by parents of the same cultural heritage as the child, to preserve the child's connection to his or her culture. That argument views the situation as an absolute judgment: should children be adopted by parents of a different cultural heritage or not?

However, there is an imbalance between the cultural heritages of the children to be adopted and those of the parents wanting to adopt. That imbalance creates the dilemma of what to do with children who would like to be adopted when there are no parents of the same cultural heritage wanting to adopt them. That dilemma suggests this comparative judgment: should children be adopted by parents of a different cultural heritage than their own, or should children be left unadopted (e.g., be brought up in a group home)?

The answers to these absolute and comparative judgments may differ: as an absolute judgment, the answer may be that a child should not be adopted by parents of a different cultural heritage, but as a comparative judgment the answer may be that a child should be adopted by such parents, despite the cultural differences, because having parents is better than not having parents. Thus, the best answer may differ depending on how the situation is characterized. Such situations may involve more than one value (in this case, providing parents for a child and preserving the cultural heritage the child was born into). Typically, absolute judgments reflect an acceptance or rejection of one value, while comparative judgments reflect more than one value.


GENERAL JUDGMENT AND DECISION-MAKING ISSUES


That the best solution for a situation can seem different when the situation is characterized differently is one of the most important issues in judgment and decision making. The theories mentioned above (Bayes's, EU, prospect, and rank-dependent) assume problem invariance; that is, they assume that people's judgments will not vary with how the problem is characterized. However, because the characterization of a problem affects how people frame the problem, people's decisions often do vary (Tversky and Kahneman 1981).

An implication of this variability is that eliciting people's values becomes difficult (Baron 1997, 1998), because different elicitation methods can produce contradictory results. For example, choice and matching tasks often reveal different preferences. Choice tasks are comparative judgments: do you prefer A or B? Matching tasks require participants to estimate one dimension of an alternative so that its attractiveness matches that of another alternative (e.g., program A will cure 60 percent of patients at a cost of $5 million; what should program B cost if it will cure 85 percent of patients?).

The difference produced by these tasks has been extensively examined in studies of preference reversals (Slovic and Lichtenstein 1983). Tversky, Sattath, and Slovic (1988) suggested that the dimension on which a response is elicited (e.g., probability or value) will be weighted most heavily, so reversals can result from changing the elicitation dimension. Fischer and Hawkins (1993) suggested that preference reversals result from the compatibility between people's strategy for analyzing the problem and the elicitation mode. Preference reversals clearly occur, but their cause continues to be debated (cf. Payne, Bettman, and Johnson 1992).

Ideas about rationality have also been influenced by the variability of people's judgments. Generally, research on rationality has started with a theory and then examined whether people behave in the way the theory prescribes, rather than examining how people behave and then suggesting what is rational. That is, rationality has typically been assessed against a prescriptive theory, such as Bayes's Theorem or EU, about how people should make decisions, rather than against a descriptive theory of how people actually process information. When studies produced judgments that were inconsistent with those prescriptive theories, researchers concluded that people often act irrationally.

However, there is growing recognition that study participants may be thinking of situations differently than researchers have assumed (Chase, Hertwig, and Gigerenzer 1998), which has led several researchers to create theories of decision processing (e.g., Dougherty, Gettys, and Ogden in press; Gigerenzer, Hoffrage, and Kleinbolting 1991) and to use those theories to address rationality issues, rather than the reverse. Approaches that focus on processing have been present in decision theory for some time (cf. Brunswik 1952; Hammond 1955), but they have not been dominant in the field. The acknowledgment of multiple views of rationality, coupled with prescriptive theories' poor account of people's actual decision behavior, may shift the emphasis toward processing models.

Further consideration of the decision-making process has led to other questions that are garnering increased attention. For example, how do people make decisions within dynamic environments? Generally, people make decisions in a dynamic world (Brehmer 1990; Busemeyer and Townsend 1993; Diehl and Sterman 1995), but many decision-making theories (such as those reviewed above) do not account for the dynamics of the environment. Also, how do people's emotions affect the decision-making process? Decisions can involve topics that evoke emotion or have emotional consequences (such as regret, Gilovich and Medvec 1995). Some decision theories have tried to include emotional considerations in decision making (e.g., Bell 1982), but this topic deserves more attention. These questions, as well as the issues discussed above, will make decision theory and research an area of continued interest and relevance.


REFERENCES

Ajzen, I. 1996 "The Social Psychology of Decision Making." In E. T. Higgins and A. W. Kruglanski, eds., Social Psychology: Handbook of Basic Principles. New York: Guilford.

Arkes, H. R., R. Dawes, and C. Christensen 1986 "Factors Influencing the Use of a Decision Rule in a Probabilistic Task." Organizational Behavior and Human Decision Processes 37:93–110.

Axelrod, R. 1984 The Evolution of Cooperation. New York: Basic Books.

Baron, J. 1994 Thinking and Deciding, 2d ed. Cambridge, U.K.: Cambridge University Press.

——1997 "Biases in the Quantitative Measurement of Values for Public Decisions." Psychological Bulletin 122:72–88.

——1998 Judgment Misguided: Intuition and Error in Public Decision Making. Oxford: Oxford University Press.

Bayes, T. (1764) 1958 "An Essay Towards Solving a Problem in the Doctrine of Chances." Biometrika 45:293–315.

Bell, D. 1982 "Regret in Decision Making Under Uncertainty." Operations Research 30:961–981.

Birnbaum, M. H. 1983 "Base Rates in Bayesian Inference: Signal Detection Analysis of the Cab Problem." American Journal of Psychology 96:85–94.

Birnbaum, M. H., G. Coffey, B. A. Mellers, and R. Weiss 1992 "Utility Measurement: Configural-Weight Theory and the Judge's Point of View." Journal of Experimental Psychology: Human Perception and Performance 18:331–346.

Birnbaum, M. H., and W. R. McIntosh 1996 "Violations of Branch Independence in Choices Between Gambles." Organizational Behavior and Human Decision Processes 67:91–110.

Blount, S., and M. H. Bazerman 1996 "The Inconsistent Evaluation of Absolute Versus Comparative Payoffs in Labor Supply and Bargaining." Journal of Economic Behavior and Organization 30:227–240.

Brehmer, B. 1990 "Strategies in Real-Time, Dynamic Decision Making." In R. M. Hogarth, ed., Insights in Decision Making: A Tribute to Hillel J. Einhorn, 262–279. Chicago: University of Chicago Press.

Brunswik, E. 1952 The Conceptual Framework of Psychology. Chicago: University of Chicago Press.

Busemeyer, J., D. L. Medin, and R. Hastie 1995 Decision Making From a Cognitive Perspective. San Diego: Academic.

——, and J. T. Townsend 1993 "Decision-Field Theory: A Dynamic-Cognitive Approach to Decision-Making in an Uncertain Environment." Psychological Review 100:432–459.

Chase, V. M., R. Hertwig, and G. Gigerenzer 1998 "Visions of Rationality." Trends in Cognitive Sciences 2:206–214.

Clore, G. L., N. Schwarz, and M. Conway 1994 "Affective Causes and Consequences of Social Information Processing." In R. S. Wyer and T. K. Srull, eds., Handbook of Social Cognition, 2d ed. Hillsdale, N.J.: Erlbaum.

Dawes, R. M. 1979 "The Robust Beauty of Improper Linear Models." American Psychologist 34:571–582.

——1998 "Behavioral Decision Making and Judgment." In D. T. Gilbert, S. T. Fiske, and G. Lindzey, eds., The Handbook of Social Psychology, 4th ed. Boston: McGraw-Hill.

Diehl, E., and J. D. Sterman 1995 "Effects of Feedback Complexity on Dynamic Decision Making." Organizational Behavior and Human Decision Processes 62:198–215.

Dougherty, M. R. P., C. F. Gettys, and E. E. Ogden (in press) "MINERVA-DM: A Memory Processes Model for Judgments of Likelihood." Psychological Review 106:180–209.

Einhorn, H. J. 1972 "Expert Measurement and Mechanical Combination." Organizational Behavior and Human Performance 13:171–192.

Fischer, G. W., and S. A. Hawkins 1993 "Strategy Compatibility, Scale Compatibility, and the Prominence Effect." Journal of Experimental Psychology: Human Perception and Performance 19:580–597.

Fong, G. T., D. H. Krantz, and R. E. Nisbett 1986 "The Effects of Statistical Training on Thinking About Everyday Problems." Cognitive Psychology 18:253–292.

Gigerenzer, G., and U. Hoffrage 1995 "How to Improve Bayesian Reasoning without Instruction: Frequency Formats." Psychological Review 102:684–704.

——, and H. Kleinbolting 1991 "Probabilistic Mental Models: A Brunswickian Theory of Confidence." Psychological Review 98:506–528.

Gigerenzer, G., Z. Swijtink, T. Porter, L. Daston, J. Beatty, and L. Kruger 1989 The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge, U.K.: Cambridge University Press.

Gilovich, T., and V. H. Medvec 1995 "The Experience of Regret: What, Why, and When." Psychological Review 102:379–395.

Hacking, I. 1975 The Emergence of Probability. Cambridge, U.K.: Cambridge University Press.

——1990 The Taming of Chance. Cambridge, U.K.: Cambridge University Press.

Hammond, K. R. 1955 "Probabilistic Functioning and the Clinical Method." Psychological Review 62:255–262.

——1998 Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Justice. New York: Oxford University Press.

Hardin, G. R. 1968 "The Tragedy of the Commons." Science 162:1243–1248.

Hastie, R., and N. Pennington 1991 "Cognitive and Social Processes in Decision Making." In L. B. Resnick, J. M. Levine, and S. D. Teasley, eds., Perspectives onSocially Shared Cognition, 308–327. Washington, D.C.: APA.

Hilton, D. J., and B. R. Slugoski 1999 "Judgment and Decision-Making in Social Context: Discourse Processes and Rational Inference." In T. Connolly, ed., Judgment and Decision-Making: An Interdisciplinary Reader 2d ed. Cambridge, U.K.: Cambridge University Press.

Hofstadter, D. R. 1985 "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation." In D. R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern, 715–734. New York: Basic Books.

Josephs, R. A., R. B. Giesler, and D. H. Silvera 1994 "Judgment By Quantity." Journal of Experimental Psychology: General 123:21–32.

Kahneman, D., J. L. Knetsch, and R. H. Thaler 1990 "Experimental Tests of the Endowment Effect and the Coase Theorem." Journal of Political Economy 98:1325–1348.

——, P. Slovic, and A. Tversky 1982 Heuristics and Biases: Judgments Under Uncertainty. Cambridge, U.K.: Cambridge University Press.

——, and A. Tversky 1972 "On Prediction and Judgment." ORI Research Monograph, 12.

——1979 "Prospect Theory: An Analysis of Decisions Under Risk." Econometrica 47:263–291.

Kubovy, M. 1977 "Response Availability and the Apparent Spontaneity of Numerical Choices." Journal of Experimental Psychology: Human Perception and Performance 3:359–364.

Libby, R. 1976 "Man Versus Model of Man: Some Conflicting Evidence." Organizational Behavior and Human Performance 16:1–12.

Lopes, L. L. 1994 "Psychology and Economics: Perspectives on Risk, Cooperation, and the Marketplace." Annual Review of Psychology 45:197–227.

Luce, R. D., and P. C. Fishburn 1991 "Rank- and Sign-Dependent Linear Utility Models for Finite First-Order Gambles." Journal of Risk and Uncertainty 4:29–59.

Manktelow, K. I., and D. E. Over 1993 Rationality: Psychological and Philosophical Perspectives. London: Routledge.

Markovits, H., and G. Nantel 1989 "The Belief-Bias Effect in the Production and Evaluation of Logical Conclusions." Memory & Cognition 17:11–17.

Mellers, B. A., and J. Baron 1993 Psychological Perspectives on Justice: Theory and Applications. New York: Cambridge University Press.

Mellers, B. A., A. Schwartz, and A. D. J. Cooke 1998 "Judgment and Decision Making." Annual Review of Psychology 49:447–477.

Nisbett, R. E., D. H. Krantz, D. Jepson, and Z. Kunda 1983 "The Use of Statistical Heuristics in Everyday Inductive Reasoning." Psychological Review 90:339–363.

Payne, J. W., J. R. Bettman, and E. J. Johnson 1992 "Behavioral Decision Research: A Constructive Processing Perspective." Annual Review of Psychology 43:87–131.

Pelham, B. W., T. T. Sumarta, and L. Myaskovsky 1994 "The Easy Path from Many to Much: The Numerosity Heuristic." Cognitive Psychology 26:103–133.

Pruitt, D. G., and P. J. Carnevale 1993 Negotiation in Social Conflict. Pacific Grove, Calif.: Brooks/Cole.

Rapoport, A., and A. M. Chammah 1965 Prisoner's Dilemma: A Study in Conflict and Cooperation. Ann Arbor, Mich.: University of Michigan Press.

Samuelson, W., and R. Zeckhauser 1988 "Status-Quo Bias in Decision Making." Journal of Risk and Uncertainty 1:7–59.

Schwarz, N., and G. L. Clore 1983 "Mood, Misattribution, and Judgments of Well-Being: Informative and Directive Functions of Affective States." Journal of Personality and Social Psychology 45:513–523.

Shapira, Z. 1995 Risk Taking: A Managerial Perspective. New York: Russell Sage Foundation.

Sherif, M., D. Taub, and C. I. Hovland 1958 "Assimilation and Contrast Effects of Anchoring Stimuli on Judgments." Journal of Experimental Psychology 55:150–155.

Slovic, P., and S. Lichtenstein 1983 "Preference Reversals: A Broader Perspective." American Economic Review 73:596–605.

Slugoski, B. R., and A. E. Wilson 1998 "Contribution of Conversation Skills to the Production of Judgmental Errors." European Journal of Social Psychology 28:575–601.

Stanovich, K. E., and R. F. West 1998 "Individual Differences in Rational Thought." Journal of Experimental Psychology: General 127:161–188.

Stevenson, M. K., J. R. Busemeyer, and J. C. Naylor 1990 "Judgment and Decision-Making Theory." In M. D. Dunette and L. M. Hough, eds., Handbook of Industrial and Organizational Psychology, 2d ed., vol. 1, 283–374. Palo Alto, Calif.: Consulting Psychologists Press.

Tversky, A., and D. Kahneman 1981 "The Framing of Decisions and the Psychology of Choice." Science 211:453–458.

——1982 "Evidential Impact of Base Rates." In D. Kahneman, P. Slovic, and A. Tversky, eds., Heuristicsand Biases: Judgments Under Uncertainty. Cambridge, U.K.: Cambridge University Press.

——1983 "Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review 90:293–315.

——1992 "Advances in Prospect Theory: Cumulative Representations of Uncertainty." Journal of Risk andUncertainty 5:297–323.

Tversky, A., S. Sattath, and P. Slovic 1988 "Contingent Weighting in Judgment and Choice." PsychologicalReview 95:371–384.

von Neumann, J., and O. Morganstern 1947 Theory ofGames and Economic Behavior, 2d ed. Princeton, N.J.: Princeton University Press.

Wever, E. G., and K. E. Zener 1928 "The Method of Absolute Judgment in Psychophysics." PsychologicalReview 35:466–493.


Evan Thackeray Pritchard