I. Psychological Aspects: Ward Edwards
II. Economic Aspects: Jacob Marschak
III. Political Aspects: James A. Robinson
Men must choose what to do. Often, choices must be made in the absence of certain knowledge of their consequences. However, an abundance of fallible, peripheral, and perhaps irrelevant information is usually available at the time of an important choice; the effectiveness with which this information is processed may control the appropriateness of the resulting decision. This article is concerned with laboratory studies of human choices and of certain kinds of human information processing leading up to these choices. It is organized around two concepts and two principles. The two concepts are utility, or the subjective value of an outcome, and probability, or how likely it seems to the decision maker that a particular outcome will occur if he makes a particular decision. Both of the principles are normative or prescriptive; they specify what an ideal decision maker would do and thus invite comparison between performances of ideal and of real decision makers. One, the principle of maximizing expected utility, in essence asserts that you should choose the action that on the average will leave you best off. The other, a principle of probability theory called Bayes’ theorem, is a formally optimal rule for transforming opinions in the light of new information, and so specifies how you should process information. The basic conclusion reached from comparison of actual human performance with these two principles is that men do remarkably well at conforming intuitively to the ideal rules, except for a consistent inefficiency in information processing.
Utility measurement and expected utility
The concepts of utility and probability have been with us since at least the eighteenth century. But serious psychological interest in any version of them did not begin until the 1930s, when Kurt Lewin wrote about valence (utility) and several probability-like concepts. Lewin had apparently been influenced by some lectures on decision theory that the mathematician John von Neumann had given in Berlin in 1928. But the Lewinian formulations were not very quantitative, and the resulting research did not lead to explicit psychological concern with decision processes. However, in 1944 von Neumann and Morgenstern published their epochal book Theory of Games and Economic Behavior. The theory of games as such has been remarkably unfruitful in psychological research, mostly because of its dependence on the absurdly conservative minimax principle that in effect instructs you to deal with your opponent as though he were going to play optimally, no matter how inept you may know him to be. But von Neumann and Morgenstern rather incidentally proposed an idea that made utility measurable; that proposal is the historical origin of most psychological research on decision processes since then. Their proposal amounts to assuming that men are rational, in a rather specific sense, and to designing a set of procedures exploiting that assumption to measure the basic subjective quantities that enter into a decision.
Since the origin of probability theory, the idea has been obvious that bets (and risky acts more generally) can be compared in attractiveness. Formally, every bet has an expectation, or expected value (EV), which is simply the average gain or loss of money per bet that you might expect to accrue if you played the bet many times. To calculate the EV, you multiply each possible dollar outcome of the bet by the probability of that outcome, and sum the products. In symbols, the EV of the ith bet is calculated as follows, where Vij is the payoff for the jth outcome of the ith bet and Pij is the probability of obtaining that payoff:

EVi = Σj Pij Vij.  (1)
Bets can be ordered in terms of their EV, and it seems plausible to suppose that men should prefer a bet with a higher EV to a bet with a lower one. But a little thought shows that men buy insurance in spite of the fact that the insurance companies pay their employees and build buildings, and thus must take in more money in premiums than they pay out in benefits. Thus insurance companies are in the business of selling bets that are favorable to themselves and unfavorable to their customers. Nevertheless, it is doubtful that anyone would call buying insurance irrational. This and other considerations led to a reformulation of the notion that men should order bets in terms of EV. The eighteenth-century British utilitarian philosophers had distinguished between objective value, or price, and subjective value, or utility. If the utility of some object to you is different from its price, then surely your behavior should attempt to maximize not expected value in dollars but expected utility. That is, in equation (1) you should substitute u(Vij), the utility of the payoff of the jth outcome of bet i, for the payoff itself, Vij. Since it is utility, not payoff, that you attempt to maximize in this model, it is called the expected utility maximization model, or EU model. In symbols,

EUi = Σj Pij u(Vij).  (2)
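The two models can be sketched in Python (a minimal illustration; the sample bet and the square-root utility function below are hypothetical examples, not from the text):

```python
import math

def expected_value(outcomes):
    """Equation (1): sum of probability times dollar payoff over a bet's outcomes."""
    return sum(p * v for p, v in outcomes)

def expected_utility(outcomes, u):
    """Equation (2): payoffs are first mapped through a utility function u."""
    return sum(p * u(v) for p, v in outcomes)

# A hypothetical bet: win $10 with probability 0.5, lose $4 otherwise.
bet = [(0.5, 10.0), (0.5, -4.0)]

# An illustrative risk-averse utility: concave for gains, linear for losses.
def u(v):
    return math.sqrt(v) if v >= 0 else v

print(expected_value(bet))        # 3.0
print(expected_utility(bet, u))   # about -0.42
```

Note that the two criteria can order the same bet differently: this bet is favorable in EV terms yet unattractive to the risk-averse utility above, which is exactly the insurance buyer's situation.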
Von Neumann and Morgenstern proposed simply that one should use equation (2) to measure utility, by assuming that men make choices rationally. Several specific implementations of this idea will be examined below.
Freud and psychiatry have taught us, perhaps too stridently, to look for irrational motivations behind human acts, and introspection confirms this lesson of thousands of years of human folly. Why bother, then, with measurement based on the assumption that men are rational? Three kinds of answers seem clear. First, rationality, as decision theorists think of it, has nothing to do with what you want, but only with how you go about implementing your wants. If you would rather commit rape than get married and rather get drunk than commit rape, the decision theorist tells you only that, to be rational, you should get drunk rather than get married. The compatibility of your tastes with your, or society’s, survival or welfare is not his concern. So it is easy to be irrational in Freud’s sense, and yet rational from a standpoint of decision theory. Second, men often want to implement their tastes in a consistent (which means rational) way, and when large issues are at stake, they often manage to do so. In fact, knowledge of the rules of rational behavior can help one make rational decisions, that is, knowledge of the theory helps make the theory true. Third, the most important practical justification of these or any other scientific procedures is that they work. Methods based on von Neumann and Morgenstern’s ideas do produce measurements, and those measurements provide predictors of behavior. The following review of experiments supports this statement.
The Mosteller and Nogee experiment
The von Neumann-Morgenstern proposal was elaborated by Friedman and Savage (1948), a mathematical economist and a statistician writing for economists, and then was implemented experimentally by Mosteller and Nogee (1951), a statistician and a graduate student in social psychology. (No discipline within the social or mathematical sciences has failed to contribute to, or use, decision theory.) Mosteller and Nogee asked subjects to accept or reject bets of the form “If you beat the following poker-dice hand, you will win $X; otherwise, you will lose $0.05.” A value of X was found such that the subject was indifferent between accepting and rejecting the bet. Arbitrary assignment of 0 utiles (the name for the unit of utility, as gram is the name for a unit of weight) to no transaction (rejection of the bet) and of -1 utile to losing a nickel fixed the origin and unit of measurement of the utility scale, and the calculated probability p of beating the poker-dice hand was substituted into the following equation:
pu($X) + (1 − p)u(−$0.05) = u($0).
Since u($X), the utility of $X, is the only unknown in this equation, it is directly solvable. Mosteller and Nogee used two groups of subjects in this experiment: Harvard undergraduates and National Guardsmen. For the Harvard undergraduates, they found the expected decreasing marginal utility; that is, the utility function rose less and less rapidly as the amount of money increased. For the National Guardsmen, they found the opposite; the larger the amount of money, the steeper the slope of the utility function.
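Because the equation has a single unknown, it solves directly; a sketch in Python (the function name and the illustrative probability are mine):

```python
def utility_of_winning(p, u_lose=-1.0, u_reject=0.0):
    """Solve p*u($X) + (1 - p)*u(-$0.05) = u($0) for u($X), with the
    Mosteller-Nogee conventions u($0) = 0 utiles and u(-$0.05) = -1 utile."""
    return (u_reject - (1 - p) * u_lose) / p

# If the chance of beating the poker-dice hand is 1/3, indifference
# implies that winning $X is worth (1 - p)/p = 2 utiles.
print(utility_of_winning(1/3))   # 2.0
```

Repeating this over many values of X traces out the utility function, which is how the two groups' curves were obtained.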
Perhaps the most important of the many criticisms of the Mosteller-Nogee experiment concerns the role that probabilities play in it. The probability of beating the specified poker-dice hand is not at all easy to calculate; still, it is substituted into equation (2), which is supposed to represent the decision-making processes of the subject. Actually, Mosteller and Nogee did display these probabilities as numbers to their subjects. But they also displayed as numbers the values of the amounts of money for which the subjects were gambling. Why should we assume that the subjects make some subjective transformation that changes those numbers called dollars into subjective quantities called utilities, while the numbers called probabilities remain unchanged? Mosteller and Nogee made the point themselves, and reanalyzed their data using equation (2), but treating p rather than u($X) as the unknown quantity. But this is no more satisfactory. The fundamental fact is that equation (2) has at least two unknowns, p and u($X). This fact has been recognized by renaming the EU model; it is now called the SEU (subjectively expected utility) model. The addition of the S means only that the probabilities which enter into equation (2) must be inferred from behavior of the person making the decision, rather than calculated from some formal mathematical model. Of course, the person making the decision may not have had a very orderly set of probabilities. In particular, his probabilities for an exhaustive set of mutually exclusive events may not add up to 1. Thus there are really two different SEU models, depending on whether or not the probabilities are assumed to add up to 1. (In the latter case, the utilities must be measured on a ratio, not an interval, scale.) This article will examine the topic of subjective probabilities at length later.
In the SEU model every equation like equation (2) has at least two unknowns. A single equation with two unknowns is ordinarily insoluble. All subsequent work on utility and probability measurement has been addressed in one way or another to solution of this problem of too many unknowns. Edwards (1953; 1954a; 1954b) gave impetus to further analysis of the problem by exhibiting sets of preferences among bets that could not be easily accounted for by any plausible utility function for money but that seemed to require instead the notion that subjects simply prefer to gamble at some probabilities rather than others, and indeed will accept less favorable bets embodying preferred probabilities over more favorable bets embodying less preferred probabilities.
The Davidson, Suppes, and Siegel experiment
The next major experiment in the utility measurement literature was performed by Davidson, Suppes, and Siegel (see Davidson & Suppes 1957), two philosophers and a graduate student in psychology. The key idea in their experiment—the pair of subjectively equally likely events—was taken from a neglected paper by the philosopher Ramsey (1926). Suppose that you find yourself committed to a bet in which you stand to win some large sum if event A occurs and to lose some other large sum if it does not occur. Now suppose it is found that by paying you a penny I could induce you to substitute for the original bet another one exactly like it except that now you are betting on A not occurring. Now, after the substitution, suppose that I could induce you to switch back to the original bet by offering you yet another penny. Clearly, for you, the probability that A will occur is equal to the probability that it will not, that is, the two events are subjectively equally likely.
If we assume that either A or not-A must happen, and if we assume that for you (as for any probability theorist) the probabilities of any set of events no more than one of which can happen and some one of which must happen—that is, an exhaustive set of mutually exclusive events—add up to 1, then the sum of the probabilities of A and not-A must be 1. Now, if those two numbers are equal, then each must be 0.5.
Davidson, Suppes, and Siegel hunted for subjectively equally likely events, and finally used a die with one nonsense syllable (e.g., ZEJ) on three of its faces and another (ZOJ) on the other three. (They were very lucky that the same event turned out equally likely for all subjects.) They fixed two amounts of money, −4 cents and 6 cents. They found an amount of money, X cents, such that the subject was indifferent between receiving 6 cents if, say, ZOJ occurred and X cents if ZEJ occurred, and receiving −4 cents for sure. It follows from the SEU model that
p(ZOJ) u(6¢) + p(ZEJ) u(X¢) = u(-4¢).
A little algebra shows that
u(6¢) − u(−4¢) = u(−4¢) − u(X¢).
That is, the distance on the utility-of-money scale from 6 cents to −4 cents is equal to the distance from −4 cents to X cents (of course X cents is a larger loss than −4 cents). Once two equal intervals on the utility scale have been determined, it is no longer necessary to use a sure thing as one of the options. If A, B, C, and D are decreasing amounts of money, if the distance from B to C is equal to the distance from C to D, and if the subject is indifferent between a subjectively equally likely bet in which he wins A for ZOJ and D otherwise, and another in which he wins B for ZOJ and C otherwise, then the distance from A to B is equal to the other two distances. Davidson, Suppes, and Siegel used this procedure to construct a set of equally spaced points on their subjects’ utility-for-money functions. Thereafter, they used the resulting utility functions as a basis for measuring the probability that one face of a symmetrical four-faced die would come up. (If the four faces were considered equally likely and if their probabilities added to 1, then that probability would be 0.25.)
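The bootstrapping step can be sketched as follows (a minimal illustration, assuming, as in the text, that the two die events are subjectively equally likely, so each carries probability 0.5; the function name and numbers are hypothetical):

```python
def implied_utility_A(u_B, u_C, u_D, tol=1e-9):
    """Given three equally spaced utilities u(B) > u(C) > u(D), indifference
    between the bets (A if ZOJ, else D) and (B if ZOJ, else C) means
    0.5*u(A) + 0.5*u(D) = 0.5*u(B) + 0.5*u(C), so u(A) = u(B) + u(C) - u(D)."""
    assert abs((u_B - u_C) - (u_C - u_D)) < tol  # known points are equally spaced
    return u_B + u_C - u_D

# Starting from three equally spaced points, the implied fourth point
# extends the chain with the same spacing.
print(implied_utility_A(2.0, 1.0, 0.0))   # 3.0
```

Iterating this step is what lets a whole ladder of equally spaced utility points be built from a single measured interval.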
The most important finding of the Davidson, Suppes, and Siegel experiment was that a good many internal consistency checks on their utility functions worked out well. Once a number of equal intervals on the utility function have been determined, many predictions can be made about preferences among new bets; some of these were tested, and in general they were successful. The utility functions were typically more complicated than those of the Mosteller-Nogee experiment and differed from subject to subject; they were seldom linearly related to money. The subjective probability of the face of the four-faced die was typically found to be in the region of 0.20.
The Davidson-Suppes-Siegel procedure remains intellectually valid, though criticisms of details are easy to make. However, it is unlikely that future utility measurement experiments will use it. The prior determination of the subjectively equally likely event is less attractive than procedures now available for determination of both utilities and probabilities from the same set of choices.
Among a substantial set of utility-measurement experiments, only two others are reviewed here. They both embody sophisticated ideas taken from recent developments in measurement theory, and they both use what amounts to a simultaneous-equations approach to the solution of a system of equations like equation (2) for utilities and probabilities, treating both as unknowns.
The Lindman experiment
Harold Lindman (1965) began by giving a subject a two-outcome bet of the form that if a spinner stopped in a specified region, the subject would win $X, while if it did not, he would win $Y. Then he invited the subject to state the minimum price for which he would sell the bet back to the experimenter. After the subject had stated that amount, $Z, Lindman operated a random device that specified an experimenter’s price. If the experimenter’s price was at least as favorable to the subject as the subject’s price, then the sale took place, the experimenter paid his price to the subject, and the bet was not played. Otherwise, the sale did not take place, and the subject played the bet. Since the sale, if it took place, always took place at the experimenter’s price, it was to the subject’s advantage to name the actual minimum amount of money that he considered just as valuable as the bet. Thus Lindman could write
pu($X) + (1 − p)u($Y) = u($Z).
This equation has at least four unknowns, three utilities and a probability. If we question whether subjects unsophisticated about probability theory make their probabilities add up to 1, it may have five unknowns, since then both the probability of the event and the probability of its complement can be treated as unknowns. Thus, in a system of such equations there will always be many more unknowns than equations, even though the same probabilities and amounts of money are used in many different bets. However, the system can be rendered soluble, and even overdetermined, by taking advantage of the fact that if $Z is between $X and $Y (as it will be in the example), then u($Z) will be between u($X) and u($Y). In more formal language, the relation between u($X) and $X is monotonic. Lindman exploited this fact by fitting a series of line segments to his utility functions; by controlling the number of line segments, he controlled both the number of unknowns and the amount of curvilinearity (more precisely, changes in slope) he could introduce into the utility function.
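The selling-price procedure itself can be sketched in a few lines (a simplified illustration; the function names and prices are hypothetical):

```python
def resolve_sale(subject_min_price, experimenter_price, play_bet):
    """Lindman's pricing rule: if the experimenter's randomly drawn price is
    at least the subject's stated minimum, the sale occurs at the
    experimenter's price; otherwise the subject plays the bet. Because a sale
    never occurs at the subject's own stated price, the subject's best
    strategy is to state his true indifference point $Z."""
    if experimenter_price >= subject_min_price:
        return experimenter_price        # bet sold at the experimenter's price
    return play_bet()                    # bet kept and played out

# A subject who values the bet at $3: an offer of $5 is accepted, $2 is not.
print(resolve_sale(3.0, 5.0, lambda: -1.0))   # 5.0
print(resolve_sale(3.0, 2.0, lambda: -1.0))   # -1.0 (the bet's realized outcome)
```

Understating $Z risks selling a bet worth more than the price received; overstating it risks keeping a bet worth less than an offer that would have been accepted, which is why truthful pricing is optimal here.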
Lindman’s results are complex, orderly, and pretty. He obtained a variety of shapes of utility functions for money from his different subjects. When he analyzed the data without assuming additivity of probabilities, he found that the actual sums were very close to 1; the data strongly support the idea that his subjects do in fact make probability judgments that add to 1. The probability and utility functions that were found predicted choices among bets very well indeed. There were some interactions between probabilities and utilities of a kind not appropriate to the EU model, but they were not major ones.
The Tversky experiment
Deviating from the nearly universal use of college student subjects, Amos Tversky used prisoners at Jackson Prison, the largest state prison in Michigan, as subjects (1964). He used cigarettes, candy bars, and money, rather than money alone, as the valuable objects whose utilities were to be measured. Tversky’s research was based on an application of simultaneous conjoint measurement, a new approach to fundamental measurement which emphasizes the idea of additive structures. Consider a bet in which with probability p you win $X and with probability 1 – p no money changes hands. Suppose such a bet is worth just $Z to you. A matrix with different values of p on one axis and different values of $X on the other defines a family of such bets. If the utility of no money changing hands is taken to be 0, then the EU model can be written for any such bet in logarithmic form as follows:
log pi + log u($Xj) = log u($Zij).
That is, in logarithmic form this is an additive model. If all the values of $Zij for such a matrix of bets are known, then the rules for additive representations of two-dimensional matrices permit solution (by complex computer methods) of a system of inequalities that give close values of pi and u($Xj). Tversky did just this, both for gambles and for commodity bundles consisting of so many packs of cigarettes and so many candy bars.
The main finding of Tversky’s study was a consistent discrepancy between behavior under risky and riskless conditions. If probabilities are not forced to add to 1 in the data analysis, then the form that this discrepancy takes is that the probabilities of winning are consistently overestimated relative to the probabilities of losing. If probabilities are forced to add to 1, then the utilities measured under risky conditions are consistently higher than those measured under riskless conditions. The latter finding would normally be interpreted as reflecting the attractiveness or utility of gambling as an activity, independently of the attractiveness of the stakes and prizes. The discrepancy between Tversky’s finding and Lindman’s remains unexplained.
In all of these studies, the choices among bets made under well-defined experimental conditions turn out to be linked via some form or other of the SEU model to choices among other bets made by the same subjects under more or less the same conditions. That is, the SEU model permits observation of coherence among aspects of subjects’ gambling behavior. Of course it would be attractive to find that such coherence would hold over a larger range of risk-taking activity. A substantial disappointment of these studies is that the individual utility and probability functions vary so much from one person to another. It would be scientifically convenient if different people had the same tastes—but experience offers no reason for hoping that they will, and the data clearly say that they do not.
At any rate, these studies offer no support for those who reject a priori the idea that men make “rational” decisions. An a priori model of such decision making turns out to predict very well the behavior of a variety of subjects in a variety of experiments. Nor is this finding surprising. A very general and intuitively appealing model of almost any kind of human behavior is contained in the following dialogue:
Question: What is he doing?
Answer: He’s doing the best he can.
Identification rules for probability
The previous discussion has probably been somewhat confusing to many readers not familiar with what is now going on in statistical theory. It has consistently treated probability as a quantity to be inferred from the behavior of subjects, rather than calculated from such observations as the ratio of heads to total flips of a coin. It is intuitively reasonable to think that men make probability judgments just as they make value judgments and that these judgments can be discovered from men’s behavior. But how might such subjective probabilities relate to the more familiar quantities that we estimate by means of relative frequencies?
This question is a controversial one in contemporary statistics. Considered as a mathematical quantity, a probability is a value that obeys three quite simple rules: it remains between 0 and 1; 0 means impossibility and 1 means certainty; and the probability of an event’s taking place plus the probability of its not taking place add up to 1. These three properties are basic to all of the elaborate formal structure of probability theory considered as a topic in mathematics. Nor are they, or their consequences, at all controversial. What are controversial are the identification rules linking these abstract numbers with observations made or makable in the real world. The usual relative-frequency rules suffer from a number of intellectual difficulties. They require an act of faith that a sequence of relative frequencies will in fact approach a limit as the number of observations increases without limit. They are very vague and subjective while pretending to be otherwise; this fact is most conspicuous in the specification that relative frequencies are supposed to be observed under “substantially similar conditions,” which means that the conditions should be similar enough but not too similar. (A coin always tossed in exactly the same way would presumably fall with the same face up every time.) Perhaps most important, the frequentistic set of rules is just not applicable to many, perhaps most, of the questions about which men might be uncertain. What is the probability that your son will be a straight-A student in his senior year in high school? While an estimate might be made by counting the fraction of senior boys in the high school he is likely to attend who have straight-A records, a much better estimate would be based primarily on his own personal characteristics, past grade record, family background, and the like.
The Bayesian approach
Dissatisfaction with the frequentistic set of identification rules, for these and other more technical reasons, has caused a set of statisticians, probability theorists, and philosophers led by Leonard J. Savage, author of The Foundations of Statistics (1954), to adopt a different, personalistic, set of such rules. According to the personalistic set of identification rules, a probability is an opinion about how likely an event is. If it is an opinion, it must be someone’s opinion; I will remind you of this from time to time by reference to your probability for something, and by calling such probabilities personal. Not any old opinion can be a probability; probabilities are orderly, which mostly means that they add up to 1. This requirement of orderliness is extremely constraining, so much so that no real man is likely to be able to conform to it in his spontaneous opinions. Thus the “you” whose opinions I shall refer to is a slightly idealized you, the person you would presumably like to be rather than the person you are.
Those who use the personalistic identification rules for probabilities are usually called Bayesians, for the rather unsatisfactory reason that they make heavier use of a mathematical triviality named Bayes’ theorem than do nonpersonalists. Bayes’ theorem, an elementary consequence of the fact that probabilities add up to 1, is important to Bayesians because it is a formally optimal rule for revising opinions on the basis of evidence. Consider some hypothesis H. Your opinion that it is true at some given time is expressed by a number p(H), called the prior probability of H. Now you observe some datum D, with unconditional probability p(D) and with conditional probability p(D|H) of occurring if H is true. After that, your former opinion about H, p(H), is revised into a new opinion about H, p(H|D), called the posterior probability of H on the basis of D. Bayes’ theorem says these quantities are related by the equation

p(H|D) = p(D|H)p(H)/p(D).  (3)
An especially useful form of Bayes’ theorem is obtained by writing it for two different hypotheses, HA and HB, on the basis of the same datum D, and then dividing one equation by the other:

p(HA|D)/p(HB|D) = [p(D|HA)/p(D|HB)][p(HA)/p(HB)],
or, in simpler notation,

Ω1 = LΩ0.  (4)
In equation (4), Ω0, the ratio of p(HA) to p(HB), is called the prior odds, Ω1 is called the posterior odds, and L is the likelihood ratio. Equation (4) is perhaps the most widely useful form of Bayes’ theorem.
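Both forms of the theorem are one-line computations; a sketch in Python (the numerical values are hypothetical):

```python
def posterior(prior, likelihood, p_datum):
    """Equation (3): p(H|D) = p(D|H) * p(H) / p(D)."""
    return likelihood * prior / p_datum

def posterior_odds(prior_odds, likelihood_ratio):
    """Equation (4): posterior odds = likelihood ratio * prior odds."""
    return likelihood_ratio * prior_odds

# Two hypotheses with p(HA) = p(HB) = 0.5, and a datum D for which
# p(D|HA) = 0.8 and p(D|HB) = 0.2.
p_D = 0.5 * 0.8 + 0.5 * 0.2                # p(D), by total probability
print(posterior(0.5, 0.8, p_D))            # 0.8
print(posterior_odds(1.0, 0.8 / 0.2))      # 4.0
```

The two answers agree: a posterior probability of 0.8 for HA against 0.2 for HB is exactly posterior odds of 4:1.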
Conservatism in information processing
Bayes’ theorem is of importance to psychologists as well as to statisticians. Psychologists are very much interested in the revision of opinion in the light of information. If equation (3) or equation (4) is an optimal rule for how such revisions should be made, it is appropriate to compare its prescriptions with actual human behavior.
Consider a very simple experiment by Phillips and Edwards (1966). Subjects were presented with a bookbag full of poker chips. They were told that it had been chosen at random, with 0.5 probability, from two bookbags, one containing 700 red and 300 blue chips, while the other contained 700 blue and 300 red. The question of interest is which bag this one is. Subjects were to answer the question by estimating the probability that this was the predominantly red bookbag. On the basis of the information so far available, that probability is 0.5, as all subjects agreed.
Now, the experimenter samples randomly with replacement from the bookbag. At this point let me invite you to be a subject in this experiment. Suppose that in 12 samples, with replacement, the experimenter gets red, red, red, blue, red, blue, red, blue, red, red, blue, red—that is, 8 reds and 4 blues. Now, on the basis of all the evidence you have, what is the probability that this is the predominantly red bookbag? Write down an intuitive guess before starting to read the next paragraph.
Let us apply equation (4) to the problem. The prior odds are 1:1, so all we need is the likelihood ratio. To derive that is straightforward; for the general binomial case of r reds in n samples, where HA says that the probability of a red is pA (and of a blue is qA) and HB says the probability of a red is pB (and of a blue is qB),

L = [pA^r qA^(n−r)] / [pB^r qB^(n−r)].
In this particular case, made much simpler by the fact that pA = qB and vice versa, it is simply

L = (pA/qA)^(2r−n).
Of course 2r − n = r − (n − r) and is the difference between the number of reds and the number of blues in the sample—in this case, 4. So the likelihood ratio is (7/3)4 = 29.64. Since the prior odds are 1:1, the posterior odds are then 29.64:1. And so the posterior probability that this is the predominantly red bookbag is 0.97.
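The computation can be checked directly (a transcription of equation (4) for this bookbag case; the function name is mine):

```python
def bookbag_posterior(n_red, n_blue, p_red=0.7, prior_odds=1.0):
    """Posterior odds and probability that the bag is the predominantly red
    one, after sampling n_red reds and n_blue blues with replacement."""
    L = (p_red / (1.0 - p_red)) ** (n_red - n_blue)   # (7/3)^(2r - n)
    odds = L * prior_odds
    return odds, odds / (1.0 + odds)

odds, prob = bookbag_posterior(8, 4)
print(round(odds, 2), round(prob, 2))   # 29.64 0.97
```

Note how steeply the evidence accumulates: each net red chip multiplies the odds by 7/3, so a mere four-chip majority already yields near-certainty.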
If you are like Phillips and Edwards’ subjects, the number you wrote down wasn’t nearly so high as 0.97. It was probably about 0.70 or 0.80. Phillips and Edwards’ subjects, and indeed all subjects who have been studied in experiments of this general variety, are conservative information processors, unable to extract from data anything like as much certainty as the data justify. A variety of experiments has been devoted to this conservatism phenomenon. It is a function of response mode; Phillips and Edwards have shown that people are a bit less conservative when estimating odds than when estimating probability. It is a function of the diagnosticity of the information; Peterson, Schneider, and Miller (1965) have shown that the larger the number of poker chips presented at one time, the greater is the conservatism, and a number of studies by various investigators have shown that the more diagnostic each individual poker chip, the greater the conservatism.
Conservatism could be attributed to either or both of two possible failures in the course of human information processing. First, the subjects might be unable to perceive the data-generating process accurately; they might attribute to data less diagnostic value than the data in fact have. Or the subjects might be unable to combine information properly, as Bayes’ theorem prescribes, even though they may perceive the diagnostic value of any individual datum correctly. Data collected by Beach (1966) favor the former hypothesis; data collected by Edwards (1966) and by Phillips (1965) favor the latter. It seems clear by now that this formulation of the possible causes of conservatism is too simple to account for the known facts, but no one has yet proposed a better one.
Probabilistic information processing
If men are conservative information processors, then it seems reasonable to expect that this fact has practical consequences. One practical consequence is familiar to anyone who has ever grumbled over a hospital bill: human conservatism in processing available diagnostic information may lead to collection of too much information, where the collection process is costly. An even more serious consequence may arise in situations in which speed of a response is crucial: a conservative information processor may wait too long to respond because he is too uncertain what response is appropriate. This consequence of conservatism is especially important in the design of large military information-processing systems. In the North American Air Defense System, for example, the speed with which the system declares we are under attack, if we are, may make a difference of millions of lives.
Edwards (1962; 1965b) has proposed a design for diagnostic systems that overcomes the deficiency of human conservatism. A probabilistic information processing system (PIP) is designed in terms of Bayes’ theorem. For vague, verbal data and vague, verbal hypotheses, experts must estimate p(D|H) or L as appropriate (usually L rather than p(D|H) will be appropriate). They make these estimates separately for each datum and each hypothesis or pair of hypotheses of interest to the system. Then a computer uses equation (3) or equation (4) to synthesize these separate judgments into a posterior distribution that reflects how all the hypotheses stand in the light of all the data. This distribution is of course revised each time a new datum becomes available.
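The aggregation step a PIP computer performs can be sketched as follows (a minimal illustration assuming the data are conditionally independent given each hypothesis; that assumption and the function name are mine, not from the text):

```python
import math

def pip_posterior_odds(prior_odds, likelihood_ratios):
    """Combine experts' likelihood-ratio estimates, one per datum, by the
    odds form of Bayes' theorem (equation 4): in log form the judged
    log-likelihood ratios simply add to the log prior odds."""
    log_odds = math.log(prior_odds) + sum(math.log(L) for L in likelihood_ratios)
    return math.exp(log_odds)

# Three items of evidence, each judged by an expert to favor the
# hypothesis 3:1; the machine, not the man, compounds them.
print(round(pip_posterior_odds(1.0, [3.0, 3.0, 3.0]), 6))   # 27.0
```

The point of the design is visible here: a conservative human aggregator might report odds of 4:1 or 5:1 from the same three judgments, while the mechanical combination yields 27:1.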
A number of studies of PIP have been performed (see, e.g., Schum, Goldstein, & Southard 1966; Kaplan & Newman 1966; Edwards 1966). The studies have generally found it more efficient than competitive systems faced with the same information. In a large unpublished simulation study, Edwards and his associates found that data that would lead PIP to give 99:1 odds in favor of some hypothesis would lead its next-best competitor to give less than 5:1 odds in favor of that hypothesis. Even larger discrepancies in favor of PIP appear in Phillips’ study. Thus a combination of human judgment and Bayes’ theorem seems to be capable of doing a better job of information processing than either alone.
Are men rational?
All in all, the evidence favors rationality. Men seem able to maximize expected utility rather well, at least within the admittedly restricted range of laboratory tasks studied so far. There are, of course, a number of well-known counterexamples to the idea that men consistently do what is best for them. More detailed analysis of such experiments (e.g., probability learning experiments) indicates that substantial deviations from rationality seldom occur unless they cost little; when a lot is at stake and the task is not too complex to comprehend, men behave in such a way as to maximize expected utility.
The comparison of men with Bayes’ theorem is less favorable to men. The conservatism phenomenon is a large, consistent deviation from optimal information-processing performance. Nevertheless, newcomers to this area are often surprised that men can do as well as they do at probability or odds estimation.
The topic of the articulation between diagnosis and action selection will receive much more study in the next few years than it has so far. What little evidence is available suggests that men do remarkably well at doing what they should, given the information on hand. But the surface of this topic has scarcely been scratched.
Total rejection of the notion that men behave rationally is as inappropriate as total acceptance would be. On the whole, men do well; exactly how well they do depends in detail on the situation they are in, what’s at stake, how much information they have, and so on. The main thrust of psychological theory in this area is likely to be a detailed spelling out of just how nearly rational men can be expected to be under given circumstances.
[Directly related are the entries Decision Theory; Problem Solving; Reasoning and Logic. Other relevant material may be found in Bayesian Inference; Gambling; Game Theory; Information Theory; Models, Mathematical; Probability; Utilitarianism; Utility; and in the biographies of Bayes; Bentham; von Neumann.]
Edwards 1954c; 1961 provide two reviews of decision theory from a psychological point of view; these constitute a good starting point for more intensive study. For those who find these papers too difficult, Edwards, Lindman, & Phillips 1965 is an easier and more up-to-date but less thorough introduction to the topic. Luce & Raiffa 1957, though somewhat out-of-date, remains unique for its clear and coherent exposition of the mathematical content of decision theory. Luce & Suppes 1965 performs a similar job at chapter rather than book length and much more recently; its emphasis is on the probabilistic models of choice and decision. The easiest introduction to Bayesian statistics is Edwards, Lindman, & Savage 1963 or Schlaifer 1961. By far the most authoritative treatment of the topic is Raiffa & Schlaifer 1961, but Lindley 1965 and Good 1965 are also important. No review of the more recent psychological work on subjective probability structured around the Bayesian ideas exists yet, though one is in preparation. Beach 1966; Kaplan & Newman 1966; Phillips, Hays, & Edwards 1966; Schum, Goldstein, & Southard 1966; Slovic 1966; Peterson & Phillips 1966 provide a good sample of that work, comparing men with Bayes’ theorem as information processors. Wasserman & Silander 1958 is an annotated bibliography that emphasizes the sorts of interpersonal topics and social applications here ignored; it is a good guide to older parts of that literature. There is a much more up-to-date supplement, but it is not widely available. Rapoport & Orwant 1962 reviews the literature on experimental games. Kogan & Wallach 1964 is on personality variables in decision making.
Beach, Lee Roy 1966 Accuracy and Consistency in the Revision of Subjective Probabilities. IEEE Transactions on Human Factors in Electronics HFE-7:29–37.
Briggs, George E.; and Schum, David A. 1965 Automated Bayesian Hypothesis-selection in a Simulated Threat-diagnosis System. Pages 169–176 in Congress on the Information System Sciences, Second, Proceedings. Washington: Spartan.
Davidson, Donald; and Suppes, Patrick 1957 Decision Making: An Experimental Approach. In collaboration with Sidney Siegel. Stanford Univ. Press.
Edwards, Ward 1953 Probability-preferences in Gambling. American Journal of Psychology 66:349–364.
Edwards, Ward 1954a Probability-preferences Among Bets With Differing Expected Values. American Journal of Psychology 67:56–67.
Edwards, Ward 1954b The Reliability of Probability-preferences. American Journal of Psychology 67: 68–95.
Edwards, Ward 1954c The Theory of Decision Making. Psychological Bulletin 51:380–417.
Edwards, Ward 1961 Behavioral Decision Theory. Annual Review of Psychology 12:473–498.
Edwards, Ward 1962 Dynamic Decision Theory and Probabilistic Information Processing. Human Factors 4:59–73.
Edwards, Ward 1965a Optimal Strategies for Seeking Information: Models for Statistics, Choice Reaction Times, and Human Information Processing. Journal of Mathematical Psychology 2:312–329.
Edwards, Ward 1965b Probabilistic Information Processing Systems for Diagnosis and Action Selection. Pages 141–155 in Congress on the Information System Sciences, Second, Proceedings. Washington: Spartan.
Edwards, Ward 1966 Non-conservative Probabilistic Information Processing Systems Final Report. ESD Final Report No. 05893–22-F. Unpublished manuscript.
Edwards, Ward; Lindman, Harold; and Phillips, Lawrence D. 1965 Emerging Technologies for Making Decisions. Pages 261–325 in New Directions in Psychology, II. New York: Holt.
Edwards, Ward; Lindman, Harold; and Savage, Leonard J. 1963 Bayesian Statistical Inference for Psychological Research. Psychological Review 70: 193–242.
Edwards, Ward; and Phillips, Lawrence D. 1964 Man as Transducer for Probabilities in Bayesian Command and Control Systems. Pages 360–401 in Maynard W. Shelly and Glenn L. Bryan (editors), Human Judgments and Optimality. New York: Wiley.
Friedman, Milton; and Savage, L. J. 1948 The Utility Analysis of Choices Involving Risk. Journal of Political Economy 56:279–304.
Good, I. J. 1965 The Estimation of Probabilities: An Essay on Modern Bayesian Methods. Cambridge, Mass.: M.I.T. Press.
Kaplan, R. J.; and Newman, J. R. 1966 Studies in Probabilistic Information Processing. IEEE Transactions on Human Factors in Electronics HFE-7:49–63.
Kogan, Nathan; and Wallach, Michael A. 1964 Risk Taking: A Study in Cognition and Personality. New York: Holt.
Lindley, Dennis V. 1965 Introduction to Probability and Statistics From a Bayesian Viewpoint. 2 vols. Cambridge Univ. Press.
Lindman, Harold R. 1965 The Simultaneous Measurement of Utilities and Subjective Probabilities. Ph.D. dissertation, Univ. of Michigan.
Luce, R. Duncan; and Raiffa, Howard 1957 Games and Decisions: Introduction and Critical Survey. A study of the Behavioral Models Project, Bureau of Applied Social Research, Columbia University. New York: Wiley.
Luce, R. Duncan; and Suppes, Patrick 1965 Preference, Utility, and Subjective Probability. Volume 3, pages 249–410 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), Handbook of Mathematical Psychology. New York: Wiley.
Mosteller, Frederick; and Nogee, Philip 1951 An Experimental Measurement of Utility. Journal of Political Economy 59:371–404.
Peterson, Cameron R.; and Miller, Alan J. 1965 Sensitivity of Subjective Probability Revision. Journal of Experimental Psychology 70:117–121.
Peterson, Cameron R.; and Phillips, Lawrence D. 1966 Revision of Continuous Subjective Probability Distributions. IEEE Transactions on Human Factors in Electronics HFE-7:19–22.
Peterson, Cameron R.; Schneider, Robert J.; and Miller, Alan J. 1965 Sample Size and the Revision of Subjective Probabilities. Journal of Experimental Psychology 69:522–527.
Peterson, Cameron R. et al. 1965 Internal Consistency of Subjective Probabilities. Journal of Experimental Psychology 70:526–533.
Phillips, Lawrence D. 1965 Some Components of Probabilistic Inference. Ph.D. dissertation, Univ. of Michigan.
Phillips, Lawrence D.; and Edwards, Ward 1966 Conservatism in a Simple Probability Inference Task. Unpublished manuscript, Univ. of Michigan, Institute of Science and Technology.
Phillips, Lawrence D.; Hays, William L.; and Edwards, Ward 1966 Conservatism in Complex Probabilistic Inference. IEEE Transactions on Human Factors in Electronics HFE-7:7–18.
Raiffa, Howard; and Schlaifer, Robert 1961 Applied Statistical Decision Theory. Graduate School of Business Administration, Studies in Managerial Economics. Boston: Harvard Univ., Division of Research.
Ramsey, Frank P. (1926) 1964 Truth and Probability. Pages 61–92 in Henry E. Kyburg, Jr. and Howard E. Smokler (editors), Studies in Subjective Probability. New York: Wiley.
Rapoport, Anatol; and Orwant, Carol 1962 Experimental Games: A Review. Behavioral Science 7:1–37.
Savage, Leonard J. 1954 The Foundations of Statistics. New York: Wiley.
Schlaifer, Robert 1961 Introduction to Statistics for Business Decisions. New York: McGraw-Hill.
Schum, D. A.; Goldstein, I. L.; and Southard, J. T. 1966 Research on a Simulated Bayesian Information-processing System. IEEE Transactions on Human Factors in Electronics HFE-7:37–48.
Slovic, Paul 1966 Value as a Determiner of Subjective Probability. IEEE Transactions on Human Factors in Electronics HFE-7:22–28.
Tversky, Amos 1964 Additive Choice Structures. Ph.D. dissertation, Univ. of Michigan.
von Neumann, John; and Morgenstern, Oskar (1944) 1964 Theory of Games and Economic Behavior. 3d ed. New York: Wiley.
Wasserman, Paul S.; and Silander, Fred S. 1958 Decision Making: An Annotated Bibliography. Ithaca, N.Y.: Cornell Univ., Graduate School of Business and Public Administration.
The distinction between prescriptive and descriptive theories of decision is similar to that between logic—a system of formally consistent rules of thought—and the psychology of thinking. A logical rule prescribes, for example, that if you believe all X to be Y, you should also believe that all non-Y are non-X, but not that all Y are X; and if you believe, in addition, that all Y are Z, you should believe all X to be Z (a logical rule known as transitivity of inclusion). Here, X, Y, and Z are objects or propositions. Such prescriptive rules do not state that all people of a given culture, social position, age, and so forth, always comply with them. If, for example, the links in a chain of reasoning are numerous, the rule of transitivity will probably be broken, at least by children or by unschooled or impatient people.
Descriptions, and consequently predictions, of “illogical” behavior, as indeed of all human behavior, are presumably a task for psychologists or anthropologists. Such predictions are obviously important to practicing lawyers, politicians, salesmen, organizers, teachers, and others who work with people, just as predictions of the behavior of metals and animals are important to engineers and dairy farmers. These practitioners are well advised to know the frequencies of the various types of illogical behavior of their clients, adversaries, or students. But they are also well advised to avoid logical errors in their own behavior, as when they apply their knowledge of men (and of all nature, for that matter) to win lawsuits or elections or to be successful in organizing or teaching. For example, their propositions X, Y, and Z may be about another’s madness, yet should obey the rule of transitivity of inclusion. Moreover, if you want to train future lawyers, statesmen, or businessmen, you are well advised to learn, from your own or other people’s experiences, the techniques needed to make your pupil not only knowledgeable about other men’s logical frailties but also strong in his own thinking and able to solve the brain twisters his future will offer. These pedagogical techniques are, of course, objects of descriptive study.
To illustrate the prescriptive-descriptive distinction in the domain of decisions, let us consider one of the rules proposed by prescriptive decision theory. We choose the rule called transitivity of preferences (other proposed rules will be discussed below) because of its special similarity with transitivity of inclusion. If you prefer a to b and b to c, you should prefer a to c. Here, a, b, and c are, in general, actions with uncertain outcomes. In the special case of “sure actions,” however, the outcome, e.g., gaining or losing an object, is unique, and preferring action a is, in effect, the same as preferring the outcome of action a. If an individual disobeys the transitivity rule and prefers a to b, b to c, and c to a, we would say that he does not know what he wants. Surely we would advise or train a practitioner to develop the ability for concentrated deliberation, weighing advantages and disadvantages in some appropriate way. This should result in a consistent ranking of the alternative actions from which the decision maker may be forced to choose, from the most to the least preferable, with ties (i.e., indifference between some alternatives) not excluded.
The statement that “given a set of feasible alternatives, the ’reasonable’ (rational) man should choose the best” and the parallel statement that “an actual man does choose the best (and if the best is not feasible, the second best, etc.)” are both empty if the ranking order is not supposed to be stable over a period long enough to make the statements of practical relevance. If the ranking order is stable, economics as the study of the best allocation of available resources becomes possible.
The boundary between descriptive and prescriptive economics did not worry early writers. Gresham’s law, Ricardo’s explanations of rent and of the high wartime price of bullion, and Böhm-Bawerk’s theory of interest are deductions from the assumption of consistent ranking of alternatives by reasonable men. But those authors also practiced induction and looked for historical facts that confirmed their deductions. The two approaches are not inconsistent if one assumes that in the cultures considered, housewives and adult, gainfully occupied men have by and large been “reasonable,” at least when deciding not on wars and divorces but on matters of major interest to the economists of the time—for example, quantities such as prices, demands, and outputs, as well as less quantifiable choices such as location and internal organization (division of labor) of a plant. These matters were studied under the assumption of a tolerably well-known future.
The descriptive and prescriptive approaches became more clearly distinguished as more attention was directed to economic decisions under uncertainty (Fisher 1906; Hicks 1939; Hart 1940) and, stimulated by the Theory of Games and Economic Behavior (von Neumann & Morgenstern 1944), to nonquantifiable decisions traditionally assigned to political and military science or to sociology. In search of tools for their descriptive work, economists were also influenced by modern statistical decision theory. [See Decision Theory.] It then appeared that the reasonable man was not sufficiently approached even by the ideal type of an industrial entrepreneur, who evidently must be aided by a sophisticated hypothesis tester, a statistician, or, more recently and generally, an expert in operations research. [See Operations Research.] As such experts, armed with computers, do indeed progressively penetrate the economic world, predictive and descriptive economics (e.g., the prediction of aggregate inventories) may again become closer to what can be deductively derived from rational decision rules.
To return to the analogy between theories of thinking and of decision, prescriptive theories of thinking and decision are concerned with formal consistency between sentences or decisions, not with their content. If one believes that Los Angeles is on the Moon and that the Moon is in Africa, it is logical for him to believe that Los Angeles is in Africa, although none of the three statements is true. Similarly, if a man prefers killing to stealing and stealing to drinking strong liquor, he should prefer killing to drinking, although none of these actions is laudable or hygienic. Hicks (1939) used Milton’s “reason is also choice” as a motto to the theory of consumption. Ramsey (1923–1928) called his theory of decision an extension of logic, an attitude adopted more recently by other logicians, for example, Carnap (1962), von Wright (1963), and Jeffrey (1965). The mutual penetration of logic and decision theory will appear even more firmly grounded, and more urgent, when we come to discuss the cost of thinking and deciding (section 8).
In sections 1, 3, 4, and 5 we shall discuss some prescriptive decision rules and outline some simple experiments whose outcomes determine whether the subject did or did not comply with these rules. Whenever such experiments or similar ones have actually been performed, we shall refer to available descriptive evidence, namely, the frequency of subjects’ obeying these rules or obeying some other stated behavior patterns, such as the probabilistic decision models of section 2. In section 6 the expected utility theorem implied by the prescriptive rules is discussed. Sections 7 and 8 point to unsolved problems on the frontier of decision theory.
1. Complete ordering of actions
Before we consider the notion of a complete ordering of alternative actions or decisions (we need not distinguish between the two here), it is convenient to restate the transitivity of preferences by defining the relation “not preferred to” or “not better than,” written symbolically as ≤. If a ≤ b and b ≤ c, then a ≤ c. The relation is also reflexive, i.e., a ≤ a. A relation that is transitive and reflexive is called an ordering relation. When a ≤ b and b ≤ a, we write a ~ b and say that the decision maker is indifferent between a and b, even though a and b are not identical. For example, an investor may be indifferent between two different portfolios of securities. Thus “≤” (unlike the relation “not larger than” applied to numbers) is a weak ordering relation. Further, when a ≤ b and not b ≤ a, we write a < b and say that b is preferred to, or is better than, a.
Shall we say that a reasonable man always either prefers b to a or a to b or is indifferent? Or can he, in addition, refuse to compare them? If he can, the ordering of the actions by the relation “not better than” is only a partial one. This position has been taken, for example, by Aumann (1962), Chipman (in Interdisciplinary Research Conference 1960), and Wolfowitz (1962). But can one escape choosing? As in Pascal’s (1670) famous immortality bet, “Il faut parier!” The avoiding of choice is itself a decision. Like other decisions, it is made more or less desirable by the prizes and penalties it entails. If this is granted, the transitive relation “≤” is also connective and induces a complete ordering. Then any subset of actions can be arranged according to their desirabilities (with ties not excluded). Rank numbers, also called ordinal utilities (priorities), can then be assigned to the alternative actions provided that certain weak conditions are satisfied (Debreu 1959, section 4, note 2, and section 4.6).
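The three properties just named (reflexivity, connectivity, and transitivity) can be checked mechanically for any finite set of alternatives. A sketch, using a hypothetical relation induced by ordinal utilities (the utility numbers are illustrative, not drawn from any experiment discussed here):

```python
from itertools import product

def is_complete_ordering(items, not_better):
    """Check that the relation 'not better than' (a <= b) is reflexive,
    connective, and transitive over items; not_better(a, b) returns True
    when a is not preferred to b."""
    for a in items:
        if not not_better(a, a):                       # reflexivity
            return False
    for a, b in product(items, repeat=2):              # connectivity
        if not (not_better(a, b) or not_better(b, a)):
            return False
    for a, b, c in product(items, repeat=3):           # transitivity
        if not_better(a, b) and not_better(b, c) and not not_better(a, c):
            return False
    return True

# Hypothetical ordinal utilities inducing a complete ordering (with a tie):
utility = {"a": 1, "b": 2, "c": 2, "d": 3}
print(is_complete_ordering(list(utility),
                           lambda x, y: utility[x] <= utility[y]))  # True
```

A cyclic relation (a preferred to b, b to c, c to a) would fail the transitivity loop, exhibiting the "does not know what he wants" case described above.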
Tests of transitivity of preferences
As a matter of descriptive theory we ask whether people (of a given culture, etc.) do act as if there exist complete orderings of their actions by preferences. In actual experiments the penalty for refusing to choose was assumed to be strong enough to force subjects to make genuine choices in the form of verbal statements of preferences or, in some cases, actually to stake money on one wager rather than another. With connectivity of the preference relation thus assured and its reflexivity assumed by definition, the transitivity of the relation remained to be tested. The alternative actions were represented by multidimensional objects—a bundle of tickets to various shows (Papandreou 1957), marriage partners classified by three three-valued characteristics (May 1954; Davis 1958), a monetary wager (Edwards 1953; Davidson & Marschak 1959), a price policy affecting both the firm’s rate of return and its share of the market, an air trip varying in both cost and duration (MacCrimmon 1965), and so forth. The subject responded by a triad of binary choices—a or b? b or c? c or a?—but in most experiments these choices were separated in time by choices from other triads, that is, “a or b?” was followed by “a’ or b’?” rather than by “b or c?” Moreover, neither of the two actions “dominated” the other (see section 3), and no subject could use paper and pencil.
In all experiments, the transitivity rule was violated with varying frequencies. The proportion of the number of violations by all subjects to the maximum theoretical number of intransitive triads that were possible in the given experimental designs ranged from .04 in MacCrimmon’s tests of business executives to .27 in May’s tests of students. Since the experiments record actual choices and not preferences, intransitivity might be exhibited by a person who, although consistent, is strictly indifferent among three alternatives. (To approach, more generally, the case of “near indifference,” the hypothesis tested must be rephrased in probabilistic terms. This will be discussed in section 2.) Thus the outcome of an experiment depends on the nature of the alternatives offered, making the experiments mentioned not quite comparable.
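The quantity reported in these studies, the proportion of intransitive triads among the recorded binary choices, can be computed as follows. The three recorded choices shown are hypothetical, not data from the experiments cited:

```python
from itertools import combinations

def intransitive_triads(choices, items):
    """Count triads whose three recorded binary choices form a cycle;
    choices[(x, y)] is True when x was chosen over y."""
    def beats(x, y):
        return choices[(x, y)] if (x, y) in choices else not choices[(y, x)]
    count = 0
    for a, b, c in combinations(items, 3):
        # A triad is intransitive when each item beats the next around a
        # cycle, in either direction.
        if (beats(a, b) and beats(b, c) and beats(c, a)) or \
           (beats(b, a) and beats(c, b) and beats(a, c)):
            count += 1
    return count

# Hypothetical record: the cycle "a over b, b over c, c over a":
choices = {("a", "b"): True, ("b", "c"): True, ("c", "a"): True}
print(intransitive_triads(choices, ["a", "b", "c"]))  # 1
```

Dividing such a count by the maximum theoretical number of intransitive triads in a design yields proportions like the .04 and .27 reported above.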
It would also be of great interest to study the effects of the passage of time and the effects of learning, especially if different methods of training are applied, such as “post-mortem” discussion (a domain opened by MacCrimmon), sequential modifications of the list of choices, etc. In fact, what would be the effect of supplying the subjects with paper and pencil and training them to use these prerequisites of our culture to tabulate the decision problem in an “outcome matrix” as in Table 1 below?
It has been suggested (Quandt 1956; Simon 1957) that intransitivities of decisions occur when the subject is unable to pay simultaneous attention to the several dimensions of each object of choice. This is somewhat analogous to the psychophysical finding that discrimination is greater between sounds differing in pitch but of equal loudness than between sounds differing in both (Pollack 1955). This suggests that the probabilistic approach common in psychophysics might be tried in a description of decision behavior. As to prescriptive theory, one may recall Benjamin Franklin’s suggestion to score the pros and cons of each decision or the practice of adding scores when grading students or judging young ladies in beauty contests. Again, these or any other ways of overcoming the multi-dimensionality of decisions through deliberate and simultaneous marshaling of all the dimensions will in general require paper and pencil.
2. Probabilistic ordering of actions
Since the prescriptive rule of transitivity is often violated, it is necessary to recast descriptive decision theory in different terms. The hypothesis that transitivity of decisions may be achieved by learning or training has already been mentioned. It is also traditional in experimental psychology, especially since Fechner’s (1860) studies of perception, to describe behavior by probability distributions. Economists, by contrast, have succeeded in the past in making useful predictions of the aggregate behavior of large numbers of individuals based on the assumption that individuals are, by and large, “reasonable.” The probabilistic approach is rather new in economics, perhaps as young as the econometricians’ attempts to test aggregate economic models statistically and to relate these models to observable behavior of sampled individual households and firms.
Luce and Suppes (1965) have comprehensively surveyed the literature and the experimental evidence relevant to probabilistic orderings. One model that implies all the others suggested so far is that presented by Luce (1959). Fixed positive numbers va, vb, vc, …, called strict utilities, are attached to the alternative actions a, b, c, … . The set of actions includes all alternatives that may ever be offered to a given subject. The strict utilities are assumed to have the following property: if the subject must choose from a particular subset of alternatives, he will choose each with a probability proportional to its strict utility. For example, suppose va : vb : vc : vd = 1 : 2 : 3 : 4. Then, if asked to choose from the subset (a, b, c), the subject will choose a, b, or c with respective probabilities 1/6, 2/6, 3/6; if he must choose from the pair (a, b), he will choose a or b with respective probabilities 1/3, 2/3. The hypothesis can be tested at least “in principle” (i.e., assuming that representative samples from the universe of all choice situations can be obtained). One would observe the relative frequencies of choices from various subsets and infer, for example, that the ratio of the probability of choosing a to the probability of choosing b from the pair (a, b) does or does not change if a third alternative is also offered.
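Luce's model, including the 1 : 2 : 3 : 4 example above and its testable constant-ratio implication, can be sketched as:

```python
def luce_choice_probabilities(strict_utilities, offered):
    """Under Luce's strict utility model, the probability of choosing each
    alternative from an offered subset is proportional to its strict
    utility within that subset."""
    total = sum(strict_utilities[x] for x in offered)
    return {x: strict_utilities[x] / total for x in offered}

v = {"a": 1, "b": 2, "c": 3, "d": 4}   # the ratios 1:2:3:4 from the text

print(luce_choice_probabilities(v, ["a", "b", "c"]))  # a: 1/6, b: 2/6, c: 3/6
print(luce_choice_probabilities(v, ["a", "b"]))       # a: 1/3, b: 2/3

# Testable implication: the ratio of the choice probabilities of a and b
# is the same whether or not c is also offered.
p3 = luce_choice_probabilities(v, ["a", "b", "c"])
p2 = luce_choice_probabilities(v, ["a", "b"])
print(abs(p3["a"] / p3["b"] - p2["a"] / p2["b"]) < 1e-12)  # True
```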
In another model, constants ua, ub, uc, …, called strong utilities, are associated with the alternatives a, b, c, … . The strong utilities are assumed to have the following property: in the binary choice between a and b, the probability p(a, b) of choosing a is an increasing function of the difference ua − ub (except when p(a, b) is equal to 1 or 0). It follows that p(a, b) is greater than, equal to, or less than 1/2 according as ua is greater than, equal to, or less than ub. Note that the case p(a, b) = 1/2 provides an operational definition of “indifference” based on actual choices rather than on the verbal statement “I am indifferent.” This model was used, in effect, in Thurstone’s discussion of public opinion and of soldiers’ food preferences (1927–1955). It is analogous to Fechner’s perception model, which associates a physical stimulus a with a subjective “sensation” ua; the subject perceives a to be heavier (or louder or brighter) than b with a probability p(a, b) that increases with the difference ua − ub (provided p(a, b) is neither 0 nor 1).
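One concrete way to realize the strong utility model, chosen here purely for illustration (the model itself requires only that p(a, b) increase with ua − ub), is a logistic function of the utility difference:

```python
import math

def choice_prob(u_a, u_b, scale=1.0):
    """A logistic instance of the strong utility model (an illustrative
    assumption, not prescribed by the model): p(a, b) is an increasing
    function of the difference u_a - u_b."""
    return 1.0 / (1.0 + math.exp(-(u_a - u_b) / scale))

print(choice_prob(2.0, 1.0))  # > 1/2: a is chosen over b more often than not
print(choice_prob(1.0, 1.0))  # exactly 1/2: operational indifference
```

Any other strictly increasing function of the difference would serve equally well as an instance of the model.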
It can be shown that if strict utilities exist, then strong utilities also exist (putting ua = log va, etc.), but not the converse. The strong utility model implies, in turn, the still less restrictive weak utility model, which assigns ordinal numbers wa, wb, wc, … to the alternatives a, b, c, …, such that wa > wb if and only if p(a, b) > 1/2. Clearly, if the strong utilities exist, so do the weak utilities (putting wa = ua, etc.), but not the converse, since differences between ordinal numbers are undefined. Both the weak utility model and the strong utility model can be tested in principle for possible rejection, for they have implications that involve choice probabilities only, avoiding the intervening utility concept. In particular, the models imply, respectively, the weak stochastic transitivity: if p(a, b) ≥ 1/2 and p(b, c) ≥ 1/2, then p(a, c) ≥ 1/2;
and the strong stochastic transitivity: if p(a, b) ≥ 1/2 and p(b, c) ≥ 1/2, then p(a, c) ≥ max[p(a, b), p(b, c)].
Of these two forms of stochastic transitivity, the strong implies the weak one; it is, in turn, implied by the exact transitivity rule treated in section 1 if the choice probabilities are assumed to take on only three values: 0, 1/2, and 1, corresponding to the preference relations a < b, a ~ b, and b < a.
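Both forms of stochastic transitivity are directly testable once the binary choice probabilities have been estimated. A sketch, with hypothetical probabilities chosen to satisfy the weak form but violate the strong form:

```python
from itertools import permutations

def weak_stochastic_transitivity(p, items):
    """p[(a, b)] is the probability of choosing a over b. Checks:
    p(a,b) >= 1/2 and p(b,c) >= 1/2 imply p(a,c) >= 1/2."""
    for a, b, c in permutations(items, 3):
        if p[(a, b)] >= 0.5 and p[(b, c)] >= 0.5 and p[(a, c)] < 0.5:
            return False
    return True

def strong_stochastic_transitivity(p, items):
    """Checks: p(a,b) >= 1/2 and p(b,c) >= 1/2 imply
    p(a,c) >= max(p(a,b), p(b,c))."""
    for a, b, c in permutations(items, 3):
        if p[(a, b)] >= 0.5 and p[(b, c)] >= 0.5 and \
           p[(a, c)] < max(p[(a, b)], p[(b, c)]):
            return False
    return True

# Hypothetical choice probabilities: weakly but not strongly transitive.
p = {("a", "b"): .6, ("b", "a"): .4, ("b", "c"): .7, ("c", "b"): .3,
     ("a", "c"): .55, ("c", "a"): .45}
print(weak_stochastic_transitivity(p, ["a", "b", "c"]))    # True
print(strong_stochastic_transitivity(p, ["a", "b", "c"]))  # False
```

The example shows that the weak form is genuinely less restrictive: p(a, c) = .55 exceeds 1/2 but falls short of max[p(a, b), p(b, c)] = .7.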
Another probabilistic model also implied by, and less restrictive than, Luce’s strict utility model is the random utility model. It does not imply, nor is it implied by, the strong or the weak utility model. It asserts, for a given subject, the existence of a fixed probability distribution on the set of all preference orderings of all alternatives, so that for a set of N alternatives, a probability is attached to each of their N! permutations. In the nonprobabilistic economics of consumer’s choice the (ordinal) utility function that attaches a constant rank to each commodity bundle is a fixed one. In the random utility model this function is, instead, visualized as drawn at random from a set of such functions, according to a probability distribution that is fixed for the given subject. Thus the word “utility” designates here a random number, not a constant as in the other probabilistic models described.
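Under the random utility model, binary choice probabilities follow from the probabilities attached to the preference orderings. A sketch with a hypothetical uniform distribution over the 3! orderings of three alternatives:

```python
from itertools import permutations
from fractions import Fraction

def binary_choice_prob(dist, a, b):
    """dist maps each preference ordering (a tuple, best first) to its
    probability; the probability of choosing a over b is the total
    probability of the orderings that rank a above b."""
    return sum(p for order, p in dist.items()
               if order.index(a) < order.index(b))

# Hypothetical uniform distribution over the 3! = 6 orderings:
dist = {order: Fraction(1, 6) for order in permutations(["a", "b", "c"])}
print(binary_choice_prob(dist, "a", "b"))  # 1/2 under the uniform distribution
```

Skewing the distribution toward orderings that rank a first would raise p(a, b) and p(a, c) accordingly.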
What relevance do these (and related) models have for a prescriptive theory of decision? Just as not all men are always consistent, not all men are always in good health. Doctors and nurses are busy measuring the temperatures and blood pressures of the sick ones. The probability distributions supposed to characterize a “stochastic decision maker” may vary in the degree of their closeness to the ideal limit—consistency. For example, consistency is approached in the strong utility model as the values of p(a, b) concentrate near 1, 0, or 1/2. We can thus trace the path of progress of learning or training for consistency.
3. Inadmissibility of dominated actions
An action’s result (outcome) depends, in general, not only on the action but also on the “state of the world (nature or environment),” which is not in the decision maker’s control. For our purposes an action (called an act by Savage) can be defined by the results it yields at various states of the world. Thus an action is a function from the set of states to the set of results. (This function has constant value if the action is a “sure action.”) In a decision situation, only some such functions are available; their “feasible set” (the “offered set of alternatives” of section 2) depends, for example, on the decision maker’s resources, technology, and market as he views them.
An event is a subset of the states of nature. In particular, the subset consisting of all states that, for any given feasible action, result in the same outcome is called an outcome-relevant event. In a decision situation the set of states is partitioned into such events, with all irrelevant details omitted.
Even if the well-disciplined decision maker can consistently rank multidimensional results according to his preferences, action under uncertainty remains multidimensional because of the multiplicity of events. Actions are bets, or wagers. Yet it seems reasonable to prescribe (as in section 1) that bets be ordered; indeed, some authors prescribe that bets be completely ordered (Ramsey 1923–1928; Savage 1954; de Finetti 1937). It will be shown in section 6 that a complete ordering corresponds to that of numerical “expected utilities” of actions provided that the decision maker is consistent in the sense of obeying both the traditional rules of logic and certain postulates that are plausible enough to qualify as extensions of logic. These postulates will be discussed in this and subsequent sections, essentially following Savage.
Action a is said to dominate action b if the results of a are sometimes (i.e., when some events take place) better than the results of b and are never worse. Is it not reasonable that you should prefer the dominating action? Any action that is dominated by some feasible action is thus inadmissible. Consider, for example, the actions listed in Table 1. If action a is taken and the state of the world is Z1, a gain of $1,000 will be realized; if action a is taken and the state of the world is Z2, a gain of $1,500 will be realized; and so on. Such a table is called an outcome matrix.
The inadmissibility postulate would order these actions as follows: a’ < a, b < a. It does not, by itself, induce complete ordering of the actions. Thus, it is silent about the preference relation between actions a’ and b in Table 1. The “expected utility” rule of section 6 will determine this relation roughly in the following manner: depending on the “probabilities” of events and on the “utilities” of results, the $500 advantage of a’ over b if Z2 happens may or may not outweigh its $1 disadvantage if Z1 happens. This appeals to common sense but remains vague until, with help of other postulates, we define the concepts of “probability” and “utility.”
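Dominance and admissibility can be checked mechanically from an outcome matrix. The dollar amounts below are chosen to be consistent with the relations stated in the text for Table 1 (a yields $1,000 under Z1 and $1,500 under Z2, a dominates both a' and b, and a' has a $1 disadvantage and a $500 advantage relative to b); they are illustrative rather than a reproduction of the table itself:

```python
def dominates(a, b):
    """Action a dominates b when, event by event, a's result is never
    worse and is sometimes better (results are numbers, larger = better)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def admissible(actions):
    """Return the actions not dominated by any other feasible action."""
    return {name: row for name, row in actions.items()
            if not any(dominates(other, row)
                       for oname, other in actions.items() if oname != name)}

# Hypothetical outcome matrix (rows: actions; columns: events Z1, Z2):
actions = {"a": (1000, 1500), "a_prime": (999, 1500), "b": (1000, 1000)}
print(dominates(actions["a"], actions["a_prime"]))  # True
print(sorted(admissible(actions)))                  # ['a']
```

As in the text, dominance alone is silent on a' versus b: a' loses $1 under Z1 but gains $500 under Z2, so neither dominates the other.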
Does the inadmissibility rule describe people’s actual behavior? It is difficult to make an individual violate the rule when the decision situation is presented with clarity, for example, by an outcome matrix whose entries have an obvious ordering (sums of money, as in Table 1, or good health, light illness, and death). But consider the following experiment. A subject is given an envelope containing v dollars; he is permitted to convince himself that the envelope does contain v dollars. He writes, but does not tell the experimenter, his asking price, a, for the envelope and its contents. He will then receive a bid of x dollars. If x exceeds a, he will receive x dollars; otherwise, he will keep the v dollars. There will be no further negotiations. Subjects often ask more than the true value, setting a = a1 > v. In Figure 1, the stipulated result r of such action is plotted against possible levels of the bid x (state of nature). Comparison with the corresponding plot for the “honest” asking price, a = a0 = v, shows that a1 is dominated by a0. In fact, an asking price a2 < v (not shown in Figure 1) is also dominated by a0. Hence the honest asking price is the only admissible one. It seems that some subjects expect further negotiations despite the explicit warning. We surmise that they would waive uncalled-for associations and habits if trained to plot or tabulate payoffs.
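The plotting exercise the text recommends is easily mechanized. In the Python sketch below (with a hypothetical true value of $10 and arbitrary trial asking prices), tabulating the payoff of each asking price against every possible bid shows that the honest price a = v dominates both an inflated and a deflated one.

```python
def payoff(ask, v, bid):
    """Envelope game of section 3: if the bid exceeds the asking price,
    the subject receives the bid; otherwise he keeps the v dollars."""
    return bid if bid > ask else v

v = 10.0
bids = [b / 2 for b in range(0, 41)]  # possible bids from $0 to $20

honest = [payoff(v, v, b) for b in bids]      # a0 = v
inflated = [payoff(15.0, v, b) for b in bids]  # a1 > v (hypothetical)
deflated = [payoff(6.0, v, b) for b in bids]   # a2 < v (hypothetical)

# The honest asking price dominates: never worse, sometimes strictly better.
for other in (inflated, deflated):
    assert all(h >= o for h, o in zip(honest, other))
    assert any(h > o for h, o in zip(honest, other))
```

The specific dollar amounts are illustrative only; the dominance holds for any a1 > v and a2 < v.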
4. Irrelevance of nonaffected outcomes
Table 2 presents a matrix of outcomes measured in per cent of return on a firm’s investment. We may interpret a, b, a’, and b’ as the firm’s investing in the development of alternative products. The outcomes of these actions depend on the mutually exclusive events (say, business conditions) Z1, Z2, and Z3. Suppose the firm prefers a to b. This preference cannot be due to any difference in the outcomes if the event Z1 happens, for these outcomes are identically 5. Therefore, the firm’s preference of a to b must be due to its preferring the wager “-200 if Z2, 100 if Z3” to the return of 5 with certainty. But then the firm should also prefer a’ to b’, for, again, if Z1 happens, the outcome (-200) is not affected by the firm’s choice. Its preference as between a’ and b’ must depend on its preference as between the wager “-200 if Z2, 100 if Z3” and the certainty of 5, just as in the previous case. By the same reasoning, if the firm is indifferent between a and b, it should be indifferent between a’ and b’. The rule enunciated here can be regarded as a generalization of the admissibility rule of the previous section, with wagers admitted as a form of outcomes so that, for example, action a is described
as follows: if Z1 happens, you get 5; otherwise you get a lottery ticket, losing 200 if Z2 happens and gaining 100 if Z3 happens.
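The argument can be restated as a check on the outcome matrix. In the Python sketch below, the entries of Table 2 are reconstructed from the verbal description in the text; the assertions spell out why a preference between a and b must carry over to a' and b'.

```python
# Outcome matrix (per cent return on investment) consistent with the
# text's description of Table 2; the columns are the events Z1, Z2, Z3.
outcomes = {                    #   Z1    Z2   Z3
    "a":       (5,    -200, 100),
    "b":       (5,       5,   5),
    "a_prime": (-200, -200, 100),
    "b_prime": (-200,    5,   5),
}

# Within each pair the Z1 outcome is unaffected by the choice...
assert outcomes["a"][0] == outcomes["b"][0]
assert outcomes["a_prime"][0] == outcomes["b_prime"][0]
# ...and the outcomes that ARE affected coincide across the two pairs,
# so by the irrelevance rule a is preferred to b if and only if
# a' is preferred to b'.
assert outcomes["a"][1:] == outcomes["a_prime"][1:]
assert outcomes["b"][1:] == outcomes["b_prime"][1:]
```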
Tests of the irrelevance rule
Most business executives violated the irrelevance rule when MacCrimmon (1965) verbally gave them the choices in Table 2—first a versus b, then a’ versus b’. However, a large majority of my students, who had also never heard of the irrelevance principle but knew how to draw up a matrix of outcomes, complied with the rule. Allais (1953) performed a similar experiment, but instead of describing the three events as alternative business conditions, he gave them numerical probabilities (an extraneous notion in the present context), the probability of one of the events being only .01. Most of his respondents, including some decision theorists, violated the principle (but see Savage’s pungent introspective discussion of his own second thoughts [1954, sec. 5.6g]).
The following type of experiment has been discussed widely. Funnel 1 contains an equal number of red and black balls, and the subject is invited to convince himself of this; but he is not permitted to look into funnel 2, which he only knows to contain one or more balls whose colors are either black or red. When a handle is pulled, each funnel will release only the bottom ball. In the following bets, he will win either $100 or nothing:
Bet 1R: bottom ball in funnel 1 is red.
Bet 1B: bottom ball in funnel 1 is black.
Bet 2R: bottom ball in funnel 2 is red.
Bet 2B: bottom ball in funnel 2 is black.
The outcomes are as indicated in Table 3, where, for example, event RB is “bottom ball in funnel 1 is red, bottom ball in funnel 2 is black.” Suppose the subject always prefers to use funnel 1 rather than funnel 2, that is, he prefers bet 1R to 2R and also bet 1B to 2B. He will thus treat the results unaffected by his choice—namely, those earned when events RR and BB occur—as relevant to his choice! Yet similar experiments performed by Chipman (see Interdisciplinary Research Conference 1960) and Ellsberg (1961) suggest that many people do indeed prefer funnel 1 to funnel 2. Some of my
student subjects who did so motivated their choice by stating the sharp distinction between “risk” (funnel 1) and “uncertainty” (funnel 2), a distinction taught to economists since Knight (1921). Others stated that they could base their bets on more information when using funnel 1 than when using funnel 2. But information, although never harmful, can be useless. In experiments of Raiffa (1961) and Fellner (1965), subjects were, in effect, willing to pay for such useless information.
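The structure of the funnel experiment can be tabulated directly. The sketch below (Python; the outcome pattern is inferred from the definitions of the four bets) shows that bets 1R and 2R, and likewise 1B and 2B, differ only on the events RB and BR, and in opposite directions, so a subject who prefers funnel 1 in both cases is letting the unaffected events RR and BB influence his choice.

```python
# Joint events (first letter: bottom ball of funnel 1; second: funnel 2)
# and the events on which each bet of Table 3 wins $100.
events = ["RR", "RB", "BR", "BB"]
wins = {"1R": {"RR", "RB"}, "1B": {"BR", "BB"},
        "2R": {"RR", "BR"}, "2B": {"RB", "BB"}}

def differs(bet_x, bet_y):
    """Events on which the two bets give different results."""
    return {e for e in events if (e in wins[bet_x]) != (e in wins[bet_y])}

# Both comparisons turn on exactly the same pair of events...
assert differs("1R", "2R") == differs("1B", "2B") == {"RB", "BR"}
# ...and in opposite directions: 1R wins on RB where 2R loses, while
# 1B wins on BR where 2B loses.  Preferring funnel 1 in BOTH bets thus
# cannot rest on the affected events alone.
assert "RB" in wins["1R"] and "RB" not in wins["2R"]
assert "BR" in wins["1B"] and "BR" not in wins["2B"]
```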
5. Definitions of probabilities
Having defined events Z1, Z2, Z3, … as subsets of the set X of all states of nature, it is consistent with current mathematical language and current English to require that any numbers, P(Z1), P(Z2), P(Z3), …, claimed to be the probabilities of these events, satisfy the following conditions: (1) they should be nonnegative; (2) for any two mutually exclusive events, say Z1 and Z2, the number assigned to the event “Z1 or Z2” (meaning the occurrence of either Z1 or Z2) should equal the sum of P(Z1) and P(Z2); (3) P(X), i.e., P(Z1 or Z2 or Z3 or …), should be equal to one.
The numbers P(Z1), P(Z2), P(Z3), … are an individual’s personal probabilities of these events if, in addition to having the mathematical properties just stated, they describe his behavior in the following sense: whenever P(Z1) > P(Z2), for example, he will prefer betting on Z1 to betting on Z2, assuming he wants to win the bet. This, too, is consistent with ordinary English—he will prefer betting on the victory of the Democrats in the next election to betting on the truth of the proposition that California is longer than Norway if and only if he considers winning the former bet “more probable” than winning the latter one. Here, “betting on Z1” means taking an action that yields a result s (for success) if Z1 occurs, a result that is more desirable than the result f (for failure) if Z1 does not occur. The probabilities of events must depend on the events only, i.e., the individual’s preferences between bets must be the same for all pairs of results s, f ($100, $0; status quo, loss of prestige; etc.) provided only that s is better than f. This postulate of independence of beliefs on rewards must be added to those of sections 1, 3, and 4. Will the subject reverse his judgment about the comparative chances of any two events—as revealed by his choices between two bets—if the prizes are changed? If not, he satisfies the postulate. It seems to be satisfied by practically all subjects asked to rank several bets according to their preferences, the rewards being first a pair s, f, then a different pair s’, f′.
Suppose a subject is indifferent to bets on any of the eight horses running a race. His preferences would thus imply that P(Z1) = P(Z2) = … = P(Z8), where Zi is the event that the ith horse wins (ties are excluded for simplicity). The nonnegativity property is satisfied if we put P(Zi) = 1/8 for all i. What about the other two mathematical properties required? Suppose the subject considers double bets with the same prizes s, f as for single bets. He will prefer every double bet, e.g., the bet on the event “Z1 or Z2,” to any single bet (here he has, in effect, applied the inadmissibility postulate), and he will be indifferent between any two double bets (here the postulates of irrelevance and of transitivity apply). Similarly, he will prefer triple bets to double bets and will be indifferent between any two triple bets, and so on. Consistent with these preferences, we can put
P(Zi or Zj) = 2/8 = P(Zi) + P(Zj)

for any two horses i and j, and

P(Zi or Zj or Zk) = 3/8 = P(Zi or Zj) + P(Zk)

for any three horses i, j, and k, and so on. In general, we can assign probability P(Z) = m/8 to the event Z that one of the m specified horses will win (where 1 ≤ m ≤ 8). Then the numbers P(Z) are the subject’s personal probabilities, for they agree with his preferences between bets and also satisfy the three mathematical requirements stated at the beginning of this section.
Instead of a horse race, a subject is asked to imagine a dial divided into n equal sectors. Suppose the hand of the dial is spun and comes to rest in the ith sector. We define this occurrence as event Zi. If the subject is convinced that the dial’s mechanism is “fair,” i.e., that the events Z1, Z2, …, Zn are symmetrical (exchangeable), he will be indifferent among bets on any one of these n events. [See Probability, article on Interpretations.] Following Borel (1939, chapter 5), he can then assess his personal probability of any event T, say “rain tomorrow,” by finding a number m (1 ≤ m ≤ n) such that betting on T is not more desirable than betting on “Z1 or Z2 or … or Zm” and is not less desirable than betting on “Z1 or Z2 or … or Zm−1.” Then his personal probability of T, P(T), satisfies the inequality

(m − 1)/n ≤ P(T) ≤ m/n.
By making n arbitrarily large, one can assess P(T) arbitrarily closely, and by using dial arcs which represent any fraction, rational or irrational, of the dial’s circumference, one can define personal probabilities ranging continuously from 0 to 1. Note that such assessments of personal probabilities, when determined in an experiment, are based not on the subject’s verbal statement of numbers he calls probabilities but on his actual choices. They may therefore be useful in predicting actions provided that the subject is consistent. If he is not consistent, his violations may or may not be similar in principle to those incurred in any instrument readings—a theme of probabilistic psychology touched upon in section 2.
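Borel’s assessment procedure lends itself to a mechanical sketch. In the Python fragment below (hypothetical subject and sector count), the subject’s judgments enter only through a comparison function, and the procedure returns the bounds (m − 1)/n and m/n on P(T).

```python
def assess_probability(prefers_T_to_arc, n=1024):
    """Borel's dial assessment of section 5: find the smallest m such
    that betting on T is no more desirable than betting on an arc of
    m of the n equal sectors; then (m - 1)/n <= P(T) <= m/n."""
    for m in range(1, n + 1):
        if not prefers_T_to_arc(m, n):
            return (m - 1) / n, m / n
    return (n - 1) / n, 1.0   # T preferred to every arc

# A hypothetical subject whose choices happen to reflect P(T) = 0.3:
subject = lambda m, n: 0.3 > m / n
lo, hi = assess_probability(subject)
assert lo <= 0.3 <= hi          # the bounds bracket P(T)
assert hi - lo == 1 / 1024      # and can be made arbitrarily tight
```

Making n larger tightens the bounds, exactly as the text observes.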
As a special case, personal probabilities of some real-world events may be “objective,” i.e., the same for different people. This is particularly the case when there is agreement that the events come sufficiently close, for all practical purposes of those involved, to fulfilling certain symmetry requirements. Approximate symmetry is assumed for the positions of a roulette dial like Borel’s and for the occurrences of death among many similar males of age 20. Such requirements are strictly satisfied only by idealized, mathematically defined events—events that are never observed empirically. A “fair” coin, a “fair” roulette dial, a “homogeneous” population of males aged 20 (or a “random” sample from such a population) are all mathematical constructs. The mathematical theory of probability applies rules of logic to situations in which strict symmetry and the three properties stated at the beginning of this section hold (refining property 2 in order to accommodate the case of an infinite X). If decision makers agree that certain events are approximately symmetric, and if they apply logical rules, then their choices between betting on (predicting) any two events will agree; their personal (in this case also objective) probabilities will coincide with those given by mathematical theory. Clarity requires us, however, to distinguish between mathematical probabilities and objective probabilities assigned by decision makers to empirical events, just as we distinguish between a geometric rectangle and the shape of an actual sheet of paper.
6. Expected utility
The four postulates discussed thus far—complete ordering of actions, inadmissibility of dominated actions, irrelevance of nonaffected outcomes, and independence of beliefs on rewards—appear about as convincing as the rules of logic (and about as subject to transgression by people not trained in untwisting brain twisters). Together with a “continuity” postulate (to be introduced presently), they imply the following rule, which is more complicated and less immediately convincing: The consistent man behaves as if he (1) assigned personal probabilities P(Z) to events Z, (2) assigned numerical utilities u(r) to the results r of his actions, and (3) chose the action with the highest “expected utility.” The expected utility ω(a) of action a is the weighted average

ω(a) = Σr u(r) · P(Zra),

where the event Zra is the set of all states for which action a yields result r.
The rule is trivially true when the choice is among sure actions; if action a always yields result r, then P(Zra) = 1, so that ω(a) = u(r).
Consider now actions with two possible results—success s and failure f. This is the case, for example, when actions are two-prize bets (as in section 5) or when the decision maker is a “satisficer” (Simon 1957) for whom all outcomes below his “aspiration level” are equally bad and all others equally good. In section 5 we saw that of two two-prize bets, a consistent decision maker prefers the bet that has the higher probability of success. Since s is better than f, we can assign numerical utilities
u(s) = 1 > 0 = u(f),
and we see that the expected utility ω(b) of a two-prize bet b coincides with its probability of success P(Zsb), since
ω(b) = 1 · P(Zsb) + 0 · P(Zfb) = P(Zsb).
Thus the satisficer maximizes the probability of reaching his aspiration level.
As the next step, we compute the probability of success and hence the expected utility ω(c) of a bet c compounded of n simple two-prize bets or lottery tickets b1, …, bn on n different (but not necessarily mutually exclusive) events T1, …, Tn. Lottery ticket bi is a bet on the event Ti, and the subject will receive ticket bi if Zi happens. The events Z1, …, Zn are mutually exclusive events one of which must happen, and the events “Zi and Ti” (the occurrence of Zi and Ti) are pairwise independent in the sense that
P(Zi and Ti) = P(Zi) · P(Ti).
We can thus regard ticket bi as the result yielded by action c when Zi happens. Hence P(Zi) = P(Zbic) in the present notation. Moreover, we have just shown that the expected utility of a simple two-prize bet can be measured by its probability of success, so that P(Ti) = ω(bi). Clearly, the probability of success of the compound bet c is the probability of the event “(Z1 and T1) or (Z2 and T2) or … or (Zn and Tn)”; by mathematical property 2 of probabilities (section 5), this is equal to Σi P(Ti) · P(Zi). Hence,

ω(c) = Σi P(Ti) · P(Zi) = Σi ω(bi) · P(Zbic);

i.e., the expected utility rule is valid for the special case where each result of an action is a two-prize bet.
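The bookkeeping in this step is easy to mistype, so a small numerical check may help. The sketch below (Python, with freely chosen hypothetical probabilities) computes the success probability of a compound bet from the P(Zi) and P(Ti) in exact rational arithmetic and confirms the probability-weighted sum.

```python
from fractions import Fraction as F

# Compound bet of section 6: ticket b_i (a bet on T_i) is received when
# Z_i happens; the Z_i are mutually exclusive and exhaustive, and each
# Z_i is independent of its T_i.  All numbers below are hypothetical.
P_Z = [F(1, 2), F(1, 3), F(1, 6)]   # P(Z1), P(Z2), P(Z3); sums to 1
P_T = [F(1, 4), F(3, 5), F(1, 2)]   # success probabilities omega(b_i)
assert sum(P_Z) == 1

# Probability of success of c is P((Z1 and T1) or ... or (Z3 and T3)),
# which by additivity and independence is the sum of P(Zi) * P(Ti).
omega_c = sum(pz * pt for pz, pt in zip(P_Z, P_T))
assert omega_c == F(49, 120)   # 1/8 + 1/5 + 1/12, computed by hand
```

Since ω(bi) = P(Ti), this is exactly the weighted average ω(c) = Σ ω(bi) · P(Zi) that the expected utility rule prescribes.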
To extend this in a final step to the general case, let s be the best and f the worst of all results of an action. In the preference notation of section 1, f ≤ r ≤ s for any result r. Consider the continuous range of all bets b whose two prizes are s and f and whose success probabilities take all the values between (and including) 1 and 0. Then, for any b, f ≤ b ≤ s, and for a given r, r ≤ b or b ≤ r depending on the bet’s success probability. A plausible continuity postulate asserts, for each r, the existence of a bet, say br, such that r ~ br. We can therefore assign to r a utility u(r) = ω(br). A decision maker should then be indifferent between an action a that yields the various results r with respective probabilities P(Zra) and a bet c compounded of the corresponding two-prize bets br just described, each entering with the same probability; that is to say, P(Zbrc) = P(Zra). The expected utility rule follows, since

ω(a) = Σr u(r) · P(Zra) = Σr ω(br) · P(Zbrc) = ω(c).
Some insight into this derivation of the expected utility rule is provided to the trainee in decision making by letting him rank his preferences among the tickets to four lotteries. Each ticket is described by prizes contingent on two alternative events, one of which must occur. An example of such a decision problem is presented in Table 4, where p is written for P(Z) for brevity. If the event Z is “a coin is tossed and comes up heads” (we refer below to this event simply as “heads”) and the subject regards the coin as “fair,” then p = 1/2. But Z may also
|Table 4 – Decision problem involving lottery tickets|
|Lottery ticket|Prize if event Z happens|Prize if event Z does not happen|Probability of gaining $100|
|a|$100|$0|p|
|b|ticket a|$100|p² + 1 − p > p|
|c|ticket a|$0|p² < p|
|d|ticket b|ticket c|p(p² + 1 − p) + (1 − p)p² = p|
be, for example, “the next sentence spoken in this room will contain the pronoun ‘I.’” In any case, when the inadmissibility postulate is applied, it is evident from the last column of Table 4 that ticket a is better than c and worse than b. Furthermore, the decision maker should be indifferent between tickets a and d.
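Since a prize in Table 4 may itself be a ticket, the probability of eventually gaining $100 can be computed by a short recursion. The sketch below (Python; the ticket structure is reconstructed from the relations the text states) verifies that for any p strictly between 0 and 1, ticket a is better than c, worse than b, and exactly as good as d.

```python
def prob_100(ticket, p, table):
    """Probability that a Table 4 ticket eventually pays $100,
    where a prize may itself be another ticket."""
    if_z, if_not_z = table[ticket]
    win = lambda prize: (1.0 if prize == "$100" else
                         0.0 if prize == "$0" else
                         prob_100(prize, p, table))
    return p * win(if_z) + (1 - p) * win(if_not_z)

# (prize if Z happens, prize if Z does not happen), as in Table 4:
table = {"a": ("$100", "$0"),
         "b": ("a", "$100"),
         "c": ("a", "$0"),
         "d": ("b", "c")}

for p in (0.2, 0.5, 0.8):
    pa, pb, pc, pd = (prob_100(t, p, table) for t in "abcd")
    assert pc < pa < pb            # a is better than c, worse than b
    assert abs(pd - pa) < 1e-12    # a and d are indifferent: both equal p
```

Algebraically, pd = p(p² + 1 − p) + (1 − p)p² collapses to p, which is why the decision maker should be indifferent between a and d.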
Cash equivalents and numerical utilities
Let us define the cash equivalent of ticket a, denoted k(a), as the highest price the decision maker would offer for ticket a; k(b), k(c), and k(d) are defined similarly. If asked to name his cash equivalent for each lottery ticket in Table 4, the decision maker should name amounts such that k(b) > k(a) = k(d) > k(c). If he fails to do this, he is inconsistent, and no scale of numerical utilities can describe his behavior. If he is consistent, and if the event Z is “heads” (so that p = 1/2), the following utilities for some money gains can be ascribed to him: u($100) = 1, u(k(b)) = 3/4, u(k(a)) = u(k(d)) = 1/2, u(k(c)) = 1/4, u($0) = 0.
Some but not all subjects conform with the required ranking of the lottery tickets. Therefore, in any empirical estimation of a subject’s utilities and personal probabilities, one must check whether the subject is consistent, at least in some approximate sense. As pointed out in the simpler context of section 2, probabilistic models of decision and of learning to decide are needed for any descriptive theory, and they too may fail.
Behavior toward risk
Again suppose that the event Z in Table 4 is “heads.” If the subject has named cash equivalents k(a) = k(d) = $50, k(c) = $25, and k(b) = $75 (and similarly for further, easily conceived compound lotteries), we would infer that over the observed range he is indifferent between a “fair bet” and the certainty of getting its expected gain. We would say that he is “indifferent to risk.” His utility function of money gain is a straight line. On the other hand, if he has named cash equivalents k(a) = k(d) < $50, k(c) < k(a)/2, and k(b) < [$100 + k(a)]/2, his utility function of money gain will be a concave curve (any of its chords will lie below the corresponding arc), and we would say that he is “averse to risk.” If the inequality signs in the preceding sentence are reversed, the utility function is convex (chords are above arcs), and we would say that over the observed range the subject “loves risk.” When the utility function is either concave or convex, the decision maker maximizes the expected value, not of money gain, but of some nonlinear function of money gain.
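As an illustration of risk aversion, one can pick some concave utility curve and compute cash equivalents from it. The square-root utility below is purely hypothetical (the text fixes no particular curve); it shows the cash equivalent of ticket a falling below the expected gain of $50, as concavity requires.

```python
import math

# A concave (risk-averse) utility of money gain, u(x) = sqrt(x),
# chosen only for illustration.
u = math.sqrt
u_inv = lambda y: y * y

def cash_equivalent(prizes, probs):
    """The sure amount whose utility equals the lottery's expected utility."""
    return u_inv(sum(q * u(x) for x, q in zip(prizes, probs)))

# Ticket a of Table 4 with a fair coin: $100 or $0, each with probability 1/2.
k_a = cash_equivalent([100, 0], [0.5, 0.5])
assert k_a < 50.0              # concave utility: chord lies below the arc
assert abs(k_a - 25.0) < 1e-9  # sqrt utility gives exactly $25 here
```

A convex u (say u(x) = x²) would push the cash equivalent above $50 instead, the “risk-loving” case.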
Daniel Bernoulli (1738) pointed out that the utility function is concave in the case of the “Petersburg paradox.” Marshall (1890) also assumed risk aversion as an economic fact (or perhaps as a prescription from Victorian morals) equivalent to that of “decreasing marginal utility of money.” Some economic implications of risk aversion were given by Pratt (1964) and Arrow (1965). It should be noted that this assumption is inconsistent with the behavior of a satisficer, for whom utility is a step function of money gain, and with the behavior of a merchant for whom the disutility of bankruptcy is the same regardless of the amount owed.
The maxmin rule
In competition with the expected utility rule (and the postulates underlying it) is the “conservative” or “maxmin” rule, which states that the decision maker should maximize the minimum payoff; that is to say, the maxmin rule proposes that preferences among actions be based on each action’s worst result only. Thus in the case of Table 1 the maxmin rule would prescribe that a’ < b even if Z1 has the probability of, say, an earthquake, and presumably b ~ a, contradicting the inadmissibility postulate. However, because of the strong appeal of the inadmissibility postulate, it is usually proposed that the maxmin rule be applied only when domination is absent; this leads to the ordering a’ < b ~ a rather than a’ < b < a. This proposal is not too satisfactory since it violates a plausible continuity principle (Milnor 1954). A small modification in outcomes—by $1 in Table 1, or by 1 cent for that matter—can make or break domination and thus reverse the preference ordering. Indeed, should we advise people not to live in San Francisco or Tokyo because of possible but not very probable earthquakes? Should we advise pedestrians never to cross streets because of the possibility of being struck by an auto? Such considerations compel us to balance the advantages against the disadvantages of competing decisions, weighing them by appropriately defined probabilities—the expected utility principle.
A historical note
Early theories prescribing maximization of expected utility were confined to special cases. Two types of restrictions were imposed. First, the decision maker was advised to maximize the expected value of his money wealth or gain, i.e., utility was in effect identified with a money amount or some linear function of a money amount. Some other quantifiable good, such as the number of prisoners taken or the number of patients cured, might play the same role as money, but nonquantifiable rewards and penalties of actions were unnecessarily excluded from the set of results. Second, probabilities were restricted to objective ones—a special case of personal probabilities.
It is difficult to imagine that an experienced Bronze Age player who used dice that had “tooled edges and threw absolutely true” (David 1962) would bet much more than 1:5 on the coming of the ace. Cardano’s efforts in computing the gambler’s odds (Ore 1953) suggest at any rate that by the sixteenth century the rule of maximizing average money gains computed on the basis of objective probabilities was taken for granted.
We have already cited Bernoulli and Marshall as having proclaimed utility a nonlinear function of money, thus lifting the first of the above restrictions. Marshall also applied the utility concept to commodity bundles. Von Neumann and Morgenstern (1944) extended it further to all possible results of actions and derived the expected utility rule from simple consistency postulates. This was simplified by others, especially Herstein and Milnor (1953). All these writers dealt only with objective probabilities, leaving out the important cases in which symmetries between relevant events are not agreed upon.
Bayes (1764) can be credited with the idea of personal probabilities. He thought of them as being revealed by an individual’s choices among wagers—not just on cards and coins, but on horses and fighting cocks as well! Thus Bayes removed the second restriction, but he retained the first in assuming that utility was in effect identified with money gains, so that betting 9 guineas against 1 implies a corresponding ratio of probabilities. In 1937, de Finetti provided mathematical rigor for this approach.
In 1926, Ramsey (1923–1928) stated, perhaps for the first time, simple consistency postulates that imply the existence of both personal probabilities (of any events, regardless of symmetries) and utilities (of any results of actions, quantifiable or not). Savage (1954) restated the consistency postulates and, partly following de Finetti and von Neumann and Morgenstern, proved that they imply the expected utility rule. For an original exposition, see Pratt, Raiffa, and Schlaifer (1964).
Reviews of the works of other contemporary authors are found in Arrow (1951), Luce and Suppes (1965), Ozga (1965), and Fishburn (1964). [See also Economic Expectations.] Surveys and bibliographies as well as much original material by 21 leading authors on both the prescriptive and descriptive aspects of decision theory are found in Shelly and Bryan (1964).
The existence of numerical utilities describing a decision maker’s “tastes” and of probabilities describing his “beliefs” has been shown to follow from rules of consistency. To formulate and solve the problem of choosing a good decision, both tastes and beliefs must be assumed fixed over some given period of time (but see Koopmans on “flexibility”). In general, the probabilities the decision maker assigns at any time to the various states of nature and thus to the results of his actions depend on his information at that time. New information may also uncover new feasible actions. We shall generalize the decision concept accordingly in several steps.
A strategy (also called a decision function or response rule) is the assignment, in advance of information, of specific actions to respond to the different messages that the decision maker may receive from an information source. If, more generally, messages, actions, and results form time sequences (the case of “earning while learning”), we have a sequential strategy (also called an extensive or dynamic strategy). To each possible sequence of future messages, a sequential strategy assigns a sequence of actions (see, for example, Theil 1964). An optimal sequential strategy maximizes the weighted average of utilities assigned to all possible sequences of results, possibly taking into account “impatience” by means of a discount rate (Fisher 1930; Koopmans 1960). The weights are personal joint probabilities of sequences of events and messages. This amounts to the same thing as saying that the probabilities of events are revised each time a message is received.
It is still more general to redefine action in order to include in it the choice of information sources to be used and thus of “questions to be asked” at a given time. The resulting problem of finding an optimal informational strategy is a task of economic theories of information and organization and also of statistical decision theory (e.g., Raiffa & Schlaifer 1961) where events and information sources correspond to hypotheses and experiments, respectively.
Finally, if we allow the decision maker to receive messages about the feasibility and outcomes of actions he has not formerly considered, we obtain the still more general concept of exploratory strategy. True, it has been almost proverbially tenuous to assign probabilities to the results of industrial or scientific research; yet these undertakings are not different in principle from many other ventures and bets, such as those discussed in section 5.
The complex strategies noted here are hardly maximized by the actual entrepreneur, although the penetration of industry by professionals may again bring descriptive and prescriptive economics to their pristine closeness. Today many descriptive hypotheses in this field use the concept of aspiration level—the boundary between success and failure. The actual decision maker is said to revise his aspiration level upward or downward depending on whether he has or has not reached it by previous action; exploration for actions not previously considered is triggered by failure (Simon 1957). The sequence of actions generated by the dynamic aspiration-level model will, in general, differ from that prescribed by dynamic programming. [See Programming.] Yet, with utilities assigned to results of actions in a particular way, it is possible that the aspiration-level mechanism is indeed optimal. It has been inspired, in fact, by adaptive feedbacks observed in live organisms; such feedbacks presumably have maximized the probability of the survival of species.
8. Cost of decision making
One action or strategy may appear better than another as long as we disregard the toil and trouble of decision making itself, i.e., the efforts of gathering information and of processing it into an optimal decision. The ranking of actions may be reversed when we take these efforts into account and deal, in this sense, with “net” rather than “gross” expected utilities of actions. A small increase in expected profit may not be worth a good night’s sleep. In statistics, the higher the cost of obtaining each observation, the earlier we stop sequential sampling. As an approximation to some logically required, but very complicated, decision rule, we may use a linear decision rule (e.g., prescribe inventories to be proportional to turnover) in order to lessen computational costs. The “incrementalism” observed and recommended by Lindblom (1965) in the field of political decisions corresponds to the common mathematical practice of searching for a global optimum in the neighborhood of a local optimum or possibly in the neighborhood of the status quo. How many local search steps one should undertake and how often one should jump (hopefully) toward a global optimum will presumably depend on the costs of searching (see, for example, Gel’fand & Tsetlin 1962). Indeed, some strategies may be too complex to be computable in a finite amount of time, even though they respond to each message by a feasible action. There is, after all, a limit on the capacity of computers and on the brain capacity of decision makers. Hence, some strategies may have infinite costs.
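The trade-off between decision quality and decision cost can be caricatured in one line of algebra: if expected loss falls like 1/n with the number of observations n while observation cost grows like n, the optimal n shrinks as the unit cost rises. The sketch below (Python, all numbers hypothetical) makes that point.

```python
import math

def optimal_sample_size(var, penalty, cost):
    """Minimize penalty * var / n + cost * n over n, a toy stand-in
    for sequential-sampling calculations: n* = sqrt(penalty * var / cost)."""
    return math.sqrt(penalty * var / cost)

# The dearer each observation, the smaller the optimal sample,
# i.e., the earlier we stop sampling:
sizes = [optimal_sample_size(var=4.0, penalty=100.0, cost=c)
         for c in (0.1, 1.0, 10.0)]
assert sizes[0] > sizes[1] > sizes[2]
```

The variance, penalty, and cost figures are arbitrary; only the direction of the effect is the point.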
The net utility of an action is often represented as the difference between “gross utility” and “decision cost.” A prescriptive theory would presumably require that personal probabilities be assigned not only to the outcomes of the actions but also to the efforts of estimating the outcomes and of searching for the action with the highest expected utility.
On a more general level, it is not strictly permissible to represent net utility as a difference between gross utility and decision cost. Even if the two were measurable in dollars, say, utility may be a nonlinear function of money gains. Yet the assumption that net utility is separable into these components simplifies the theory—it reduces the cost of thinking! Almost all prescriptive theory to date deals with gross utility only. Little attention has been given to decision costs that might be subtractable and thus definable in arriving at a net utility concept. Still less attention has been given to a net utility concept that cannot be decomposed into gross-utility and decision-cost components.
The elements for a theory of the expected cost of using inanimate computers, given the (statistical) population of future problems, are probably available. However, the current classification of computation problems as “scientific” and “business” (differing in their comparative needs for speed and memory) is certainly much too rough. A theory of mechanical computation costs would resemble the theory of the cost of manufacturing with complex equipment and known technology when capacities and operations are scheduled optimally. On the other hand, so little is known about the “technology” of human brains! If economists would join forces with students of the psychology of problem solving, insights would undoubtedly be gained into both the descriptive and prescriptive aspects of decision making.
Allais, M. 1953 Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine. Econometrica 21:503–546.
Arrow, Kenneth J. 1951 Alternative Approaches to the Theory of Choice in Risk-taking Situations. Econometrica 19:404–437.
Arrow, Kenneth J. 1958 Utilities, Attitudes, Choices: A Review Note. Econometrica 26:1–23.
Arrow, Kenneth J. 1963 Utility and Expectation in Economic Behavior. Pages 724–752 in Sigmund Koch (editor), Psychology: A Study of a Science. Volume 6: Investigations of Man as Socius: Their Place in Psychology and the Social Sciences. New York: McGraw-Hill.
Arrow, Kenneth J. 1965 Aspects of the Theory of Risk-bearing. Helsinki: Academic Bookstore.
Aumann, Robert J. 1962 Utility Theory Without the Completeness Axiom. Econometrica 30:445–462. → Corrections in Volume 32 of Econometrica.
Bayes, Thomas (1764) 1958 An Essay Towards Solving a Problem in the Doctrine of Chances. Biometrika 45:296–315.
Bernoulli, Daniel (1738) 1954 Exposition of a New Theory on the Measurement of Risk. Econometrica 22:23–36. → First published as “Specimen theoriae novae de mensura sortis.”
Borel, Émile 1939 Valeur pratique et philosophique des probabilités. Paris: Gauthier-Villars.
Carnap, R. 1962 The Aim of Inductive Logic. Pages 303–318 in International Congress for Logic, Methodology, and Philosophy of Science, Stanford, California, 1960, Logic, Methodology, and Philosophy of Science: Proceedings. Edited by Ernest Nagel, Patrick Suppes, and Alfred Tarski. Stanford Univ. Press.
Chernoff, Herman; and Moses, Lincoln E. 1959 Elementary Decision Theory. New York: Wiley.
David, Florence N. 1962 Games, Gods and Gambling: The Origins and History of Probability and Statistical Ideas From the Earliest Times to the Newtonian Era. New York: Hafner.
Davidson, Donald; and Marschak, Jacob 1959 Experimental Tests of a Stochastic Decision Theory. Pages 233–269 in Charles W. Churchman and Philburn Ratoosh (editors), Measurement: Definitions and Theories. New York: Wiley.
Davis, John M. 1958 The Transitivity of Preferences. Behavioral Science 3:26–33.
Debreu, Gerard 1959 Theory of Value. New York: Wiley.
de Finetti, Bruno (1937) 1964 Foresight: Its Logical Laws, Its Subjective Sources. Pages 93–158 in Henry E. Kyburg and Howard E. Smokler (editors), Studies in Subjective Probability. New York: Wiley. → First published in French.
Edwards, Ward 1953 Probability-preferences in Gambling. American Journal of Psychology 66:349–364.
Ellsberg, Daniel 1961 Risk, Ambiguity, and the Savage Axioms. Quarterly Journal of Economics 75:643–669.
Fechner, Gustav T. (1860) 1907 Elemente der Psychophysik. 2 vols. 3d ed. Leipzig: Breitkopf & Härtel. → An English translation of Volume 1 was published by Holt in 1966.
Fellner, William 1965 Probability and Profit. Homewood, Ill.: Irwin.
Fishburn, Peter C. 1964 Decision and Value Theory. New York: Wiley.
Fisher, Irving 1906 The Risk Element. Pages 265–300 in Irving Fisher, The Nature of Capital and Income. New York: Macmillan.
Fisher, Irving (1930) 1961 The Theory of Interest. New York: Kelley. → Revision of the author’s The Rate of Interest (1907).
Franklin, Benjamin (1772) 1945 How to Make a Decision. Page 786 in A Benjamin Franklin Reader. Edited by Nathan G. Goodman. New York: Crowell. → A letter to J. Priestley.
Gel’fand, I. M.; and Tsetlin, M. L. 1962 Some Methods of Control for Complex Systems. Russian Mathematical Surveys 17, no. 1:95–117.
Hart, Albert G. (1940) 1951 Anticipations, Uncertainty and Dynamic Planning. New York: Kelley.
Herstein, I. N.; and Milnor, John 1953 An Axiomatic Approach to Measurable Utility. Econometrica 21:291–297.
Hicks, John R. (1939) 1946 Value and Capital: An Inquiry Into Some Fundamental Principles of Economic Theory. 2d ed. Oxford: Clarendon.
Interdisciplinary Research Conference, University of New Mexico 1960 Decisions, Values and Groups: Proceedings. Edited by D. Willner. New York: Pergamon. → See especially the article by J. S. Chipman, “Stochastic Choice and Subjective Probability.”
Jeffrey, Richard C. 1965 The Logic of Decision. New York: McGraw-Hill.
Knight, Frank H. (1921) 1933 Risk, Uncertainty and Profit. London School of Economics and Political Science Series of Reprints of Scarce Tracts in Economic and Political Science, No. 16. London School of Economics.
Koopmans, Tjalling 1960 Stationary Ordinal Utility and Impatience. Econometrica 28:287–309.
Koopmans, Tjalling 1964 On Flexibility of Future Preference. Pages 243–254 in Maynard W. Shelly and Glenn L. Bryan (editors), Human Judgments and Optimality. New York: Wiley.
Lindblom, Charles E. 1965 The Intelligence of Democracy: Decision Making Through Mutual Adjustment. New York: Free Press.
Luce, R. Duncan 1959 Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.
Luce, R. Duncan; and Raiffa, Howard 1957 Games and Decisions: Introduction and Critical Survey. A study of the Behavioral Models Project, Bureau of Applied Social Research, Columbia University. New York: Wiley. → First issued in 1954 as A Survey of the Theory of Games, Columbia University, Bureau of Applied Social Research, Technical Report No. 5.
Luce, R. Duncan; and Suppes, Patrick 1965 Preference, Utility, and Subjective Probability. Volume 3, pages 249–410 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), Handbook of Mathematical Psychology. New York: Wiley.
MacCrimmon, K. R. 1965 An Experimental Study of the Decision-making Behavior of Business Executives. Ph.D. dissertation, Univ. of California at Los Angeles.
Marschak, Jacob 1950 Rational Behavior, Uncertain Prospects, and Measurable Utility. Econometrica 18:111–141.
Marschak, Jacob (1954) 1964 Scaling of Utility and Probability. Pages 95–109 in Martin Shubik (editor), Game Theory and Related Approaches to Social Behavior: Selections. New York: Wiley.
Marshall, Alfred (1890) 1961 Principles of Economics. 9th ed. New York: Macmillan. → See especially the Mathematical Appendix, note 9.
May, Kenneth O. 1954 Intransitivity, Utility and the Aggregation of Preference Patterns. Econometrica 22:1–13.
Milnor, John 1954 Games Against Nature. Pages 49–60 in Robert M. Thrall, C. H. Coombs, and R. L. Davis (editors), Decision Processes. New York: Wiley.
Neyman, Jerzy 1950 First Course in Probability and Statistics. New York: Holt.
Ore, Oystein 1953 Cardano, the Gambling Scholar. Princeton Univ. Press; Oxford Univ. Press. → Includes a translation by Sidney Henry Gould from the Latin of Cardano’s Book on Games of Chance.
Ożga, S. Andrew 1965 Expectations in Economic Theory. London: Weidenfeld & Nicolson.
Papandreou, Andreas G. 1957 A Test of a Stochastic Theory of Choice. University of California Publications in Economics, Vol. 16, No. 1. Univ. of California Press. → In collaboration with O. H. Sauerlender, O. H. Brownlee, L. Hurwicz, and W. Franklin.
Pascal, Blaise (1670) 1961 Pensées: Notes on Religion and Other Subjects. New York: Doubleday. → See especially Section 3: De la nécessité du pari.
Pollack, Irwin 1955 Sound Level Discrimination and Variation of Reference Testing Conditions. Journal of the Acoustical Society of America 27:474–480.
Pratt, John W. 1964 Risk Aversion in the Small and in the Large. Econometrica 32:122–136.
Pratt, John W.; Raiffa, Howard; and Schlaifer, Robert 1964 The Foundations of Decision Under Uncertainty: An Elementary Exposition. Journal of the American Statistical Association 59:353–375.
Quandt, Richard 1956 A Probabilistic Theory of Consumer Behavior. Quarterly Journal of Economics 70:507–536.
Raiffa, Howard 1961 Risk, Ambiguity, and the Savage Axioms: Comment. Quarterly Journal of Economics 75:690–694.
Raiffa, Howard; and Schlaifer, Robert 1961 Applied Statistical Decision Theory. Graduate School of Business Administration, Studies in Managerial Economics. Boston: Harvard Univ., Division of Research.
Ramsey, Frank P. (1923–1928) 1950 The Foundations of Mathematics and Other Logical Essays. New York: Humanities. → See especially Chapter 7, “Truth and Probability,” and Chapter 8, “Further Considerations.”
Savage, Leonard J. 1954 The Foundations of Statistics. New York: Wiley.
Shackle, G. L. S. (1949) 1952 Expectation in Economics. 2d ed. Cambridge Univ. Press.
Shackle, G. L. S. 1955 Uncertainty in Economics and Other Reflections. Cambridge Univ. Press.
Shackle, G. L. S. 1961 Decision, Order and Time in Human Affairs. Cambridge Univ. Press.
Shelly, Maynard W.; and Bryan, Glenn L. (editors) 1964 Human Judgments and Optimality. New York: Wiley.
Simon, Herbert A. 1957 A Behavioral Model of Rational Choice. Pages 241–260 in Herbert A. Simon, Models of Man. New York: Wiley.
Simon, Herbert A. 1959 Theories of Decision-making in Economics and Behavioral Science. American Economic Review 49:253–283.
Theil, Henri 1964 Optimal Decision Rules for Government and Industry. Amsterdam: North Holland Publishing; Chicago: Rand McNally.
Thurstone, Louis L. (1927–1955) 1959 The Measurement of Values. Univ. of Chicago Press. → Selections from previously published papers.
von Neumann, John; and Morgenstern, Oskar (1944) 1964 Theory of Games and Economic Behavior. 3d ed. New York: Wiley.
Wald, Abraham (1950) 1964 Statistical Decision Functions. New York: Wiley.
Wolfowitz, J. 1962 Bayesian Inference and Axioms of Consistent Decision. Econometrica 30:471–480.
Wright, Georg H. von 1963 The Logic of Preference. Edinburgh Univ. Press.
Decision making is a social process that selects a problem for decision (i.e., choice) and produces a limited number of alternatives, from among which a particular alternative is selected for implementation and execution (Snyder et al. 1962, p. 90). Some writers use the term synonymously with policy making, although others distinguish the two, reserving decision making for choices that involve conscious action and are subject to sanctions, and policy making for a collectivity of intersecting decisions that has no choice-making unit in a position to decide for all parties involved (Braybrooke & Lindblom 1963, p. 249). For example, one may refer to decision making by the presidency or by Congress, but together these institutions constitute part of the total policy-making process of the United States. Decision making is also distinguished from problem solving, which may refer either (1) to tasks in which both the problem for solution and the alternative solutions are given (Kelley & Thibaut 1954) or (2) to more abstract higher mental processes of thinking and information processing (Newell et al. 1958). Political decision making, in contrast, is conceived of as involving the search for both problems and alternatives. Voting is an exception, because electors have little, if any, power over the timing of elections and, for all practical purposes, only indirect participation, if any, in the selection of alternatives (candidates).
History. The intellectual origins of decision-making analysis are twofold. One line of its development, mathematical economics, has its beginnings in the eighteenth-century work of Bernoulli (1738) and in the modern formulations of theories of games by von Neumann and Morgenstern (1944). An important political successor was Downs’ formulation of governmental decision making in terms of economic theories (1957). The origins of mathematical, economic, and game-theoretic decision making have been codified by Luce and Raiffa (1957) and give promise of including many further hypotheses for empirical investigation, as is illustrated by Riker’s work on coalitions and coalition formation (1962). This root branched off into experimental studies, reviewed in two papers by Edwards (1954; 1961), to test hypotheses deduced from the mathematical models [see Game Theory].
The other historical root of decision-making analysis is in public administration, which so far has been the more influential strain in political decision making. Alexander Hamilton in America and Charles-Jean Bonnin in France identified the field at the turn of the nineteenth century (White 1955, pp. xiii, 10), and within a hundred years Woodrow Wilson in the United States and Max Weber in Germany had inaugurated academic studies of organizational decision making. The next major original works on organizations were those of Chester I. Barnard (1938) and Herbert A. Simon (1947), who were pioneers in calling for and introducing social scientific techniques to the study of the subject and also in indicating the relevance of psychological and sociological knowledge to the understanding of organizations and administration. Richard C. Snyder and his colleagues (1954; 1962) followed with a conceptual scheme that was designed for the study of foreign-policy-making organizations but is applicable to organizational decision making in general [see Public Administration].
In political science, decision making—or, more broadly, policy making—has been studied in electoral voting, legislative roll calls, judicial opinions, public opinion, and virtually every other kind of political situation or setting.
Among the most influential modern work on decision making is that of Simon. Beginning with his logical critique of “proverbs of administration,” Simon challenged eighteenth-century assumptions about decision behavior. Classical economic theory assumed that decision makers know all alternatives, that they know the utilities (values) of all alternatives, and that they have an ordered preference among all alternatives. For such a demand for “rationality” Simon proposed to substitute the concept of “bounded rationality,” which would more nearly comport with what is known about the psychological and physiological limits of decision makers. For the model of optimizing decisions, he substituted satisficing, that is, the adoption of a decision when an alternative seems to meet minimal standards or is good enough and is not dependent on the availability of all alternatives from which the best is chosen. These concepts and propositions have been researched primarily in industrial rather than in governmental organizations (see the systematic codification of theory in March & Simon 1958), although there is no theoretical reason to expect them to be any less applicable in the latter. [See Administration, article on Administrative Behavior.]
Another important stimulus to studies of political decision making is the work of Snyder et al. (1954; 1962). The first presentation of Snyder’s conceptual scheme was an outline of categories on which data for studying foreign policy decisions should be gathered. Although distributed initially only in a privately published paperback format, Snyder’s study was soon cited in publications ranging from work on disturbed communication to studies in judicial behavior. Before Snyder’s work was published in book form eight years after its first appearance, excerpts of it had been widely reprinted. Much of its impact, quite apart from its attention to decision-making analysis, undoubtedly stemmed from its explicit concern with a number of issues in methodology and the philosophy of science that were current in American political science during the 1950s.
Because Snyder’s approach consisted largely of a conceptual scheme that identified clusters of variables for study without containing theory about their interrelations, propositions for empirical study could not logically be derived from the formulation as they had been from more formal models, such as those of von Neumann and Morgenstern. However, Snyder and his associates formulated a number of hypotheses for empirical work based on the conceptual scheme that were capable of being studied in a number of contexts by different researchers (Snyder & Paige 1958; Snyder & Robinson 1961).
One of the great merits of the conceptual scheme originated by Snyder et al. was that it joined psychological and sociological levels of explanation; that is, it proposed to combine data and theory about both individual decision makers and the group or organizational context in which they operate. It offered a means of explaining group behavior in terms other than those strictly of personality. The aim was to combine the social and the psychological levels of analysis in order to increase predictive power. However, some critics felt that neither the state of psychology nor that of sociology permitted such hypothetical combinations. This point undoubtedly had merit, since few political scientists had pursued Lasswell’s initiatives in studying political personality (1930; 1935; 1948) and experimental social psychologists remained separated from field-oriented political scientists. One may expect that among the most active lines of future research on decision making will be studies of the interrelation of individual and organizational factors in producing decision outcomes.
A more descriptive and intellectualized model of the decision process is that of Lasswell (1956), who has identified seven stages or functions in the making of any decision. The first function is that of intelligence, which brings to the attention of the decisional unit problems for decision and information about these problems. There follow the recommendation function, in which alternatives are proposed; the prescription stage, in which one alternative is authoritatively selected; the invocation of the prescribed alternative; its application in particular cases by administrative or enforcement officers; the appraisal of its efficacy; and, finally, the termination of the original decision. This conception of decision making was designed on the basis of numerous investigations of judicial processes and has since been employed in a variety of legal contexts, including sanctioning systems in civil and criminal law, law of outer space, law of the seas, and public order [see Policy Sciences]. Its usefulness has been demonstrated also in studies of legislative and foreign policy decisions. Like Snyder’s conceptual scheme, Lasswell’s descriptive model does not immediately generate hypotheses for empirical investigation. Many such hypotheses have been formulated and researched, however, among them propositions relating power advantages to decision makers who dominate the intelligence and recommendation functions.
Voting studies have only recently been cast in the language of decision making. Downs considered electoral decisions in terms of the postulates of economic behavior and deduced 25 propositions about party and governmental decision making (1957, pp. 296–300). For many of these, data were independently available to confirm or disconfirm the predictions by their consistency or inconsistency with his model. In contrast to such quasi-mathematical models, the more typical voting study has emphasized the social-psychological variables acting to produce the voter’s (and voters’) decisions. An inventory of voting surveys (exclusive of ecological and gross data analyses) found 209 hypotheses on which some empirical evidence was available in one or more studies (Berelson et al. 1954). Contemporary voting theory considers the decision to vote and the direction of the vote to be products of party affiliation, orientation toward candidates, and orientation toward issues. Party affiliation is a stable factor, usually inherited from one’s family, and generally subject to change only by dramatic social events (e.g., an economic depression). In the absence of unusual salience of either candidates or issues, party affiliation will determine the individual and aggregate vote. [See Voting.]
Generic characteristics of decision making
Whether these different uses of decision terminology have anything in common remains to be seen. The answer depends on two kinds of effort. One is to search for congruence among specific, narrow-gauged propositions from contrasting kinds of decisions. For example, if organizations have relatively little information and have a deadline for decision, their decision makers tend to rely more heavily than otherwise on fundamental value orientations (Snyder & Paige 1958). Similarly, if voters have relatively little information about issues and candidates, they tend to rely more heavily on rather enduring and stable party affiliations (Michigan … 1960). These separate hypotheses suggest a transcendent one: that if information is low, evaluative criteria are likely to be more important than empirical or factual criteria. Another example of possible complementarity is the similarity between the dimensionality of attitudes, leadership, and power in legislative bodies and the dimensionality of attitudes and roles among community decision makers. Legislative attitudes, at least among U.S. congressmen, seem to “scale” around several dimensions rather than along a single liberal-conservative continuum (MacRae 1958; Miller & Stokes 1963); that is, if an observer knows a legislator’s vote on one bill, he can predict his vote on other bills in the scale, e.g., social welfare, but not on bills outside the scale, e.g., foreign affairs. Similarly, leadership on decisions is typically multidimensional; one set of leaders prevails on issue A, e.g., civil rights, and another set dominates issue B, e.g., education (Matthews 1960; Robinson 1962a). This finding is supported in studies of influence and decision making in communities of various sizes (Katz & Lazarsfeld 1955; Dahl 1961). [See Legislation, article on Legislative Behavior.]
Still another apparently transcendent proposition is one that incorporates the “cross-pressures hypothesis” in voting and the frequent characterization of committee decisions. Voters receiving conflicting appeals from family, church, work group, and other sources tend to compromise by not voting or by split-ticket voting (Michigan … 1960). Committee or group decisions are similarly said to be compromised and ambiguous versions of originally clear and consistent alternatives (Kelley & Thibaut 1954). Whether such apparently similar propositions have anything more than superficial commonality and whether many such congruities exist are questions for research that would help determine whether decision studies of various units and levels have much in common. [See Cross Pressure.]
The second kind of effort that would clarify the generality of decision phenomena would be the construction of models of decision that would both generate new hypotheses for empirical investigation and accommodate existing, more or less confirmed propositions. Such attempts are apparent, for example, in the recent books by Downs, Riker, and Campbell et al. Downs, as already noted, has parsimoniously derived from his model a large number of propositions that had previously been investigated as parts of less general theories. Riker, too, has found that electoral and legislative decision studies fit his theory of coalition formation. And Campbell et al. have incorporated previous ecological and sociological voting hypotheses into their model of the American voter. However persuasive such postdictive studies may be (that is, however consistent ad hoc propositions about historical events are with post hoc models), the stronger and more compelling evidence comes from predictive models whose derived hypotheses can be researched and verified. Decision models of this kind are less apparent in political science than in economics and psychology (Edwards 1954; 1961; Simon 1959).
The concept of occasion for decision
A conception of decision making that includes the identification of a problem or situation and is not confined to choice making treats the occasion for decision as a variable and not as a constant. Different kinds of decision situations involve participants and activate organizational structure in different ways. A number of dimensions of the occasion for decision have been identified: uncertainty, risk, routine, unprogrammed (Snyder et al. 1962, p. 81; Simon 1960). Among typologies of decision situations is a three-dimensional one that identifies a range of situations varying between “crisis” and noncrisis. A crisis is a situation that (1) is regarded by decision makers as threatening to their organizational goals, (2) requires a relatively short response time, and (3) is unanticipated (Robinson 1962a; C. F. Hermann 1963, pp. 63–65). This conceptualization resembles social-psychological concepts of stress and threat but is more particular to the domain of decision making. It is also a narrower definition than usually appears in the historical and political science literature, which often defines a crisis simply as an important event. Further empirical work will be required to validate a workable concept; more than logic or a priori definitions are required [see Crisis].
Personality and decision making
The relation between “personality” and decision making, historically of interest to political analysts but long confined to anecdotes and speculation, still awaits sustained systematic study. Lasswell’s advocacy of the application of psychoanalytic and psychological concepts in the study of politicians in the 1930s remains to be heeded, except by a few political scientists. Research falls into two categories: (1) that dealing with political socialization and recruitment into decision-making roles, and (2) that relating personal characteristics of decision makers to the content of their decisions.
Political socialization studies are founded on Lasswell’s dictum that everyone is born a politician, but some outgrow it. Three psychologically oriented biographies relate adolescent and preadolescent events to the eventual political careers of President Woodrow Wilson (George & George 1956), Mayor Anton Cermak of Chicago (Gottfried 1962), and James Forrestal, the first U.S. secretary of defense (Rogow 1963). Wilson illustrates the hypothesis that childhood deprivations of affection lead to low self-esteem, for which in adolescence and adulthood one compensates by seeking and exercising power over others. Wilson’s compensation came first as speaker of college debating societies and later as chief executive of Princeton University, the state of New Jersey, and the United States. Efforts to find psychological characteristics to distinguish political types from apolitical types have been small in number and discouraging in results. For example, McConaughy’s study of South Carolina legislators (1950), Hennessy’s survey of Arizona party activists and nonactivists (1959), and Browning and Jacob’s tests of politicians’ motivations (1964) confirmed few of a number of hypotheses that psychological instruments would distinguish politicians from nonpoliticians. However, acknowledged methodological shortcomings in some of these studies make their results indecisive and inconclusive. Wahlke et al. (1962), in a comparison of four state legislatures, found that political interests of the elites they studied were activated at almost any stage of the life cycle, but that the most frequently crucial phase of political socialization occurs at a relatively early age—for many, as early as childhood. Similar conclusions are documented in Hyman (1959), Greenstein (1960), and Easton and Hess (1961; 1962). [See Socialization.]
The other line of research on personality and decision making relates personal characteristics to decision content. One class of such studies searches for correlations between social backgrounds and prior experiences on the one hand and decisions or policies on the other. Matthews (1960) found that legislators who adhere more closely to the internal norms or folkways of the Senate are most successful in getting their bills adopted. And, in turn, senators who adhere most closely to the folkways tend to come from noncompetitive, homogeneous states. Nagel (1962) investigated the relations of more than fifteen variables, such as party affiliation, education, occupation, ethnicity, and group affiliations of judges, to a large number of different kinds of judicial decisions, including those involving administrative regulation, civil liberties, taxation, family relations, business, personal injury, and criminal cases. Statistically significant results of correlations of varying sizes have been obtained for many variables.
Another class of such studies relates unconscious motivations of decision makers to the outcomes of their decision process. A. George and J. George’s biography of Wilson (1956) is a case in point, for the president’s ambitions for power obscured his perception of the actual power situations and led him into self-defeating strategies. Almond and Lasswell (1948) reported that the interaction between the dominance or submissiveness of clients and welfare administrators was predictive of the decisions of the administrators; dominant clients were more likely to obtain favorable decisions from submissive agents than vice versa, and submissive clients were more likely to receive favorable decisions from dominant agents than vice versa. Margaret Hermann (1963) revived content analysis of public speeches as an indirect measure of legislators’ motivations toward power, personal security, other people, tolerance of ambiguity, and ethnocentrism, and from knowledge of some of these unconscious motivations of twenty legislators, she successfully predicted their votes on a scale of foreign policy issues along a nationalism-internationalism dimension. Milbrath found that highly sociable citizens tended to make more contributions to political campaigns than nonsociable citizens, but sociability was not associated with the behavior of Washington lobbyists (1963).
However elemental these personality studies are, they are more advanced than efforts to relate personal attitudes and values to political decisions. This line of research is only beginning to be opened up in American studies. However, the cross-cultural work of McClelland (1961) suggests strongly that motivations toward and values of achievement are related to a society’s economic development; this finding is supported by an investigation of more than forty societies that is based on content analysis of educational materials and projective tests.
However discouraging the outcomes of research on personality and decision making have been to date, the disappointment owes more to the lack of sustained, extensive effort than to any evidence that the task is fruitless or unpromising. The long-standing academic separation of political science from psychology and the diverse kinds of training given to political scientists have hindered the research. Recently completed and published studies once again reveal evidence that the interaction of personality and political decision is a fruitful subject of study. [See Personality, Political.]
Organizational context of decision making
With the possible exception of voting, all political decision making occurs in an organizational context. Organization is not easily defined [see Organizations], but it embraces a number of agreed-upon characteristics, including relatively formal rules of procedure, certain impersonal norms, indirect and mediated communication, relatively stable role expectations, and durability beyond the life of its members. Decision makers in organizations act under some influence from the rules, norms, and expectations of the larger group to which they belong and in which they participate. Thus, organizational decision making is not merely individual or small-group decision making writ large.
The distinguishing characteristics of organization constitute dimensions that may vary from one organization to another. That is, rules of procedure in some organizations are different from those in others (for example, compare the voting procedures in Congress with decision rules in more hierarchically arranged organizations), and these procedures contribute to variations in the substantive content of policy or decision outcomes (e.g., Shuman 1957; Riker 1958). Communication patterns also vary from organization to organization. Mulder (1963, pp. 65–110) has shown experimentally how centralized and decentralized groups differ in their approach to problem solving. Robinson (1962b, pp. 168–190) has related variations of frequency, source, mode, and kind of communication, together with satisfaction with communication processes, to the policy attitudes and votes of legislators.
Political scientists (and also industrial management analysts) have emphasized extent of centralization (and decentralization) as an important dimension of organizations. Some element of hierarchy accompanies all organizations (Simon 1962, pp. 468–469), although there may be variations in the organizational levels at which hierarchy is concentrated, and there may be variations in the accuracy, frequency, and receptivity of upper echelons to feedback from lower ranks and vice versa.
Centralization is but one specific characteristic or dimension of organization. Others, such as division of labor and criteria of recruitment, are increasingly being made operational and researched in governmental organizations (Hall 1963). Earlier efforts to construct gross a priori typologies of organizations are yielding to the empirical search for individual dimensions, which may or may not correlate highly enough to be thought of as constituting distinguishable types of organizations.
Political scientists studying decision making are concerned with whether differences in organization make any differences for decision outcomes. Probably the most studied problem of this type is the comparison of bureaucratic (i.e., executive or administrative) decision making with legislative decision making. Bureaucracies tend to be more impersonal and to have more “programmed” decision rules and more complex and hierarchical structures than legislatures. In general, executive agencies tend to be more innovative than legislatures, to be more capable of obtaining and processing larger amounts of technical information relevant to the problem for decision, and to exercise greater influence over policy (Robinson 1962b; Banks & Textor 1963, especially characteristics 174 and 179).
In political science, most decisional studies have been case studies of particular decisions or of individual decision makers. Bailey (1950) inaugurated legislative case studies with the detailed history of the passage of a single bill in Congress. The Inter-university Case Program pioneered in developing more than fifty cases of administrative, legislative, election, and party decisions. However, most case studies have been atheoretical and have not been directed at building a body of confirmed propositions across cases. Indeed, the possibilities of the case method usually have been regarded as limited to illustrating known principles or generating new ones, but as of little or no value for testing hypotheses. Efforts to use cases for testing hypotheses by generalizing across instances have been begun by Munger (1961) and Robinson (1962b, pp. 23–69).
Case studies of individual decision makers through standard biographical techniques continue, but only a few have followed Lasswell’s early examples of psychological inquiry.
Many national and some international political bodies produce large numbers of decisions and record many of them in a form readily susceptible to quantitative analysis. Roll-call data are especially useful for identifying voting blocs among members of these bodies and for identifying patterns of voting for individual members, whether persons or nations. In decision-making analysis, such roll-call data may be regarded as dependent variables or as outcomes of decision-making processes. Hypotheses can readily be formulated that link these dependent variables to such independent variables as personality, social backgrounds, economic characteristics, and organizational variables. However, because legislatures, courts, and international organizations are essentially noncrisis organizations (as the term is used to designate one kind of decision occasion), these data are relevant only to a limited number of decision situations. Specific techniques used in organizing and interpreting roll-call data include cluster-bloc analysis, Guttman scaling, indexes of cohesion, factor analysis, party unity scores, and similar techniques familiar throughout the social sciences.
Survey techniques (i.e., a sampling of a population of respondents with whom personal interviews are conducted) have been used largely to study nonelite decision making (e.g., voting). The most ambitious of these studies include “panels” of respondents who are reinterviewed periodically during and/or after an election, and perhaps throughout a series of several elections. Although surveys have been confined mostly to nonelite studies, they are also applicable to investigations of elite decision making by interviews with samples of decision makers on one or more aspects of their processes (Robinson 1962b, pp. 168–190, 220–234). [See Survey Analysis.]
Simulation of political decision making has included both man-simulation (e.g., Guetzkow et al. 1963) and computer-simulation (e.g., Pool & Abelson 1961). Both forms constitute operating models of large-scale social processes. The functions of simulation are both heuristic and hypothesis-testing. To formalize models of decision processes requires logical and rigorous statements of the relationships among relevant variables and also of the relationships among propositions containing the variables. And, like any laboratory technique, simulation makes it possible to test hypotheses by controlling some factors while varying others. The use of simulation to study decisions developed later in political science than in the other social sciences, but it promises to be one of the increasingly used methods of the future. [See Simulation.]
Mathematics as a method in political science and decision-making analysis is also likely to increase in use, although, to date, its applications to this field have been less numerous than those in other parts of social science. The next generation of political scientists will have more opportunities to obtain training in mathematical analysis, and their training will include new developments in mathematics that make the tools of that subject more helpful for the social sciences. The functions of mathematics for studying political decision making are the same as its functions for studying other aspects of social or political phenomena. First, mathematics constitutes a formal language for making explicit statements of the relationships among variables and among hypotheses. Second, it provides for the logical deduction, from such rigorous statements, of new hypotheses that may then be tested empirically by other methods, either experimental or field-observational. In short, mathematics serves both to integrate theory and to generate new hypotheses. After the deduction of new hypotheses, however, other techniques must be employed for conducting empirical tests of them.
James A. Robinson
Almond, Gabriel A.; and Lasswell, Harold D. 1948 The Participant Observer: A Study of Administrative Rules in Action. Pages 261–278 in Harold Lasswell (editor), The Analysis of Political Behavior: An Empirical Approach. New York: Oxford Univ. Press.
Bailey, Stephen K. 1950 Congress Makes a Law: The Story Behind the Employment Act of 1946. New York: Columbia Univ. Press.
Banks, Arthur S.; and Textor, Robert B. 1963 A Cross-polity Survey. Cambridge, Mass.: M.I.T. Press.
Barnard, Chester I. 1938 The Functions of the Executive. Cambridge, Mass.: Harvard Univ. Press.
Berelson, Bernard; Lazarsfeld, Paul F.; and McPhee, William N. 1954 Voting: A Study of Opinion Formation in a Presidential Campaign. Univ. of Chicago Press.
Bernoulli, Daniel (1738) 1954 Exposition of a New Theory on the Measurement of Risk. Econometrica 22:23–36. → First published in Latin.
Braybrooke, David; and Lindblom, Charles E. 1963 A Strategy of Decision: Policy Evaluation as a Social Process. New York: Free Press.
Browning, Rufus P.; and Jacob, Herbert 1964 Power Motivation and the Political Personality. Public Opinion Quarterly 28:75–90.
Dahl, Robert A. (1961) 1963 Who Governs? Democracy and Power in an American City. New Haven: Yale Univ. Press.
Downs, Anthony 1957 An Economic Theory of Democracy. New York: Harper.
Easton, David; and Hess, Robert D. 1961 Youth and the Political System. Pages 226–251 in Seymour M. Lipset and Leo Lowenthal (editors), Culture and Social Character: The Work of David Riesman Reviewed. New York: Free Press.
Easton, David; and Hess, Robert D. 1962 The Child’s Political World. Midwest Journal of Political Science 6:229–246.
Edwards, Ward 1954 The Theory of Decision Making. Psychological Bulletin 51:380–417.
Edwards, Ward 1961 Behavioral Decision Theory. Annual Review of Psychology 12:473–498.
George, Alexander L.; and George, Juliette L. 1956 Woodrow Wilson and Colonel House. New York: Day.
Gottfried, Alex 1962 Boss Cermak of Chicago: A Study of Political Leadership. Seattle: Univ. of Washington Press.
Greenstein, F. I. 1960 The Benevolent Leader: Children’s Images of Political Authority. American Political Science Review 54:934–943.
Guetzkow, Harold et al. 1963 Simulation in International Relations: Developments for Research and Teaching. Englewood Cliffs, N.J.: Prentice-Hall.
Hall, Richard H. 1963 The Concept of Bureaucracy: An Empirical Assessment. American Journal of Sociology 69:32–40.
Hennessy, Bernard 1959 Politicals and Apoliticals: Some Measurements of Personality Traits. Midwest Journal of Political Science 3:336–355.
Hermann, Charles F. 1963 Some Consequences of Crisis Which Limit the Viability of Organizations. Administrative Science Quarterly 8:61–82.
Hermann, Margaret G. 1963 Some Personal Characteristics Related to Foreign Aid Voting of Congressmen. M.A. thesis, Northwestern Univ.
Hyman, Herbert H. 1959 Political Socialization: A Study in the Psychology of Political Behavior. Glencoe, Ill.: Free Press.
Katz, Elihu; and Lazarsfeld, Paul F. 1955 Personal Influence: The Part Played by People in the Flow of Mass Communications. Glencoe, Ill.: Free Press. → A paperback edition was published in 1964.
Kelley, Harold H.; and Thibaut, John W. 1954 Experimental Studies of Group Problem Solving and Process. Volume 2, pages 735–785 in Gardner Lindzey (editor), Handbook of Social Psychology. Cambridge, Mass.: Addison-Wesley.
Lasswell, Harold D. (1930) 1960 Psychopathology and Politics. New ed., with afterthoughts by the author. New York: Viking.
Lasswell, Harold D. 1935 World Politics and Personal Insecurity. New York and London: McGraw-Hill.
Lasswell, Harold D. 1948 Power and Personality. New York: Norton.
Lasswell, Harold D. 1956 The Decision Process: Seven Categories of Functional Analysis. Bureau of Governmental Research, Studies in Government. College Park: Univ. of Maryland.
Luce, R. Duncan; and Raiffa, Howard 1957 Games and Decisions: Introduction and Critical Survey. New York: Wiley.
McClelland, David C. 1961 The Achieving Society. Princeton, N.J.: Van Nostrand.
McConaughy, John B. 1950 Certain Personality Factors of State Legislators in South Carolina. American Political Science Review 44:897–903.
MacRae, Duncan 1958 Dimensions of Congressional Voting: A Statistical Study of the House of Representatives in the Eighty-first Congress. University of California Publications in Sociology and Social Institutions, Vol. 1, No. 3. Berkeley: Univ. of California Press.
March, James G.; and Simon, Herbert A. 1958 Organizations. New York: Wiley. → Contains an extensive bibliography.
Matthews, Donald R. 1960 U.S. Senators and Their World. Chapel Hill: Univ. of North Carolina Press.
Michigan, University of, Survey Research Center 1960 The American Voter, by Angus Campbell et al. New York: Wiley.
Milbrath, Lester W. 1963 The Washington Lobbyists. Chicago: Rand McNally.
Miller, Warren E.; and Stokes, Donald E. 1963 Constituency Influence in Congress. American Political Science Review 57:45–56.
Mulder, Mauk (1958) 1963 Group Structure, Motivation and Group Performance. Rev. ed. The Hague and Paris: Mouton. → First published in Dutch.
Munger, Frank J. 1961 Community Power and Metropolitan Decision-making. Pages 305–334 in Roscoe C. Martin et al. (editors), Decisions in Syracuse. Bloomington: Indiana Univ. Press.
Nagel, Stuart S. 1962 Judicial Backgrounds and Criminal Cases. Journal of Criminal Law, Criminology, and Police Science 53:333–339.
Newell, Allen; Shaw, J. C.; and Simon, Herbert A. 1958 Elements of a Theory of Human Problem Solving. Psychological Review 65:151–166.
Pool, Ithiel de Sola; and Abelson, Robert 1961 The Simulmatics Project. Public Opinion Quarterly 25:167–183.
Riker, William H. 1958 The Paradox of Voting and Congressional Rules for Voting on Amendments. American Political Science Review 52:349–366.
Riker, William H. 1962 The Theory of Political Coalitions. New Haven: Yale Univ. Press.
Robinson, James A. 1962a The Concept of Crisis in Decision-making. National Institute of Social and Behavioral Science, Symposia Studies Series, No. 11. Washington: The Institute.
Robinson, James A. 1962b Congress and Foreign Policy-making: A Study in Legislative Influence and Initiative. Homewood, Ill.: Dorsey.
Rogow, Arnold A. 1963 James Forrestal: A Study of Personality, Politics, and Power. New York: Macmillan.
Shuman, Howard E. 1957 Senate Rules and the Civil Rights Bill: A Case Study. American Political Science Review 51:955–975.
Simon, Herbert A. (1947) 1961 Administrative Behavior. 2d ed., with new introduction. New York: Macmillan.
Simon, Herbert A. 1959 Theories of Decision-making in Economics and Behavioral Science. American Economic Review 49:253–283.
Simon, Herbert A. 1960 The New Science of Management Decision. New York: Harper.
Simon, Herbert A. 1962 The Architecture of Complexity. American Philosophical Society, Proceedings 106:467–482.
Snyder, Richard C.; Bruck, H. W.; and Sapin, Burton 1954 Decision-making as an Approach to the Study of International Politics. Foreign Policy Analysis Series, No. 3. Princeton Univ. Organizational Behavior Section.
Snyder, Richard C. et al. (editors) 1962 Foreign Policy Decision-making: An Approach to the Study of International Politics. New York: Free Press.
Snyder, Richard C.; and Paige, Glenn D. 1958 The United States Decision to Resist Aggression in Korea: The Application of an Analytical Scheme. Administrative Science Quarterly 3:341–378.
Snyder, Richard C.; and Robinson, James A. 1961 National and International Decision-making: Toward a General Research Strategy Related to the Problem of War and Peace. New York: Institute for International Order.
von Neumann, John; and Morgenstern, Oskar (1944) 1964 Theory of Games and Economic Behavior. 3d ed. New York: Wiley.
Wahlke, John et al. 1962 The Legislative System: Explorations in Legislative Behavior. New York: Wiley.
White, Leonard D. (1926) 1955 Introduction to the Study of Public Administration. 4th ed. New York: Macmillan.
"Decision Making." International Encyclopedia of the Social Sciences. . Encyclopedia.com. (July 23, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/decision-making
"Decision Making." International Encyclopedia of the Social Sciences. . Retrieved July 23, 2017 from Encyclopedia.com: http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/decision-making
Valerie M. Hudson
American foreign policy may be studied from a variety of perspectives. Historical narrative, institutional analysis, issue area examination, rational choice theory, study of ideational and legal evolution, gendered perspectives, and Realpolitik accounts are all valid and useful approaches to understanding not only American foreign policy but the foreign policy of any nation.
DECISION MAKING AND FOREIGN POLICY ANALYSIS
Decision-making approaches and theories fall within the subfield of foreign policy analysis, within the larger field of international relations. Foreign policy analysis (known as FPA) is distinguished from other theoretical approaches in international relations by its insistence that the explanatory focal point must be the foreign policy decision makers themselves and not larger structural or systemic phenomena. Explanatory variables from all levels of analysis, from the most micro to the most macro, are of interest to the analyst to the extent that they affect the decision-making process. Thus, of all subfields in international relations, FPA is the most radically integrative theoretical enterprise. Investigations into the roles that personality variables, perception and construction of meaning, group dynamics, organizational process, bureaucratic politics, domestic politics, culture, and system structure play in foreign policy decision making are the core research agenda of FPA. But as Richard Snyder, one of the founders of FPA, and his colleagues Henry Bruck and Burton Sapin noted in 1954, these are important only insofar as they have an impact on the only true agents in international affairs, human decision makers.
In a sense, then, in the age-old social science debate concerning whether agents or structures are the primary determinants of behavior in the social world, FPA comes down squarely on the side of agents. FPA is the agent-centered theory of international relations. Foreign policy analysts argue that without an account of human agency in international relations theory, one cannot develop a satisfactory account of change over time in international affairs. Furthermore, given the immense destructive power inherent in international relations, explanations that omit an examination of the role and efficacy of human agency in using and containing that power do less than they ought.
Here, then, is yet another difference between FPA approaches and other accepted approaches to understanding international relations. Not only does FPA give an account of agency, but it gives a specific, rather than a general, account of agency. In such approaches as game theory and rational choice explanations of foreign policy, the actor is conceptualized as a generic, rational, utility-maximizing decision maker. In contrast, theories of FPA unpack that generic "black-boxed" actor and discover that the idiosyncrasies of the actor do affect foreign policy choice. To use terms coined by Alexander George, FPA is more interested in "actor-specific" theory than "actor-general" theory.
In sum, then, FPA produces radically integrative, agent-oriented, and actor-specific theory. In these three ways, it remains a unique and easily distinguishable subfield of international relations.
A Word About the Explanandum What is it that foreign policy analysts seek to explain? To use a common phrase, what is the dependent variable in FPA?
Despite attempts to formulate "foreign policy" in terms of consistently operationalized variables, it must be admitted that what is to be explained may vary across research programs within FPA. Some programs focus on foreign policy as an output of decision making; others focus on the decision-making process in foreign policy. For example, the use of events data (discussed below) as one's dependent variable is an example of conceptualizing foreign policy as an output. In this tradition, foreign policy "events" gleaned from news media can be coded for some set of variables, such as the level of commitment implied by the event on the part of the acting nation. Standardized coding then allows for direct comparison of the outputs of various nation-state actors, as well as permitting a longitudinal analysis of the foreign policy behavior of one nation.
It is also possible to take a more process-oriented approach to what is meant by foreign policy. For example, one could use the policy positions of various actors as the dependent variable, tracing how a particular position becomes dominant within a decision-making group over time. One could walk the cat back yet another step and examine how such policy stances crystallize in the first place from basic cognitive processes such as perception, problem representation, and construction of meaning. Another step back would be to ask how the decision-making group comes to be in the first place, how structures and processes of groups are created and changed over time within a society. Role conceptions concerning the nation-state, and concerning various institutions and groups within the nation-state, could also be the focus of inquiry.
Both approaches to the explanandum in FPA have been fruitfully used, and insights from each type of research inform the other. It is true that choice of explanandum affects choice of methodology: aggregate statistical testing may be useful in events data studies, whereas process-tracing and interpretivist analysis might be more helpful with process-oriented conceptualizations of foreign policy.
SURVEY OF FPA THEORETICAL APPROACHES
Individual Psychology and Cognition Characteristics of the individual decision maker may be very important in understanding the decisions ultimately made. Harold and Margaret Sprout, in their pioneering work Man-Milieu Relationship Hypotheses in the Context of International Politics (1956), believe that analysis of this "psychomilieu" is crucial to understanding nation-state foreign policy. Margaret G. Hermann argues that certain conditions increase the probability that the personal characteristics of leaders will affect foreign policy: when the decision maker has wide decision latitude within the governmental system, when the situation is nonroutine, ambiguous, or carries with it very high stakes, or when the policy under discussion is a long-term policy or strategy. In addition to these situational variables, the personality of a leader may also be more influential, according to Hermann, when the leader does not have formal diplomatic training or when the leader is not especially attentive or sensitive to changes in external circumstances.
Furthermore, the analyst must remain aware of the limitations and vulnerabilities of human beings, both in a physical sense and a cognitive sense. Physically, human decision making can be affected by stress levels, lack of sleep, acute or chronic illness, mental pathologies, medications being used, age, and so forth. For example, psychologists have found that decision making tends to be of higher quality when moderate levels of stress are present. Too low a stress level or too high a stress level can be counterproductive. But there are also cognitive limitations inherent in being human. The human brain is so complex that human beings often rely on reasoning shortcuts or heuristics to make decisions. Errors of representation, the "gambler's fallacy" (where the gambler believes that an outcome is more likely to occur if it has not occurred lately), and many other biases may affect choice. Furthermore, a person's ability to handle complexity has an upper limit: psychologists tell us that even the most conceptually complex human reasoner can only hold seven things in mind simultaneously. Robert Jervis explores these factors in depth in Perception and Misperception in International Politics (1976).
Humans are also a diverse lot in terms of their personal belief systems. At birth, each human begins to develop beliefs about how the world works and what is to be valued. In FPA, several scholars have created theoretical frameworks to typologize such belief systems. Margaret Hermann created a set of "foreign policy orientations" based on elements such as nationalism, belief in ability to control events, distrust of others, and task-affect orientation, among others. Alexander George promulgated the tool of "operational code analysis," wherein the analyst determines a leader's beliefs with reference to how best to accomplish goals. David Winter has sought to typologize the motivating forces for individual leaders. Such frameworks of analysis often rely on the methodology of content analysis, where a leader's speeches and writings are analyzed thematically or quantitatively to provide insight into the specifics of his or her belief system. Learning and change in knowledge systems has been a focus of inquiry for Jack Levy, and Matthew Bonham's methodology of cognitive mapping of content-analyzed material can be used to trace changes in knowledge structures over time.
Small-Group Dynamics and Problem Representation It is arguably within the context of small-group deliberations that most foreign policy decisions are made. Thus, the study of group decision making becomes a very important element of FPA. As noted, FPA owes a great debt to Richard Snyder and his colleagues Henry Bruck and Burton Sapin for insisting that researchers look below the nation-state level of analysis to the actual decision-making groups involved.
With regard to small groups in particular, as opposed to larger collectivities such as organizations and bureaucracies, the seminal work is undoubtedly Irving Janis's classic Groupthink (1972). Using examples taken from the annals of American foreign policy, such as the Bay of Pigs invasion, Janis was able to show how the desire to maintain group consensus and subjective feelings of personal acceptance by the group can cause deterioration of decision-making quality. Such groups wind up being "soft-headed" but "hardhearted" as outgroups are dehumanized and ingroup decision processes become sloppier. A hallmark of groupthink is the risky shift, where the group is prepared to make riskier decisions than any individual member of the group would be prepared to make on his own. A sense of group invulnerability and omniscience creates psychological disincentives to rethinking the group's initially preferred policy or even to constructing contingency plans in the event of failure of that policy. A later generation of scholars advanced Janis's work and explored the scope conditions under which groupthink is more or less likely.
The study of how a group comes to an initial representation of the problem at hand, and how, then, the group members aggregate their differing preferences, is another research agenda at this level of analysis. One way of analyzing group problem representation is to view group discussions as the attempt to jointly author a “story” that makes the problem intelligible. Donald A. Sylvan and Deborah Haddad, in the volume Problem Representation in Foreign Policy Decision Making (1998), suggest that such coauthorship allows for the action decision to be made collectively. When rival story lines are offered and collide, the group as a whole must work its way back to a consistent story line through persuasion and analysis. Yuen Foong Khong’s important book Analogies at War (1992) demonstrates how the use of conflicting analogies to frame the problem of Vietnam led to conceptual difficulties in reasoning about policy options. The “Korea” analogy gained ascendance, according to Khong, without sufficient attention paid to the incongruities between the two sets of circumstances. Thus, the debate over metaphors and analogies used to understand a new situation may predispose a group’s policy response, possibly with tragic consequences.
How the structure and the process of a group affect decision outcomes, making some outcomes more or less likely, is also an interesting question. The role played by the members—as representatives of a larger group, or as autonomous actors—coupled with the size of the group and the leadership style used, may make deadlock more probable than agreement. These structural variables may in turn be influenced by rules for resolving conflict within the group, such as majority voting, two-thirds voting, or unanimity. Theory on coalition-building and bargaining may be invaluable in understanding how a particular decision is ultimately selected. Furthermore, certain types of leaders prefer certain types of group structures and processes. Theoretical leverage on the most likely outcome for various types of groups may be gained by these types of analysis.
Organizational Process and Bureaucratic Politics Although actual foreign policy decisions may be made primarily in small groups, the policy positions of group members and the subsequent implementation of decisions made by small groups are only well understood when the analyst includes insights at the organizational and bureaucratic levels of analysis. American foreign policy is dominated by several large organizations, such as the Defense Department and the State Department, and the resulting web of organizations—the bureaucracy—may have a political dynamic all its own. Indeed, to see this bureaucracy as merely the executive arm of foreign policy is to underestimate the powerful political forces that drive organizations. These powerful motivations—the desire for expanded "turf," expanded budget, expanded influence vis-à-vis other organizations, as well as the desire to maintain organizational "essence," "morale," and "culture"—may result in a radical undermining of the supposedly rational decision-making process in foreign policy. Morton Halperin's Bureaucratic Politics and Foreign Policy (1974) gives unforgettable examples of this unhappy dynamic with reference to the era of the Vietnam War.
Graham Allison's 1971 Essence of Decision (and its 1999 update) examines not only the subversion of rationality at the decision-making stage but also the subversion of rationality at the implementation end. Large organizations typically develop standard operating procedures (SOPs) that allow for quicker, more efficient responses than would otherwise be possible with collectivities numbering in the thousands or even millions of persons. Unfortunately, these SOPs are fairly insensitive to the nuances of external circumstances as well as to efforts by national leaders to adapt or modify them. Indeed, national leaders may not even comprehend that when they give an executive order it is first translated into a series of sequential SOP steps. This translation may leave much to be desired in terms of flexibility, creativity, and appropriateness. For example, Allison found that the Soviet Strategic Rocket Forces, given the order in 1962 to construct ballistic missile sites in Cuba, used the same SOP they had developed for doing so in the Soviet Union. This SOP did not include a provision for camouflaging the construction sites. Thus, American U-2s were able to photograph the sites, and American analysts were immediately able to identify what they were, giving President John F. Kennedy and his advisers a crucial window of time in which to compel the Soviets to abandon their intentions.
Domestic Politics Surely there is no better case than that of American foreign policy with which to demonstrate the influence of domestic political considerations on policymaking. The chief executive must stand for election every four years and as a result may be constrained in his ability to act as he otherwise would absent such electoral considerations. The Congress and the judiciary also have unique roles to play in American foreign policy, and players there may also be facing political imperatives. The two-party system of American politics also plays havoc with the rationality of decision making, as actors must not only think of their own well-being but the relative standing of their party vis-à-vis the other. Furthermore, the variety of vocal special interest and lobbying groups, not only national but also transnational in nature, is positively dizzying.
Robert Putnam suggests that we understand foreign policy as a "two-level game" being played by the leadership of the nation. At one level, the leadership is trying to retain its domestic political standing and enhance its electoral prospects and the electoral prospects of its allies. At another level, the leadership is trying to negotiate with foreign powers to achieve foreign policy objectives. A bad move at either level can imperil one's prospects at the other level. The astute leader attempts to create opportunities whereby moves at one level directly translate into advantage at the other. Interestingly, one counterintuitive finding is that the more constrained the leader can claim to be in the domestic arena, the more insistent he can be in the foreign arena. Thus, the threat that the U.S. Congress would never ratify a particular treaty can be used by administration officials to successfully maneuver other foreign actors to move closer to their own preferred bargaining position.
Of course, the reverse is also a recognizable phenomenon in international relations. Sometimes international situations or policies are used by the government to deflect domestic criticism and bolster support among its citizenry. The oft-noted “rally ’round the flag” effect, wherein an international crisis involving confrontation with a hostile power increases the approval rating of a president, is one that sometimes is purposefully used by an embattled regime. Both Argentina and Great Britain arguably used the Falklands controversy of 1982 for this purpose.
Joe D. Hagan has attempted to create a cross-national database of the fragmentation and vulnerability of political regimes, with special reference to executive and legislative structures. His data set includes ninety-four regimes in thirty-eight nations over a ten-year period. He was able to assess whether foreign policy behavior, such as level of commitment in policy, is affected by political opposition. He discovered, among many other findings, that military or party opposition to the regime does indeed constrain possible foreign policy action.
In addition to more formal political group influence, there has been a robust research agenda tracing the relationship between U.S. public opinion and U.S. foreign policy. After World War II but before the Vietnam War era, it was an American truism that public opinion did not drive foreign policy, as "politics stopped at the water's edge." The Vietnam trauma undermined that consensus, and this erosion was accelerated by the end of the Cold War and the rise of the global economy. Now Americans could see plainly that what happened in Thailand, for instance, might affect their pensions. International arrangements such as the North American Free Trade Agreement and the World Trade Organization could be seen to have local effects. The responsiveness of the national leadership to public opinion can be seen most plainly in the swift retreat of U.S. forces from Somalia following the downing of two American helicopters in Mogadishu, with television footage of a soldier's body being dragged through the streets by an angry mob. Truly, as many have said, there is now a tangible "CNN effect" that must be taken into account when studying American foreign policy.
Culture and Ideational Social Construction Those FPA scholars who study the effects of culture and ideational social construction on foreign policy justifiably assert that what large collectivities believe to be true and believe to be good affects what those collectivities then do. The world is not only material—it is ideational—and often the ideational can be a more powerful force than the material.
One way to examine this issue is to investigate the effects of differences in culture on resultant foreign policy. Each national culture constructs a unique web of beliefs, meanings, values, and capabilities based on its idiosyncratic historical experiences. The "heroic history" of a nation is replete with lessons in what is precious and how one best protects those values. Such differences may aggravate internation hostilities, sometimes even unintentionally. For example, on the eve of serious negotiations, American negotiators are likely to state what they think an acceptable compromise would be and view their task as persuading the other party of the correctness of this view. Chinese negotiators, on the other hand, are likely to denounce their negotiating partner on the eve of serious negotiations, suggesting there can be no compromise at all. Unless each party understands the cultural proclivities of the other, fundamental misunderstanding and heightened hostilities may result.
Value differences may lead to misunderstanding as well. For example, Americans proudly proclaim that they would never negotiate with terrorists or give in to their demands. The Japanese, on the other hand, see no shame at all in negotiating with terrorists. When faced with a threat by another nation, the American response is to isolate and threaten that nation. However, in a 1999 study, Valerie M. Hudson found the Russian response was to befriend and trade with that nation so that the threat might be erased in a peaceful manner.
How do these differences in culture arise in the first place? They arise through a shared national experience that is interpreted by human agents who then undertake the task of persuading their compatriots that this interpretation is a good and appropriate one. Scholarly work has been done on each of these elements.
Helen Purkitt has used the methodology of the "think-aloud protocol" to study how an individual comes to an interpretation of a situation. Experimental subjects, including policymakers, were asked by Purkitt to verbalize their thought processes as they deliberated on policy issues. Purkitt thus was able to "see" which aspects of a situation were salient for which persons, how they synthesized uncertainty with analogy in their interpretations, and how long it took for a particular interpretation to become accepted and treated as the natural interpretation of a situation. G. R. Boynton used textual exegesis of congressional hearings to investigate crystallization of understanding among committee members, finding that members would attempt to narrate a version of the events under question to each other and build a coherent narrative of the whole through smaller pieces upon which all could agree. Only when testimony had been translated into recognizable elements from this jointly constructed narrative were committee members able to fully understand the events.
In addition to the construction of meaning for individuals and small groups of decision makers, meanings may be constructed and shared among larger groups as well. National identity is continually evolving. Although the roots of national identity may lie in the history of the nation, it is current interpretation of that identity that may be more useful to the analyst. One important theoretical framework in this area of inquiry is the national-role-conception approach first developed by Kal Holsti in 1970. He argues that any social system, including a social system made up of nation-states, creates a set of differentiable roles that include both privileges and responsibilities. A variety of factors, including domestic conditions, distribution of power within the system of states, history, legal precedent, and others, help determine which nations gravitate toward which roles. A nation-state then develops a distinctive national role conception, which renders that nation-state's behavior more intelligible and predictable to the analyst. So, for example, while the United States may see itself in the role of a "bloc leader" (leader of the Western bloc), France views itself as a "regional leader" in Europe. Such self-conceptions may clash, as they often do in the case of France and the United States. National-role-conception analysis may uncover differences that might otherwise go unnoticed; for example, Marijke Breuning points out that although Americans might lump Belgium and the Netherlands together as nations with very similar attributes, the Dutch tend to see themselves playing a proactive role in encouraging development in less developed countries due to their heroic history of involvement in exploration and colonization. Belgium, on the other hand, a creation of the major European powers, never took such an initiative and became a particularly indifferent former colonial power.
National identity or national role conceptions do change over time. Tracking that change involves detailed analysis of speeches and texts by those who help form opinion within society. Ideas are very useful to policy entrepreneurs, and identifying who is pushing what idea for what reason may help the analyst keep his or her finger on the pulse of identity evolution within a nation-state. Often ideas must be couched in the language of historical national identity to find favor with larger national audiences. For example, Hellmut Lotz noted in 1997 that on the eve of the Al Gore–Ross Perot debate over NAFTA, a sizable percentage of Americans were undecided over whether including Mexico in a free trade agreement with the United States and Canada was a good thing or not. During the course of the televised debate, both men made reference to key themes of American national identity: the American Dream, American exceptionalism, American strength, American vulnerability, American isolationism, and so forth. Lotz found that the audience of undecideds resonated overwhelmingly with the Gore portrayal of America as strong and fearless rather than with the Perot portrayal of America as weak and needing to protect itself from foreigners. Thus, despite the fact that both men were speaking in the context of shared meaning concerning America, Gore was the more successful policy entrepreneur, for he was able to sway voters to his position by means of his selective emphasis on strategically chosen aspects of American identity. In a similar vein, in a 1993 article Jeffrey Checkel was able to reconstruct the trail of policy-entrepreneur intervention in the development of Mikhail Gorbachev's perestroika policies. To trace the positions and the network contacts of persons holding particular ideas is a formidable task for the analyst, but one which is very rewarding if the focus is on possible change in, rather than continuity of, foreign policy direction.
STRUCTURAL AND SYSTEMIC FACTORS
Although foreign policy analysis places its focus on decision makers and decision making, government officials are not ignorant of salient features of the international system. Characteristics of the international system may constrain what policymakers feel they can do, but simultaneously may provide opportunities to advance their nation-states' purposes. Furthermore, nation-states can attempt to reflexively shape the system so as to make themselves more secure.
Two broad approaches to this topic may be differentiated. One approach is to examine and compare the national attributes of nations; nation-state behavior and internation interaction may then be explained by reference to these attributes. Second, one could look at the system in a more abstract manner, by investigating non-unit-specific factors such as anarchy within a system, the existence of international regimes on particular issues, evolving conceptualizations of legitimate and illegitimate behavior within the system, and so forth.
Regarding the first, attribute-centered approach, one of the classic works of foreign policy analysis is the "pre-theory" framework of James Rosenau. In a seminal 1966 article, Rosenau suggests that the key to understanding a nation-state's behavior is to uncover its "genotype." That is, every nation has particular attributes that may make certain factors more determinant of its foreign policy than others. Using three dichotomous variables (size: small/large; wealth: developed/underdeveloped; accountability: open/closed), Rosenau posited eight genotypes: small/developed/open, large/underdeveloped/closed, and so on. Depending on the genotype of the nation under scrutiny, Rosenau then posited that certain factors would be more important for the analyst to investigate. So, for example, in a large/developed/open nation-state, Rosenau asserts that role variables (akin to national role conception) would be the most important factor, with systemic and idiosyncratic variables (for example, personal characteristics of leaders) being least important. On the other hand, in a small/underdeveloped/closed nation-state, idiosyncratic and systemic factors would be precisely those of greatest significance.
A more modern variant of this attributional approach is the theory of the democratic peace, which highlights the empirical fact that democracies virtually never go to war with one another. Although some have debated the definition of democracy (is Yugoslavia a democracy?), and others have suggested that democracy here is serving as a stand-in for a more fundamental factor such as cultural similarity, the behavior of nation-states is again being explained in terms of their attributes. Indeed, one might claim this approach can also subsume Marxist-Leninist explanations of war, which focus on the profit imperatives of imperialist states to explain the bellicosity of the European powers.
The second approach—more oriented to the structure of the system itself as opposed to the attributes of its members—is a well-established research tradition in international relations. Here, no matter what the attributes of states, the system itself may exhibit properties that can be determinant of state behavior. So, for example, in the work of Kenneth Waltz, the primary factor affecting state behavior is the anarchy of the international system. In the absence of a world government, benign or tyrannical, able to enforce a code of behavior, states must help themselves, for they cannot trust other states in the system not to defect from pledges of cooperation that they may have made. An emphasis on deterrence, a search for primacy and power, and a notable lack of cooperation even on important issues are all hallmarks of anarchy. Even states that actively desire to behave otherwise may be constrained by the straitjacket of system structure in this theory.
There may be other elements of the system that fine-tune the effects of anarchy. For example, the existence or nonexistence of intergovernmental organizations, the strength of international legal precedent, the number of poles of power in the system (bipolar, tripolar, multipolar), the degree of globalization of interaction (including trade and communication), patterns of internation dependency, relevant technology (such as the development of weapons of mass destruction and the ballistic missiles to hurl them half a world away), may all play a role in modifying the effects of anarchy. Some theorists assert that anarchy can be overcome among states dealing with particular issues, and real sacrifice and cooperation can then be expressed by the nation-state. Agreements on ozone depletion, destruction of chemical and biological weapons, renunciation of land mines, and so forth can be seen as examples of this interpretation.
Other scholars would go even farther and claim that nation-states can proactively shape and mold the international system. As one such theorist, Alexander Wendt, put it in 1992, "anarchy is what states make of it." Drawing upon the more ideational literature mentioned, Wendt and others believe that shared meanings develop between governments in the system and that such shared meanings can transform what happens in the world system. For example, norms against genocide, assassination, rape in war, torture, slavery, use of land mines, and so forth have arisen within the international system and are becoming a robust basis for international action. Before the 1990s it would have been inconceivable for the former president of Chile, Augusto Pinochet, to have been held for an extradition hearing in Britain on the order of a Spanish judge. Yet by the turn of the twenty-first century Americans were wondering whether their own leaders might be tried for war crimes in a new international criminal court. These evolving norms did not arise from material conditions but from the formation of an ideational consensus or near-consensus.
In sum, then, the international systemic context of decision making must be factored into the theories of leader personality, small group dynamics, and bureaucratic and domestic politics that we have already examined.
Events Data No discussion of foreign policy analysis or decision-making theories would be complete without at least a cursory mention of events data. In most of the theories mentioned above, which are pitched at subnational levels of analysis, data indicative of the process of decision making hold center stage. Thus, content analysis of speeches and texts, analysis of tapes or records of group discussions, simulations and experiments, and process-tracing and other methodologies that lay bare the actual mechanics of decisions and their antecedents and consequences typically predominate in the empirical work of these theoretical efforts. We have previously mentioned that one could also focus on the outcomes of decision-making processes. The events data movement was designed to do just that.
Conceived when the social sciences were enamored of aggregate statistical testing of generalized hypotheses, the foreign policy "event" was to be the intellectual counterpart of the "vote" in the study of American politics. Votes could be tabulated, and a variety of statistical tests could be performed, to determine whether voting correlated with possible explanatory factors such as race, gender, ethnicity, socioeconomic status, and so forth. The foreign policy event was to have the same utility for international relations.
One could imagine tabulating every action every nation took every day, whether those actions be of a more diplomatic or rhetorical nature or something more concrete, such as the use of military force. A number of variables would naturally be coded, such as the identity of the acting nation, the identity of the recipients of the action, the date, and some scaling or categorization of the action itself. These events would be gleaned from open source material, such as newspapers and wire services. Once thousands of events had been collected, aggregate statistical testing for robust correlations, longitudinal tracking of the evolving relationships between any given nations, patterns preceding the outbreak of violence in the system, and a host of other potentially interesting questions could then be addressed.
Even the U.S. government was sufficiently interested in the potential of this data collection effort to fund various projects to the tune of several million dollars in the 1960s and 1970s. Some of these event data sets continued to be updated into the twenty-first century, some by computer programs. The more famous events data sets include WEIS (World Event/Interaction Survey), COPDAB (Conflict and Peace Data Bank), CREON (Comparative Research on the Events of Nations), and KEDS (Kansas Event Data System).
Events data began to lose its appeal to the wider subfield as it became understood that much of the richness and complexity of decision making was simply missing in an events data format. Thus foreign policy analysis largely returned to the study of process variables more conducive to a focus on decision making.
Integrative Efforts We have spoken of foreign policy analysis as a radically integrative intellectual exercise that requires the analyst to know quite a bit about phenomena at a variety of levels of analysis, from leader personality to system characteristics. All of this information must then be filtered through one's model of the decision makers and the decision-making process in order to gauge what policy choices are most likely in a given situation. This is a fairly tall order, and yet the foreign policy and national security establishment of the U.S. government must make these types of analyses every day.
It would be desirable for the foreign policy analysis scholarly community to offer some advice on how this is to be done in a theoretical sense. How does one actually accomplish such integration? What are the possible outputs of such an integrative exercise? Perhaps not surprisingly, FPA scholars struggle to provide such advice. When Rosenau wrote his "pre-theory" article in the 1960s, with its goal of tracing the influence of factors across several levels of analysis, the most he could offer was a ranking of the importance of each level of analysis based on the genotype of the nation under study. Frankly, FPA has not progressed much past Rosenau's offering; almost forty years later, very little self-consciously integrative theoretical work exists in FPA. The CREON II project (1971–1993) developed a creative theoretical framework for integration. The model comprised three fundamental components. In the first, called the Situational Imperative, the details of the situation at hand, including the type of problem, the relationships between the nation-states involved in the problem, the power distribution across the involved nations, and other variables would provide a macro-level "cut" at what the probable foreign policy choice would be. A second component, called Societal Structure and Status, would offer information from the levels of domestic politics and culture. The third and central component was termed the Ultimate Decision Unit. Into this component, representing the actual decision makers in their decision-making setting, information from the first two components of the model would be introduced.
Three types of ultimate decision units were envisioned. Each type of unit came with its own set of most important decision-making variables. For example, in a predominant leader decision unit, variables relating to the head of state and his or her interaction with advisers and style of processing information about the situation at hand would be most important. In the second type of unit, the single group decision unit, theoretical literature about small-group structures and processes, groupthink, and coalition building become crucial. In the third type of unit, the multiple autonomous actor decision unit, literature about bargaining and conflict resolution would be most salient. In another innovative move, country experts would be asked to provide the majority of the inputs for the models. Several empirical cases analyzed by the decision units model were presented in a special issue of the journal Political Psychology in 2001.
The Question of Evaluation Before concluding this survey of foreign policy analysis literature, it must be noted that although the contemporary focus is on explanation of foreign policy, the subfield began with the aspiration that the insights of FPA could be used to evaluate and improve the quality of foreign policy decision making. Many early FPA scholars, such as Irving Janis and Morton Halperin, understood that the price of low-quality foreign policy decisions was death for innocent persons, both combatants and noncombatants. Indeed, in an era of nuclear weapons, the scale of such tragic deaths could be massive. If no other subfield of international relations attempts it, at least FPA should shoulder the responsibility of revealing the true and large extent to which human agency shapes international affairs. By such revelation, useful lessons and insights for the policymakers of today and tomorrow might be drawn.
Although much of scholarly FPA has turned from that task, at the turn of the twenty-first century there were still scholars dedicated to its fulfillment. Alexander George, in particular, through such works as Bridging the Gap (1993), has done much to keep this normative agenda alive. Works in this vein, such as Good Judgment in Foreign Policy (2001), edited by Stanley Renshon and Deborah Welch Larson, continued to appear. It is hoped that a fuller engagement with agential questions will arise again within the subfield of FPA. From such an engagement, greater policy relevance will be achieved, which would be a positive step forward.
The decision-making approaches and theories associated with the subfield of foreign policy analysis are unique in international relations for their attention to the specific human agents behind every foreign policy choice. Rather than agent-general deductive systems, such as those found in game theory, a more detailed and particularistic account of human agency is sought. In addition to this agential focus, a decision-making approach mandates that information from multiple levels of analysis be collected and synthesized in a fashion that parallels the way the actual decision makers collect and synthesize such information. FPA thus becomes a profoundly integrative theoretical enterprise as well.
This type of approach is noteworthy for its potential not only to integrate disparate variables from distinct levels of analysis but also to integrate currently disconnected domains of human knowledge and activity concerning international affairs. Two notable examples are the disconnect between international relations and comparative politics within the discipline of political science, and the disconnect between both fields as studied in the academy on the one hand and the foreign policymaking establishment of the government on the other.
Decision-making approaches and foreign policy analysis can provide some needed connections here. By integrating variables at the supernational and national levels of analysis (the traditional purview of international relations) with variables at subnational levels of analysis (the traditional realm of comparative politics), FPA provides theoretical and empirical linkages that demonstrate how each subfield could usefully inform the other. By emphasizing the decision maker and the decision-making process, by exploring the agency inherent in foreign policy making, by pointing out useful lessons from the study of past foreign policy decision making's successes and failures, FPA has the potential to render the knowledge of the academy useful to real practitioners. Given the immense destructive power that can be unleashed at the international level, it is surely incumbent upon the academy to "bridge the gap" and offer its best insights as a contribution to the peace and safety of the world.
Allison, Graham T. Essence of Decision: Explaining the Cuban Missile Crisis. Boston, 1971. One of the best works of foreign policy analysis.
Allison, Graham T., and Philip Zelikow. Essence of Decision: Explaining the Cuban Missile Crisis. New York, 1999. An update of an important work using material declassified in the 1990s.
Bonham, G. Matthew, Victor M. Sergeev, and Pavel B. Parhin. "The Limited Test-Ban Agreement: Emergence of New Knowledge Structures in International Negotiations." International Studies Quarterly 41 (1997): 215–240.
Boynton, G. R. "The Expertise of the Senate Foreign Relations Committee." In Valerie M. Hudson, ed. Artificial Intelligence and International Politics. Boulder, Colo., 1991.
Breuning, Marijke. "Culture, History, and Role: Belgian and Dutch Axioms and Foreign Assistance Policy." In Valerie M. Hudson, ed. Culture and Foreign Policy. Boulder, Colo., 1997.
Callahan, Patrick, Linda Brady, and Margaret G. Hermann, eds. Describing Foreign Policy Behavior. Beverly Hills, Calif., 1982.
Checkel, Jeffrey T. "Ideas, Institutions, and the Gorbachev Foreign Policy Revolution." World Politics 45 (1993): 271–300.
East, Maurice A., Stephen A. Salmore, and Charles F. Hermann, eds. Why Nations Act: Theoretical Perspectives for Comparative Foreign Studies. Beverly Hills, Calif., 1978.
George, Alexander L. "The 'Operational Code': A Neglected Approach to the Study of Political Leaders and Decision-making." International Studies Quarterly 13 (1969): 190–222. Another seminal article for those interested in cognitive approaches.
——. Bridging the Gap: Theory and Practice in Foreign Policy. Washington, D.C., 1993. An impassioned call for the academy to inform foreign policy decision making.
Hagan, Joe D. "Regimes, Political Oppositions, and the Comparative Analysis of Foreign Policy." In Charles F. Hermann, Charles W. Kegley, Jr., and James N. Rosenau, eds. New Directions in the Study of Foreign Policy. Boston, 1986.
Halperin, Morton. Bureaucratic Politics and Foreign Policy. Washington, D.C., 1974. A classic on the subject.
Hermann, Charles F. "Decision Structure and Process." In Maurice East et al. Why Nations Act. Beverly Hills, Calif., 1978.
Hermann, Margaret G. "Personality and Foreign Policy Decision Making: A Study of 53 Heads of Government." In Donald A. Sylvan and Steve Chan, eds. Foreign Policy Decision Making: Perception, Cognition, and Artificial Intelligence. New York, 1984.
Holsti, Kal J. "National Role Conceptions in the Study of Foreign Policy." International Studies Quarterly 14 (1970): 233–309. The seminal article on national role conception that started a new research agenda that continues to this day.
Hudson, Valerie M. "Cultural Expectations of One's Own and Other Nations' Foreign Policy Action Templates." Political Psychology 20 (1999): 767–802.
Hudson, Valerie M., with Christopher A. Vore. "Foreign Policy Analysis Yesterday, Today, and Tomorrow." Mershon International Studies Review 39 (1995): 209–238. A good and fairly comprehensive overview of the subfield of foreign policy analysis.
Janis, Irving. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. 2d ed. Boston, 1982. A classic that is well known in social science circles.
Jervis, Robert. Perception and Misperception in International Politics. Princeton, N.J., 1976. Another classic work on perception.
Khong, Yuen Foong. Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965. Princeton, N.J., 1992.
Levy, Jack. "Learning and Foreign Policy: Sweeping a Conceptual Minefield." International Organization 48 (1994): 279–312.
Lotz, Hellmut. "Myth and NAFTA: The Use of Core Values in U.S. Politics." In Valerie M. Hudson, ed. Culture and Foreign Policy. Boulder, Colo., 1997.
Political Psychology. Special issue on the Decision Units model, 2001.
Purkitt, Helen E. "Problem Representations and Political Expertise: Evidence from 'Think Aloud' Protocols of South African Elite." In Donald A. Sylvan and James F. Voss, eds. Problem Representation in Foreign Policy Decision Making. Cambridge and New York, 1998.
Putnam, Robert. "Diplomacy and Domestic Politics: The Logic of Two-Level Games." International Organization 42 (1988): 427–460. A foundational article that should be read by those seeking to link foreign and domestic politics.
Renshon, Stanley A., and Deborah Welch Larson, eds. Good Judgment in Foreign Policy. New York, 2001.
Rosenau, James N. "Pre-Theories and Theories of Foreign Policy." In R. Barry Farrell, ed. Approaches in Comparative and International Politics. Evanston, Ill., 1966.
Schrodt, Philip A. "Event Data in Foreign Policy Analysis." In Laura Neack, Jeanne Hey, and Patrick J. Haney, eds. Foreign Policy Analysis: Continuity and Change in Its Second Generation. Englewood Cliffs, N.J., 1995.
Snyder, Richard C., H. W. Bruck, and Burton Sapin. Foreign Policy Decision-Making: An Approach to the Study of International Politics. Glencoe, Ill., 1962. Arguably the work that began the decision-making theoretical enterprise in foreign policy.
Sprout, Harold, and Margaret Sprout. Man-Milieu Relationship Hypotheses in the Context of International Politics. Princeton, N.J., 1956. Another classic, emphasizing the need to integrate models of human decision making with models of the context in which those decisions are being made.
Sylvan, Donald A., and James F. Voss, eds. Problem Representation in Foreign Policy Decision Making. Cambridge and New York, 1998.
'T Hart, Paul, Eric K. Stern, and Bengt Sundelius, eds. Beyond Groupthink: Political Group Dynamics and Foreign Policy-Making. Ann Arbor, Mich., 1997.
Waltz, Kenneth N. Theory of International Politics. Reading, Mass., 1979. A classic of more mainstream international relations.
Wendt, Alexander. "Anarchy Is What States Make of It: The Social Construction of Power Politics." International Organization 46 (1992): 391–425. A good introduction to the constructivist turn in international relations theory.
Winter, David G. The Power Motive. New York, 1973.
See also The Behavioral Approach to Diplomatic History; Public Opinion.
DECISION MAKING AND CUBA
It is interesting to note that several of the most important works in foreign policy analysis use the same case study involving U.S. foreign policy toward Cuba during the Kennedy administration. Specifically, the Bay of Pigs invasion of 1961 and the Cuban missile crisis of 1962 have received more attention by foreign policy analysts than any other cases. The two crises provide a neat set of intellectual bookends: How could the same president, surrounded by approximately the same advisers, mess up so royally in April 1961 and yet acquit himself so heroically and save the world from nuclear holocaust sixteen months later? Two of the most important works in the foreign policy analysis tradition, Graham Allison's Essence of Decision and Irving Janis's Groupthink, use these cases to demonstrate the crucial role of decision making in international affairs.
Although one might think that such scholarly attention would eventually wane as the crises recede historically, events have sparked renewed interest. For example, the release in 1997 of tapes made by Kennedy during ExCom (Executive Committee) deliberations prompted a whole new wave of theoretical analysis of the Cuban missile crisis. In the spring of 2001, the main participants in the Bay of Pigs, including Fidel Castro, rebel commanders, and Central Intelligence Agency handlers, convened for a first-ever conference in Havana, and information heretofore secret, such as transcripts of Castro's radio communications from the field, was made public at that time.
One of the best ways to view the immense reconceptualization of these cases that all of this new information has brought about is to read Graham Allison's original Essence of Decision (1971) side by side with the latest version of the book by Allison and Philip Zelikow (1999). An important lesson to be gleaned is that our understanding of decision making rests in large part upon our understanding of the empirical historical realities of decision making. When you change the latter, you inevitably change the former.
"Decision Making." Encyclopedia of American Foreign Policy. Encyclopedia.com. (July 23, 2017). http://www.encyclopedia.com/social-sciences/encyclopedias-almanacs-transcripts-and-maps/decision-making
Decision making is a vital component of small business success. Decisions based on a foundation of knowledge and sound reasoning can lead the company into long-term prosperity; conversely, decisions made on the basis of flawed logic, emotionalism, or incomplete information can quickly put a small business out of commission (indeed, bad decisions can cripple even big, capital-rich corporations over time). All businesspeople recognize the painful necessity of choice. Furthermore, making these choices must be done in a timely fashion, for as most people recognize, indecision is in essence a choice in and of itself—a choice to take no action. Ultimately, what drives business success is the quality of decisions and their implementation. Good decisions mean good business.
The concept of decision making has a long history; choosing among alternatives has always been a part of life. But sustained research attention to business decision making has developed only in recent years. Contemporary advances in the field include progress in such elements of decision making as the problem context; the processes of problem finding, problem solving, and legitimation; and procedural and technical aids.
THE ELEMENTS OF DECISION MAKING
The Problem Context
All decisions are about problems, and every problem sits in a context that operates at three levels. The macrocontext draws attention to global issues (exchange rates, for example), national concerns (the cultural orientations toward decision processes of different countries), and provincial and state laws and cultures within nations. The mesocontext attends to organizational culture and structure. The microcontext addresses the immediate decision environment—the organization's employees, board, or office.
Decision processes differ from company to company. But all companies need to take these three context levels into consideration when a decision needs to be made. Fortunately, economical ways to obtain this information are available and keep the cost of preparing for decisions from becoming prohibitive.
Problem Finding and Agenda Setting
An important difficulty in decision making is the failure to act until one is too close to the decision point—when information and options are greatly limited. Organizations usually work in a "reactive" mode: problems are "found" only after the issue has begun to have a negative impact on the business. Processes of environmental scanning and strategic planning are designed to counter this by performing problem reconnaissance, alerting businesspeople to problems that will need attention down the line. Proactivity can be a great strength in decision making, but it requires a decision intelligence process that is absent from many organizations.
Moreover, problem identification is of limited use if the business is slow to heed or resolve the issue. Once a problem has been identified, information is needed about the exact nature of the problem and the potential actions that can be taken to rectify it. Unfortunately, small business owners and other key decision makers too often rely on information sources that "edit" the data—either intentionally or unintentionally—in misleading fashion. Information from business managers, other employees, vendors, and customers alike must therefore be regarded with a discerning eye.
Another kind of information gathering reflects the array and priority of solution preferences. What is selected as possible or not possible, acceptable or unacceptable, negotiable or non-negotiable depends upon the culture of the firm itself and its environment. A third area of information gathering involves determining the possible scope and impact that the problem and its consequent decision might have. Knowledge about impact may alter the decision preferences. To some extent, knowledge about scope dictates who will need to be involved in the decision process.
Problem Solving
Problem solving—also sometimes referred to as problem management—can be divided into two parts—process and decision. The process of problem solving is predicated on the existence of a system designed to address issues as they crop up. In many organizations, there does not seem to be any system. In such businesses, owners, executives, and managers are apparently content to operate with an ultimately fatalistic philosophy—what happens, happens. Business experts contend that such an attitude is simply unacceptable, especially for smaller businesses that wish to expand, let alone survive. The second part of the problem management equation is the decision, or choice, itself. Several sets of elements need to be considered in looking at the decision process. One set refers to the rationales used for decisions. Others emphasize the setting, the scope and level of the decision, and the use of procedural and technical aids.
Organizational decision makers have adopted a variety of styles in their decision making processes. For example, some business leaders embrace processes wherein every conceivable response to an issue is examined before settling on a final response, while others adopt more flexible philosophies. The legitimacy of each style varies in accordance with individual business realities in such realms as market competitiveness, business owner personality, acuteness of the problem, etc.
Certainly, some entrepreneurs/owners make business decisions without a significant amount of input or feedback from others. Home-based business owners without any employees, for example, are likely to take a far different approach to problem-solving than will business owners who have dozens of employees and/or several distinct internal departments. The latter owners will be much more likely to include findings of meetings, task forces, and other information gathering efforts in their decision making process. Of course, even a business owner who has no partners or employees may find it useful to seek information from outside sources (accountants, fellow businesspeople, attorneys, etc.) before making important business decisions. "Since the owner makes all the key decisions for the small business, he or she is responsible for its success or failure," wrote David Karlson in Avoiding Mistakes in Your Small Business. "Marketing and finance are two of several areas in which small business owners frequently lack sufficient experience, since they previously worked as specialists for other people before they started their own businesses. As a result, they generally do not have the experience needed to make well-informed decisions in the areas with which they are unfamiliar. The demands of running and growing a small business will soon expose any Achilles heel in a president/owner. It is best to find out your weaknesses early, so you can develop expertise or get help in these areas."
Scope and Level
Finally, attention must be paid to problem scope and organizational level. Problems of large scope should be dealt with by the top levels of the organization, while problems of smaller scope can be handled at lower levels. Many organizations, large and small, fail at this match: top-level groups spend far too much time deciding low-level, low-impact problems, while issues of high importance and broad organizational impact linger on without being addressed or resolved.
Procedural and Technical Aids
In recent years, a number of procedural and technical aids have been developed to help business managers in their decision making processes. Most of these have taken the form of software programs that guide individuals or groups through the various elements of the decision making process in a wide variety of operational areas (budgeting, marketing, inventory control, etc.). Leadership seminars and management training offer guidance in the decision making process as well.
Whatever decision making process is utilized, those involved need to confirm that a response has actually been arrived at. All too often, meetings and other efforts to resolve outstanding business issues adjourn under an atmosphere of uncertainty: some participants may leave still unsure about how the agreed-upon response to a problem is going to be implemented, while others may not even be sure what the agreed-upon response is. Indeed, business researchers report that on many occasions, meeting participants depart with fundamentally different understandings of what took place. It is up to the small business owner to make sure that all participants in the decision making process fully understand all aspects of the final decision.
The final step in the decision making process is the implementation of the decision. This is an extremely important element of decision making; after all, the benefits associated with even the most intelligent decision can be severely compromised if implementation is slow or flawed.
FACTORS IN POOR DECISION MAKING
Several factors in flawed decision making are commonly cited by business experts, including the following: limited organizational capacity; limited information; the costliness of analysis; interdependencies between fact and value; the openness of the system(s) to be analyzed; and the diversity of forms in which business decisions actually arise. Moreover, time constraints, personal distractions, low levels of decision making skill, conflict over business goals, and interpersonal factors can also have a deleterious impact on the decision making capacities of a small (or large) business.
A second category of difficulties is captured in a number of common pitfalls of the decision procedure. One such pitfall is "decision avoidance psychosis," which occurs when organizations put off making decisions that need to be made until the very last minute. A second problem is decision randomness. This pattern was outlined in the famous paper "A Garbage Can Model of Organizational Choice" by Cohen, March, and Olsen. They argued that organizations have four roles or vectors within them: problem knowers (people who know the difficulties the organization faces); solution providers (people who can provide solutions but do not know the problems); resource controllers (people who do not know problems and do not have solutions but control the allocation of people and money in the organization); and a group of "decision makers looking for work" (or decision opportunities). For effective decision making, all these elements must be in the same room at the same time. In reality, most organizations combine them at random, as if tossing them into a garbage can.
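The garbage-can mechanism can be illustrated with a toy simulation (the role names follow the paper, but the attendance probability and meeting model here are invented assumptions, not from Cohen, March, and Olsen): if each role independently happens to show up to a meeting, the chance that all four coincide is small.

```python
import random

# Toy sketch of the "garbage can" idea: a decision can be reached only
# when all four roles happen to be present at the same meeting.
# The 60% attendance probability is an illustrative assumption.
ROLES = ["problem_knower", "solution_provider",
         "resource_controller", "decision_maker"]

def meeting(attendance_prob=0.6, rng=random):
    """Randomly assemble a meeting; each role attends independently."""
    return {role for role in ROLES if rng.random() < attendance_prob}

def decision_possible(attendees):
    """A decision can be reached only if every role is represented."""
    return set(ROLES) <= attendees

random.seed(0)
trials = 10_000
successes = sum(decision_possible(meeting()) for _ in range(trials))
# With 60% attendance per role, all four coincide only about 13% of
# the time (0.6 ** 4 = 0.1296) -- deliberately convening the right
# people beats relying on chance.
print(f"Decisions possible in {successes / trials:.0%} of meetings")
```

The point of the sketch is the advice that follows in the text: get the relevant people in the same room at the same time rather than leaving their co-presence to chance.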
Decision drift is another malady that can strike at a business with potentially crippling results. This term, also sometimes known as the Abilene Paradox in recognition of a famous model of this behavior, refers to group actions that take place under the impression that the action is the will of the majority, when in reality, there never really was a decision to take that action.
Decision coercion, also known as groupthink, is another very well known decision problem. In this flawed decision making process, decisions are actually coerced by figures in power. This phenomenon can most commonly be seen in instances where a business owner or top executive creates an atmosphere where objections or concerns about a decision favored by the owner/executive are muted because of fears about owner/executive reaction.
IMPROVING DECISION MAKING
Business consultants and experts agree that small business owners and managers can take several basic steps to improve the decision making process at their establishments.
Improve the setting. Organizing better meetings (focused agenda, clear questions, current and detailed information, necessary personnel) can be a very helpful step in effective decision making. Avoid the garbage can; get the relevant people in the same room at the same time. Pay attention to planning and seek closure.
Use logical techniques. Decision making is a simple process when approached in a logical and purposeful manner. Small businesses that are able to perceive the problem, gather and present data, intelligently discuss the data, and implement the decision without succumbing to emotionalism are apt to make good decisions that will launch the firm on a prosperous course.
Evaluate decisions and decision making patterns. Evaluation tends to focus the attention, and make individuals and teams more sensitive to what they are actually doing in their decision making tasks. Evaluation is especially helpful in today's business environment because of the interdependency of individuals and departments in executing tasks and addressing goals.
Determine appropriate levels of decision making. Business enterprises need to make sure that operational decisions are being made at the right level. Keys to avoiding micromanagement and other decision making pitfalls include: giving problems their proper level of importance and context; addressing problems in an appropriate time frame; and establishing and shifting decision criteria in accordance with business goals.
Burke, Lisa A., and Monica K. Miller. "Taking the Mystery Out of Intuitive Decision Making." Academy of Management Executive. November 1999.
Cohen, Michael D., James G. March, and Johan P. Olsen. "A Garbage Can Model of Organizational Choice." Administrative Science Quarterly. March 1972.
Graham, John R. "Avoiding Dumb and Dumber Business Decisions: Why Even the Experts Make Mistakes." American Salesman. April 1997.
Gunn, Bob. "Decisions, Decisions." Strategic Finance. January 2000.
Karlson, David. Avoiding Mistakes in Your Small Business. Crisp, 1994.
Magasin, Michael, and Frieda L. Gehlen. "Unwise Decisions and Unanticipated Consequences." Sloan Management Review. Fall 1999.
Roe, Amy. "One of the Most Ticklish Jobs is Decision Making." The Business Journal. 2 June 1997.
Selin, Cynthia. "Trust and The Illusive Force of Scenarios." Futures. February 2006.
Hillstrom, Northern Lights
updated by Darnay, ECDI
"Decision Making." Encyclopedia of Small Business. Encyclopedia.com. (July 23, 2017). http://www.encyclopedia.com/entrepreneurs/encyclopedias-almanacs-transcripts-and-maps/decision-making
Decision making is a term used to describe the process by which families make choices, determine judgments, and come to conclusions that guide behaviors. That the process is called family decision-making implies that it requires more than one member's input and agreement (Scanzoni and Polonko 1980). The family decision-making process is a communication activity—it rests on the making and expression of meaning. The communication may be explicit (as when families sit down and discuss a prospective decision) or implicit (as when families choose an option based on their past decisions or some other unspoken rationale). Families are confronted with a myriad of decisions, including the purchase of products, the selection of educational practices, the choice of recreational activities, the use of disciplinary practices, and the deployment of limited resources. Decision making is an unavoidable, daily process.
Family decision making is a process that can be filled with tension, extremely pleasant and rewarding, both, or somewhere in between. In the decision-making process, families can address the differences among members (Galvin and Brommel 2000) and negotiate their needs for closeness and independence (Baxter and Montgomery 1996). Further, as James Atkinson and Timothy Stephen (1990) observed, decision making is inextricably bound to values. In decision making "values are communicated within the family group and [they] will become part of a family's assumptive foundation as its members coordinate future action" (Atkinson and Stephen, p. 5). Thus, family decision-making spans many family goals and practices.
Family Decision-Making Processes
Decisions within families may be classified into several types: instrumental, affective, social, economic, and technical. Instrumental decisions are those which rest on functional issues such as providing money, shelter, and food for the family members (Epstein, Bishop, and Baldwin 1982). Affective decisions deal with choices related to feelings and emotions. Decisions such as whether to get married are affective. Social decisions (Noller and Fitzpatrick 1993) are those related to the values, roles, and goals of the family, such as decisions about whether one parent will stay at home while the children are preschool age. Economic decisions focus on choices about using and gathering family resources. Whether an eighteen-year-old child should get a job and contribute to the family income is an economic decision. Technical decisions relate to all the subdecisions that have to be made to carry out a main decision. For instance, if a family decides that one member will quit work and go to college, then a variety of technical decisions must be made to enact that decision (Noller and Fitzpatrick 1993).
Families use a variety of processes for actually reaching a decision. Many families have a habitual process that they use regularly whenever they need to make a decision. Other families vary in the way they approach decision making depending on the type of decision, their mood, and their stage of development. Researchers often discuss five possible processes that families use in reaching decisions. These include appeals to authority and status, rules, values, use of discussion and consensus, and de facto decisions.
Authority and Status
This approach allows family decisions to occur as a result of the will of the person in the family with the most status and/or authority. For example, in some traditional families, decision making may be vested in the father. The other members of the family are thus guided by what he says is right. If a family is discussing where they should go for a family summer vacation, for instance, and the father decides that a camping trip is the best decision, the rest of the family concurs because of his authority. This method of decision making works for a family as long as all the members agree about who has the most status and authority. If the family members do not agree that the father has the authority to make decisions, they may engage in serious conflict rather than allowing the father to make a decision for them.
Further, the authority approach may be more complex than the previous discussion implies. Many families may have divided family decision-making domains. In so doing, they designate certain types of decisions as the province of one member and other types that belong to other family members. For example, many households divide the labor and then delegate authority based on who is in charge of a particular area. If a husband is in charge of maintaining the family finances, he may have authority over major buying decisions. However, he may have no authority over issues concerning the children; for instance, the decision about bedtimes might be out of his jurisdiction. In this process, everyone in the family might have authority over some decision-making concerns.
Some families grant authority and status to members based on expertise. Thus, if an adolescent knows a great deal about computers and the Internet or about automobiles, the adolescent may be the one who decides about major expenditures such as what type of computer to buy for the family, what Internet provider to use, or which car to purchase.
Finally, the complexity involved in understanding decision making by authority is revealed in examining the communication process involved in making decisions. As Kay Palan and Robert Wilkes (1997) observe, the interactions between adolescents and parents often influence the decision outcome even though a parent may seem to make the final decision. Palan and Wilkes found that teenagers used a wide variety of strategies that allowed them to influence decisions in their families.
Rules
Many families use rules to ease decision making. Rules in general create structures that help families to function. Some specific rules may provide guidance for decisions about dividing family resources. For instance, if a family is confronted with an inheritance without specific assignments, as in a will that states generally that the possessions should be divided among the children, a system of rules can be useful in dividing the estate. A system of rules for this situation could be as follows: heirs would alternate in choosing something they wished to keep. If someone else wanted what had been chosen they could offer to trade, but the first person has the right of refusal. This process guides decision making by providing a system to which all of the family agrees. Sometimes parents use rules like this when they instruct one child to divide a treat like a pie and then allow the second child first choice among those portions.
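The alternating-choice rule described above can be sketched in a few lines of code. This is a hypothetical illustration only: the heirs, items, and valuations are invented, and the trade/right-of-refusal step is omitted for brevity.

```python
# Sketch of the alternating-choice rule for dividing an estate:
# heirs take turns picking whichever remaining item they value most.
# All names and valuations below are invented for illustration.
valuations = {
    "Ana": {"piano": 9, "clock": 4, "paintings": 7, "china": 2},
    "Ben": {"piano": 3, "clock": 8, "paintings": 6, "china": 5},
}

def divide_estate(valuations):
    """Alternate turns until every item has been claimed."""
    remaining = set(next(iter(valuations.values())))  # item names
    shares = {heir: [] for heir in valuations}
    heirs = list(valuations)
    turn = 0
    while remaining:
        heir = heirs[turn % len(heirs)]
        # The heir whose turn it is takes the remaining item they value most.
        pick = max(remaining, key=lambda item: valuations[heir][item])
        shares[heir].append(pick)
        remaining.remove(pick)
        turn += 1
    return shares

print(divide_estate(valuations))
# Ana picks the piano (9), Ben the clock (8), Ana the paintings (7),
# and Ben the china (5).
```

As the text notes, what makes such a rule work is not the specific procedure but the fact that the whole family agrees to it in advance.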
Rules may also structure decision-making discussions. For example, some families maintain rules about equal participation in a decision-making conversation. They will not come to a decision until all family members involved have an approximately equal say about the topic. Some families have a rule specifying that each member of the family has to say something before a decision can be reached. Other families have rules setting time limits for the process, and a decision has to be reached when the time has elapsed.
Values
Decisions based on values are exercised in families that have strongly articulated principles. These principles may be explicitly stated or indirectly communicated, perhaps through family stories or other meaning-making practices. Some of these principles may derive from organized religion, a commitment to social justice, racial equality, or some other cherished value. For example, when parents are deciding about schooling for their children, some may choose religious education or may choose to homeschool, based on a dedication to their values. Additionally, families may choose to give volunteer time, donate money, or take in foster children as a result of their value system.
Discussion and Consensus
Decisions founded in discussion and consensus are related to decisions based on values. Families that use discussion and consensus as their mode of reaching a decision are committed to the principle of democratic process. It is important to these families that all members have a voice and that members feel that they contributed to the eventual decision. Families utilizing discussion and consensus often convene family meetings to discuss a potential decision. If a family wanted to adopt this process, they would call a family meeting and let everyone have a voice in discussing the decision to be made. The process of consensus necessitates that the family would continue discussing the decision until all the members were satisfied with the eventual decision.
A family follows this decision-making process when they talk about their separate positions on a decision and continue talking until they reach an acceptable compromise. This type of decision-making process works best when the family is comfortable with power sharing.
De Facto Decisions
This type of decision occurs when the family fails to actively engage in a specific process, and the decision gets made by default. For example, when Todd and Ellen want to buy a new car, they discuss the decision. They find a car at a price they can afford, but they cannot absolutely agree to buy it. While they wait, trying to decide about the purchase, the car is sold, and they cannot find another that suits them at the right price. In another example, Roberto is trying to decide about taking a new job and moving his family to another state. He is unsure about whether this is a good idea, both personally and professionally. Further, he receives conflicting input from his family about the decision. If he lets the deadline pass for acting on the job offer, the decision is, in effect, made without the family actually stating that they have decided not to move. De facto decisions allow family members to escape responsibility for the repercussions of a decision since no one actively supports the course of action taken.
Some families discuss their processes and have an overt, preferred mode for decision making. Other families simply fall into one or another process without thinking about it much. Additionally, many families may say they prefer to reach a decision through a discussion among all the members, yet the power relations in the family are such that discussion only confirms what the father, for example, wants as the decision. In this manner, the family may preserve an illusion of openness while actually using an authoritarian process for coming to a decision.

Barbara J. Risman and Danette Johnson-Summerford (2001) distinguish between manifest power and latent power. Manifest power is present in decision making by authority because it involves enforcing one's will against others. Latent power, sometimes called unobtrusive power, exists when the "needs and wishes of the more powerful are anticipated and met" (p. 230). When families profess a democratic style of decision making but really acquiesce to the will of an authority figure, latent power is being exercised. Families make countless decisions using power relations and these various processes: authority, rules, values, discussion, and de facto. Often the process engaged in by the family reveals more about them and affects them more profoundly than the outcome.
See also: Communication: Couple Relationships; Communication: Family Relationships; Conflict: Couple Relationships; Conflict: Family Relationships; Conflict: Parent-Child Relationships; Equity; Family Business; Family Life Education; Health and Families; Hospice; Nagging and Complaining; Power: Family Relationships; Power: Marital Relationships; Problem Solving; Resource Management; Sexual Communication: Couple Relationships
Atkinson, J., and Stephen, T. (1990). "Reconceptualizing Family Decision-Making: A Model of the Role of Outside Influences." Paper presented at the annual meeting of the Speech Communication Association, Chicago, IL.
Baxter, L. A., and Montgomery, B. M. (1996). Relating: Dialogues and Dialectics. New York: Guilford Press.
Epstein, N. B.; Bishop, D. S.; and Baldwin, L. M. (1982). "McMaster Model of Family Functioning." In Normal Family Processes, ed. F. Walsh. New York: Guilford Press.
Galvin, K. M., and Brommel, B. J. (2000). Family Communication: Cohesion and Change, 5th edition. New York: Longman.
Noller, P., and Fitzpatrick, M. A. (1993). Communication in Family Relationships. Englewood Cliffs, NJ: Prentice-Hall.
Palan, K. M., and Wilkes, R. E. (1997). "Adolescent-Parent Interaction in Family Decision Making." Journal of Consumer Research 24:159–169.
Risman, B. J., and Johnson-Summerford, D. (2001). "Doing It Fairly: A Study of Post-Gender Marriages." In Men and Masculinity, ed. T. F. Cohen. Belmont, CA: Wadsworth.
Scanzoni, J., and Polonko, K. (1980). "A Conceptual Approach to Explicit Marital Negotiation." Journal of Marriage and the Family 42:31–44.
Lynn H. Turner
"Decision Making." International Encyclopedia of Marriage and Family. Encyclopedia.com. (July 23, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/decision-making
Decision making, also referred to as problem solving, is the process of recognizing a problem or opportunity and finding a solution to it. Decisions are made by everyone involved in the business world, but managers typically face the most decisions on a daily basis. Many of these decisions are relatively simple and routine, such as ordering production supplies, choosing the discount rate for an order, or deciding the annual raise of an employee. These routine types of decisions are known as programmed decisions, because the decision maker already knows what the solution and outcome will be. However, managers are also faced with decisions that can drastically affect the future outcomes of the business. These types of decisions are known as nonprogrammed decisions, because neither the appropriate solution nor the potential outcome is known. Examples of nonprogrammed decisions include merging with another company, creating a new product, or expanding production facilities.
Decision making typically follows a six-step process:
- Identify the problem or opportunity
- Gather relevant information
- Develop as many alternatives as possible
- Evaluate alternatives to decide which is best
- Decide on and implement the best alternative
- Follow up on the decision
In step 1, the decision maker must be sure he or she has an accurate grasp of the situation. The need to make a decision has arisen because there is a difference between the desired outcome and what is actually occurring. Before proceeding to step 2, it is important to pinpoint the actual cause of the situation, which may not always be immediately apparent.
In step 2, the decision maker gathers as much information as possible because having all the facts gives the decision maker a much better chance of making the appropriate decision. When an uninformed decision is made, the outcome is usually not very positive, so it is important to have all the facts before proceeding.
In step 3, the decision maker attempts to come up with as many alternatives as possible. A technique known as "brainstorming," whereby group members offer any and all ideas even if they sound totally ridiculous, is often used in this step.
In step 4, the alternatives are evaluated and the best one is selected. The process of evaluating the alternatives usually starts by narrowing the choices down to two or three and then choosing the best one. This step is usually the most difficult, because there are often many variables to consider. The decision maker must attempt to select the alternative that will be the most effective given the available amount of information, the legal obstacles, the public relations issues, the financial implications, and the time constraints on making the decision. Often the decision maker is faced with a problem for which there is no apparent good solution at the moment. When this happens, the decision maker must make the best choice available at the time but continue to look for a better option in the future.
Once the decision has been made, step 5 is performed. Implementation often requires additional planning time as well as the understanding and cooperation of the people involved. Communication is very important in the implementation step, because most people resist change simply because they do not understand why it is necessary. To ensure smooth implementation, the decision maker should communicate the reasons behind the decision to the people involved.
In step 6, after the decision has been implemented, the decision maker must follow up on the decision to see if it is working successfully. If the implemented decision has closed the gap between the actual and desired outcome, the decision is considered successful. However, if it has not produced the desired result, once again a decision must be made. The decision maker can give the decision more time to work, choose another of the generated alternatives, or start the whole process over from the beginning.
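The six steps above, including the restart triggered by follow-up, can be sketched as a loop. The function names are hypothetical placeholders standing in for each step; they are not defined by the article.

```python
# Minimal sketch of the six-step decision cycle described above.
# Each callable is a hypothetical placeholder for one step.

def decision_cycle(identify, gather, develop, evaluate,
                   implement, succeeded, max_rounds=3):
    """Run the six-step process, restarting when follow-up finds
    the gap between actual and desired outcomes is still open."""
    for _ in range(max_rounds):
        problem = identify()            # step 1: pinpoint the cause
        info = gather(problem)          # step 2: gather the facts
        options = develop(info)         # step 3: brainstorm alternatives
        best = evaluate(options)        # step 4: pick the best alternative
        implement(best)                 # step 5: implement it
        if succeeded(best):             # step 6: follow up
            return best
    return None  # no alternative closed the gap within the allowed rounds
```

The `max_rounds` cap is an assumption added here so the sketch terminates; the article simply says the process may start over from the beginning.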
STRATEGIC, TACTICAL, AND OPERATIONAL DECISIONS
People at different levels in a company have different types of decision-making responsibilities. Strategic decisions, which affect the long-term direction of the entire company, are typically made by top managers. Examples of strategic decisions might be to focus efforts on a new product or to increase production output. These decisions are often complex and their outcomes uncertain, because available information is limited. Managers at this level must often depend on past experience and instinct when making strategic decisions.
Tactical decisions, which focus on more intermediate-term issues, are typically made by middle managers. The purpose of decisions made at this level is to help move the company closer to reaching the strategic goal. Examples of tactical decisions might be to pick an advertising agency to promote a new product or to provide an incentive plan to employees to encourage increased production.
Operational decisions focus on day-to-day activities within the company and are typically made by lower-level managers. Decisions made at this level help to ensure that daily activities proceed smoothly and therefore help to move the company toward reaching the strategic goal. Examples of operational decisions include scheduling employees, handling employee conflicts, and purchasing raw materials needed for production.
It should be noted that in many "flatter" organizations, where the middle management level has been eliminated, both tactical and operational decisions are made by lower-level management and/or teams of employees.
GROUP DECISION MAKING
Group decision making has many benefits as well as some disadvantages. The obvious benefit is that more input is available, so more possible solutions can be generated. Another advantage is that responsibility for the decision and its outcome is shared, so no one person bears it alone. The disadvantages are that reaching a group consensus often takes a long time and that group members may have to compromise in order to reach it. Many businesses have created problem-solving teams whose purpose is to find ways to improve specific work activities.
see also Management
"Decision Making." Encyclopedia of Business and Finance, 2nd ed.. . Encyclopedia.com. (July 23, 2017). http://www.encyclopedia.com/finance/finance-and-accounting-magazines/decision-making
"Decision Making." Encyclopedia of Business and Finance, 2nd ed.. . Retrieved July 23, 2017 from Encyclopedia.com: http://www.encyclopedia.com/finance/finance-and-accounting-magazines/decision-making