Expected Utility Theory



Expected utility theory is a model that represents preferences over risky objects as a weighted average of the utilities assigned to each possible outcome, where the weights are the probabilities of the outcomes.

The primary motivation for introducing expected utility, instead of simply taking the expected value of outcomes, is to explain attitudes toward risk. Consider for example a lottery that gives $100 or $0 with even chances, and a sure receipt of $50. Typically one chooses the sure receipt, even though the two alternatives yield the same expected return. Another example is the Saint Petersburg paradox. Consider a game of flipping a fair coin until one obtains a tail. When the first tail occurs on the kth flip, one receives $2^k, which happens with probability (1/2)^k. The expected return of this game is Σk (1/2)^k 2^k = 1 + 1 + 1 + ..., which is infinite. However, a typical decision maker is willing to pay only a finite amount to play this game.
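The paradox can be sketched numerically. In the following Python fragment (function names are illustrative, not from the source), the expected value of the game truncated at a maximum number of flips grows by exactly 1 per additional flip, while the expected utility under a concave utility such as the logarithm converges to a finite value:

```python
import math

def st_petersburg_expected_value(max_flips):
    # Each term (1/2)^k * 2^k equals 1, so the truncated sum is max_flips
    # and the untruncated expected value diverges.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

def st_petersburg_expected_utility(max_flips, u=math.log):
    # Under a concave utility u the series converges; for u = log it
    # approaches 2 * log(2), a certainty equivalent of only $4.
    return sum((0.5 ** k) * u(2 ** k) for k in range(1, max_flips + 1))

print(st_petersburg_expected_value(50))    # grows without bound: 50.0
print(st_petersburg_expected_utility(50))  # converges: about 1.386
```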

The theory resolves this problem by taking risk attitude into account. Here a risky object is a probability distribution over outcomes, denoted by p. The expected utility representation then takes the form U(p) = Σk u(xk) pk, where pk is the probability that outcome xk is realized, and the function u expresses the utility assigned to each outcome. Notice that u(x) need not equal x itself, and the curvature of u captures the decision maker's risk attitude. When the graph of u is concave (convex when viewed from above), one has 0.5u(100) + 0.5u(0) < u(50), which explains the first example (and similarly the second). When this is the case, the decision maker is said to be risk averse. Expected utility theory enables empirical analysis of choice under uncertainty, such as financial decisions, by quantifying the degree of curvature of u.
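The formula U(p) = Σk u(xk) pk can be illustrated with a short computation; here the square root serves as one example of a concave utility (the choice of u is an assumption for illustration only):

```python
import math

def expected_utility(lottery, u):
    # U(p) = sum over outcomes x of u(x) * p(x),
    # with the lottery given as a dict {outcome: probability}.
    return sum(prob * u(x) for x, prob in lottery.items())

u = math.sqrt                    # a concave utility: risk-averse behavior
lottery = {100: 0.5, 0: 0.5}     # $100 or $0 with even chances

# 0.5*u(100) + 0.5*u(0) = 5, while u(50) is about 7.07,
# so the sure $50 is preferred despite equal expected returns.
print(expected_utility(lottery, u))  # 5.0
print(u(50))                         # about 7.07
```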

The theory originates with Daniel Bernoulli (1700–1782), an eighteenth-century mathematician, and was given an axiomatic foundation by John von Neumann and Oskar Morgenstern in the 1940s. They started from a preference ranking of probability distributions over outcomes and provided the conditions for its expected utility representability. The conditions consist of three axioms: weak order, continuity, and independence. The most prominent axiom is independence: when the decision maker prefers distribution p to distribution q, then he or she prefers the distribution made by mixing p with any other distribution r in proportion λ : 1 − λ, that is, λp + (1 − λ)r, to the distribution made by mixing q with r in the same proportion, that is, λq + (1 − λ)r. Here λp + (1 − λ)r refers to the distribution that assigns probability λpk + (1 − λ)rk to each outcome xk. Informally speaking, when p is preferred to q, then having p with probability λ and r with probability 1 − λ will be preferred to having q with probability λ and r with probability 1 − λ, since the two compound lotteries differ only in p versus q.
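Because U(p) is linear in the probabilities, any expected utility representation automatically satisfies independence. The following sketch (helper names are hypothetical) mixes two distributions and checks that the ranking of p against q carries over to the mixtures:

```python
def mix(p, q, lam):
    # The mixture lam*p + (1-lam)*q assigns lam*p(x) + (1-lam)*q(x)
    # to each outcome x.
    return {x: lam * p.get(x, 0.0) + (1 - lam) * q.get(x, 0.0)
            for x in set(p) | set(q)}

def expected_utility(dist, u):
    return sum(prob * u(x) for x, prob in dist.items())

u = lambda x: x ** 0.5           # any fixed utility works; linearity matters
p = {100: 0.5, 0: 0.5}
q = {50: 1.0}
r = {20: 1.0}
lam = 0.3

# U(lam*p + (1-lam)*r) = lam*U(p) + (1-lam)*U(r), so the comparison of
# p and q is preserved after mixing both with the same r.
ranks_match = ((expected_utility(p, u) > expected_utility(q, u)) ==
               (expected_utility(mix(p, r, lam), u) >
                expected_utility(mix(q, r, lam), u)))
print(ranks_match)  # True
```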

The theory has been extended to subjective expected utility theory, in which the probabilities are not given objectively; instead the decision maker holds a subjective belief over the relevant events.

Various criticisms of expected utility theory have motivated further developments, two of which are explained in this entry. The first criticism is that the independence axiom may be violated systematically, a phenomenon referred to as the Allais paradox. Consider for example a bet that gives $120.00 with probability 0.9 and $0 with probability 0.1, and a sure receipt of $100.00. The typical choice here is to take the sure receipt. Now consider two bets: one gives $120.00 with probability 0.45 and $0 with probability 0.55; the other gives $100.00 with probability 0.5 and $0 with probability 0.5. Here the typical choice is to take the first bet. This violates independence, since the second pair of bets is obtained by mixing each of the first pair, in even proportion, with the lottery that gives $0 for sure. One explanation of this is called the certainty effect: an outcome is overweighted when it is sure relative to when it is uncertain.
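The mixing step in the Allais example can be verified directly. Using exact rational arithmetic to avoid rounding issues (a sketch with hypothetical names), mixing each of the first pair of choices with the sure-$0 lottery at λ = 1/2 reproduces the second pair:

```python
from fractions import Fraction as F

def mix(p, q, lam):
    # Mixture lam*p + (1-lam)*q, outcome by outcome.
    return {x: lam * p.get(x, F(0)) + (1 - lam) * q.get(x, F(0))
            for x in set(p) | set(q)}

bet  = {120: F(9, 10), 0: F(1, 10)}   # $120 w.p. 0.9, $0 w.p. 0.1
sure = {100: F(1)}                    # $100 for sure
zero = {0: F(1)}                      # $0 for sure

# Mixing with the sure-$0 lottery at lam = 1/2 yields exactly
# the second pair of bets in the example.
print(mix(bet, zero, F(1, 2)))   # {120: 9/20, 0: 11/20}, i.e. 0.45 / 0.55
print(mix(sure, zero, F(1, 2)))  # {100: 1/2, 0: 1/2}
```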

The second criticism is that risk attitudes may depend on status quo points, whereas the theory assumes that only the distributions over final outcomes matter. Suppose for example that the decision maker is given $1,000 initially and faces two alternatives: one gives $200 more or $0 (no change) with even chances, the other gives $100 more for sure. The typical choice here is to take the sure gain, which exhibits risk aversion. On the other hand, suppose one is given $1,200 initially and faces two alternatives: one yields a $200 loss or $0 with even chances, the other yields a $100 loss for sure. Now the typical choice is to take the risk, which exhibits risk loving, even though the distributions over final outcomes are identical across the two comparisons.
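That the two framings induce identical distributions over final wealth is a short calculation (a sketch; the helper name is hypothetical):

```python
def final_distribution(initial, changes):
    # Distribution over final wealth, given initial wealth and a
    # dict {change: probability}.
    return {initial + delta: prob for delta, prob in changes.items()}

# Framed as gains from $1,000:
gain_risky = final_distribution(1000, {200: 0.5, 0: 0.5})
gain_sure  = final_distribution(1000, {100: 1.0})
# Framed as losses from $1,200:
loss_risky = final_distribution(1200, {-200: 0.5, 0: 0.5})
loss_sure  = final_distribution(1200, {-100: 1.0})

# The final-outcome distributions coincide across the two framings,
# yet typical choices flip from the sure option to the risky one.
print(gain_risky == loss_risky)  # True: $1,200 or $1,000 with even chances
print(gain_sure == loss_sure)    # True: $1,100 for sure
```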

These anomalies, together with others, have motivated various models of nonexpected utility.

SEE ALSO Expected Utility, Subjective

BIBLIOGRAPHY

Machina, Mark. 1987. "Choice under Uncertainty: Problems Solved and Unsolved." Journal of Economic Perspectives 1 (1): 121–154.

Takashi Hayashi