# Game Theory

## Games of Complete Information

A game is an abstract, formal description of a strategic interaction. Any strategic interaction involves two or more decision makers (players), each with two or more ways of acting (strategies), such that the outcome depends on the strategy choices of all the players. Each player has well-defined preferences among all the possible outcomes, enabling corresponding utilities (payoffs) to be assigned. A game makes explicit the rules governing players' interaction, the players' feasible strategies, and their preferences over outcomes. Game theory describes games by means of mathematical concepts (e.g., sets, functions, and relations).

### normal form

A possible representation of a game is in normal form. A normal form game is completely defined by three elements that constitute the structure of the game: a list of players *i* = 1, …, *n*; for each player *i*, a finite set of pure strategies *S*_{i}; and a payoff function *u*_{i} that gives player *i* 's payoff *u*_{i}(*s*_{1}, …, *s*_{n}) for each *n*-tuple of strategies (*s*_{1}, …, *s*_{n}), where *u*_{i} : *S*_{1} × … × *S*_{n} → ℝ. A player may choose to play a pure strategy or instead to randomize over his or her pure strategies; a probability distribution over pure strategies is called a mixed strategy and is denoted by *σ*_{i}. Each player's randomization is assumed to be statistically independent of that of his or her opponents, and the payoffs to a mixed strategy are the expected values of the corresponding pure strategy payoffs. A different interpretation of mixed strategies, based on the idea that players do not always randomize over their feasible actions, is that the probability distribution *σ*_{i} represents other players' uncertainty about what player *i* will do. A mixed strategy is thus thought of as other players' conjecture about a player's plans of action. The conjectures depend on the player's private information, which is left unspecified in the model. A problem with this interpretation is that if there are reasons behind the choices a player makes, they should be included in the model, since they are likely to be payoff relevant.

The two-by-two matrix in Figure 1 depicts the two-player normal form representation of the famous Prisoner's dilemma game, where *C* stands for *cooperate* and *D* for *defect*. The numbers in the cells of the matrix denote players' payoffs: the first number is the payoff for the row player, the second for the column player. Each player picks a strategy independently, and the outcome, represented in terms of players' payoffs, is the joint product of these two strategies. Notice that in the game of Figure 1, each player is better off defecting no matter what the other player does. For example, if the column player cooperates, the row player gets a payoff of 3 by defecting and a payoff of 2 by cooperating, while if the column player defects, the row player gains a payoff of 1 by defecting and of 0 by cooperating. When, regardless of what other players do, a strategy yields a player a (strictly) inferior payoff to some other strategy, it is called a *dominated strategy*. When a strategy never yields a higher payoff than some other strategy, and yields a strictly inferior payoff against at least one combination of the opponents' strategies, it is called a *weakly dominated strategy*.
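The dominance reasoning above can be checked mechanically. The sketch below uses the Figure 1 payoffs as given in the text, tests for strict dominance, and enumerates the profiles from which no player can profitably deviate (the equilibria discussed below):

```python
# Prisoner's dilemma payoffs from Figure 1, as described in the text:
# (row payoff, column payoff) for each (row strategy, column strategy).
payoffs = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def strictly_dominates(s, t, player):
    """True if strategy s gives `player` a strictly higher payoff than t
    against every strategy of the opponent."""
    if player == 0:  # row player
        return all(payoffs[(s, o)][0] > payoffs[(t, o)][0] for o in strategies)
    return all(payoffs[(o, s)][1] > payoffs[(o, t)][1] for o in strategies)

def pure_nash_equilibria():
    """All pure-strategy profiles from which no player gains by unilateral
    deviation."""
    eqs = []
    for r in strategies:
        for c in strategies:
            best_r = all(payoffs[(r, c)][0] >= payoffs[(d, c)][0] for d in strategies)
            best_c = all(payoffs[(r, c)][1] >= payoffs[(r, d)][1] for d in strategies)
            if best_r and best_c:
                eqs.append((r, c))
    return eqs

print(strictly_dominates("D", "C", 0))  # True: D strictly dominates C for row
print(pure_nash_equilibria())           # [('D', 'D')]: the unique stable profile
```

The same check confirms, by symmetry, that *D* dominates *C* for the column player as well.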

The game of Figure 1 is one of complete information, in that the players are assumed to know the rules of the game (which include players' strategies) and other players' payoffs. If players are allowed to enter into binding agreements before the game is played, one can say that the game is cooperative. Noncooperative games instead make no allowance for the existence of an enforcement mechanism that would make the terms of the agreement binding on the players. What strategies should rational players choose? What could be rightly called the central dogma of game theory states that rational players will always jointly maximize their expected utilities, or play a Nash equilibrium (compare Nash 1996). Informally, a Nash equilibrium specifies players' actions and beliefs such that (1) each player's action is optimal given his or her beliefs about other players' choices; (2) players' beliefs are correct. Thus, an outcome that is not a Nash equilibrium requires either that a player chooses a suboptimal strategy or that some players misperceive the situation.

More formally, a Nash equilibrium is a vector of strategies (*σ**_{1}, …, *σ**_{n}), one for each of the *n* players in the game, such that each *σ**_{i} is optimal given (or is a best reply to) *σ**_{−i}. That is,

*u*_{i}(*σ**_{i}, *σ**_{−i}) ≥ *u*_{i}(*σ*_{i}, *σ**_{−i}) for all mixed strategies *σ*_{i} of player *i*.

Note that optimality is only conditional on the fixed *σ**_{−i}, not on all possible *σ*_{−i}. A strategy that is a best reply to a given combination of the opponents' strategies may fare poorly vis-à-vis another strategy combination.

In a game like the one depicted in Figure 2 the row player gains a payoff of 1 if the toss of two coins results in two heads or two tails and loses 1 otherwise, and vice versa for the column player.

This game has no Nash equilibrium in pure strategies. Nash proved that, provided certain restrictions are imposed on strategy sets and payoff functions, every game has at least one equilibrium in mixed strategies. In a mixed strategy equilibrium, the equilibrium strategy of each player makes the other indifferent between the strategies on which he or she is randomizing. In particular, the game in Figure 2 has a unique Nash equilibrium in which both players randomize between their strategies with probability ½. Then, if the first player plays *σ*_{1} = (½ H, ½ T), his or her expected payoff is ½(1) + ½(−1) = 0 regardless of the strategy of the second player.
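The indifference property can be verified directly. In this sketch the matching-pennies payoffs follow the text's description of Figure 2 (the row player wins 1 on a match, loses 1 otherwise, and the game is zero-sum); the fifty-fifty mix earns the row player exactly 0 against any column strategy:

```python
from fractions import Fraction

# Row player's payoffs in matching pennies (Figure 2); the column player's
# payoff is the negation, since the game is zero-sum.
u_row = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def expected_row_payoff(sigma_row, sigma_col):
    """Expected payoff of two mixed strategies, given as dicts mapping
    action -> probability."""
    return sum(sigma_row[r] * sigma_col[c] * u_row[(r, c)]
               for r in "HT" for c in "HT")

half = Fraction(1, 2)
sigma1 = {"H": half, "T": half}

# Against any column strategy whatsoever, the fifty-fifty mix earns 0.
for p in (Fraction(0), Fraction(1, 3), Fraction(1)):
    print(expected_row_payoff(sigma1, {"H": p, "T": 1 - p}))  # prints 0 each time
```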

The players (and the game theorist) can predict that a specific equilibrium will be played just in case they have enough information to infer players' choices. The standard assumptions in game theory are:

CK1. The structure of the game is common knowledge

CK2. The players are rational (i.e., they are expected utility maximizers) and this is common knowledge

The concept of common knowledge was introduced by David K. Lewis (1969) in his study of convention, which is arguably the first major philosophical work in which game theory plays a central role as a modeling tool. Simply put, a proposition *p* is common knowledge among two players if both of them know *p*, both of them know that they know *p*, and so on ad infinitum. The previous assumptions may allow the players to predict an opponent's strategy. For example, in the Prisoner's dilemma game of Figure 1, rational players would never choose the strictly dominated strategy *C*. CK1 and CK2, then, allow the players to predict that the opponent will play *D*. However (compare Bicchieri 1993), the previous CK assumptions do not always guarantee that a prediction of play can be made. For one, even if the game has a unique equilibrium, the set of strategies that players may choose under assumptions CK1 and CK2 need not contain only the equilibrium strategies. Moreover, predictability is hampered by another common problem encountered in game theory: multiple Nash equilibria.

Suppose two players have to divide $100 between them. They must restrict their proposals to integers, and each has to independently propose a way to split the sum. If the total proposed by both is equal to or less than $100, each gets what he or she proposed; otherwise, they get nothing. This game has 101 Nash equilibria. Is there a way to predict which one will be chosen? In real life, many people would go for the fifty-fifty split. It is simple and it seems equitable. In Thomas C. Schelling's (1960) words, it is a focal point. Unfortunately, mere salience is not enough to provide a player with a reason for choice. In this example, only if it is common knowledge that the fifty-fifty split is the salient outcome does it become rational to propose $50. Game theory, however, filters out any social or cultural information regarding strategies, leaving players with the task of coordinating their actions on the sole basis of common knowledge of rationality (and of the structure of the game).
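A brute-force enumeration makes the equilibrium count concrete. The sketch below encodes the rules exactly as stated; note that, besides the 101 equilibria in which the two proposals sum to exactly $100, the enumeration also turns up the degenerate profile in which both players demand the whole $100 and both get nothing (no unilateral deviation improves on it either):

```python
# Divide-the-dollar: each player independently proposes an integer share of
# $100; if the proposals sum to at most 100, each gets what was proposed,
# otherwise both get nothing.
def payoff(own, other):
    return own if own + other <= 100 else 0

def pure_nash_equilibria():
    """All integer proposal pairs from which neither player can profitably
    deviate unilaterally."""
    eqs = []
    for a in range(101):
        for b in range(101):
            a_best = all(payoff(a, b) >= payoff(d, b) for d in range(101))
            b_best = all(payoff(b, a) >= payoff(d, a) for d in range(101))
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

eqs = pure_nash_equilibria()
print(len(eqs))   # 102: the 101 exact splits plus the degenerate (100, 100)
print(eqs[:3])    # [(0, 100), (1, 99), (2, 98)]
```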

A different approach to the problem of indeterminacy is to start by considering the set of Nash equilibria and ask whether some of them should be eliminated because they are in some sense unreasonable. This is the approach taken by the refinement program (Kohlberg 1990, van Damme 1987). Consider the game in Figure 3:

The game has two Nash equilibria in pure strategies: (*a,c* ) and (*b,d* ). The equilibrium (*a,c* ) is *Pareto dominant*, since it gives both players a higher payoff than any other equilibrium in the game. However, common knowledge of rationality and of the structure of the game does not force the column player to expect the row player to eliminate the weakly dominated strategy *b*, nor is the row player forced to conclude that the column player will discard *d*. Prudence, however, may suggest that one should never be too sure of the opponents' choices. Even if the players have agreed to play a given equilibrium, some uncertainty remains. If so, one should try to model this uncertainty in the game. Reinhard Selten's (1975) insight was to treat perfect rationality as a limit case. His "trembling hand" metaphor presupposes that deciding and acting are two separate processes, in that even if one decides to take a particular action, one may end up doing something else by mistake. An equilibrium strategy should be optimal not only against the opponents' strategies but also against some small probability ε > 0 that the opponents make mistakes. Such an equilibrium is *trembling-hand perfect*.

Is the equilibrium (*b,d* ) perfect? If so, *b* must be optimal against *c* being played with probability ε and *d* being played with probability 1 − ε for some small ε > 0. But in this case the expected payoff to *a* is 2ε whereas the payoff to *b* is 0. Hence for all ε > 0, *a* is a better strategy choice. The equilibrium (*b,d* ) is not perfect, but (*a,c* ) is. Therefore, a prudent player would discard (*b,d* ). In this simple game, checking perfection is easy, since only one mistake is possible. With many strategies, there usually are many more possible mistakes to take into account. Similarly, with many players one may need to worry about who is more likely to make a mistake.
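The perfection check can be reproduced numerically. The payoffs below are an assumption (Figure 3 is not reproduced here), chosen to be consistent with the text's arithmetic: (*a,c*) = (2,2) is the Pareto-dominant equilibrium, (*b,d*) the weakly dominated one, and *a*'s expected payoff against the tremble is 2ε:

```python
# Hypothetical payoffs consistent with the text's arithmetic for Figure 3
# (the figure itself is not reproduced here): (row payoff, column payoff).
u = {("a", "c"): (2, 2), ("a", "d"): (0, 0),
     ("b", "c"): (0, 0), ("b", "d"): (0, 0)}

def row_payoff(row, eps):
    """Row player's expected payoff when the column player intends d but
    trembles onto c with probability eps."""
    return eps * u[(row, "c")][0] + (1 - eps) * u[(row, "d")][0]

for eps in (0.1, 0.01, 0.001):
    print(row_payoff("a", eps), row_payoff("b", eps))
# a earns 2*eps > 0 while b earns 0: against any tremble, however small,
# a is strictly better, so (b, d) is not trembling-hand perfect.
```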

### extensive form

A different representation of a game is the *extensive form*. It specifies the following information: a finite set of players *i* = 1, …, *n* ; the order of moves; the players' choices at each move; and what each player knows when he or she has to choose. The order of play is represented by a game tree *T*, which is a finite set of partially ordered nodes *t* ∈ *T* satisfying a precedence relation <. A *subgame* is a collection of branches that start from a single node and, together with that node, form a game tree in its own right. A tree representation is sequential, because it shows the order in which actions are taken by the players. It is natural to think of sequential-move games as ones in which players choose their strategies one after the other, and of simultaneous-move games as ones in which players choose their strategies at the same time. What is important, however, is not the temporal order of events per se, but whether players know about other players' actions when they have to choose their own. In the normal form representation, players' information about other players' choices is not represented. This is the reason a normal form game could represent any one of several extensive form games. When the order of play is irrelevant to a game's outcome, restricting oneself to the normal form is justifiable. When the order of play is relevant, however, the extensive form must be specified.

In an extensive form game the information a player has when he or she is choosing an action is explicitly represented using information sets, which partition the nodes of the tree. If an information set contains more than one node, the player who has to make a choice at that information set will be uncertain as to which node he or she is at. Not knowing at which node one is means that the player does not know which action was chosen by the preceding player. If a game contains information sets that are not singletons, the game is one of *imperfect information*.

A strategy for player *i* is a complete plan of action that specifies an action at every node at which it is *i* 's turn to move. Note that a strategy specifies actions even at nodes that will never be reached if that strategy is played. Consider the game in Figure 4. It is a finite game of perfect information in which player 1 moves first. If he chooses *D* at his first node, the game ends and player 1 nets a payoff of 1, whereas player 2 gets 0. But choosing *D* at the first node is only part of a strategy for player 1. For example, it can be part of a strategy that recommends "play *D* at your first node, and *x* at your last node." Another strategy may instead recommend playing *D* at his first node, and *y* at his last decision node. Though it may seem surprising that a strategy specifies actions even at nodes that will not be reached if that strategy is played, one must remember that a strategy is a full contingent plan of action. For example, the strategy *Dx* recommends playing *D* at the first node, thus effectively ending the game. It is important, however, to be able to have a plan of action in case *D* is not played. Player 1 may, after all, make a mistake and, because of player 2's response, find himself called to play at his last node. In that case, having a plan helps. Note that a strategy cannot be changed during the course of the game. Though a player may conjecture about several scenarios of moves and countermoves before playing the game, at the end of deliberation a strategy must be chosen and followed throughout the game.

The game of Figure 4 has two Nash equilibria in pure strategies: (*Dx, d* ) and (*Dy, d* ). Is there a way to solve the indeterminacy?

Suppose player 1 were to reach his last node. Since he is by assumption rational, he will choose *x*, which guarantees him a payoff of 4. Knowing (by assumption) that player 1 is rational, player 2, if she were to reach her decision node, would play *d*, since by playing *a* she would net a lower payoff. Finally, since (by assumption) player 1 knows that player 2 is rational and that she knows that player 1 is rational, he will choose *D* at his first decision node. The equilibrium (*Dy,d* ) should therefore be ruled out, since it recommends an irrational move at the last node. In the normal form, both equilibria survive. The reason is simple: Nash equilibrium does not constrain behavior out of equilibrium. In this example, if player 1 plans to choose *D* and player 2 plans to choose *d*, it does not matter what player 1 would do at his last node, since that node will never be reached.

The sequential procedure one has used to conclude that only (*Dx,d* ) is a reasonable solution is known as backward induction. In finite games of perfect information with no ties in payoffs, backward induction always identifies a unique equilibrium. The premise of the backward induction argument is that mutual rationality and the structure of the game are common knowledge among the players. It has been argued by Ken Binmore (1987), Cristina Bicchieri (1989, 1993), and Philip J. Reny (1992) that under certain conditions common knowledge of rationality leads to inconsistencies. For example, if player 2 were to reach her decision node, would she keep thinking that player 1 is rational? How would she explain player 1's move? If player 1's move is inconsistent with common knowledge of rationality, player 2 will be unable to predict future play; as a corollary, what constitutes an optimal choice at her node remains undefined. As a consequence of the previous criticisms, the usual premises of backward induction arguments have come to be questioned (compare Pettit and Sugden 1989, Basu 1990, Bonanno 1991). There are a number of further equilibrium refinements for games in extensive form. Their multiplicity makes it impossible to delve into details here. The interested reader can consult Bicchieri (1993, chapter 3).
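Backward induction is easy to mechanize. In the sketch below, the tree and payoffs for Figure 4 are assumptions consistent with the text (*D* ends the game at (1, 0), *x* guarantees player 1 a payoff of 4, and player 2 prefers *d* once she anticipates *x*); the solver records a choice at every decision node, including nodes the resulting play never reaches, since a strategy is a full contingent plan:

```python
# A node is either ("leaf", (u1, u2)) or ("node", label, player, {action: subtree}).
# The payoffs are illustrative assumptions consistent with the text's Figure 4.
tree = ("node", "1-first", 0, {
    "D": ("leaf", (1, 0)),
    "A": ("node", "2", 1, {
        "d": ("leaf", (0, 2)),
        "a": ("node", "1-last", 0, {
            "x": ("leaf", (4, 1)),
            "y": ("leaf", (3, 3)),
        }),
    }),
})

def backward_induction(node):
    """Return (payoffs, plan): the payoffs under backward induction and a
    dict mapping every decision node's label to the action chosen there."""
    if node[0] == "leaf":
        return node[1], {}
    _, label, player, children = node
    plan = {}
    best_payoffs, best_action = None, None
    for action, child in children.items():
        payoffs, subplan = backward_induction(child)
        plan.update(subplan)  # keep choices even at nodes never reached
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_payoffs, best_action = payoffs, action
    plan[label] = best_action
    return best_payoffs, plan

payoffs, plan = backward_induction(tree)
print(plan)     # {'1-last': 'x', '2': 'd', '1-first': 'D'}, i.e. (Dx, d)
print(payoffs)  # (1, 0): player 1 ends the game at once
```

Since the text assumes no ties in payoffs, the strict comparison suffices to pick a unique action at each node.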

## Games of Incomplete Information

In games of incomplete information certain elements of the game are not common knowledge among the players. The knowledge and beliefs of the players have to be incorporated into the game-theoretic model, as one usually does in extensive form games, and an appropriate equilibrium concept has to be devised. The approach is based on the seminal work of John C. Harsanyi (1968). In the Bayesian approach adopted by Harsanyi, a player's uncertainty about variables that are relevant for his or her decision ought to be made explicit by means of probability distributions representing his or her beliefs. Moreover, second-order beliefs (beliefs about other players' beliefs) can be represented by further probability distributions, third-order beliefs by distributions over second-order ones, and so on. The flexibility of Harsanyi's model allows one to incorporate the entire infinite hierarchy of higher-order beliefs without representing it explicitly.

The main idea is that the payoffs associated with each strategy profile depend on certain parameters *θ*_{1}, …, *θ*_{n}, one for each player 1, …, *n*. Each parameter is drawn from a set *Θ*_{i} = (*a*_{i}, *b*_{i}, … ) associated with player *i*. The composition of the sets *Θ*_{i} is known, yet the true value of the parameter *θ*_{i} is not (at least for one of the players). The parameter *θ*_{i} is called *i* 's type, and the set *Θ*_{i} represents, intuitively, the other players' ignorance about *i* 's characteristics. A type amounts to a specification of the variables that make up the private information of a player: the player's strategy set, preferences, payoff function, and so on. Although it is convenient to refer to "the type *a*_{i} of player *i* " as if it were a separate individual, one should keep in mind that types represent players' knowledge (and uncertainty about others) only. As mentioned earlier, in a Bayesian approach uncertainties are represented by probability distributions. Hence, each player *i* has an initial probability distribution *μ*_{i} = (*μ*_{i}(*a*_{−i}), *μ*_{i}(*b*_{−i}), … ) over the types of every player other than *i*. Since in a Bayesian game the choices of a player depend on his or her type, the concept of Nash equilibrium has to be generalized accordingly.

Note that all a player knows, apart from the game itself (and the priors), is his own type, together with the fact that the other players do not know it. As best responses depend on the players' actual types, a player must see himself through his opponents' eyes and plan a best reply against the possible strategies of his opponents for each potential type of his own. Thus, a strategy in a Bayesian game of incomplete information must map each possible type of each player into a plan of action. Then, since the other players' types are unknown, each player forms a best reply against the expected strategy of each opponent, averaging over the (well-specified) reactions of all possible types of an opponent, using his prior probability measure on the type space. Such a profile of type-dependent strategies, each unimprovable by unilateral deviation in expectation over the opponents' types, forms a Bayesian Nash equilibrium. In other words, a Bayesian Nash equilibrium is a Nash equilibrium "at the interim stage," where each player selects a best response against the average best responses of the competing players.
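A minimal brute-force search illustrates the definition. The game below is an invented example, not from the text: player 2 is privately either a "meet" type or an "avoid" type, each with prior probability ½, and a pure-strategy Bayesian Nash equilibrium is a profile in which player 1 best replies in expectation over types while each type of player 2 best replies to player 1:

```python
from itertools import product

# A toy Bayesian game (an illustrative assumption): player 1 knows only the
# prior over player 2's two possible types.
actions = ["B", "S"]
types = ["meet", "avoid"]
prior = {"meet": 0.5, "avoid": 0.5}

def u1(a1, a2):
    """Player 1 wants to coordinate, preferring (B, B)."""
    return {("B", "B"): 2, ("S", "S"): 1}.get((a1, a2), 0)

def u2(a1, a2, t):
    if t == "meet":  # this type wants to match player 1's action
        return {("B", "B"): 1, ("S", "S"): 2}.get((a1, a2), 0)
    return {("B", "S"): 2, ("S", "B"): 1}.get((a1, a2), 0)  # wants to mismatch

def bayesian_nash_equilibria():
    """Enumerate pure profiles (a1, s2), where s2 maps each type of player 2
    to an action, and keep those with no profitable unilateral deviation."""
    eqs = []
    for a1, choices in product(actions, product(actions, repeat=len(types))):
        s2 = dict(zip(types, choices))
        ev1 = lambda a: sum(prior[t] * u1(a, s2[t]) for t in types)
        ok1 = all(ev1(a1) >= ev1(a) for a in actions)
        ok2 = all(u2(a1, s2[t], t) >= u2(a1, a, t) for t in types for a in actions)
        if ok1 and ok2:
            eqs.append((a1, s2))
    return eqs

print(bayesian_nash_equilibria())
# [('B', {'meet': 'B', 'avoid': 'S'})]: player 1 plays B; the meet type
# matches with B while the avoid type mismatches with S.
```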

In the framework provided by Harsanyi (1968) it is possible to reduce a game of incomplete information to one of imperfect information. "Nature" is called to make the first move of the game, as if it were an actual player. Nature's random moves determine the type of each player, with a fixed probability that represents the prior probability attached to the event that player *i* is of type *θ*_{i}. Priors are assumed to be common knowledge, and players observe their own type only. Players then pick their strategies in this extended game, and it is possible to show that the equilibrium of such a game corresponds to the Bayesian Nash equilibrium of the game with incomplete information. In particular, the strategy *s*_{i} prescribes the action *s*_{i}(*θ*_{i}) if and only if (iff) that is the action that player *i* chooses in the game with Nature upon observing his or her type *θ*_{i}.

## Epistemic Foundations of Game Theory

An important development of game theory is the so-called epistemic approach, in which strategic reasoning is analyzed on the basis of hypotheses about what players know about the game, about other players' knowledge, and about other players' rationality. Since Robert J. Aumann's (1976) formalization, the idea of common knowledge, and the analysis of what players choose depending on their beliefs about each other, have played an increasingly important role in game theory. In particular, one can evaluate solution concepts by examining the epistemic assumptions and hypotheses from which they can be derived (compare Battigalli and Bonanno 1999). Such epistemic hypotheses are treated formally using the tools provided by interactive epistemology (compare Aumann 1999).

To formalize players' knowledge states, one considers a state space Ω whose elements are possible worlds. An event is then represented by a subset of Ω. For example, the proposition "it is sunny in Philadelphia" is represented by the set of all possible worlds in which it is sunny in Philadelphia. For each player, there exists an information function that partitions the state space. Intuitively, a player cannot distinguish among worlds belonging to the same cell of his or her information partition. Thus, in a possible world *ω*, player *i* knows an event *E* iff the set *E* (of possible worlds in which *E* obtains) includes the cell of his or her information partition containing *ω*. The intuition is that if every world the player cannot distinguish from the actual one is a world in which *E* is true, then he or she knows that *E* is the case. It is possible to define a knowledge function *K*_{i} for each player *i* so that, when given *E* as an argument, it returns as a value the set of those worlds such that, for each one of them, the cell of *i* 's information partition that contains it is a subset of *E*. That is to say, *K*_{i}*E* is the event that *i* knows *E*.

By imposing certain conditions on the *K*_{i}'s, one can force the epistemic functions to possess certain properties. For example, by requiring that *K*_{i}*E* be a subset of *E*, one requires that what players know is true, since in every possible world in which *K*_{i}*E* obtains, *E* obtains as well; similarly, by requiring that *K*_{i}*E* be a subset of *K*_{i}*K*_{i}*E*, one establishes that players know what they know, and by requiring that −*K*_{i}*E* be a subset of *K*_{i}−*K*_{i}*E*, that they know what they do not know (where − is the usual set-theoretic operation of complementation). The first condition is often referred to as the truth axiom, the second as the positive introspection axiom, and the third as the negative introspection axiom. Note that this setup has an equivalent formulation in terms of modal logic (compare Fagin et al. 1995, Meyer and van der Hoek 2004). To see the equivalence of the two approaches, consider that modal formulas express propositions whose semantic interpretation is given in terms of Kripke structures of possible worlds. It is then possible to establish a correspondence between formulas of the modal logic and events in the approach described earlier. In a Kripke model, then, an event corresponds to the set of those possible worlds that satisfy the formula expressing the proposition associated with that event.
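The partition model of knowledge is straightforward to implement. The sketch below uses an illustrative five-state space, computes K(E) as the union of the cells contained in E, and checks the truth, positive introspection, and negative introspection properties:

```python
from itertools import chain

# Knowledge as a partition (an illustrative five-state space): the player
# cannot distinguish states lying in the same cell of the partition.
states = {1, 2, 3, 4, 5}
partition = [{1, 2}, {3}, {4, 5}]

def K(event):
    """K(E): the set of states at which the player knows E, i.e. the union
    of the partition cells entirely contained in E."""
    return set(chain.from_iterable(c for c in partition if c <= event))

E = {1, 2, 3, 4}
print(K(E))                                  # {1, 2, 3}: the cell {4, 5} sticks out of E
print(K(E) <= E)                             # True: truth axiom
print(K(E) <= K(K(E)))                       # True: positive introspection
print((states - K(E)) <= K(states - K(E)))   # True: negative introspection
```

In partition models all three properties hold automatically; the axioms become genuine restrictions only in more general (non-partitional) models of knowledge.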

Knowledge functions can be iterated, and thus they can represent mutual and higher-order knowledge, and Aumann (1976) provides a mathematical definition of the idea of common knowledge in the setup sketched earlier. A proposition *p* is common knowledge between, say, two players *i* and *j* iff the set of worlds representing *p* includes the cell containing the actual world in the meet of *i* 's and *j* 's partitions, where the meet of two partitions is their finest common coarsening. An application of the definition is the theorem proved in the same article, in which it is shown that if players have common priors, and their posteriors are common knowledge, then the posteriors are equal, even if the players derived them by conditioning on different information. In other words, players cannot "agree to disagree." As mentioned earlier, Aumann formalized Lewis's (1969) definition of common knowledge. However, it is currently debated whether Aumann's seminal definition is a faithful rendition of Lewis's informal characterization of common knowledge (compare Vanderschraaf 1998, Cubitt and Sugden 2003, Sillari 2005).

In such a framework it is possible to investigate which strategy profiles are compatible with certain epistemic assumptions about the players. For example, CK1 and CK2 imply that players would never choose strictly dominated strategies. The first contributions in this sense are David G. Pearce (1984) and B. Douglas Bernheim (1984), in which a procedure is devised to eliminate all the players' strategies that are not rationalizable, that is, not supported by internally consistent beliefs about other players' choices and beliefs. In general, it can be proved that certain epistemic conditions are compatible only with the strategy profiles yielded by a certain solution concept, hence providing an epistemic foundation for that solution concept. For example, Aumann and Adam Brandenburger (1995) proved that, for two-person games, mutual knowledge (i.e., first-order knowledge among all the players) of the structure of the game, of rationality, and of the players' chosen strategies implies that those strategies constitute a Nash equilibrium.
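Iterated elimination of dominated strategies, the operational core of rationalizability, can be sketched as follows. For simplicity only domination by pure strategies is checked (the full notion also allows domination by mixed strategies), and the 2 × 3 game is an invented example in which elimination leaves a single profile:

```python
# An illustrative 2 x 3 game: (row payoff, column payoff) for each profile.
U = {
    ("T", "L"): (2, 2), ("T", "C"): (1, 1), ("T", "R"): (0, 0),
    ("B", "L"): (1, 0), ("B", "C"): (0, 1), ("B", "R"): (3, 0),
}
rows, cols = ["T", "B"], ["L", "C", "R"]

def dominated(s, candidates, opponents, payoff):
    """True if some other candidate strategy beats s against every
    surviving opponent strategy."""
    return any(all(payoff(t, o) > payoff(s, o) for o in opponents)
               for t in candidates if t != s)

def iterated_elimination(rows, cols):
    """Repeatedly delete strictly dominated strategies until none remain."""
    changed = True
    while changed:
        new_rows = [r for r in rows
                    if not dominated(r, rows, cols, lambda s, o: U[(s, o)][0])]
        new_cols = [c for c in cols
                    if not dominated(c, cols, rows, lambda s, o: U[(o, s)][1])]
        changed = (new_rows, new_cols) != (rows, cols)
        rows, cols = new_rows, new_cols
    return rows, cols

print(iterated_elimination(rows, cols))  # (['T'], ['L'])
```

Here R falls first (dominated by C for the column player), then B (dominated by T once R is gone), and finally C, leaving (T, L).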

### correlated equilibrium

So far it has been assumed that players' strategies are independent, as though each player receives a private, independent signal and chooses a (mixed) strategy after having observed his or her own signal. However, signals need not be independent. For example, players can agree to play a certain strategy according to the outcome of some external jointly observed event, for example, a coin toss. If the agreement is self-fulfilling, in that players have no incentive to deviate from it, the resulting strategy profile is an equilibrium in correlated strategies or, in short, a correlated equilibrium (compare Aumann 1974, 1987). For any Nash equilibrium in mixed strategies, a correlation device can be set up so that it generates a probability distribution over the possible outcomes of the game yielding such an equilibrium profile. Note, however, that the set of correlated equilibria of a game is much larger than the corresponding set of Nash equilibria. If the correlation signal is common knowledge among the players, one speaks of perfect correlation. However, players may correlate their strategies according to different signals (less than perfect correlation). The idea is that players have information partitions whose cells include more than one possible outcome, since they ignore which signals are received by other players. To represent the fact that players receive different signals (i.e., they ignore which strategies will be chosen by other players), it is required that in every cell of the information partition of player *i* his or her strategy does not change. It is then possible to calculate the expected payoff of playing the strategy indicated by the correlation device versus the expected payoff obtained by playing a different strategy. If the players have no incentive to deviate from the indicated strategy, the profile yielded by the correlation device is an equilibrium.

Correlation by means of private signals may generate outcomes more efficient than those obtained by playing a Nash equilibrium. An important philosophical application of correlated equilibrium is due to Peter Vanderschraaf (1998, 2001), in which conventions as defined by Lewis (1969) are shown to be correlated equilibria of coordination games.
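The incentive check for a correlation device can be made concrete. The chicken-style payoffs below are an illustrative assumption in the spirit of Aumann's example (not taken from the text); the device draws one of three joint outcomes with equal probability and privately tells each player only his or her own recommended action:

```python
from fractions import Fraction

# A chicken-style game (illustrative payoffs): D = dare, C = chicken.
u = {("D", "D"): (0, 0), ("D", "C"): (7, 2),
     ("C", "D"): (2, 7), ("C", "C"): (6, 6)}

# The correlation device: three joint outcomes, each with probability 1/3.
device = {("C", "C"): Fraction(1, 3), ("C", "D"): Fraction(1, 3),
          ("D", "C"): Fraction(1, 3)}

def deviation_payoff(player, prof, action):
    """Payoff to `player` if he or she plays `action` while the opponent
    follows the recommendation in profile `prof`."""
    prof = (action, prof[1]) if player == 0 else (prof[0], action)
    return u[prof][player]

def is_correlated_equilibrium():
    """No player can gain by deviating from any recommendation, given the
    conditional distribution over the opponent's recommendation."""
    for player in (0, 1):
        for rec in ("C", "D"):
            support = {p: pr for p, pr in device.items() if p[player] == rec}
            for alt in ("C", "D"):
                gain = sum(pr * (deviation_payoff(player, p, alt) - u[p][player])
                           for p, pr in support.items())
                if gain > 0:
                    return False
    return True

expected = tuple(sum(pr * u[p][i] for p, pr in device.items()) for i in (0, 1))
print(is_correlated_equilibrium())           # True
print(tuple(int(x) for x in expected))       # (5, 5)
```

The device earns each player 5 in expectation, more than the mixed Nash equilibrium of this game yields (14/3 ≈ 4.67), illustrating how correlation can improve on independent randomization.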

## Evolutionary Game Theory

A Nash equilibrium need not be interpreted as a unique event. If one thinks of it as an observed regularity, one wants to know by what process such an equilibrium is reached and what accounts for its stability. When multiple equilibria are possible, one wants to know why players converged to one in particular and then stayed there. An alternative way of dealing with multiple equilibria is to suppose that the selection process is made by nature.

Evolutionary theories are inspired by population biology (e.g., see Maynard Smith 1982). These theories dispense with the notion of the decision maker, as well as with best responses and optimization, and use in their place natural selection, a "survival of the fittest" process (with mutations), to model the frequencies with which various strategies are represented in the population over time. In a typical evolutionary model players are preprogrammed for certain strategies and are randomly matched with other players in pairwise repeated encounters. The relative frequency of a strategy in a population is simply the proportion of players in that population who adopt it. The theory focuses on how the strategy profiles of populations of such agents evolve over time, given that the outcomes of current games determine the frequency of different strategies in the future.

As an example, consider the game in Figure 5 and suppose that there are only two possible behavioral types: hawk and dove.

A hawk always fights and escalates contests until it wins or is badly hurt. A dove sticks to displays and retreats if the opponent escalates the conflict; if it fights with another dove, they will settle the contest after a long time. Payoffs are expected changes in fitness due to the outcome of the game. Fitness here means just reproductive success (e.g., the expected number of offspring per time unit).

Suppose injury has a payoff in terms of loss of fitness equal to *C*, and victory corresponds to a gain in fitness *B*. If hawk meets hawk, or dove meets dove, each has a 50 percent chance of victory. If a dove meets another dove, the winner gets *B* and the loser gets nothing, so the average increase in fitness for a dove meeting another dove is *B* /2. A dove meeting a hawk retreats, so his or her fitness is unchanged, whereas the hawk gets a gain in fitness *B*. If a hawk meets another hawk, they escalate until one wins. The winner has a fitness gain *B*, the loser a fitness loss *C*. So the average increase in fitness is (*B* − *C* )/2. The latter payoff is negative, since one assumes the cost of injury is greater than the gain in fitness obtained by winning the contest. One can also assume that players will be randomly paired in repeated encounters, and in each encounter they will play the stage game of Figure 5.

If the population were to consist predominantly of hawks, selection would favor the few doves, since hawks would meet mostly hawks and end up fighting, with an average loss in fitness of (*B* − *C*)/2, and (*B* − *C*)/2 < 0. In a population dominated by doves, hawks would spread, since every time they meet a dove (which would be most of the time) they would have a fitness gain of *B*, whereas doves on average would only get *B*/2. Evolutionary game theory asks how strategies do on average when games are played repeatedly between individuals who are randomly drawn from a large population. The average payoff to a strategy depends on the composition of the population, so a strategy may do well (in terms of fitness) in one environment and poorly in another. If the frequency of hawks in the population is *q* and that of doves correspondingly (1 − *q*), the average increase in fitness for the hawks will be *q*(*B* − *C*)/2 + (1 − *q*)*B*, and that for the doves (1 − *q*)*B*/2. The average payoff of a strategy in a given environment determines its future frequency in the population. In this example, the average increase in fitness for the hawks will be equal to that for the doves when the frequency of hawks in the population is *q* = *B*/*C*. At that frequency, the proportion of hawks and doves is stable. If the frequency of hawks is less than *B*/*C*, they do better than doves and will consequently spread; if their frequency is larger than *B*/*C*, they will do worse than doves and will shrink.
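The frequency argument can be made concrete with a small numerical sketch. The values *B* = 4 and *C* = 8 are illustrative assumptions, chosen only so that *C* > *B*:

```python
# Average fitness gains in the hawk-dove game as a function of the hawk
# frequency q. The values B = 4, C = 8 are illustrative (assuming C > B).
B, C = 4.0, 8.0

def hawk_fitness(q):
    # Hawks meet hawks with frequency q, gaining (B - C)/2 on average,
    # and meet doves with frequency 1 - q, gaining B.
    return q * (B - C) / 2 + (1 - q) * B

def dove_fitness(q):
    # Doves gain nothing against hawks and B/2 on average against doves.
    return (1 - q) * B / 2

q_star = B / C  # the stable hawk frequency derived in the text
print(hawk_fitness(q_star) == dove_fitness(q_star))  # True: equal at q = B/C
print(hawk_fitness(0.2) > dove_fitness(0.2))         # True: hawks spread when rare
print(hawk_fitness(0.9) < dove_fitness(0.9))         # True: hawks shrink when common
```

At any hawk frequency below *B*/*C* the first inequality holds and hawks gain ground; above it the second holds and they lose ground, exactly as the text describes.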

Note that if *C* > *B*, then (*B* − *C*)/2 < 0, so the game in Figure 5 has two pure-strategy Nash equilibria: (*H, D*) and (*D, H*). There is also a mixed-strategy equilibrium in which hawk is played with probability *q* = *B*/*C* and dove with probability (1 − *q*) = (*C* − *B*)/*C*. If the game of Figure 5 were played by rational agents who choose which behavior to display, one would be at a loss to predict their choices. From common knowledge of rationality and of the structure of the game, the players cannot infer that a particular equilibrium will be played. In the hawk-dove example, however, players are not rational and do not choose their strategies. So if an equilibrium is attained, it must be the outcome of some process very different from rational deliberation. The process at work is natural selection: high-performing strategies increase in frequency, whereas low-performing strategies' frequency diminishes and eventually goes to zero.

One has seen that in a population composed mostly of doves, hawks will thrive, and the opposite occurs in a population composed mainly of hawks. So, for example, if hawks dominate the population, a mutant displaying dove behavior can invade it, since individuals bearing the dove trait will do better than hawks. The main solution concept used in evolutionary game theory is the evolutionarily stable strategy (ESS), introduced by John Maynard Smith and George R. Price (1973). A strategy or behavioral trait is evolutionarily stable if, once it dominates in the population, it does strictly better than any mutant strategy, and hence cannot be invaded. In the hawk-dove game, neither of the two pure behavioral types is evolutionarily stable, since each can be invaded by the other. One knows, however, that a population in which there is a proportion *q* = *B*/*C* of hawks and (1 − *q*) = (*C* − *B*)/*C* of doves is stable. This means that the type of behavior that consists in escalating fights with probability *q* = *B*/*C* cannot be invaded by any other type; hence it is an ESS. An ESS is a strategy that, when it dominates the population, is a best reply against itself. Therefore, an evolutionarily stable strategy such as the mixed strategy (*B*/*C*, (*C* − *B*)/*C*) is a Nash equilibrium. Though every ESS is a Nash equilibrium, the reverse does not hold; in our stage game there are three Nash equilibria, but only the mixed-strategy equilibrium (*B*/*C*, (*C* − *B*)/*C*) is an ESS.

Evolutionary games provide a way of explaining how agents that may not be rational, or that are rational but subject to severe information and calculation restrictions, achieve and sustain a Nash equilibrium. Philosophical implications and applications can be found in the works of Brian Skyrms (1990, 1996, 2004). When there exist evolutionarily stable strategies (or states), one knows which equilibrium will obtain, without the need to postulate refinements in the way players interpret off-equilibrium moves. Yet much more needs to be known about processes of cultural transmission, and adequate ways of representing payoffs need to be developed, before the promise of evolutionary games is actually fulfilled.

** See also ** Decision Theory; Philosophy of Biology; Philosophy of Economics.

## Bibliography

Aumann, Robert J. "Agreeing to Disagree." *The Annals of Statistics* 4 (6) (1976): 1236–1239.

Aumann, Robert J. "Correlated Equilibrium as an Expression of Bayesian Rationality." *Econometrica* 55 (1987): 1–18.

Aumann, Robert J. "Interactive Epistemology I: Knowledge." *International Journal of Game Theory* 28 (1999): 263–300.

Aumann, Robert J. "Subjectivity and Correlation in Randomized Strategies." *Journal of Mathematical Economics* 1 (1974): 67–96.

Aumann, Robert J., and Adam Brandenburger. "Epistemic Conditions for Nash Equilibrium." *Econometrica* 63 (1995): 1161–1180.

Basu, Kaushik. "On the Non-existence of a Rationality Definition for Extensive Games." *International Journal of Game Theory* 19 (1990): 33–44.

Battigalli, Pierpaolo, and Giacomo Bonanno. "Recent Results on Belief, Knowledge, and the Epistemic Foundations of Game Theory." *Research in Economics* 53 (1999): 149–226.

Bernheim, B. Douglas. "Rationalizable Strategic Behavior." *Econometrica* 52 (4) (1984): 1007–1028.

Bicchieri, Cristina. *Rationality and Coordination*. New York: Cambridge University Press, 1993.

Bicchieri, Cristina. "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge." *Erkenntnis* 20 (1989): 69–85.

Binmore, Ken. "Modeling Rational Players I." *Economics and Philosophy* 3 (1987): 179–214.

Bonanno, Giacomo. "The Logic of Rational Play in Games of Perfect Information." *Economics and Philosophy* 7 (1991): 37–61.

Cubitt, Robin, and Robert Sugden. "Common Knowledge, Salience, and Convention: A Reconstruction of David Lewis's Game Theory." *Economics and Philosophy* 19 (2003): 175–210.

Damme, Eric van. *Stability and Perfection of Nash Equilibria*. Berlin: Springer, 1987.

Fagin, Ronald, et al. *Reasoning about Knowledge*. Cambridge, MA: MIT Press, 1995.

Harsanyi, John C. "Games with Incomplete Information Played by 'Bayesian' Players, Parts I, II and III." *Management Science* 14 (1967–1968): 159–182, 320–334, 486–502.

Kohlberg, Elon. "Refinement of Nash Equilibrium: The Main Ideas." In *Game Theory and Applications*, edited by Tatsuro Ichiishi, Abraham Neyman, and Yair Tauman. San Diego, CA: Academic Press, 1990.

Lewis, David K. *Convention: A Philosophical Study*. Cambridge, MA: Harvard University Press, 1969.

Maynard Smith, John. *Evolution and the Theory of Games*. New York: Cambridge University Press, 1982.

Maynard Smith, John, and George R. Price. "The Logic of Animal Conflict." *Nature* 246 (1973): 15–18.

Meyer, John-Jules, and Wiebe van der Hoek. *Epistemic Logic for AI and Computer Science*. New York: Cambridge University Press, 2004.

Nash, John F., Jr. *Essays on Game Theory*. Cheltenham, U.K.: E. Elgar, 1996.

Pearce, David G. "Rationalizable Strategic Behavior and the Problem of Perfection." *Econometrica* 52 (4) (1984): 1029–1050.

Pettit, Philip, and Robert Sugden. "The Backward Induction Paradox." *Journal of Philosophy* 86 (1989): 169–182.

Reny, Philip J. "Rationality in Extensive Form Games." *Journal of Economic Perspectives* 6 (1992): 103–118.

Schelling, Thomas C. *The Strategy of Conflict*. Cambridge, MA: Harvard University Press, 1960.

Selten, Reinhard. "Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit." *Zeitschrift für die gesamte Staatswissenschaft* 121 (1965): 301–324.

Sillari, Giacomo. "A Logical Framework for Convention." *Knowledge, Rationality, and Action*. 2005.

Skyrms, Brian. *The Dynamics of Rational Deliberation*. Cambridge, MA: Harvard University Press, 1990.

Skyrms, Brian. *Evolution of the Social Contract*. New York: Cambridge University Press, 1996.

Skyrms, Brian. *The Stag Hunt and the Evolution of Social Structure*. New York: Cambridge University Press, 2004.

Vanderschraaf, Peter. "Knowledge, Equilibrium, and Convention." *Erkenntnis* 49 (1998): 337–369.

Vanderschraaf, Peter. *Learning and Coordination: Inductive Deliberation, Equilibrium, and Convention*. New York: Routledge, 2001.

*Cristina Bicchieri (2005)*

*Giacomo Sillari (2005)*

## Game Theory

# Game Theory

I. Theoretical Aspects, *Oskar Morgenstern*

II. Economic Applications, *Martin Shubik*

## I THEORETICAL ASPECTS

The theory of games is a mathematical discipline designed to treat rigorously the question of optimal behavior of participants in games of strategy and to determine the resulting equilibria. In such games each participant is striving for his greatest advantage in situations where the outcome depends not only on his actions alone, nor solely on those of nature, but also on those of other participants whose interests are sometimes opposed, sometimes parallel, to his own. Thus, in games of strategy there is conflict of interest as well as possible cooperation among the participants. There may be uncertainty for each participant because the actions of others may not be known with certainty. Such situations, often of extreme complexity, are found not only in games but also in business, politics, war, and other social activities. Therefore, the theory serves to interpret both games themselves and social phenomena with which certain games are strictly identical. The theory is normative in that it aims at giving advice to each player about his optimal behavior; it is descriptive when viewed as a model for analyzing empirically given occurrences. In analyzing games the theory does not assume rational behavior; rather, it attempts to determine what “rational” can mean when an individual is confronted with the problem of optimal behavior in games and equivalent situations.

The results of the interlocking individual actions are expressed by numbers, such as money or a numerically defined utility for each player transferable among all. Games of strategy include games of chance as a subcase; in games of chance the problem for the player is merely to determine and evaluate the probability of each possible outcome. In games of strategy the outcome for a player cannot be determined by mere probability calculations. Specifically, no player can make mere statistical assumptions about the behavior of the other players in order to decide on his own optimal strategy.

But nature, when interfering in a game through chance events, is assumed to be indifferent with regard to the player or players affected by chance events. Since the study of games of chance has given rise to the theory of probability, without which modern natural science could not exist, the expectation is that the understanding of the far more complicated games of strategy may gradually produce similar consequences for the social sciences.

**History.** In 1710 the German mathematician-philosopher Leibniz foresaw the need and possibility of a theory of games of strategy, and the notion of a minimax strategy (see section on “Two-person, zero-sum games,” below) was first formulated two years later by James Waldegrave. (See the letter from Waldegrave in the 1713 edition of Montmort 1708; see also Baumol & Goldfeld 1967.) The similarity between games of strategy and economic processes was occasionally mentioned, for example, by Edgeworth in his *Mathematical Psychics* (1881). Specialized theorems, such as Ernst Zermelo’s on chess, were stated for some games; and Émile Borel developed a limited minimax strategy, but he denied the possibility of a general theorem. It was not until John von Neumann (1928) proved the fundamental theorem that a true theory of games emerged (see section on “Two-person, zero-sum games,” below). In their *Theory of Games and Economic Behavior,* von Neumann and Morgenstern (1944) extended the theory, especially to games involving more than two players, and gave applications of the theory in economics. Since then, throughout the world a vast literature has arisen in which the main tenets of the theory have been widened and deepened and many new concepts and ideas introduced. The four-volume *Contributions to the Theory of Games* (Kuhn & Tucker 1950–1959) and *Advances in Game Theory* (Dresher, Shapley, & Tucker 1964) give evidence of this continuing movement. These works contain extensive bibliographies, but see especially Volume 4 of *Contributions to the Theory of Games.*

### Game theory concepts

Games are described by specifying possible behavior within the rules of the game. The rules are in each case unambiguous; for example, certain moves are allowed for specific pieces in chess but are forbidden for others. The rules are also inviolate. When a social situation is viewed as a game, the rules are given by the physical and legal environment within which an individual’s actions may take place. (For example, in a market individuals are permitted to bargain, to threaten with boycotts, etc., but they are not permitted to use physical force to acquire an article or to attempt to change its price.) The concrete occasion of a game is called a play, which is described by specifying, out of all possible, allowable moves, the sequence of choices actually made by the players or participants. After the final move, the umpire determines the payments to each player. The players may act singly, or, if the rules of the game permit it and if it is advantageous, they may form coalitions. When a coalition forms, the distribution of the payments to the coalition among its members has to be established. All payments are stated in terms of money or a numerically defined utility that is transferable from one player to another. The payment function is generally assumed to be known to the players, although modifications of this assumption have been introduced, as have other modifications—for example, about the character of the utilities and even about the transferability of payments.

The “extensive” form of a game, given in terms of successive moves and countermoves, can be represented mathematically by a game tree, which describes the unfolding of the moves, the state of information of the players at the moment of each choice, and the alternatives for choices available to each player at each occasion. This description can, in a strict mathematical sense, be given equivalently in a “normalized” form: each player, uninformed about the choices made by any other player, chooses a single number that identifies a “strategy” from his given finite or infinite set of strategies. When all personal choices and a possible random choice are made (simultaneously), the umpire determines the payments. Each strategy is a complete plan of playing, allowing for all contingencies as represented by the choices and moves of all other players and of nature. The payoff for each player is then represented by his mathematical expectation of the outcome for himself. The final description of the game therefore involves only the players’ strategies and no further chance elements.

The theory explicitly assumes that each player, besides being completely informed about the alternative payoffs due to all moves made or strategies chosen, can perform all necessary computations needed to determine his optimal behavior. (This assumption of complete information is also commonplace in current economic theory, although seldom stated explicitly.)

The payments made by all players may add up to zero, as in games played for entertainment. In this case the gains of some are exactly balanced by the losses of others. Such games are called zero-sum games. In other instances the sum of all payments may be a constant (different from zero) or may be a variable; in these cases all players may gain or lose. Applications of game theory to economic or political problems require the study of these games, since in a purchase, for example, both sides gain. An economy is normally productive so that the gains outweigh any losses, whereas in a war both sides may lose.

If a player chooses a particular strategy as identified by its number, he selects a *pure* strategy; if he allows a chance mechanism, specified by himself, to make this selection for him, he chooses a *mixed* or *statistical* strategy. The number of pure strategies for a player normally is finite, partly because the rules of games bring the play to an end after a finite number of moves, partly because the player is confronted with only a finite number of alternatives. However, it is possible to treat cases with infinitely many strategies as well as to consider even the borderline case of games with infinitely many players. These serve essentially to study pathological examples or to explore certain mathematical characteristics.

Game theory uses essentially combinatorial and set-theoretical concepts and tools, since no specific calculus has as yet evolved—as happened when differential and integral calculus were invented simultaneously with the establishment of classical mechanics. Differential calculus is designed to determine maxima and minima, but in games, as well as in politics, these are not defined, because the outcome of a player’s actions does not depend on his actions alone (plus nature). This applies to all players simultaneously. A maximum (or minimum) of a function can be achieved only when all variables on which the maximum (minimum) depends are under the complete control of the would-be maximizer. *This is never the case in games of strategy.* Therefore, in the equivalent business, political, or military operations there obtains no maximum (minimum) problem, whether with or without side conditions, as assumed in the classical literature of these fields; rather one is confronted there with an entirely different conceptual structure, which the theory of games analyzes.

### Two-person, zero-sum games

The simplest game of strategy is a two-person, zero-sum game, in which players A and B each have a finite number of strategies and make their choices unknown to each other. Let *P* be the payoff to the first player, and let −*P* be the payoff to the second player. Then *P* is greater than, equal to, or less than 0, depending on whether A wins, draws, or loses. Let *A*_{1}, *A*_{2}, …, *A*_{n} be the strategies available to player A and *B*_{1}, *B*_{2}, …, *B*_{m} be the strategies available to player B. In the resulting *n* × *m* array of numbers, each row represents a pure strategy of A, each column a pure strategy of B. The intersections of the rows and columns show the payoffs to player A from player B. The first player wishes to maximize this payoff, while the second wishes to minimize it. This array of numbers is called the payoff matrix, an example of which is presented in Table 1, where payments go from B to A.

Table 1 – Payoff matrix for a two-person, zero-sum game

| B’s strategy \ A’s strategy | B_{1} | B_{2} | B_{3} | Row minima |
|---|---|---|---|---|
| A_{1} | 8 | −3 | −10 | −10 |
| A_{2} | 0 | −2 | 6 | −2 |
| A_{3} | 4 | −1 | 5 | −1 |
| Column maxima | 8 | −1 | 6 | |

Player A’s most desirable payoff is 8; B’s is −10. Should player A pick strategy *A*_{1}, either of these two events may happen, depending on B’s action. But if A picks *A*_{1}, B in his own interest would want to pick *B*_{3}, which would mean that A would have to pay 10 units to B instead of receiving 8. The row minima represent the worst that could happen to A for each of his strategies, and it is natural that he would want to make as great as possible the least gain he can expect from each; that is, he seeks the maximum of the row minima, or the *maximin,* which in Table 1 is −1 (strategy *A*_{3}). Conversely, B will wish to minimize the column maxima—that is, seek the *minimax*—which is also −1 (strategy *B*_{2}). We would say that each player is using a minimax strategy—that is, each player selects the strategy that minimizes his maximum loss. Any deviation from the optimal strategies *A*_{3} and *B*_{2} is fraught with danger for the deviating player, so that each will choose the strategy that contains the so-called *saddle point of the payoff function.* The saddle point is defined as the point at which the maximin equals the minimax. At this point the least that A can secure for himself is equal to the most that B may have to part with. (In the above example A has to pay one unit to B.) If there is more than one saddle point in the payoff matrix, then they are all equal to each other. Games possessing saddle points in pure strategies are called *specially strictly determined.* In these games it is immaterial whether the choice of the pure strategy by either player is made openly before the other makes his choice. Games of *perfect* information—that is, games in which each player at each move is always informed about the entire previous history of the play, so that what is preliminary to his choice is also anterior to it—are always specially strictly determined. Chess belongs in this class; bridge does not, since each of the two players (one “player” being the north-south team, the other the east-west team) is not even completely informed about himself—for example, north does not know precisely what cards south holds.
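The maximin and minimax reasoning is easy to check mechanically. The sketch below uses the payoff values of Table 1 and confirms the saddle point at −1:

```python
# Row minima and column maxima for the payoff matrix of Table 1
# (payments from B to A; rows are A's strategies, columns are B's).
payoff = [
    [8, -3, -10],   # A1
    [0, -2,   6],   # A2
    [4, -1,   5],   # A3
]

row_minima = [min(row) for row in payoff]          # worst case for each A strategy
col_maxima = [max(col) for col in zip(*payoff)]    # worst case for each B strategy

maximin = max(row_minima)   # A's guaranteed floor
minimax = min(col_maxima)   # B's guaranteed ceiling
print(maximin, minimax)     # -1 -1: maximin = minimax, a saddle point
```

Since the two values coincide, the game is specially strictly determined and neither player gains by concealing his choice.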

Most games will have no saddle points in pure strategies; they are then said to be not strictly determined. The simplest case is matching pennies. The payoff matrix for this game is presented in Table 2. Here, if one player has to choose openly before the other does, he is sure to lose. Each player will therefore strive to prevent information about his choice from flowing to the other. This is accomplished by the player’s choice of a chance mechanism, which selects from among the available pure strategies with probabilities determined by the player. In matching pennies, the chance mechanism should select “heads” with probability ½ and “tails” with probability ½. This randomization may be achieved by tossing the coin before showing it. If there is a premium, say, on matching heads over matching tails, the payoff matrix would reflect this, and the probabilities with which the two sides of the coin have to be played in order to prevent disclosure of a pattern of playing to the benefit of the opponent would no longer be ½ for heads and ½ for tails. Thus, when there is no saddle point in pure strategies, a randomization by a chance mechanism is called for. The players are then said to be using mixed, or statistical, strategies.

Table 2 – Payoff matrix for matching pennies

| B’s penny \ A’s penny | Heads | Tails | Row minima |
|---|---|---|---|
| Heads | 1 | −1 | −1 |
| Tails | −1 | 1 | −1 |
| Column maxima | 1 | 1 | |

The use of mixed strategies does *not* transform a game of strategy into a game of chance: the strategic decision is the specification of the randomization device and the assignment of the proper probabilities to each available pure strategy. Whether pure or mixed strategies are needed to assure a saddle point, the theory at no point requires that the players make assumptions about each other’s intelligence, guesses, and the like. The choice of the optimal strategy is independent of all such considerations. Strategies selected in this way are perfect from the defensive point of view. A theory of true offensive strategies requires new ideas and has not yet been developed.
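For 2 × 2 zero-sum games with no saddle point in pure strategies, the optimal mixing probabilities have a standard closed-form solution: the row player mixes so that his expected payoff is the same whichever column the opponent picks. The sketch below solves plain matching pennies and an illustrative premium variant; the premium payoff of 2 for matching on heads is an assumption, not a value from the text:

```python
# Closed-form optimal mixed strategy for a 2x2 zero-sum game with no
# saddle point in pure strategies. The row player's matrix is
# [[a, b], [c, d]]; he plays row 1 with probability p chosen so that
# his expected payoff is the same whichever column the opponent picks.
def solve_2x2(a, b, c, d):
    denom = (a - b) + (d - c)
    p = (d - c) / denom          # probability of the first row
    v = (a * d - b * c) / denom  # value of the game to the row player
    return p, v

# Plain matching pennies: heads with probability 1/2, value 0.
print(solve_2x2(1, -1, -1, 1))   # (0.5, 0.0)

# An illustrative premium variant (assumption: matching on heads pays 2):
# the optimal mix shifts away from 1/2, as the text observes.
print(solve_2x2(2, -1, -1, 1))   # (0.4, 0.2)
```

With the premium, heads should be played less often than ½, precisely because the opponent would otherwise exploit the more valuable outcome.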

Von Neumann proved that each matrix game can be made strictly determined by introducing mixed strategies. This is the *fundamental theorem* of game theory. It shows that each zero-sum, two-person game has a saddle point in mixed strategies and that optimal mixed strategies exist for each of the two players. The original proof of this theorem made use of rather complex properties of set theory, functional calculus, and combinatorics. Since the original proof was given, a number of alternative, simplified versions have been given by various authors. The numerical solution of a matrix game with *m* columns and *n* rows demands the solution of a system of linear inequalities in *m* + *n* + 1 unknowns: the *m* + *n* probabilities for the strategies of players A and B and the minimax value. There exist many techniques for solving such systems; notably, an equivalence with solving dual linear programs has proved to be of great importance [*see* Programming]. High-speed computers are needed to cope with the rapid rise in the required arithmetical operations. A more modest view of mixed strategies is the notion of behavioral strategies, which are the probability distributions over each player’s information sets in the extensive form of the game. For games such as chess, even the optimal pure strategy cannot be computed, although the existence of a saddle point in pure strategies can be proved and either white or black has a winning pure strategy no matter what the other does (or both have pure strategies that enforce a draw). The problems of finding further computational techniques are actively being investigated.

### n-Person, zero-sum games

When the number of players increases to *n* ≥ 3, new phenomena arise even when the zero-sum restriction remains. It is now possible that cooperation will benefit the players. If this is not the case, the game is called inessential. In an essential game the players will try to form *coalitions* and act through these in order to secure their advantage. Different coalitions may have different strength. A winning coalition will have to divide its proceeds among its members, and each member must be satisfied with the division in order that a stable solution obtains [*see* Coalitions].

Any possible division of payments among all players is called an *imputation,* but only some of all possible imputations will be contained in a *solution.* An inessential game has precisely one imputation that is better than any other, that is, one that *dominates* all others. This unique imputation forms the solution, but this uniqueness is trivial and applies only to inessential games. There is no cooperation in inessential games.

A solution of an essential game is characteristically a nonempty set of several imputations with the following properties: (1) No imputation in the set is dominated by another imputation in the set. (2) All imputations not in the set are dominated by an imputation contained in the set. There may be an infinite number of imputations in a solution set, and there may be several solution sets, each of which has the above properties. Furthermore, it should be noted that every imputation in a solution set is dominated by some imputation not in that set, but property (2) assures that such a dominating imputation is, in turn, dominated by an imputation in the solution set.

To be considered as a member of a coalition, a player may have to offer *compensations* or side payments to other prospective members. A compensation or side payment may even take the form of giving up privileges that the rules of the game may attribute to a player. A player may be admitted to a coalition under terms less favorable than those obtained by the players who form the initial core of a coalition (this happens first when *n* = 4). Also, coalitions of different strength can be distinguished. *Discrimination* may occur; for example, some players may consider others “taboo”—that is, unworthy as coalition partners. This leads to the types of discriminatory solutions that already occur when *n* = 3. Yet discrimination is not necessarily as bad for the affected player as defeat is for a nondiscriminated player, because cooperation against the discriminated player may not be perfect. A player who by joining a coalition does not contribute more to it than what he can get by playing for himself merely has the role of a dummy.

The fundamental fact of cooperation is that the players in a coalition can each obtain more than they could obtain by playing alone. This expresses the nonadditivity—specifically, the superadditivity—of value, the explanation of which has long been recognized as a basic problem in economics and sociology. In spite of many efforts, no solution was found, but the phenomenon is now adequately described by the characteristic function *v(S),* a numerical set function that states for any cooperative n-person game the proceeds of the coalition S, and an imputation that describes the distribution of all payments among all players (von Neumann & Morgenstern 1944, chapter 6).
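The characteristic function and its superadditivity can be illustrated with a toy essential three-person game; the particular values of *v* below are hypothetical:

```python
# A characteristic function v(S) for a toy essential three-person game
# (the particular values are hypothetical), together with a check of
# superadditivity: v(S | T) >= v(S) + v(T) for disjoint coalitions S, T.
v = {
    frozenset(): 0,
    frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
    frozenset({1, 2, 3}): 6,
}

def superadditive(v):
    coalitions = list(v)
    # For every pair of disjoint coalitions, merging must not lose value.
    return all(v[s | t] >= v[s] + v[t]
               for s in coalitions for t in coalitions if not (s & t))

print(superadditive(v))  # True: joining forces never hurts
```

Here every two-player coalition earns 4 while lone players earn nothing, so the game is essential: cooperation creates value that no imputation of individual play could match.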

Since there may be many solutions to a cooperative (essential) n-person game, the question arises as to which of them will in fact prevail. Each solution may correspond to a specific mode of behavior of the players or a specific form of social organization. This expresses the fact that in the same physical setting different types of social organization can be established, each one consistent in itself but in contradiction with other organizations. For example, we observe that the same technology allows the maintenance of varying economic systems, income distributions, and so on. If a *stable standard of behavior* exists (a mode of behavior accepted by society), then it can be argued that the only relevant solution is the one corresponding to this standard.

The choice of an imputation *not* in the solution set, while advantageous to each of those in the particular coalition that is able to enforce this imputation, cannot be maintained because another coalition can enforce another imputation, belonging to the solution set, that dominates the first one. Hence, a standard is set, and proposals for imputations that are not in the solution will be rejected. The theory cannot state which imputation of all those belonging to the standard of behavior actually will be chosen—that is, which coalition will form. Work has been done to introduce new assumptions under which this may become feasible. No imputation contained in the solution set guarantees stability by itself, since each is necessarily dominated from the outside. But in turn each imputation is always protected against threats by another one *within* the solution set that dominates the imputation *not* in the solution set.

Since an imputation is a division of proceeds among the players, these conditions define a certain fairness, such that the classical problems of fair division (for example, cutting a cake) become amenable to game-theoretic analysis.

This conceptual structure is more complicated than the conventional view that society could be organized according to some simple principle of maximization. The conventional view would be valid only if there were inessentiality—that is, if there were no advantage in cooperation, or if cooperation were forbidden, or, finally, if a supreme authority were to do away with the entire imputation problem by simply assigning shares of income to the members of the society. Inessentiality would be the case for a strictly communistic society, which is formally equivalent to a Robinson Crusoe economy. This, in turn, is the only formal setup under which the classical notion of marginal utility is logically valid. Whether cooperation through formation of coalitions is advantageous to participants in a society, whether such cooperation, although advantageous, is forbidden, or whether compensations or side payments are ruled out by some authority although coalitions may be entered—these are clearly empirical questions. The theory should take care of all eventualities, and current investigations explore the different avenues. In economic life, mergers, labor unions, trade associations, cartels, etc., express the powerful tendencies toward cooperation. The cooperative case with side payments is the most comprehensive, and the theory was originally designed to deal with this case. Important results have been obtained for cooperative games without side payments (Aumann & Peleg 1961), and the fruitful idea of “bargaining sets” has been introduced (Aumann & Maschler 1964).

All indications point overwhelmingly to the benefits of cooperation of various forms and hence to the empirical irrelevance of those noncooperative, inessential games with uniquely determined solutions consisting only of one single imputation dominating all others (as described in the Lausanne school’s general economic equilibrium).

Cooperation may depend on a particular flow of information among the players. Since the required level may not in fact be attainable, noncooperative solutions become important. Economic markets in which players act independently and have no incentive to deviate from a given state have been studied (Nash 1950). *Equilibrium points* can be determined as those points for which unilateral changes in strategy are unprofitable to everyone. As Nash has shown, every finite game has, in the domain of mixed strategies, at least one equilibrium point. If there is more than one equilibrium point, an intermixture of strategy choices need not give another equilibrium point, nor need the payoffs to the players be the same at different equilibrium points.
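An equilibrium point can be verified directly by checking that no unilateral deviation is profitable. A minimal sketch, using matching pennies with both players mixing heads and tails ½ and ½:

```python
# Verifying an equilibrium point: at equilibrium, no player gains by a
# unilateral change of strategy. A minimal sketch for matching pennies,
# where both players mix heads and tails with probability 1/2.
payoff_A = [[1, -1], [-1, 1]]   # A's payoffs; B's are the negatives (zero sum)

def expected(payoff, p, q):
    """Row player's expected payoff under mixed strategies p (rows), q (columns)."""
    return sum(p[i] * q[j] * payoff[i][j]
               for i in range(2) for j in range(2))

p_star, q_star = [0.5, 0.5], [0.5, 0.5]
value = expected(payoff_A, p_star, q_star)

print(value)                                          # 0.0
# A cannot profit from switching unilaterally to either pure strategy...
print(expected(payoff_A, [1, 0], q_star) <= value)    # True
print(expected(payoff_A, [0, 1], q_star) <= value)    # True
# ...and neither can B, who wants to lower A's payoff.
print(-expected(payoff_A, p_star, [1, 0]) <= -value)  # True
```

The same deviation test applies in non-zero-sum games, where each player's payoff matrix is checked separately.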

There is no proof, as yet, that every cooperative n-person, zero-sum game for any *n* > 4 has a solution of the specified kind. However, every individual game investigated, even with arbitrarily large *n*, has been found to possess a solution. The indications are that the proof for the general case will eventually be given. Other definitions of solutions—still differing from that of the Lausanne–Robinson Crusoe convention—are possible and somewhat narrow the field of choices. They are inevitably based on further assumptions about the behavior of the participants in the game, which have to be justified from case to case.

### Simple games

In certain n-person games the sole purpose is to form a *majority* coalition. These games are the “simple” games in which voting takes place. Ties in voting may occur, and weights may differ from one player to another; for example, the chairman of a committee may have more than one vote. A player’s presence may therefore mean the difference between victory and defeat. Games of this nature can be identified with classical cases of production, where the players represent factors of production. It has been proven that even in relatively simple cases, although complete substitutability among players may exist, substitution rates may be undetermined and values are attributed to the players (factors) only by virtue of their *relation* to each other and not by virtue of their individual contribution. Thus, contrary to current economic doctrine, substitutability does not necessarily guarantee equality as far as value is concerned.

Simple games are suited for interpretation of many political situations in that they allow the determination of the weights, or power, of participants in decision processes. A particular power index has been proposed by Shapley. It is based on the notion of the average contribution a player can make to the coalitions to which he may belong, even considering, where necessary, the order in which he joins them. The weight of a senator, a congressman, and the president in the legislative process has been calculated for the United States. The procedure is applicable to other political systems—for example, the Security Council of the United Nations (Shapley 1953).
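
For small committees the power index can be computed directly by enumerating all voting orders and counting how often each player is pivotal. The committee below (a chairman with two votes, two members with one vote each, and a quota of three) is a hypothetical illustration; note that the chairman's power, 2/3, exceeds his share of the votes, 1/2.

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index of a weighted majority game: the
    fraction of voting orders in which each player is pivotal, i.e.
    turns the coalition of his predecessors from losing to winning."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:          # this player tips the balance
                pivots[player] += 1
                break
    return [Fraction(p, factorial(n)) for p in pivots]

# Hypothetical committee: chairman with 2 votes, two members with 1 each.
print(shapley_shubik([2, 1, 1], quota=3))
```

The indices sum to one, so they distribute the total power of the committee among its members in proportion to average pivotal contribution rather than to raw vote weight.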

### Composition of games

Every increase in the number of players brings new phenomena: with the increase from two to three players, coalitions become possible, from three to four, ties may occur among coalitions, etc. There is no guarantee that for very large *n* an asymptotic convergence of solutions will occur, since coalition formation always reduces large numbers of individual players to small numbers of coalitions acting upon each other. Thus, the increase in the number of players does not necessarily lead to a simplification, as in the case of an enlargement of the numbers of bodies in a physical system, which then allows the introduction of classical methods of statistical averages as a simplification. (When the game is inessential, the number of participants is irrelevant in any case.)

An effective extension of the theory by the enlargement of numbers can be achieved by viewing games played separately as one composite game and by introducing contributions to, or withdrawals from, the proceeds of a given game by a group of players outside the game under consideration. These more complicated notions involve constant-sum games and demonstrate, among other things, how the coalition formation, the degree of cooperation among players, and consequently the distribution of the proceeds among them are affected by the availability of amounts in excess of those due to their own strategies alone. Strategy is clearly greatly influenced by the availability of payments greater than those that can be made by only the other players. Thus, coalitions—namely, social structures—cannot be maintained if outside contributions become larger than specified amounts, such that as a consequence no coalition can exhaust the amounts offered. It can also be shown that the outside source, making contributions or withdrawals, can never be less than a group of three players.

These concepts and results are obviously of a rather complicated nature; they are not always directly accessible to intuition, as corresponds to a truly mathematical theory. When that level is reached, confidence in the mathematical results must override intuition, as the experience in the natural sciences shows. The fact that solutions of n-person games are not single numbers or single sets of numbers—but that the above-mentioned, more complicated structures emerge—is not an imperfection of the theory: it is a fundamental property of social organization that can be described only by game-theoretic methods.

### Nonzero-sum games

Nonzero-sum games can be reduced to zero-sum games—which makes that entire theory applicable—by the introduction of a fictitious player, so that an n-person, nonzero-sum game becomes equivalent to an (n + 1)-person, zero-sum game. The fictitious player is either winning or losing, but since he is fictitious he can never become a member of a coalition. Yet he can be construed as proposing alternative imputations, thereby influencing the players’ strategies and thus the course of the play. He will lose according to the degree of cooperation among the players. If the players cooperate perfectly, the maximum social benefit will be attained. In these games there is an increased role of threats, and their costs to the threatening player, although threats already occur in the zero-sum case.
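
The reduction itself is mechanical: the fictitious player is assigned the negative of the real players' combined payoff, so that every outcome of the extended game sums to zero. A minimal sketch, with a hypothetical payoff vector:

```python
def add_fictitious_player(payoffs):
    """Extend an n-person, nonzero-sum outcome to an (n + 1)-person,
    zero-sum one: the fictitious player absorbs the negated total."""
    return payoffs + [-sum(payoffs)]

outcome = [4, -1, 2]                     # hypothetical 3-person payoffs
extended = add_fictitious_player(outcome)
print(extended, sum(extended))           # extended outcome sums to zero
```

The fictitious player loses exactly when the real players' combined gain is positive, which is why his loss measures their degree of cooperation.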

The discriminatory solutions, first encountered for the three-person, zero-sum game, serve as instruments to approach these problems. Most applications to economics involve gains by the community—an economy being productive and there being no voluntary exchange unless both sides profit—while many other social phenomena fall under the domain of zero-sum games. The nonzero-sum theory is so far the part of game theory least developed in detail, although its foundations seem to be firmly established by the above procedure.

### Applications

Game theory is applicable to the study of those social phenomena in which there are agents striving for their own advantage but not in control of all the variables on which the outcome depends. The wide range of situations of which this is true is obvious: they are economic, political, military, and strictly social in nature. Applications have been made in varying degree to all areas; some have led to experiments that have yielded important new insights into the theory itself and into special processes such as bargaining. Finally, the possibility of viewing the basic problem of statistics as a game against nature has given rise to modern statistical decision theory (Wald 1950). The influence of game theory is also evident in philosophy, information theory, cybernetics, and even biology.

Oskar Morgenstern

[*See also the biography of* Von Neumann.]

## BIBLIOGRAPHY

Aumann, R. J.; and Peleg, B. 1961 Von Neumann–Morgenstern Solutions to Cooperative Games Without Side Payments. American Mathematical Society, *Bulletin* 66:173–179.

Aumann, R. J.; and Maschler, M. 1964 The Bargaining Set for Cooperative Games. Pages 443-476 in M. Dresher, L. S. Shapley, and A. W. Tucker (editors), *Advances in Game Theory.* Princeton Univ. Press.

Baumol, William J.; and Goldfeld, Stephen M. (editors) 1967 *Precursors in Mathematical Economics.* Unpublished manuscript. → To be published in 1967 or 1968 by the London School of Economics and Political Science. Contains the letter from Waldegrave to Remond de Montmort, first published in the second (1713) edition of Montmort (1708), describing his formulation, and a discussion by Harold W. Kuhn of the identity of Waldegrave.

Berge, Claude 1957 *Théorie générale des jeux à* n *personnes.* Paris: Gauthier-Villars.

Blackwell, David; and Girshick, M. A. 1954 *Theory of Games and Statistical Decisions.* New York: Wiley.

Braithwaite, Richard B. 1955 *Theory of Games as a Tool for the Moral Philosopher.* Cambridge Univ. Press.

Burger, Ewald (1959) 1963 *Introduction to the Theory of Games.* Englewood Cliffs, N.J.: Prentice-Hall. → First published in German.

Dresher, Melvin 1961 *Games of Strategy: Theory and Applications.* Englewood Cliffs, N.J.: Prentice-Hall.

Dresher, Melvin; Shapley, L. S.; and Tucker, A. W. (editors) 1964 *Advances in Game Theory.* Annals of Mathematics Studies, Vol. 32. Princeton Univ. Press.

Edgeworth, Francis Y. (1881) 1953 *Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences.* New York: Kelley.

Fréchet, Maurice; and Von Neumann, John 1953 Commentary on the Three Notes of Émile Borel. *Econometrica* 21, no. 1:118–127.

Karlin, Samuel 1959 *Mathematical Methods and Theory in Games, Programming and Economics.* 2 vols. Reading, Mass.: Addison-Wesley.

Kuhn, Harold W.; and Tucker, A. W. (editors) 1950-1959 *Contributions to the Theory of Games.* 4 vols. Princeton Univ. Press.

Luce, R. Duncan; and Raiffa, Howard 1957 *Games and Decisions: Introduction and Critical Survey.* A Study of the Behavioral Models Project, Bureau of Applied Social Research, Columbia University. New York. → First published in 1954 as *A Survey of the Theory of Games,* Columbia University, Bureau of Applied Social Research, Technical Report No. 5.

McKinsey, John C. C. 1952 *Introduction to the Theory of Games.* New York: McGraw-Hill.

[Montmort, Pierre Rémond de] (1708) 1713 *Essay d’analyse sur les jeux de hazard.* 2d ed. Paris: Quillau. → Published anonymously.

Morgenstern, Oskar 1963 *Spieltheorie und Wirtschaftswissenschaft.* Vienna: Oldenbourg.

Nash, John F. Jr. 1950 Equilibrium in n-Person Games. National Academy of Sciences, *Proceedings* 36:48–49.

Princeton University Conference 1962 *Recent Advances in Game Theory.* Princeton, N.J.: The Conference.

Shapley, L. S. 1953 A Value for n-Person Games. Volume 2, pages 307-317 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press.

Shapley, L. S.; and Shubik, Martin 1954 A Method for Evaluating the Distribution of Power in a Committee System. *American Political Science Review* 48:787–792.

Shubik, Martin (editor) 1964 *Game Theory and Related Approaches to Social Behavior: Selections.* New York: Wiley.

Suzuki, Mitsuo 1959 *Gemu no riron.* Tokyo: Keisho Shobo.

Ville, Jean 1938 Sur la théorie générale des jeux où intervient l’habileté des joueurs. Pages 105-113 in Émile Borel (editor), *Traité du calcul des probabilités et de ses applications.* Volume 4: Applications diverses et conclusion. Paris: Gauthier-Villars.

Vogelsang, Rudolf 1963 *Die mathematische Theorie der Spiele.* Bonn: Dümmler.

Von Neumann, John (1928) 1959 On the Theory of Games of Strategy. Volume 4, pages 13-42 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press. → First published in German in Volume 100 of the *Mathematische Annalen.*

Von Neumann, John; and Morgenstern, Oskar (1944) 1964 *Theory of Games and Economic Behavior.* 3d ed. New York: Wiley.

Vorob’ev, N. N. (editor) 1961 *Matrichnye igry.* Moscow: Gosudarstvennoe Izdatel’stvo Fiziko-Matematicheskoi Literatury. → A collection of translations into Russian from foreign-language publications.

Wald, Abraham (1950) 1964 *Statistical Decision Functions.* New York: Wiley.

Williams, John D. 1954 *The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy.* New York: McGraw-Hill.

## II ECONOMIC APPLICATIONS

The major economic applications of game theory have been in oligopoly theory, bargaining theory, and general equilibrium theory. Several distinct branches of game theory exist and need to be identified before our attention is limited to economic behavior. John von Neumann and Oskar Morgenstern, who first explored in depth the role of game theory in economic analysis (1944), presented three aspects of game theory which are so fundamentally independent of one another that with a small amount of editing their opus could have been published as three independent books.

The first topic was the description of a game, or interdependent decision process, in extensive form. This provided a phraseology (“choice,” “decision tree,” “move,” “information,” “strategy,” and “payoff”) for the precise definition of terms, which has served as a basis for studying artificial intelligence, for developing the behavioral theory of the firm (Cyert & March 1963), and for considering statistical decision making [*see* Decision Theory]. The definition of “payoff” has been closely associated with developments in utility theory [*see* Utility].

The second topic was the description of the two-person, zero-sum game and the development of the mathematical theory based upon the concept of the minimax solution. This theory has formal mathematical connections with linear programming and has been applied successfully to the analysis of problems of pure conflict; however, its application to the social sciences has been limited because pure conflict of interests is the exception rather than the rule in social situations [*see* Programming].

The third subject to which von Neumann and Morgenstern directed their attention was the development of a static theory for the n-person (n ≥ 3), constant-sum game. They suggested a set of stability and domination conditions which should hold for a cooperative solution to an n-person game. It must be noted that the implications of this solution concept were developed on the assumption of the existence of a transferable, interpersonally comparable linear utility which provides a mechanism for side payments. Since the original work of von Neumann and Morgenstern, twenty to thirty alternative solution concepts for the n-person, non-constant-sum game have been suggested. Some have been of purely mathematical interest, but most have been based on considerations of bargaining, fair division, social stability, and other aspects of human affairs. Many of the solution concepts do not use the assumption of transferable utility.

### Oligopoly and bargaining

Markets in which there are only a few sellers (oligopoly), two sellers (duopoly, a special case of oligopoly), one seller and one buyer (bilateral monopoly), and so on, lend themselves to game-theoretic analyses because the fate of each participant depends on the actions taken by the other participant or participants. The theory of games has provided a unifying basis for the mathematical and semimathematical works dealing with such situations and has also provided some new results. The methodology of game theory requires explicit and detailed definition of the strategies available to the players and of the payoffs associated with the strategies. This methodology has helped to clarify the different aspects of intent, behavior, and market structure in oligopolistic markets (Shubik 1957). So-called conjectural variations and lengthy statements regarding an oligopolist’s (or duopolist’s or bargainer’s) moves and countermoves can be investigated in a unified way when expressed in terms of strategies.

#### Oligopoly

Perhaps the most pervasive concept underlying the writings on oligopoly is that of a noncooperative equilibrium. A group of individuals is in a state of noncooperative equilibrium if, in the individual pursuit of his own self-interest, no one in the group is motivated to change his strategy. This concept is basic in the works of Cournot, Bertrand, Edgeworth, Chamberlin, von Stackelberg, and many others. Nash (1951) has presented a general theory of noncooperative games, based on the equilibrium-point solution. This theory is directly related to Chamberlin’s theory of monopolistic competition, among others.

The outcome given by a solution is called Pareto optimal if no participant can be made better off without some other participant’s being made worse off. Noncooperative solutions, whose outcomes need not be Pareto optimal, have been distinguished from cooperative solutions, whose outcomes must be Pareto optimal. Also, equilibrium points are distinguished on the basis of whether the oligopoly model studied is static or dynamic. In much of the literature on oligopoly, quasi-cooperative solutions have been advanced and quasi-dynamic models have been suggested. Thus, while the Chamberlin large-group equilibrium can be interpreted as the outcome of a static noncooperative game, the small-group equilibrium and the market resolution suggested by Fellner (1949) are cast in a quasi-dynamic, quasi-cooperative framework. A limited amount of development of games of survival (Milnor & Shapley 1957) and games of economic survival (Shubik & Thompson 1959) has provided a basis for the study of multiperiod situations and for an extension of the noncooperative equilibrium concept to include quasi-cooperative outcomes.

*New results.* The recasting of oligopoly situations into a game-theory context has produced some new results in oligopoly theory (see, for example, Mayberry, Nash, & Shubik 1953; Shubik 1959a). Nash (1953) and Shubik (1959a) have developed the definition of “optimum threat” in economic warfare. The kinky oligopoly demand curve and the more general problem of oligopolistic demand have been re-examined and interpreted. Other results concern stability and the Edgeworth cycle in price-variation oligopoly; duopoly with both price and quantity as independent variables; and the development of diverse concepts applicable to cartel behavior, such as blocking coalitions (Scarf 1965), discriminatory solutions, and decomposable games.

Selten (1965) has been concerned with the problem of calculating the noncooperative equilibria for various classes of oligopolistic markets. His work has focused on both the explicit calculation and the uniqueness of equilibrium points. Vickrey (1961), Griesmer and Shubik (1963), and others have studied a class of game models applicable to bidding and auction markets. Working from the viewpoint of marketing and operations research, Mills (1961) and others have constructed several noncooperative game-theoretic models of competition through advertising. Jacot (1963) has considered problems involving location and spatial competition.

*Behavioristic findings.* Game theory can be given both a normative and a behavioristic interpretation. The meaning of “rational behavior” in situations involving elements of conflict and cooperation is not well defined. No single set of normative criteria has been generally accepted, and no universal behavior has been validated. Closely related to and partially inspired by the developments in game theory, there has been a growth in experimental gaming, some of which has been in the context of economic bargaining (Siegel & Fouraker 1960) or in the simulated environment of an oligopolistic market (Hoggatt 1959). Where there is no verbal or face-to-face communication, there appears, under the appropriate circumstances, to be some evidence in favor of the noncooperative equilibrium.

#### Bargaining

The theory of bargaining has been of special interest to economists in the context of bilateral monopoly, which can involve two firms, a labor union and a firm, or two individuals engaged in barter in the market place or trying to settle a joint estate. Any two-person, nonconstant-sum situation, be it haggling in the market or international negotiations, can be formally described in the same game-theoretic framework. However, there are several substantive problems which limit application of this framework and which have resulted in the development of different approaches. In nonconstant-sum games communication between the players is of considerable importance, yet its role is exceedingly hard to define. In games such as chess and even in many oligopolistic markets, a move is a well-defined physical act—moving a pawn in a definite manner or changing a price or deciding upon a production rate; in bargaining it may be necessary to interpret a statement as a move. The problem of interpreting words as moves in negotiation is critical to the description and understanding of bargaining and negotiation processes. This “coding” problem has to be considered from the viewpoint of many other disciplines, as well as that of game theory.

A desirable property of a theoretical solution to a bargaining problem is that it predicts a unique outcome. In the context of economics this would be a unique distribution of resources (and unique prices, if prices exist at all). Unfortunately, there are few concepts of solution pertaining to economic affairs which have this property. The price system and distribution resulting from a competitive market may in general not be unique; Edgeworth’s solution to the bargaining problem was the contract curve, which merely predicts that the outcome will be some point among an infinite set of possibilities.

The contract curve has the property that any point on it is jointly optimal (the two bargainers cannot simultaneously improve their positions by moving from a point on this curve) and individually rational (no point gives an individual less than he could obtain without trading). The Pareto-optimal surface is larger than the contract curve, for it is restricted only by the joint optimality condition. If it is assumed that a transferable comparable utility exists, then the Pareto-optimal surface (described in the space of the traders’ utilities) is flat; if not, it will generally be curved. Any point on the Pareto-optimal surface that is individually rational is called an imputation. In the two-person bargain the Edgeworth contract curve coincides with two game-theoretic solutions, the *core* and the *stable set.* The core consists of all undominated imputations (it may be empty). A stable set is a set of imputations which do not dominate each other but which together dominate all other imputations. An imputation, α, is said to *dominate* another imputation, β, if (1) there exists a coalition of players who, acting jointly but independently of the others, could guarantee for themselves at least the amounts they would receive if they accepted α, and (2) each player in that coalition obtains more in α than in β. The core and stable-set solutions can be defined with or without the assumption of transferable utilities. Neither of these solution concepts predicts a unique outcome.
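
The core condition admits a direct check for small transferable-utility games: an imputation is undominated exactly when no coalition can, on its own, guarantee its members more than they currently receive. The three-person market below (one seller, two interchangeable buyers) is a hypothetical illustration.

```python
from itertools import combinations

def in_core(x, v, players):
    """An imputation x is in the core of the TU game v when every
    coalition S receives at least what it can guarantee itself:
    sum of x[i] over S >= v(S)."""
    return all(
        sum(x[i] for i in S) >= v(set(S))
        for r in range(1, len(players) + 1)
        for S in combinations(players, r)
    )

# Hypothetical market game: v(S) = 1 whenever the seller (player 0)
# and at least one buyer (player 1 or 2) are both in S, else 0.
def v(S):
    return 1 if 0 in S and (1 in S or 2 in S) else 0

players = (0, 1, 2)
print(in_core((1.0, 0.0, 0.0), v, players))    # True: the seller takes all
print(in_core((0.5, 0.25, 0.25), v, players))  # False: blocked by {0, 1}
```

The second imputation is dominated because the coalition of the seller and one buyer can guarantee itself 1 but receives only 0.75; competition between the two buyers drives the entire proceeds to the seller, which foreshadows the convergence results discussed below under general equilibrium.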

One approach to bilateral monopoly has been to regard it as a “fair-division” problem, and several solution concepts, each one embodying a formalization of concepts of symmetry, justice, and equity, have been suggested (Nash 1953; Shapley 1953; Harsanyi 1956). These are generally known as *value* solutions, since they specify the amount that each participant should obtain. For the two-person case, some of the fair-division or arbitration schemes do predict unique outcomes. The Nash fair-division scheme assumes that utilities of the players are measurable, but it does not need assumptions of either comparability or transferability of utilities (Shubik 1966). Shapley’s scheme does utilize the last two assumptions. Other schemes have been suggested by Raiffa (1953), Braithwaite (1955), Kuhn (in Shubik 1967), and others.
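
The Nash scheme admits a direct numerical sketch: among the feasible utility pairs, select the one maximizing the product of the players' gains over their disagreement (threat) point. The linear frontier below, dividing one unit of transferable utility, is a hypothetical example; note how shifting the threat point shifts the award.

```python
def nash_bargaining(frontier, disagreement):
    """Nash fair-division point: the feasible utility pair maximizing
    the product of both players' gains over the disagreement point."""
    d1, d2 = disagreement
    return max(frontier, key=lambda u: (u[0] - d1) * (u[1] - d2))

# Hypothetical frontier: all divisions of one unit of utility.
frontier = [(i / 100, 1 - i / 100) for i in range(101)]
print(nash_bargaining(frontier, (0.0, 0.0)))   # → (0.5, 0.5)
print(nash_bargaining(frontier, (0.2, 0.0)))   # → (0.6, 0.4)
```

With the threat point at the origin the award is the even split; improving player 1's fallback to 0.2 moves the award in his favor, which is the numerical content of the "optimum threat" idea mentioned earlier.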

Another approach to bargaining is to treat it in the extensive form, describing each move explicitly and showing the time path taken to the settlement point. This involves attempting to parametrize qualities such as “toughness,” “flexibility,” etc. Most of the attempts to apply game theory in this manner belong to studies in social psychology, political science, and experimental gaming. However, it has been shown (Harsanyi 1956) that the dynamic process suggested by Zeuthen (1930) is equivalent to the Nash fair-division scheme.

### General equilibrium

Game theory methods have provided several new insights in general equilibrium economics. Under the appropriate conditions on preferences and production, it has been proved that a price system that clears the market will exist, provided that each individual acts as an independent maximizer. This result holds true independently of the number of participants in the market; hence, it cannot be interpreted as a limiting phenomenon as the number of participants increases. Yet, in verbal discussions contrasting the competitive market with bilateral monopoly, the difference generally stressed is that between the market with many participants, each with little if any control over price, and the market with few participants, where the interactions of each with all the others are of maximum importance.

The competitive equilibrium best reflects the spirit of “the invisible hand” and of decentralization. The use of the word “competitive” runs counter to both its game-theoretic and its common-language implications. It refers to the case in which, if each individual considers himself an isolated maximizer operating in an environment over which he has no control, the results will be jointly optimal.

#### Game-theoretic solutions

The power and appeal of the concept of competitive equilibrium appear to be far greater than that of mere decentralization. This is reflected in the finding that under the appropriate conditions the competitive equilibrium may be regarded as the limit solution for several conceptually extremely different game-theoretic solutions.

*Convergence of the core.* It has been noted that for bilateral monopoly the Edgeworth contract curve is the core. Edgeworth had suggested and presented an argument to show that if the number of traders is increased on both sides of the market, the contract curve would shrink (interpreted appropriately, given the change in dimensions). Shubik (1959b) observed the connection between the work of Edgeworth and the core; he proved the convergence of the core to the competitive equilibrium in the special case of the two-sided market with transferable utility and conjectured that the result would be generally true for any number of markets without transferable utility. This result was proved by Scarf (the proof, although achieved earlier, is described in Scarf 1965); Debreu and Scarf improved upon it (1963). Using the concept of a continuum of players (rather than considering a limit by replicating the finite number of players in each category, as was done by Shubik, Scarf, and Debreu), Aumann (1966) proved the convergence of the core under somewhat different conditions. When transferable utility is assumed, the core converges to a single point and the competitive equilibrium is unique. Otherwise it may split and converge to the set of competitive equilibria.

The convergence of the core establishes the existence of a price system as a result of a theory which makes no mention of prices. The theory’s prime concern is with the power of coalitions. It may be looked upon as a formalization of countervailing power, inasmuch as it rules out imputations which can be dominated by any group in the society.

Shapley and Shubik (1966) have shown the convergence of the value in the two-sided market with transferable utility. In unpublished work Shapley has proved a more general result for any number of markets, and Shapley and Aumann have worked on the convergence of a nontransferable utility value recently defined by Shapley. Harsanyi (1959) was able to define a value that generalized the Nash two-person fair-division scheme to situations involving many individuals whose utilities are not transferable. This preceded and is related to the new value of Shapley, and its convergence has not been proved.

There are several other value concepts (Selten 1964), all of which make use of symmetry axioms and are based upon some type of averaging of the contributions of an individual to all coalitions.

If one is willing to accept the value as reflecting certain concepts of symmetry and fairness, then in an economy with many individuals in all walks of life, and with the conditions which are required for the existence of a competitive equilibrium satisfied, the competitive equilibria will also satisfy these symmetry and fairness criteria.

*Noncooperative equilibrium.* One of the important open problems has been the reconciliation of the various noncooperative theories of oligopolistic competition with general equilibrium theory. The major difficulty is that the oligopoly models are open in the sense that the customers are usually not considered as players with strategic freedom, while the general equilibrium model considers every individual in the same manner, regardless of his position in the economy. Since the firms are players in the oligopoly models, it is necessary to specify the domain of the strategies they control and their payoffs under all circumstances. In a general equilibrium model no individual is considered a player; all are regarded as individual maximizers. Walras’ law is assumed to hold, and supply is assumed to equal demand.

When an attempt is made to consider a closed economic model as a noncooperative game, considerable difficulties are encountered in describing the strategies of the players. This can be seen immediately by considering the bilateral monopoly problem; each individual does not really know what he is in a position to buy until he finds out what he can sell. In order to model this type of situation as a game, it may be necessary to consider strategies which do not clear the market and which may cause a player to become bankrupt—i.e., unable to meet his commitments. Shapley and Shubik (in Shubik 1967) have successfully modeled the closed two-sided two-commodity market without side payments and have shown that the noncooperative equilibrium point converges from below the Pareto-optimal surface to the competitive equilibrium point. They also have considered more goods and markets on the assumption of the existence of a transferable (but not necessarily comparable) utility.

When there are more than two commodities and one market, the existence of a unique competitive equilibrium point appears to be indispensable in defining the strategies and payoffs of players in a noncooperative game. No one has succeeded in constructing a satisfactory general market model as a noncooperative game without using a side-payment mechanism. The important role played by the side-payment commodity is that of a strategy decoupler. It means that a player with a supply of this type of “money” can decide what to buy even though he does not know what he will sell.

In summary, it appears that, in the limit, at least three considerably different game-theoretic solutions coincide with the competitive equilibrium solution. This means that by considering different solutions we may interpret the competitive market in terms of decentralization, fair division, the power of groups, and the attenuation of power of the individual.

The stable-set solution of von Neumann and Morgenstern, the bargaining set of Aumann and Maschler (1964), the “self-policing” properties of certain imputation sets of Vickrey (1959), and several other related cooperative solutions appear to be more applicable to sociology, and possibly anthropology, than to economics. There has been no indication of a limiting behavior for these solutions as numbers grow; on the contrary, it is conjectured that in general the solutions proliferate. When, however, numbers are few, as in cartel arrangements and in international trade, these other solutions provide insights, as Nyblen has shown in his work dealing with stable sets (1951).

#### Nonexistence of competitive equilibrium

When conditions other than those needed for the existence of a competitive equilibrium hold, such as external economies or diseconomies, joint ownership, increasing returns to scale, and interlinked tastes, then the different solutions in general do not converge. There may be no competitive equilibrium; the core may be empty; and the definition of a noncooperative game when joint property is at stake will call for a statement of the laws concerning damages and threats. (Similarly, even though the conditions for the existence of a competitive equilibrium are satisfied, the various solutions will be different if there are few participants.) When the competitive equilibrium does not exist, we must seek another criterion to solve the problem of distribution or, if possible, change the laws to reintroduce the competitive equilibrium. The other solutions provide different criteria. However, if a society desires, for example, to have its distribution system satisfy conditions of decentralization and fair division, or of fair division and limits on the power of groups, it may be logically impossible to do so.

Davis and Whinston (1962), Scarf (1964), and Shapley and Shubik (1964) have investigated applications of game theory to external economies, to increasing returns to scale, and to joint ownership. In the case of joint ownership the relation between economics and politics as mechanisms for the distribution of the proceeds from jointly owned resources is evident.

It must be noted that the “many solutions” approach to distribution is in contrast to the type of welfare economics that considers a community welfare function or social preferences, which are not necessarily constructed from individual preferences.

### Other applications

Leaving aside questions of transferable utility, there is a considerable difference between an economy in which there is only barter or a passive shadow price system and one in which the government, and possibly others, have important monetary strategies. Faxen (1957) has considered financial policy from a game-theoretic viewpoint.

There have been some diverse applications of game theory to budgeting and to management science, as can be seen in the articles by Bennion (1956) and Shubik (1955).

Nyblen (1951) has attempted to apply the von Neumann and Morgenstern concept of stable set to problems of macroeconomics. He notes that the Walrasian system bypasses the problem of individual power by assuming it away. He observes that in game theory certain simple aggregation procedures do not hold; thus, the solutions to a four-person game obtained by aggregating two players in a five-person game may have little in common with the solutions to the original five-person game. He outlines an institutional theory of the rate of interest based upon a standard of behavior and (primarily at a descriptive level) links the concepts of discriminatory solution and excess to inflation and international trade.

Martin Shubik

[*The reader who is not familiar with oligopoly theory and general equilibrium theory should consult* Economic Equilibrium; Oligopoly; Welfare Economics.]

## BIBLIOGRAPHY

Aumann, Robert J. 1966 Existence of Competitive Equilibria in Markets With a Continuum of Traders. *Econometrica* 34:1–17.

Aumann, R. J.; and Maschler, M. 1964 The Bargaining Set for Cooperative Games. Pages 443–476 in M. Dresher, Lloyd S. Shapley, and A. W. Tucker (editors), *Advances in Game Theory.* Princeton Univ. Press.

Bennion, E. G. 1956 Capital Budgeting and Game Theory. *Harvard Business Review* 34:115–123.

Braithwaite, Richard B. 1955 *Theory of Games as a Tool for the Moral Philosopher.* Cambridge Univ. Press.

Cyert, Richard M.; and March, James G. 1963 *A Behavioral Theory of the Firm.* Englewood Cliffs, N.J.: Prentice-Hall.

Davis, Otto A.; and Whinston, A. 1962 Externalities, Welfare, and the Theory of Games. *Journal of Political Economy* 70:241–262.

Debreu, Gerard; and Scarf, Herbert 1963 A Limit Theorem on the Core of an Economy. *International Economic Review* 4:235–246.

Faxen, Karl O. 1957 *Monetary and Fiscal Policy Under Uncertainty.* Stockholm: Almqvist & Wiksell.

Fellner, William J. 1949 *Competition Among the Few: Oligopoly and Similar Market Structures.* New York: Knopf.

Griesmer, James H.; and Shubik, Martin 1963 Towards a Study of Bidding Processes. *Naval Research Logistics Quarterly* 10:11–21, 151–173, 199–217.

Harsanyi, John C. 1956 Approaches to the Bargaining Problem Before and After the Theory of Games. *Econometrica* 24:144–157.

Harsanyi, John C. 1959 A Bargaining Model for the Cooperative n-Person Game. Volume 4, pages 325–356 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Hoggatt, A. C. 1959 An Experimental Business Game. *Behavioral Science* 4:192–203.

Jacot, Simon-Pierre 1963 *Stratégie et concurrence: De l’application de la théorie des jeux à l’analyse de la concurrence spatiale.* Paris: SEDES.

Mayberry, J. P.; Nash, J. F.; and Shubik, Martin 1953 A Comparison of Treatments of a Duopoly Situation. *Econometrica* 21:141–154.

Mills, H. D. 1961 A Study in Promotional Competition. Pages 245–301 in Frank M. Bass et al. (editors), *Mathematical Models and Methods in Marketing.* Homewood, Ill.: Irwin.

Milnor, John W.; and Shapley, Lloyd S. 1957 On Games of Survival. Volume 3, pages 15–45 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press. → Volume 3 was edited by M. Dresher, A. W. Tucker, and P. Wolfe.

Nash, John F. Jr. 1951 Non-cooperative Games. *Annals of Mathematics* 54:286–295.

Nash, John F. Jr. 1953 Two-person Cooperative Games. *Econometrica* 21:128–140.

Nyblen, Goran 1951 *The Problem of Summation in Economic Sciences.* Lund (Sweden): Gleerup.

Raiffa, Howard 1953 Arbitration Schemes for Generalized Two-person Games. Volume 2, pages 361-387 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press.

Scarf, H. 1964 Notes on the Core of a Productive Economy. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Scarf, H. 1965 The Core of an n-Person Game. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Selten, Reinhard 1964 Valuation of n-Person Games. Pages 577-626 in M. Dresher, Lloyd S. Shapley, and A. W. Tucker (editors), *Advances in Game Theory.* Princeton Univ. Press.

Selten, Reinhard 1965 Value of the n-Person Game. → Paper presented at the First International Game Theory Workshop, Hebrew University of Jerusalem.

Shapley, Lloyd S. 1953 A Value for n-Person Games. Volume 2, pages 307-317 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press.

Shapley, Lloyd S.; and Shubik, Martin 1964 *Ownership and the Production Function.* RAND Corporation Research Memorandum, RM-4053-PR. Santa Monica, Calif.: The Corporation.

Shapley, Lloyd S.; and Shubik, Martin 1966 *Pure Competition, Coalition Power and Fair Division.* RAND Corporation Research Memorandum, RM-4917. Santa Monica, Calif.: The Corporation.

Shubik, Martin 1955 The Uses of Game Theory in Management Science. *Management Science* 2:40–54.

Shubik, Martin 1957 Market Form, Intent of the Firm and Market Behavior. *Zeitschrift für Nationalökonomie* 17:186–196.

Shubik, Martin 1959a *Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games.* New York: Wiley.

Shubik, Martin 1959b Edgeworth Market Games. Volume 4, pages 267–278 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Shubik, Martin 1966 Measurable, Transferable, Comparable Utility and Money. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Shubik, Martin (editor) 1967 *Essays in Mathematical Economics in Honor of Oskar Morgenstern.* Princeton Univ. Press. → See especially Harold W. Kuhn, “On Games of Fair Division,” and Lloyd S. Shapley and Martin Shubik, “Concept and Theories of Pure Competition.”

Shubik, Martin; and Thompson, Gerald L. 1959 Games of Economic Survival. *Naval Research Logistics Quarterly* 6:111–123.

Siegel, S.; and Fouraker, L. E. 1960 *Bargaining and Group Decision Making: Experiments in Bilateral Monopoly.* New York: McGraw-Hill.

Vickrey, William 1959 Self-policing Properties of Certain Imputation Sets. Volume 4, pages 213–246 in Harold W. Kuhn and A. W. Tucker (editors), *Contributions to the Theory of Games.* Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Vickrey, William 1961 Counterspeculation, Auctions and Competitive Sealed Tenders. *Journal of Finance* 16:8–37.

Von Neumann, John; and Morgenstern, Oskar (1944) 1964 *Theory of Games and Economic Behavior.* 3d ed. New York: Wiley.

Zeuthen, F. 1930 *Problems of Monopoly and Economic Warfare.* London: Routledge.

## Game Theory

# GAME THEORY

Game theory is the analysis of choices made by individuals, institutions, or governments, which are termed players; the results of one player's choice depend on the choices made by the others. Anticipations by players about how others may respond, or may anticipate their actions, thus influence their choices. An important early application of game theory was the formation of nuclear deterrence strategy by the United States during the Cold War (1945–1990). However, game theory has many more general implications that go beyond those involving intentional choice.

Despite the fact that game theory matured only toward the end of the twentieth century, it has become a central tool in some of the behavioral sciences and doubtless will extend its influence into all disciplines that attempt to explain the behavior of living organisms. Indeed, game theory provides a language that transcends and potentially unites the various disciplines that deal with human behavior. Moreover, it provides an experimental methodology that allows for the rigorous construction and testing of models of strategic interaction, because it forces an experimenter to be explicit in defining the actions available to the subjects, the payoffs, and the distribution of information among the subjects.

## An Illuminating Example

A fox is chasing a rabbit through a wooded area. Foxes are faster than rabbits, and so if the rabbit runs in a straight line, it will be caught and eaten. The rabbit therefore periodically veers left or right, gaining ground on the fox. If the rabbit changes course too rapidly, its average forward movement will be so slow that it will be caught, but if it changes course too slowly, the fox will be so close that a small misstep by the rabbit will lead to its immediate demise. Therefore, the rabbit must choose the average rate of veering to optimize its probability of escaping.

In game theory it is said that the rabbit has *actions*: *R*_{t} = "Veer Right after t seconds" and *L*_{t} = "Veer Left after t seconds." The rabbit also wants to randomize its choice of Veer Right and Veer Left, because if the fox discovers a pattern in the rabbit's movement, it may be able to anticipate the rabbit's next move, thereby gaining ground on it. The proper mix of Veer Left and Veer Right is doubtless 50 percent Left and 50 percent Right, for the fox potentially could exploit an imbalance in either direction.

However, suppose that there is an open field some distance to the east of the wood and that foxes run much faster than rabbits do in an open field. Then the fox might run constantly a little to the west of the rabbit, forcing the rabbit to turn east more often than it turns west. The rabbit in turn may risk being caught by veering west more frequently than it would otherwise, trying to keep away from the open field. It can be seen that both the rabbit and the fox choose actions to maximize the probability of winning, with each anticipating the effect of its actions on the other. This is the type of situation studied in game theory.

How important is game theory? It is central to understanding life in all its varied forms. This may sound excessive, but one must step back from this interaction between a rabbit and a fox to ask more basic questions. For example, why are rabbits bilaterally symmetrical about the axis along which their movement is most rapid and energy-efficient (left leg and right leg symmetrically placed and equally strong, left eye and right eye symmetrically placed and of equal size and discriminating capacity, and single external body parts such as the nose and tail arrayed along the axis of movement)? The answer is that if rabbits had strength biased to the right, it would be easier for them to jump left than jump right, and that would give an advantage to their natural predators, the foxes. Foxes are bilaterally symmetrical for similar reasons. Game theory thus explains important facts about life that otherwise appear arbitrary and incomprehensible.

This simple game theoretic argument explains a major fact about the organization of life. Animals that run to escape predators or capture prey have body forms that are bilaterally symmetrical about a vertical axis along the direction of their most rapid motion. This applies to most animals and fish but not to plants, which do not run and are radially symmetrical, or to squid, octopuses, and other sea creatures whose primary motion is up and down.

To avoid the conclusion that game theory deals only with conflict, one can consider an example that is called the Cooperation Game. A group of ten hunters in a village spread out in the jungle every day to look for large game. They hunt individually, climbing tall trees and waiting quietly and attentively for long hours until the prey appears. At the end of the day the hunters share the day's kill equally. Of course, each hunter could spend the day sleeping in a tree. Suppose that by working each hunter adds an average of 3,000 calories to the total kill, of which his share is 300, but expends 500 calories of energy hunting as opposed to sleeping. A selfish hunter thus will sleep rather than hunt, saving 200 calories but costing the other group members 2,700 calories. This is a game in which there are *n* players and each player (i.e., each hunter) has two actions: Work or Shirk. If *m* hunters Work, each Shirker's payoff is 3,000*m*/*n*, whereas each Worker's payoff is 3,000*m*/*n* − 500.

A *best response* of a player in a game can be defined as a strategy (in this case an action) that maximizes that player's payoff in light of the strategies of the other players. It is easy to see that a self-interested player's best response in this game is to Shirk no matter what the other players do. A *Nash equilibrium* of a game is defined as a choice of strategies made by the players such that each is a best response to the other players' choices. It is clear that in the Cooperation Game there is only one Nash equilibrium, in which everyone shirks (*m* = 0) and no one eats.
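The hunters' incentives can be checked directly. The sketch below uses the calorie figures from the text to confirm that Shirk is a best response no matter what the other hunters do:

```python
# A sketch of the Cooperation Game above, using the calorie figures from
# the text: n = 10 hunters, 3,000 calories added per Worker, 500 spent hunting.

def payoff(works: bool, others_working: int, n: int = 10) -> float:
    """One hunter's payoff when others_working of the other hunters Work."""
    m = others_working + (1 if works else 0)  # total number of Workers
    share = 3000 * m / n                      # equal share of the day's kill
    return share - (500 if works else 0)      # hunting costs 500 calories

# Shirking always saves exactly 200 calories, whatever the others do,
# so Shirk is a dominant strategy and m = 0 is the unique Nash equilibrium.
for others in range(10):
    assert payoff(False, others) - payoff(True, others) == 200
```

Because the 200-calorie gap is independent of the others' choices, the only mutual best response is universal shirking, exactly as the text concludes.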

Suppose another rule is added to the game. If a hunter is caught shirking, he is punished by being prohibited from hunting and sharing the kill for two days. Further, suppose the probability of being caught shirking is 0.50. To see that having everyone hunt is now a Nash equilibrium, one must decide whether a single hunter in a group of ten could do better by shirking and risking getting caught. The hunter saves 200 calories by shirking, but half the time he is caught and then loses two days' payoff, which is 5,400 calories. Thus, that hunter loses an average of 2,500 calories a day by shirking, and so his best response is to hunt with the others. The conclusion is that with this new punishment mechanism full cooperation by each hunter becomes a Nash equilibrium.
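The arithmetic behind this conclusion is a one-line expected-value calculation, with all figures as given in the text:

```python
# Expected cost of shirking under the punishment rule described above.
saved_by_shirking = 200   # calories saved by sleeping instead of hunting
p_caught = 0.5            # probability a shirker is caught
penalty = 5400            # two days' share of the kill forfeited if caught

net_loss = p_caught * penalty - saved_by_shirking
print(net_loss)           # 2500.0: shirking costs 2,500 calories on average
```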

## History and Analytics of Game Theory

Game theory presupposes rational choice theory because it assumes that players have rational preferences in regard to the game's outcomes. It also presupposes rational decision theory because choice under conditions of uncertainty is the rule in most game situations. Because rational choice theory and decision theory were codified only in the late twentieth century, it is not surprising that game theory is still an incomplete and rather underdeveloped science. Before about 1950 games were assumed to be zero-sum; that means that what one player loses, the other player wins. The rabbit-fox game described earlier is zero-sum, but the hunter game is not because with the proper strategies all the hunters gain by cooperating.

With the zero-sum assumption cooperation never leads to a gain, and this would undercut some of the major contributions of game theory to the understanding of cooperation in biology and economics. Moreover, the three mathematicians who developed game theory—Ernst Zermelo (1871–1953), Stefan Banach (1892–1945), and John von Neumann (1903–1957)—assumed that each player will choose a strategy that minimizes the maximum gain for an opponent. This so-called *minimax* analysis cannot be extended to more general strategic contexts.

Modern game theory was born in 1950 after the publication of a paper by the young Princeton mathematician John F. Nash, Jr. (b. 1928; winner of a Nobel Prize in economics in 1994), who introduced the novel idea of a game equilibrium as a set of mutual best responses. The central term in modern game theory, the Nash equilibrium, acknowledges his work. Several conceptual problems had to be cleared up before game theory could attain a central position in the behavioral sciences. In 1965 Reinhard Selten (b. 1930; winner of a Nobel Prize in economics in 1994) developed the concept of *equilibrium refinement*, which showed why certain Nash equilibria are likely to be of empirical relevance and others are not. In 1967 and 1968 John Harsanyi (1920–2000; winner of a Nobel Prize in economics in 1994) showed how to apply game theory when the players have incomplete knowledge of the other players and the payoffs.

Until the 1980s it was believed by many people that game theory could be applied only to highly intelligent, so-called rational players because an analysis of the best responses is intellectually demanding. However, in 1972 the biologist John Maynard Smith (1920–2004) applied game theoretic notions to explaining animal conflict, a process that culminated in his publication of *Evolution and the Theory of Games* (1982). The innovation here is the idea that evolution can provide an alternative to high-level mental reasoning. For instance, rabbits veer optimally when chased by foxes not because each rabbit logically compares and empirically tests the alternatives but because running behavior is encoded in a rabbit's genes and those genes which render the rabbit most capable of eluding the fox are favored by natural selection in successive generations of rabbits. Inefficient genes simply become fox food.

## The Ultimatum Game and Altruistic Preferences

An example of such research is the *ultimatum game*, in which under conditions of complete anonymity two players separately are shown a sum of money, say, $10. One of the players, called the proposer, is instructed to offer any number of dollars from $1 to $10 to the second player, who is called the responder. The proposer can make only one offer, and the game is never repeated with the same players facing each other. The responder can accept or reject this offer. If the responder accepts the offer, the money is shared accordingly. If the responder rejects the offer, both players receive nothing.

If the responder cares only about her own payoff in the game (it is said that she is *self-regarding* in this case) and the proposer knows or supposes this, the proposer will make the responder the minimum offer of $1, the responder will accept, and the game will be over. However, when the game actually is played, the self-regarding outcome almost never is attained or even approximated. In fact, as many replications of this experiment have documented, under varying conditions and with varying amounts of money, proposers routinely offer responders very substantial amounts (50 percent of the total generally is the modal offer) and responders frequently reject offers below 30 percent (Camerer 2003).
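The self-regarding benchmark that these experiments contradict can be sketched in a few lines. The model below follows the description above (whole-dollar offers from a $10 pie); the point is that this benchmark, not the mathematics, is what fails empirically:

```python
# A minimal sketch of the self-regarding benchmark for the ultimatum game.
# Offers are whole dollars from $1 to $10 out of a $10 pie, as in the text.

def responder_accepts(offer: int) -> bool:
    """A self-regarding responder prefers any positive amount to nothing."""
    return offer >= 1

def proposer_offer(pie: int = 10) -> int:
    """A self-regarding proposer makes the smallest offer that is accepted."""
    return min(o for o in range(1, pie + 1) if responder_accepts(o))

offer = proposer_offer()
print(offer, 10 - offer)   # 1 9: the proposer keeps $9, contrary to the data
```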

Are these results culturally dependent? Do they have a strong genetic component, or do all "successful" cultures transmit similar values of reciprocity to individuals? Alvin Roth (Roth, Prasnikar, Okuno-Fujiwara, and Zamir 1991) conducted ultimatum games in four different countries (the United States, Yugoslavia, Japan, and Israel) and found that although the level of offers differed slightly in different countries, the probability of an offer being rejected did not. This indicates that both proposers and responders have the same notion of what is considered fair in that society and that proposers adjust their offers to reflect that common notion. The differences in the levels of offers across countries were relatively small.

This ultimatum game result, along with that of many other similar games, suggests that many human subjects are strong reciprocators. Strong reciprocators come to strategic interactions with a propensity to cooperate (*altruistic cooperation*), respond to cooperative behavior by maintaining or increasing their level of cooperation, and respond to noncooperative behavior by punishing the "offenders" even at a cost to themselves and even when they cannot reasonably expect future personal gains to flow from the imposition of such punishment (this is called *altruistic punishment*).

Behavior in the ultimatum game thus conforms to the strong reciprocity model: Fair behavior in the ultimatum game among college students is a fifty-fifty split. Responders reject offers under 40 percent as a form of altruistic punishment of a norm-violating proposer. Proposers offer 50 percent because they are altruistic cooperators or 40 percent because they fear rejection. To support this interpretation it can be noted that if the offers in an ultimatum game are generated by a computer rather than by the proposer and if the respondents know this, low offers very rarely are rejected (Blount 1995). Moreover, in a variant of the game in which a responder's rejection leads to the responder getting nothing but allows the proposer to keep the share she suggested for herself, responders infrequently reject offers and proposers make considerably smaller offers.

The strong reciprocator is not a representative of one of the types of human nature found in traditional political philosophy. A strong reciprocator thus is neither the selfless altruist of utopian theory in the tradition of Jean-Jacques Rousseau (1712–1778) or that of Karl Marx (1818–1883) nor the selfish hedonist found in traditional economics and described by the economist Adam Smith (1723–1790) in *The Wealth of Nations* (1776). Such a person is a conditional cooperator whose penchant for reciprocity can be elicited in circumstances in which pure selfishness would dictate a different action. Indeed, the strong reciprocator is more akin to the empathetic individual found in Adam Smith's other important work, *The Theory of Moral Sentiments* (1759), except that Smith there emphasizes the sweet side of human nature, playing down the willingness to punish transgressions that is uncovered routinely in behavioral games.

## Social Dilemmas

Another important behavioral game that sheds light on human nature and increases people's understanding of human social interaction is the *social dilemma*. A social dilemma is a group interaction in which all the players benefit if they all cooperate but each individual has an incentive to shirk and benefit from the cooperation of others.

An experimental representation of a social dilemma is the so-called *public goods game*. A typical public goods game consists of a number of rounds, say, ten. In each round each subject is grouped with several other subjects, say, three others. Each subject is given a certain amount of money, say, $20. Each subject, unseen by the others, then places a fraction of his or her money in a common account and puts the remainder in his or her private account. The experimenter then tells the subjects how much was contributed to the common account and adds enough money to the common account so that, when it is divided among the four players, the private account of each subject can be increased by a fraction, say, 40 percent, of the players' original contribution to the common account. Thus, if a subject contributes his or her whole $20 to the common account, the experimenter adds an additional $12, so each of the four group members will receive ($20 + $12)/4 = $8 at the end of the round. In effect, by putting the whole endowment into the common account, a player loses $12 and the other three group members gain in total $24 (= $8 × 3).
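The payoff arithmetic of a round can be sketched as follows, using the parameters given above (group of four, $20 endowment, 40 percent return on total contributions):

```python
# One round of the public goods game described above.

def round_payoffs(contributions, endowment=20, rate=0.4):
    """Each member's payoff: private remainder plus the per-member return,
    which (after the experimenter's top-up) is rate * total contributions."""
    share = rate * sum(contributions)
    return [endowment - c + share for c in contributions]

# A lone $20 contributor ends with $8; the three free riders get $28 each,
# so contributing costs the contributor $12 and gives the others $24 in total.
print(round_payoffs([20, 0, 0, 0]))   # [8.0, 28.0, 28.0, 28.0]
```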

A self-regarding player will contribute nothing to the common account. However, only a fraction of subjects conform to the self-regarding model. The subjects begin by contributing on average about half of their endowments to the public account. The level of contributions decays over the course of the ten rounds until in the final rounds most players behave in a self-regarding manner. This is exactly what is predicted by the strong reciprocity model. Because they are altruistic contributors, strong reciprocators start out by contributing to the common pool, but in response to the norm violation on the part of the self-regarding types they begin to refrain from contributing.

How can it be known that the decay of cooperation in the public goods game is due to cooperators punishing free riders by refusing to contribute? Subjects often report this behavior retrospectively. More compelling, however, is the fact that when subjects are given a more constructive way of punishing defectors, they use it in a manner that helps sustain cooperation. For instance, Ernst Fehr and Simon Gächter (2000) set up an experimental situation in which the possibility of punishment for personal gain was removed completely. They used six- and ten-round public goods games with groups of four and with costly punishment allowed at the end of each round, employing three different methods of assigning members to groups.

They found that when costly punishment is permitted, cooperation does not deteriorate; indeed, if the same players stay together for the whole session, despite strict anonymity cooperation increases almost to full cooperation even on the final round. In effect, even though the groups had some selfish players, there was a sufficiently large fraction of strong reciprocators to ensure that it was not in the interest of the selfish to act selfishly.

## The Epistemological Foundations of Game Theory

One can characterize the choice situation facing an agent in terms of its level of complexity. The least complex situation occurs when an agent must choose from a set of fixed alternatives. Analytically complete axiomatic models of choice in this situation are well developed and empirically successful. Of intermediate complexity is a situation in which an agent must choose from a set of alternatives, each of which is a *probability distribution* over determinate outcomes. Analytically complete axiomatic models of choice in this situation are also well developed and empirically successful, although some important anomalies in human behavior have been noted in regard to decision theory. The most complex situation is the one described by game theory: An agent's choices affect not only that agent but other agents as well, the other agents also are engaged in making choices that affect themselves and others, and all agents take into account the strategic nature of their interactions. One of the most widely known attempts to illustrate such a game theoretic situation is the Prisoner's Dilemma.

It would be gratifying to have a fully successful analytical model of strategic interaction applicable to the highly complex level, but despite the efforts of theoreticians since the second half of the twentieth century, none exists. Ignoring the Prisoner's Dilemma for now, one can consider three simple games that dramatize the problems in developing such a theory, which then can be used to outline some important contributions to the epistemological underpinnings of game theory.

EVEN-ODD GAME. The first is the simple Even-Odd game. This game has two players, each of whom can show either one finger (One) or two fingers (Two). The two players show their fingers simultaneously, with player 1 winning if his choice matches that of the other player (i.e., if One-One or Two-Two occurs) and player 2 winning if her choice does not match it (i.e., if One-Two or Two-One occurs). Figure 1 shows the normal form of this game (the normal form specifies the moves that each player can make and the payoffs for each player as a function of the moves of both players).

This game obviously has no Nash equilibria in the "pure" strategies: One and Two. However, it does have a unique Nash equilibrium in which each player plays One with probability 1/2 and plays Two with probability 1/2. Doubtless many people remember this solution from schoolyard days, when they learned to "mix up" their choices so that an opponent could not discover their next move. The problem is that this game is played just once (it is a *one-shot* game). Hence, if a player's opponent randomizes as suggested by the Nash equilibrium, it does not matter what the first player does: The expected payoff is zero whether the first player chooses One, Two, or a probability distribution over One and Two. However, the same is true for the opponent. Therefore, there is no reason for either player to randomize, yet that is the solution suggested by game theory.
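The indifference argument can be verified numerically. In the sketch below the stakes (+1 for a win, −1 for a loss) are an assumed normalization; the text specifies only who wins:

```python
# Even-Odd payoff matrix for player 1 (player 2 receives the negative;
# the game is zero-sum). The +1/-1 stakes are an assumed normalization.
U1 = {("One", "One"): 1, ("One", "Two"): -1,
      ("Two", "One"): -1, ("Two", "Two"): 1}

def expected_u1(move: str, q: float = 0.5) -> float:
    """Player 1's expected payoff when player 2 shows One with probability q."""
    return q * U1[(move, "One")] + (1 - q) * U1[(move, "Two")]

# Against the 1/2-1/2 mix, both of player 1's moves earn exactly zero,
# so any randomization is a best response -- the puzzle described above.
print(expected_u1("One"), expected_u1("Two"))   # 0.0 0.0
```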

An important step toward dealing with this problem is to note that each player chooses a best response not to the actual strategy of the other players but to his or her own conjecture about what the other players will do. Robert Aumann and Adam Brandenburger (1995) prove the following theorem for a two-player game. Suppose *φ*_{1} is player 1's conjecture concerning player 2's strategy and *φ*_{2} is player 2's conjecture concerning player 1's strategy. If both players know each other's conjectures and each knows that the other is rational (i.e., chooses a best response to his or her conjecture), (*φ*_{2}, *φ*_{1}) is a Nash equilibrium.

BATTLE OF THE SEXES. This is a fine solution for Even-Odd, which has only one Nash equilibrium. However, one must consider another famous game, the Battle of the Sexes, which is depicted in Figure 2. In this game Rowena and Colin love each other and get one point by being together. However, Rowena loves the ballet and Colin loves gambling. Each gets a point for attending his or her favorite event. Thus, if both go to the ballet, Rowena gets 2 and Colin gets 1, whereas if they both go gambling, Colin gets 2 and Rowena gets 1. Moreover, when they are not together, it is assumed that they are so unhappy that each gets zero. It is easy to find two Nash equilibria: Both go gambling, and both go to the ballet. It turns out that there is also a third Nash equilibrium in which each party goes to his or her favorite place with probability 2/3 and to the other's favorite place with probability 1/3. This is called a *mixed strategy equilibrium*.

To see that Colin gambling with probability 2/3 and Rowena gambling with probability 1/3 is a Nash equilibrium, one should note that the expected payoff to Colin from gambling equals 2 × 1/3 + 0 × 2/3 = 2/3, whereas the expected payoff to Colin from ballet equals 0 × 1/3 + 1 × 2/3 = 2/3. Because these payoffs are equal, Colin can do no better than his probability 2/3 gambling, probability 1/3 ballet strategy, and a similar argument holds for Rowena.
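The same computation can be written out as a check, with payoffs as in the text (1 point for being together, plus 1 more for being at one's favorite event):

```python
# Indifference check for the mixed equilibrium of the Battle of the Sexes.
# Colin gambles with probability p; Rowena gambles with probability q.
p_colin_gambles, q_rowena_gambles = 2 / 3, 1 / 3

def colin(move: str, q: float) -> float:
    """Colin's expected payoff when Rowena gambles with probability q."""
    return 2 * q if move == "gamble" else 1 * (1 - q)

def rowena(move: str, p: float) -> float:
    """Rowena's expected payoff when Colin gambles with probability p."""
    return 1 * p if move == "gamble" else 2 * (1 - p)

# Both of Colin's moves earn 2/3 and both of Rowena's earn 2/3, so neither
# player can gain by deviating: a mixed-strategy Nash equilibrium.
assert abs(colin("gamble", q_rowena_gambles) - colin("ballet", q_rowena_gambles)) < 1e-9
assert abs(rowena("gamble", p_colin_gambles) - rowena("ballet", p_colin_gambles)) < 1e-9
```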

In the case of Battle of the Sexes it is unreasonable to posit that each player knows the other's conjecture because there is no way of explaining how this mutual knowledge would have come about. Indeed, it is not even plausible to suppose that the players have conjectures concerning what the other will do unless there is more to the social situation than has been explained. Moreover, the players still have no incentive to play according to their partners' conjectures (Binmore 1988).

The problem becomes even more implausible when there are more than two players. In this case Aumann and Brandenburger (1995) show that if all players assign the same probability distribution to player types, it is known mutually that all players are rational (i.e., choose best responses), and the players' conjectures are commonly known, these conjectures form a Nash equilibrium. One says that a fact is commonly known if all players know the fact, all know that the others know it, all know that all know that the others know it, and so on (Lewis 1969).

CENTIPEDE GAME. There are simple games in which the very notion of rationality and the adequacy of the concept of the Nash equilibrium are brought into question. Consider, for instance, the Centipede Game. The players, Mutt and Jeff, start out with $2 each, and they alternate rounds. On the first round Mutt can defect (D) by stealing $2 from Jeff, and the game is over. Otherwise Mutt cooperates (C) by not stealing and receives an additional $1. Then Jeff can defect (D) and steal $2 from Mutt, and the game is over, or he can cooperate (C) and receive an additional $1. This continues until one player or the other defects or until each player has $100. The game tree is illustrated in Figure 3.

This game has only one Nash equilibrium outcome, in which Mutt defects on the first round. To see this, let round *k* be the first round in which either player defects in a Nash equilibrium. If *k* > 1, the other player's best response is to defect on round *k* - 1. Of course, common sense indicates that this is not the way real players would act in this situation, and empirical evidence corroborates this (McKelvey and Palfrey 1992). People in this game will cooperate up to round 90 and beyond before considering defecting.
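The backward-induction argument can be checked mechanically. The sketch below assumes the payoff rules stated above (defecting steals $2 and ends the game; cooperating adds $1 to the mover's own total; play stops once both players reach $100, which with $2 starting stakes takes 196 alternating rounds of cooperation):

```python
def backward_induction(rnd, mutt, jeff, last_round=196):
    """Payoffs (mutt, jeff) when both players reason backward from the end.

    Assumed rules: the mover may defect (steal $2 from the other, ending
    the game) or cooperate (gain $1); 196 rounds of mutual cooperation
    bring both players from $2 to $100.
    """
    if rnd > last_round:                     # no one ever defected
        return mutt, jeff
    if rnd % 2 == 1:                         # Mutt moves on odd rounds
        defect = (mutt + 2, jeff - 2)
        coop = backward_induction(rnd + 1, mutt + 1, jeff, last_round)
        mover = 0
    else:                                    # Jeff moves on even rounds
        defect = (mutt - 2, jeff + 2)
        coop = backward_induction(rnd + 1, mutt, jeff + 1, last_round)
        mover = 1
    # the rational mover defects whenever defection pays strictly more
    return defect if defect[mover] > coop[mover] else coop

print(backward_induction(1, 2, 2))  # (4, 0): Mutt steals on round 1
```

At every decision node defection pays strictly more than cooperating into a node where the opponent defects, so the recursion unravels all the way to Mutt defecting immediately.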

It would be difficult to fault players for not being rational in this case because they do much better playing the way they do rather than the way dictated by the Nash equilibrium concept. The concept of rationality is problematized for the following reason: If Jeff believes Mutt is rational, Jeff will defect in round 2. This is why Mutt defects in round 1. But suppose Mutt cooperates in round 1. Then Jeff will recognize that his assumption concerning Mutt must be false. Jeff probably will say to himself, "I don't know what strategy Mutt is using, but since he cooperated once, perhaps if I cooperate now, Mutt will cooperate a second time." Thus, Jeff will tend to cooperate in round 2. Now Mutt, who is very smart, can foresee what Jeff will be thinking and hence will cooperate even if he is rational. One can conclude that agents who use best responses will not play the Nash equilibrium in this game. It is easy to see the problem by referring to the analysis of Aumann and Brandenburger (1995): The two players do not know each other's conjectures.

## Evolutionary Game Theory

To this point the focus has been on so-called classical game theory, which depicts the strategic interaction among a number of *rational agents*. The interaction is socially disembodied, with the agents having neither history nor substance outside this particular interaction. All socially relevant aspects of the actors must be captured by their beliefs and conjectures, which are totally disembodied and unmotivated. A similar degree of social minimality has given rise to powerful models of decision making when strategic interaction is absent, as in rational choice theory and decision theory. However, this does not extend to game theory, in which a more socially embedded approach is needed to derive plausible results.

The most promising alternative foundation for strategic interaction is known as *evolutionary game theory* (Maynard Smith 1982, Samuelson 1997, Gintis 2000). The central actors in evolutionary game theory are not players but strategies. Suppose a group of agents periodically plays a certain classical game *G*. One assumes a large population of agents, each of whom adopts a particular strategy in playing *G*. One does not assume that the strategies represented in the population are in any way optimal, although one does assume that there is enough random variation and mutation across time that all pure strategies are represented.

In each period agents from the population are assigned randomly to play *G*. Their scores are tallied, and the population changes over time according to an evolutionary dynamic: agents whose strategies are very successful tend to be copied by agents whose strategies are less successful. Thus, the population ecology of strategies moves over time in accordance with the notion of survival of the fittest. Such a dynamic is called a replicator dynamic (Hofbauer and Sigmund 1998, Gintis 2000).
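A minimal discrete-time replicator dynamic can be sketched as follows. The 2×2 coordination game used here is an illustrative assumption, not a game from the text; the update rule itself is the standard one, with each strategy's share growing in proportion to its payoff relative to the population average:

```python
def replicator_step(x, A):
    """One discrete-time replicator update: each strategy's population
    share grows in proportion to its payoff relative to the average."""
    f = [sum(a * xj for a, xj in zip(row, x)) for row in A]   # fitness of each strategy
    avg = sum(fi * xi for fi, xi in zip(f, x))                # mean population fitness
    return [xi * fi / avg for xi, fi in zip(x, f)]

# Illustrative payoff matrix (an assumption): a coordination game in which
# strategy 0 earns 2 against itself and strategy 1 earns 1 against itself.
A = [[2.0, 0.0],
     [0.0, 1.0]]
x = [0.6, 0.4]                     # initial population shares
for _ in range(100):
    x = replicator_step(x, A)
print(x)  # the population converges to all playing strategy 0,
          # which is a strict Nash equilibrium of the underlying game
```

The rest point the dynamic settles on is a Nash equilibrium of the stage game, illustrating the theorem discussed below.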

The fundamental theorem of evolutionary game theory is that every stable equilibrium point of an evolutionary dynamic is a Nash equilibrium. This provides a justification for the concept of the Nash equilibrium without the need for the epistemological assumptions of classical game theory. Moreover, evolutionary game theory shows that many Nash equilibria of classical game theory are not evolutionarily stable and thus cannot explain observable social behavior.

A case in point is the Centipede Game described earlier in this entry. The author of this entry has created a computer program to simulate the evolution of behavior in the Centipede Game (this is called an *agent-based simulation*). The author created a population of 200 agents, each supplied with a strategy *s*_{k} of the following form: "cooperate until round *k*, then defect." Initially, these strategies are assigned randomly to the agents, and they play 300,000 rounds, with a mutation rate of 0.001 (a mutant assumes a random strategy *s*_{k}, where 1 ≤ *k* ≤ 101). The results are shown in Figure 4.
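A stripped-down version of such a simulation can be sketched as follows. Several details are assumptions filled in for illustration: *s*_{k} is read as "defect on round *k*," with *k* = 101 meaning "never defect"; payoffs follow the rules stated earlier (defection steals $2 and ends the game, cooperation gains $1); and the round counts are scaled down from the text's 300,000:

```python
import random

def play(k1, k2, cap=100):
    """Payoffs when player 1 plans to defect on round k1 and player 2 on k2
    (k = 101 is read here as 'never defect'); rules assumed from the text."""
    t1 = float('inf') if k1 == 101 else k1
    t2 = float('inf') if k2 == 101 else k2
    m = [2, 2]                                # both start with $2
    r = 1
    while m[0] < cap or m[1] < cap:
        i = 0 if r % 2 == 1 else 1            # player 1 moves on odd rounds
        if r >= (t1, t2)[i]:                  # defect: steal $2, game over
            m[i] += 2
            m[1 - i] -= 2
            return m
        m[i] += 1                             # cooperate: gain $1
        r += 1
    return m                                  # both reached the cap

def evolve(pop_size=200, generations=100, mut=0.001, rng=random.Random(0)):
    """Fitness-proportional copying with rare mutation (a replicator-style
    dynamic); returns the mean planned defection round in the final population."""
    pop = [rng.randint(1, 101) for _ in range(pop_size)]
    for _ in range(generations):
        rng.shuffle(pop)                      # random pairwise matching
        fit = []
        for j in range(0, pop_size, 2):
            p1, p2 = play(pop[j], pop[j + 1])
            fit += [p1, p2]
        fit = [max(f, 0.01) for f in fit]     # keep selection weights positive
        pop = rng.choices(pop, weights=fit, k=pop_size)
        pop = [rng.randint(1, 101) if rng.random() < mut else k
               for k in pop]
    return sum(pop) / pop_size

print(evolve())  # mean planned defection round in the final population
```

Because a pair of late defectors earns close to $100 each while early defectors earn only a few dollars, selection favors cooperative strategies, in line with the dynamics described in the next paragraph.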

It can be seen that cooperation quickly increases until after only a few rounds the average payoff is more than 95. Then cooperation erodes, as might be expected, until the average payoff dips below 80. At that point a pair of agents who choose strategies near *k* = 100 do very well, and those strategies grow at the expense of the strategies that involve defection on rounds near *k* = 80. Cooperation shoots back up to nearly perfect. The cycle repeats for 300,000 rounds and shows no signs of changing its basic character.

Even though the only Nash equilibrium of the stage game uses strategy *s*_{1}, it can be seen that the evolutionary dynamic never remotely approaches this equilibrium. This is the case because the Nash equilibrium involves such poor payoffs that even a small number of mutant players can invade a population of all-defectors, and the system quickly ramps up to almost full cooperation (changing the mutation rate does not alter this result). Thus, evolutionary game theory shows that the behavior observed when people play the Centipede Game is easy to model in a dynamic framework.

## Game Theory and Ethics

Game theory has been applied to ethical theory by John Harsanyi (1920–2000), winner of a Nobel Prize in economics in 1994. Harsanyi (1992) develops a theory of justice very close to that of the philosopher John Rawls (1921–2002) and shows that it can be derived from basic game-theoretic reasoning. Other important contributions to the game theoretic analysis of ethics include those of Brian Skyrms (1996) and Ken Binmore (1998).

Perhaps the first indication that game theory would be important to ethical theory was the famous *tit-for-tat* computer competition run by Robert Axelrod (Axelrod and Hamilton 1981). Axelrod asked what a successful strategy in the repeated Prisoner's Dilemma might look like. In that game the dominant strategy is to defect. However, if the game is repeated several times, players may be able to use the threat of defecting in the future to induce their partners to cooperate in the present.

Axelrod recruited fourteen game theorists from economics, mathematics, and the behavioral and computer sciences to submit computerized strategies for playing 200 rounds of the Prisoner's Dilemma. Those strategies were paired with each other in a round robin tournament with the result that the absolutely simplest strategy won. This strategy was tit-for-tat, supplied by Anatol Rapoport, a mathematician at the University of Toronto. Tit-for-tat cooperates on the first move and then does whatever its partner did on the previous move. Tit-for-tat is thus a simple reciprocity enforcer, cooperating when its partner cooperates and defecting when its partner defects.
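Tit-for-tat is simple enough to state in a few lines of code. The payoff numbers below are the conventional Prisoner's Dilemma values used in Axelrod's tournament (T=5, R=3, P=1, S=0); the pairing of strategies over 200 rounds follows the tournament setup described above:

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter copy the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

# Conventional Prisoner's Dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def match(strat1, strat2, rounds=200):
    """Total scores when two strategies play the repeated game."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat1(h1, h2), strat2(h2, h1)
        p, q = PAYOFF[(a, b)]
        h1.append(a); h2.append(b)
        s1 += p; s2 += q
    return s1, s2

print(match(tit_for_tat, tit_for_tat))    # (600, 600): full mutual cooperation
print(match(tit_for_tat, always_defect))  # (199, 204): loses only the first round
```

Against itself tit-for-tat sustains full cooperation, and against a pure defector it concedes only the first round before retaliating, which is why it accumulated the highest total score across the whole field.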

After publishing these results (Axelrod and Hamilton 1981), Axelrod decided to stage a second tournament. More than sixty researchers from six countries submitted new programs, many of which were aimed explicitly at defeating tit-for-tat. Nevertheless, tit-for-tat again won handily.

This result relates to ethical theory because it shows the success of a strategy that is *nice* (never defect first), *punishing* (always retaliate against a defector), and *forgiving* (always revert to cooperating if your partner cooperates). These responses, of course, represent three important ethical principles. A fourth common ethical principle—*always turn the other cheek*—certainly would not fare well in this encounter, as it would be beaten by any program that could detect "wimps" (those who do not punish) and defect consistently in playing against them.

It is clear that the ethical principles behind the strong reciprocity associated with social dilemmas represent a higher development of tit-for-tat. Whereas tit-for-tat applies only to dyadic relationships, strong reciprocity applies to *n*-player social dilemmas.

HERBERT GINTIS

SEE ALSO *Artificial Morality;
Choice Behavior;
Decision Theory;
Rational Choice Theory*.

## BIBLIOGRAPHY

Aumann, Robert, and Adam Brandenburger. (1995). "Epistemic Conditions for Nash Equilibrium." *Econometrica* 63(5): 1161–1180.

Axelrod, Robert, and William D. Hamilton. (1981). "The Evolution of Cooperation." *Science* 211: 1390–1396.

Binmore, Ken. (1988). "Modelling Rational Players: II." *Economics and Philosophy* 4: 9–55.

Binmore, Ken. (1998). *Game Theory and the Social Contract: Just Playing.* Cambridge, MA: MIT Press.

Blount, Sally. (1995). "When Social Outcomes Aren't Fair: The Effect of Causal Attributions on Preferences." *Organizational Behavior & Human Decision Processes* 63(2): 131–144.

Camerer, Colin. (2003). *Behavioral Game Theory: Experiments in Strategic Interaction.* Princeton, NJ: Princeton University Press.

Fehr, Ernst, and Simon Gächter. (2000). "Cooperation and Punishment." *American Economic Review* 90(4): 980–994.

Gintis, Herbert. (2000). *Game Theory Evolving.* Princeton, NJ: Princeton University Press.

Harsanyi, John C. (1992). "Game and Decision Theoretic Models in Ethics, Vol. I." In *Handbook of Game Theory with Economic Applications,* eds. Robert J. Aumann and Sergiu Hart. Amsterdam and New York: Elsevier Science.

Hofbauer, Josef, and Karl Sigmund. (1998). *Evolutionary Games and Population Dynamics.* Cambridge, UK: Cambridge University Press.

Lewis, David. (1969). *Conventions: A Philosophical Study.* Cambridge, MA: Harvard University Press.

Maynard Smith, John. (1982). *Evolution and the Theory of Games.* Cambridge, UK: Cambridge University Press.

McKelvey, Richard D., and Thomas R. Palfrey. (1992). "An Experimental Study of the Centipede Game." *Econometrica* 60: 803–836.

Roth, Alvin E.; Vesna Prasnikar; Masahiro Okuno-Fujiwara; and Shmuel Zamir. (1991). "Bargaining and Market Behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An Experimental Study." *American Economic Review* 81(5): 1068–1095.

Samuelson, Larry. (1997). *Evolutionary Games and Equilibrium Selection.* Cambridge, MA: MIT Press.

Skyrms, Brian. (1996). *Evolution of the Social Contract.* Cambridge, UK: Cambridge University Press.


## Game Theory

# GAME THEORY.

Game theory, the formal analysis of conflict and cooperation, has pervaded every area of economics and the study of business strategy in the past quarter-century and exerts increasing influence in evolutionary biology, international relations, and political science, where the rational-choice approach to politics has been highly controversial. In a strategic game, each player chooses a strategy (a rule specifying what action to take for each possible information set) to maximize his or her expected payoff, taking into account that each of the other players is also making a rational strategic choice. In contrast to economic theories of competitive equilibrium, the focus of game theory is on strategic interaction and on what information is available to a player to predict the actions that the other players will take.

## The Origins of Game Theory

Writings by several nineteenth-century economists, such as A. A. Cournot and Joseph Bertrand on duopoly and F. Y. Edgeworth on bilateral monopoly, and later work in the 1930s by F. Zeuthen on bargaining and H. von Stackelberg on oligopoly, were later reinterpreted in game-theoretic terms, sometimes in problematic ways (Leonard, 1994; Dimand and Dimand). Game theory emerged as a distinct subdiscipline of applied mathematics, economics, and social science with the publication in 1944 of *Theory of Games and Economic Behavior,* a work of more than six hundred pages written in Princeton by two Continental European émigrés, John von Neumann, a Hungarian mathematician and physicist who was a pioneer in fields from quantum mechanics to computers, and Oskar Morgenstern, a former director of the Austrian Institute for Economic Research. They built upon analyses of two-person, zero-sum games published in the 1920s.

In a series of notes from 1921 to 1927 (three of which were translated into English in *Econometrica* in 1953), the French mathematician and probability theorist Emile Borel developed the concept of a mixed strategy (assigning a probability to each feasible strategy rather than a pure strategy selecting with certainty a single action that the opponent could then predict) and showed that for some particular games with small numbers of possible pure strategies, rational choices by the two players would lead to a minimax solution. Each player would choose the mixed strategy that would minimize the maximum payoff that the other player could be sure of achieving. The young John von Neumann provided the first proof that this minimax solution held for all two-person, constant-sum games (strictly competitive games) in 1928, although the proof of the minimax theorem used by von Neumann and Morgenstern in 1944 was based on the first elementary (that is, nontopological) proof of the existence of a minimax solution, proved by Borel's student Jean Ville in 1938 (Weintraub; Leonard, 1995; Dimand and Dimand). For games with variable sums and more players, where coalitions among players are possible, von Neumann and Morgenstern proposed a more general solution concept, the stable set, but could not prove its existence. In the 1960s, William Lucas proved by counterexample that existence of the stable set solution could not be proved because it was not true in general.
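The minimax theorem can be stated compactly in modern notation (not Borel's or von Neumann's original). For a two-person zero-sum game with payoff matrix $A$ (rows for player 1, columns for player 2) and mixed strategies $x$ and $y$ ranging over the respective probability simplices, the two players' guaranteed values coincide:

$$\max_{x} \, \min_{y} \; x^{\mathsf{T}} A y \;=\; \min_{y} \, \max_{x} \; x^{\mathsf{T}} A y.$$

The common value is the value of the game, and a maximizing $x$ paired with a minimizing $y$ forms an equilibrium of the game.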

Although von Neumann's and Morgenstern's work was the subject of long and extensive review articles in economics journals, some of which predicted widespread and rapid application, game theory was developed in the 1950s primarily by A. W. Tucker and his students in Princeton's mathematics department (see Shubik's recollections in Weintraub) and at the RAND Corporation, a nonprofit corporation based in Santa Monica, California, whose only client was the U.S. Air Force (Nasar). Expecting that the theory of strategic games would be as relevant to military and naval strategy as contemporary developments in operations research were, the U.S. Office of Naval Research supported much of the basic research, and Morgenstern was named as an editor of the *Naval Research Logistics Quarterly.*

Much has been written about the influence of game theory and related forms of rational-choice theory such as systems analysis on nuclear strategy (although General Curtis LeMay complained that RAND stood for Research And No Development) and of how the Cold War context and military funding helped shape game theory and economics (Heims; Poundstone; Mirowski), mirrored by the shaping of similar mathematical techniques into "planometrics" on the other side of the Cold War (Campbell). Researchers in peace studies, publishing largely in the *Journal of Conflict Resolution* in the late 1950s and the 1960s, drew on Prisoner's Dilemma games to analyze the Cold War (see Schelling), while from 1965 to 1968 (while ratification of the Nuclear Non-Proliferation Treaty was pending) the U.S. Arms Control and Disarmament Agency sponsored important research on bargaining games with incomplete information and their application to arms races and disarmament (later declassified and published as Mayberry with Harsanyi, Scarf, and Selten; and Aumann and Maschler with Stearns).

## Nash Equilibrium, the Nash Bargaining Solution, and the Shapley Value

John Nash, the outstanding figure among the Princeton and RAND game theorists (Nasar; Giocoli), developed, in articles from his dissertation, both the Nash equilibrium for noncooperative games, where the players cannot make binding agreements enforced by an outside agency, and the Nash bargaining solution for cooperative games where such binding agreements are possible (Nash). Nash equilibrium, by far the most widely influential solution concept in game theory, applied to games with any number of players and with payoffs whose sum varied with the combination of strategies chosen by the players, while von Neumann's minimax solution was limited to two-person, constant-sum games. A Nash equilibrium is a strategy combination in which each player's chosen strategy is a best response to the strategies of the other players, so that no player can get a higher expected payoff by changing strategy as long as the strategies of the other players stay the same. No player has an incentive to be the first to deviate from a Nash equilibrium.

Nash proved the existence of equilibrium but not uniqueness: a game will have at least one strategy combination that is a Nash equilibrium, but it may have many, or even an infinity of, Nash equilibria (especially if the choice of action involves picking a value for a continuous variable). Cournot's 1838 analysis of duopoly has been interpreted in retrospect as a special case of Nash equilibrium, just as Harsanyi perceived the congruity of Zeuthen's 1930 discussion of bargaining and the Nash bargaining solution. Refinements of Nash equilibrium serve to rule out some of the possible equilibria. One such refinement is subgame perfect equilibrium (see Harsanyi and Selten): a strategy combination that is a Nash equilibrium both for an entire extended game (a game in which actions must be chosen at several decision nodes in a game tree) and for any game starting from any decision node in the game tree, including points that would never be reached in equilibrium. Subgame perfection ensures that any threat to take a certain action if another player were to deviate from the equilibrium path is credible, that is, rational in terms of self-interest once that point in the game has been reached. A further refinement rules out some subgame perfect Nash equilibria by allowing for the possibility of a "trembling hand," that is, a small probability that an opposing player, although rational, may make mistakes (Harsanyi and Selten). Thomas Schelling has suggested that if there is some clue that would lead players to regard one Nash equilibrium as more likely than others, that equilibrium will be a focal point.

Nash equilibrium, with its refinements, remains at the heart of noncooperative game theory. Applied to the study of market structure by Martin Shubik (1959), this approach has come to dominate the field of industrial organization, as indicated by Jean Tirole (1988) in a book widely accepted as the standard economics textbook on industrial organization and as a model for subsequent texts. More recently, noncooperative game theory has found economic applications ranging from strategic trade policy in international trade to the credibility of anti-inflationary monetary policy and the design of auctions for broadcast frequencies. From economics, noncooperative game theory based on refinements of Nash equilibrium has spread to business school courses on business strategy (see Ghemawat, applying game theory in six Harvard Business School cases for MBA students). Some economists view business strategy as an application of game theory, with ideas flowing in one direction, rather than as a distinct field (Shapiro).

However, scholars of strategic management remain sharply divided over whether game theory provides useful insights or just a rationalization for any conceivable observed behavior (see the papers by Barney, Saloner, Camerer, and Postrel in Rumelt, Schendel, and Teece, especially Postrel's paper, which verifies Rumelt's Flaming Trousers Conjecture by constructing a game-theoretic model with a subgame perfect Bayesian Nash equilibrium in which bank presidents publicly set their pants on fire, a form of costly signaling that is profitable only for a bank that can get repeat business, that is, a high-quality bank).

Nash proposed the Nash bargaining solution for two-person cooperative games, that the players maximize the product of their gains over what each would receive at the threat point (the Nash equilibrium of the noncooperative game that they would play if they failed to reach agreement on how to divide the gains), and showed it to be the only solution possessing all of a particular set of intuitively appealing properties (efficiency, symmetry, independence of unit changes, independence of irrelevant alternatives). Feminist economists such as Marjorie McElroy and Notburga Ott have begun to apply bargaining models whose outcome depends critically on the threat point (the outcome of the noncooperative game that would be played if bargaining does not lead to agreement), as well as Prisoner's Dilemma games, to bargaining within the household (see Seiz for a survey).
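The Nash bargaining solution is easy to compute numerically. The sketch below assumes a stylized problem (not one from the text): two players divide a surplus of 1, and the solution maximizes the product of their gains over the threat point, here by brute-force search over a fine grid:

```python
# Nash bargaining over splitting a surplus of 1 with threat point (d1, d2):
# maximize (u1 - d1)(u2 - d2) subject to u1 + u2 = 1 (illustrative setup).
def nash_split(d1, d2, grid=10_000):
    best = max(((k / grid - d1) * ((1 - k / grid) - d2), k / grid)
               for k in range(grid + 1))     # search u1 over a fine grid
    u1 = best[1]
    return u1, 1 - u1

print(nash_split(0.0, 0.0))   # symmetric threat point: the equal split (0.5, 0.5)
print(nash_split(0.3, 0.0))   # a better outside option raises player 1's share to 0.65
```

The second call illustrates why the threat point matters in the household-bargaining applications mentioned above: improving a player's fallback position shifts the bargained division in that player's favor.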

Another influential solution concept for cooperative games, the Shapley value for *n*-person games (Shapley), allots to each player the average of that player's marginal contribution to the payoff of each possible coalition. For a class of games with large numbers of players, the Shapley value coincides with the core of a market (the set of undominated imputations or allocations), yet another solution concept discovered by graduate students at Princeton in the early 1950s (in this case, Shapley and D. B. Gillies) and then rediscovered by Shubik in Edgeworth's 1881 analysis. There is a large literature in accounting applying the Shapley value to cost allocation (Roth and Verrecchia).
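The averaging over marginal contributions can be made concrete. The three-player "gloves" market below is a standard textbook example chosen for illustration, not a game from the text; the `shapley` function itself implements the definition directly by enumerating all orderings of the players:

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's marginal contribution to the coalition
    of predecessors, averaged over all orderings of the players."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: value[p] / n_fact for p in players}

# Illustrative "gloves" market: player 1 owns a left glove, players 2 and 3
# each own a right glove; only a matched pair is worth 1.
def v(coalition):
    return 1.0 if 1 in coalition and (2 in coalition or 3 in coalition) else 0.0

print(shapley((1, 2, 3), v))  # player 1 gets 2/3, players 2 and 3 get 1/6 each
```

The scarce left glove captures most of the value, which is the kind of asymmetry that makes the Shapley value useful for cost and surplus allocation.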

## Applications of Game Theory

Lloyd Shapley and Shubik (1954), two Princeton contemporaries of Nash, began the application of game theory to political science, drawing on Shapley's 1953 publication to devise an index for voting power in a committee system. William Riker and his students at the University of Rochester took the lead in recasting political science in terms of strategic interaction of rational, self-interested players (see Riker and Ordeshook; Shubik, 1984; Riker in Weintraub), and there is now a specialized market for game-theory textbooks for political science students (Morrow). Donald Green and Ian Shapiro (1994) criticize recent applications to politics of game theory and related forms of rational-choice theory as viewing political behavior as too exclusively rational and self-interested to the exclusion of ideologies, values, and social norms (see Friedman for the ensuing controversy).

Recasting Marxism in terms of rational choice and analyzing class struggle as a strategic game is especially controversial (Carver and Thomas). Conflict and cooperation (whether in the form of coalitions or contracts) are at the heart of law, as of politics. Douglas Baird, Robert Gertner, and Randal Picker (1994), among others, treat such legal topics as tort, procedure, and contracts as examples of strategic interaction, as the growing sub-discipline of law and economics increasingly reasons in terms of game theory. As a counterpart at a more "macro" level to game-theoretic analysis of political and legal conflict and cooperation, Andrew Schotter (1981) and Shubik (1984) propose a "mathematical institutional economics" to explain the evolution of social institutions such as contract law, money, trust, and customs, norms, and conventions ("the rules of the game") as the outcome of strategic interaction by rational agents. This approach shows promise, but has been received skeptically by economists such as Ronald Coase who rely on less mathematical neoclassical techniques to develop a "New Institutional Economics," and with even less enthusiasm by economists outside the neoclassical mainstream, such as Philip Mirowski. Going beyond the explanation of merely mundane institutions, Steven Brams (1983) uses game theory to explore questions of theology.

## Prisoner's Dilemma

Game theorists and social scientists have been fascinated by Prisoner's Dilemma, a two-by-two game (two players, each with two possible pure strategies) with a particular payoff matrix (Rapoport and Chammah; Poundstone). The game's nickname and the accompanying story were provided by A. W. Tucker. Suppose that two prisoners, accused of jointly committing a serious crime, are interrogated separately. The prosecutor has sufficient evidence to convict them of a lesser crime without any confession, but can get a conviction on the more serious charge only with a confession. If neither prisoner confesses, they will each be sentenced to two years for the lesser charge. If both confess, each will receive a sentence of five years. However, if only one prisoner confesses, that prisoner will be sentenced to only one year, while the other prisoner will get ten years. In the absence of any external authority to enforce an agreement to deny the charges, each player has a dominant strategy of confessing (given that the other player has denied the charges, one year is a lighter sentence than two years; given that the other player has confessed, five years is a lighter sentence than ten years). The unique Nash equilibrium is for both players to confess (defect from any agreement to cooperate) and receive sentences of five years, even though both would be better off if both denied the charges (cooperated).
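The dominance argument can be verified mechanically. Using the sentences from the story as (negated) payoffs, the sketch below enumerates all strategy pairs and keeps those in which each strategy is a best response to the other:

```python
from itertools import product

# Payoffs as negated sentence lengths from the story above:
# 'deny' is the cooperative move, 'confess' the defection.
payoff = {('deny', 'deny'): (-2, -2), ('deny', 'confess'): (-10, -1),
          ('confess', 'deny'): (-1, -10), ('confess', 'confess'): (-5, -5)}
strategies = ('deny', 'confess')

def best_response(i, others_move):
    """Player i's best reply to the other player's move."""
    return max(strategies,
               key=lambda s: payoff[(s, others_move) if i == 0
                                    else (others_move, s)][i])

nash = [(a, b) for a, b in product(strategies, repeat=2)
        if a == best_response(0, b) and b == best_response(1, a)]
print(nash)  # [('confess', 'confess')]: the unique (and inefficient) equilibrium
```

Confessing is a best reply to either move by the other prisoner, so it is a dominant strategy, and mutual confession is the only pair that survives the check, even though mutual denial gives both players a better outcome.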

This game has been used as an explanation of how individually rational behavior can lead to undesirable outcomes ranging from arms races to overuse of natural resources ("the tragedy of the commons," a generalization to more than two players). If the game is repeated a known finite number of times, however large, the predicted result is the same: both players will confess (defect) on the last play, since there would be no opportunity of future punishment for defection or reward for cooperation; therefore both will also confess (defect) on the next-to-last play, since the last play is determined, and so on, with mutual defection on each round as the only sub-game perfect Nash equilibrium. However, the "folk theorem" states that for *infinitely* repeated games, even with discounting of future benefits or a constant probability of the game ending on any particular round (provided that the discount rate and the probability of the game ending on the next round are sufficiently small and that the dimensionality of payoffs allows for the possibility of retaliation), *any* sequence of actions can be rationalized as a subgame perfect Nash equilibrium. (The folk theorem owes its name to its untraceable origin.)

However, players do not generally behave in accordance with Nash's prediction. Frequent cooperation in one-shot or finitely repeated Prisoner's Dilemma has been observed ever since it was first played. The first Prisoner's Dilemma experiment, conducted at RAND by Merrill Flood and Melvin Dresher in January 1950, involved one hundred repetitions with two sophisticated players, the economist Armen Alchian from the University of California, Los Angeles, and the game theorist John Williams, head of RAND's mathematics department. Alchian and Williams succeeded in cooperating on sixty plays, and mutual defection, the Nash equilibrium, occurred only fourteen times (Poundstone, pp. 107–116). Robert Axelrod (1984) conducted a computer tournament for iterated Prisoner's Dilemma, finding that Rapoport's simple "tit for tat" strategy (cooperate on the first round, then do whatever the other player did on the previous round) yielded the highest payoff.

One way to explain the observed extent of cooperation in experimental games and in life is to recognize that humans are only boundedly rational, relying on rules of thumb and conventions, and making choices about what to know because information is costly to acquire and process. Assumptions about rationality in game theory, such as common knowledge, can be very strong: "An event is common knowledge among a group of agents if each one knows it, if each one knows the others know it, if each one knows that each one knows that the others know it, and so on … the limit of a potentially infinite chain of reasoning about knowledge" (Geanakoplos, p. 54). Ariel Rubinstein (1998) sketches techniques for explicitly incorporating computability constraints and the process of choice in models of procedural rationality. Alternatively, evolutionary game theory, surveyed by Larry Samuelson (2002), emphasizes adaptation and evolution to explain behavior, rather than fully conscious rational choice, returning to human behavior the extension of game theory to evolutionarily stable strategies for animal behavior (Maynard Smith; Dugatkin and Reeve).

## Conclusion

The award of the Royal Bank of Sweden Prize in Economic Science in Memory of Alfred Nobel to John Nash, John Harsanyi, and Reinhard Selten in 1994 recognized the impact of game theory (and a film biography of Nash, based on Nasar's 1998 book, subsequently won Academy Awards for best picture and best actor), while the multivolume *Handbook of Game Theory,* edited by Robert Aumann and Sergiu Hart (1992–2002), presents a comprehensive overview. Reflecting on what has been achieved, David Kreps concludes that

Non-cooperative game theory … has brought a fairly flexible language to many issues, together with a collection of notions of "similarity" that has allowed economists to move insights from one context to another and to probe the reach of these insights. But too often it, and in particular equilibrium analysis, gets taken too seriously at levels where its current behavioural assumptions are inappropriate. We (economic theorists and economists more broadly) need to keep a better sense of proportion about when and how to use it. And we (economic and game theorists) would do well to see what can be done about developing formally that sense of proportion. (p. 184)

Strategic interaction has proved to be a powerful idea, and, although its application, especially beyond economics, remains controversial, it has proven fruitful in suggesting new perspectives and new ways of formalizing older insights.

*See also* *Economics*; *Mathematics*; *Probability*; *Rational Choice*.

## BIBLIOGRAPHY

Aumann, Robert J., and Sergiu Hart, eds. *Handbook of Game Theory with Economic Applications.* 3 vols. Amsterdam: North-Holland, 1992–2002.

Aumann, Robert J., and Michael B. Maschler, with the collaboration of Richard E. Stearns. *Repeated Games with Incomplete Information.* Cambridge, Mass.: MIT Press, 1995.

Axelrod, Robert. *The Evolution of Cooperation.* New York: Basic Books, 1984.

Baird, Douglas, Robert H. Gertner, and Randal C. Picker. *Game Theory and the Law.* Cambridge, Mass.: Harvard University Press, 1994.

Brams, Steven J. *Superior Beings: If They Exist, How Would We Know? Game-Theoretic Implications of Omniscience, Omnipotence, Immortality, and Incomprehensibility.* New York: Springer-Verlag, 1983.

Campbell, Robert W. "Marx, Kantorovich, and Novozhilov: *Stoimost'* versus Reality." *Slavic Review* 20 (1961): 402–418.

Carver, Terrell, and Paul Thomas, eds. *Rational Choice Marxism.* Houndmills, U.K.: Macmillan, 1995.

Dimand, Mary Ann, and Robert W. Dimand. *The History of Game Theory,* Vol. 1: *From the Beginnings to 1945.* London and New York: Routledge, 1996.

Dugatkin, Lee Alan, and Hudson Kern Reeve, eds. *Game Theory and Animal Behavior.* New York: Oxford University Press, 1998.

Friedman, Jeffrey, ed. "Rational Choice Theory." *Critical Review* 9 (1995).

Geanakoplos, John. "Common Knowledge." *Journal of Economic Perspectives* 6 (1992): 53–82.

Ghemawat, Pankaj. *Games Businesses Play: Cases and Models.* Cambridge, Mass.: MIT Press, 1997.

Giocoli, Nicola. *Modeling Rational Agents: From Interwar Economics to Early Modern Game Theory.* Cheltenham, U.K., and Northampton, Mass.: Edward Elgar, 2003.

Green, Donald P., and Ian Shapiro. *Pathologies of Rational Choice Theory: A Critique of Applications in Political Science.* New Haven, Conn.: Yale University Press, 1994.

Harsanyi, John C., and Reinhard Selten. *A General Theory of Equilibrium Selection in Games.* Cambridge, Mass.: MIT Press, 1988.

Heims, Steve J. *John Von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death.* Cambridge, Mass.: MIT Press, 1980.

Kreps, David M. *Game Theory and Economic Modelling.* Oxford: Clarendon Press, 1990.

Leonard, Robert J. "From Parlor Games to Social Science: Von Neumann, Morgenstern, and the Creation of Game Theory, 1928–1944." *Journal of Economic Literature* 33 (1995): 730–761.

——. "Reading Cournot, Reading Nash: The Creation and Stabilisation of the Nash Equilibrium." *Economic Journal* 104 (1994): 492–511.

Mayberry, John P., with John C. Harsanyi, Herbert E. Scarf, and Reinhard Selten. *Game-Theoretic Models of Cooperation and Conflict.* Boulder, Colo.: Westview, 1992.

Maynard Smith, John. *Evolution and the Theory of Games.* Cambridge, U.K.: Cambridge University Press, 1982.

Mirowski, Philip. *Machine Dreams: Economics Becomes a Cyborg Science.* Cambridge, U.K.: Cambridge University Press, 2002.

Morrow, James D. *Game Theory for Political Scientists.* Princeton, N.J.: Princeton University Press, 1994.

Nasar, Sylvia. *A Beautiful Mind: A Biography of John Forbes Nash, Jr., Winner of the Nobel Prize in Economics, 1994.* New York: Simon and Schuster, 1998.

Nash, John F., Jr. *Essays on Game Theory.* Cheltenham, U.K., and Brookfield, Vt.: Edward Elgar, 1996.

Poundstone, William. *Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb.* New York: Doubleday, 1992.

Rapoport, Anatol, and Albert M. Chammah. *Prisoner's Dilemma: A Study in Conflict and Cooperation.* Ann Arbor: University of Michigan Press, 1965.

Riker, William H., and Peter C. Ordeshook. *An Introduction to Positive Political Theory.* Englewood Cliffs, N.J.: Prentice Hall, 1973.

Roth, A. E., and R. E. Verrecchia. "The Shapley Value as Applied to Cost Allocation: a Reinterpretation." *Journal of Accounting Research* 17 (1979): 295–303.

Rubinstein, Ariel. *Modeling Bounded Rationality.* Cambridge, Mass.: MIT Press, 1998.

Rumelt, Richard P., Dan E. Schendel, and David J. Teece, eds. *Fundamental Issues in Strategy: A Research Agenda.* Boston: Harvard Business School Press, 1994.

Samuelson, Larry. "Evolution and Game Theory." *Journal of Economic Perspectives* 16 (2002): 47–66.

Schelling, Thomas. *The Strategy of Conflict.* Cambridge, Mass.: Harvard University Press, 1960.

Schotter, Andrew. *The Economic Theory of Social Institutions.* Cambridge, U.K.: Cambridge University Press, 1981.

Seiz, Janet A. "Game Theory and Bargaining Models." In *The Elgar Companion to Feminist Economics,* edited by Janice Peterson and Margaret Lewis. Cheltenham, U.K., and Northampton, Mass.: Edward Elgar, 1999.

Shapiro, Carl. "The Theory of Business Strategy." *RAND Journal of Economics* 20 (1989): 125–137.

Shapley, Lloyd S. "A Value for n-Person Games." In Harold Kuhn and Albert W. Tucker, eds., *Contributions to the Theory of Games,* Vol. 2, *Annals of Mathematics Studies,* no. 28. Princeton, N.J.: Princeton University Press, 1953.

Shapley, Lloyd S., and Martin Shubik. "A Method for Evaluating the Distribution of Power in a Committee System." *American Political Science Review* 48 (1954): 787–792.

Shubik, Martin. *A Game-Theoretic Approach to Political Economy.* Vol. 2 of *Game Theory in the Social Sciences.* Cambridge, Mass.: MIT Press, 1984.

——. *Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games.* New York: Wiley, 1959.

Tirole, Jean. *The Theory of Industrial Organization.* Cambridge, Mass.: MIT Press, 1988.

Von Neumann, John, and Oskar Morgenstern. *Theory of Games and Economic Behavior.* Princeton, N.J.: Princeton University Press, 1944; 3rd ed. 1953.

Weintraub, E. Roy, ed. *Toward a History of Game Theory.* Durham, N.C.: Duke University Press, 1992. Annual supplement to *History of Political Economy.*

*Robert W. Dimand*

## Game Theory

# Game Theory

Game theory is a branch of mathematics used to analyze competitive situations whose outcomes depend not only on one’s own choices, and perhaps chance, but also on the choices made by other parties, or *players*. Because the outcome of a game is dependent on what *all* players do, each player tries to anticipate the choices of other players in order to determine his own best choice. How these interdependent strategic calculations are made is the subject of the theory. Game theory was created in practically one stroke with the publication of *Theory of Games and Economic Behavior* in 1944 by mathematician John von Neumann (1903–1957) and economist Oskar Morgenstern (1902–1977). This work was a monumental intellectual achievement and has given rise to hundreds of books and thousands of articles in a variety of disciplines.

The theory has several major divisions, the following being the most important:

*Two-person versus n-person*. The two-person theory deals with the optimal strategic choices of two players, whereas the *n* -person theory (*n* > 2) mostly concerns what coalitions, or subsets of players, will form and be stable, and what constitutes reasonable payments to their members.

*Zero-sum versus nonzero-sum*. The payoffs to all players sum to zero (or some other constant) at each outcome in zero-sum games but not in nonzero-sum games, wherein the sums are variable; zero-sum games are games of total conflict, in which what one player gains the others lose, whereas nonzero-sum games permit the players to gain or lose together.

*Cooperative versus noncooperative*. Cooperative games are those in which players can make binding and enforceable agreements, whereas noncooperative games may or may not allow for communication among the players but do assume that any agreement reached must be in equilibrium—that is, it is rational for a player not to violate it if other players do not, because the player would be worse off if he did.

Games can be described by several different forms, the three most important being:

1. *Extensive (game tree)* —indicates sequences of choices that players (and possibly chance, according to nature or some random device) can make, with payoffs defined at the end of each sequence of choices.
2. *Normal/strategic (payoff matrix)* —indicates strategies, or complete plans contingent on other players’ choices, for each player, with payoffs defined at the intersection of each set of strategies in a matrix.
3. *Characteristic function* —indicates values that all possible coalitions (subsets) of players can ensure for their members, whatever the other players do.

These different game forms, or representations, give less and less detailed information about a game—with the sequences in form 1 dropped from form 2, and the strategies to implement particular outcomes in form 2 dropped from form 3—to highlight different aspects of a strategic situation.

Common to all areas of game theory is the assumption that players are rational: They have goals, can rank outcomes (or, more stringently, attach utilities, or values, to them), and choose better over worse outcomes. Complications arise from the fact that there is generally no dominant, or unconditionally best, strategy for a player because of the interdependency of player choices. (Games in which there is only one player are sometimes called *games against nature* and are the subject of *decision theory*.)
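The point that players generally lack an unconditionally best strategy can be made concrete with a small dominance check over a payoff matrix. A minimal sketch in Python; the 2×2 game and its entries below are hypothetical, chosen only to illustrate the idea:

```python
# Row player's payoffs in a hypothetical 2x2 game:
# first key element is the player's strategy, second the opponent's.
payoffs = {
    ("Top", "Left"): 3, ("Top", "Right"): 0,
    ("Bottom", "Left"): 2, ("Bottom", "Right"): 1,
}
strategies = ["Top", "Bottom"]
opponent = ["Left", "Right"]

def dominant_strategy(payoffs, strategies, opponent):
    """Return a strategy that is best against every opponent choice, or None."""
    for s in strategies:
        if all(payoffs[(s, o)] >= payoffs[(t, o)]
               for o in opponent for t in strategies):
            return s
    return None

print(dominant_strategy(payoffs, strategies, opponent))  # None: the best reply depends on the opponent
```

Here "Top" is better against "Left" but worse against "Right," so neither row dominates, and the player's best choice is interdependent with the opponent's, exactly the complication described above.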

A game is sometimes defined as the sum-total of its rules. Common parlor games, like chess or poker, have well-specified rules and are generally zero-sum games, making cooperation with the other player(s) unprofitable. Poker differs from chess in being not only an *n* -person game (though two players can also play it) but also a game of *incomplete information*, because the players do not have full knowledge of each other’s hands, which depend in part on chance.

The rules of most real-life games are equivocal; indeed, the “game” may be about the rules to be used (or abrogated). In economics, rules are generally better known and followed than in politics, which is why game theory has become the theoretical foundation of economics, especially microeconomics. But game-theoretic models also play a major role in other subfields of economics, including industrial organization, public economics, and international economics. Even in macroeconomics, in which fiscal and monetary policies are studied, questions about setting interest rates and determining the money supply have a strong strategic component, especially with respect to the timing of such actions. It is little wonder that economics, more than any of the other social sciences, uses game theory at all levels.

Game-theoretic modeling has made major headway in political science, including international relations, in the last generation. While international politics is considered to be quite anarchistic, there is certainly some constancy in the way conflicts develop and may, or may not, be resolved. Arms races, for instance, are almost always nonzero-sum games in which two nations can benefit if they reach some agreement on limiting weapons, but such agreements are often hard to verify or enforce and, consequently, may be unstable.

Since the demise of the superpower conflict around 1990, interest has shifted to whether a new “balance of power”—reminiscent of the political juggling acts of European countries in the nineteenth and early twentieth century—may emerge in different regions or even worldwide. For example, will China, as it becomes more and more a superpower in Asia, align itself with other major Asian countries, like India and Japan, or will it side more with Western powers to compete against its Asian rivals? Game theory offers tools for studying the stability of new alignments, including those that might develop on political-economy issues.

Consider, for example, the World Trade Organization (WTO), whose durability is now being tested by regional trading agreements that have sprung up among countries in the Americas, Europe, and Asia. The rationality of supporting the WTO, or joining a regional trading bloc, is very much a strategic question that can be illuminated by game theory. Game theory also provides insight into how the domestic politics of a country impinges on its foreign policy, and vice versa, which has led to a renewed interest in the interconnections between these two levels of politics.

Other applications of game theory in political science have been made to strategic voting in committees and elections, the formation and disintegration of parliamentary coalitions, and the distribution of power in weighted voting bodies. On the normative side, electoral reforms have been proposed to lessen the power of certain parties (e.g., the religious parties in Israel), based on game-theoretic analysis. Similarly, the voting weights of members of the European Union Council of Ministers, and its decision rules for taking action (e.g., simple majority or qualified majority), have been studied with an eye to making the body both representative of individual members’ interests and capable of taking collective action.
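The distribution of power in weighted voting bodies, mentioned above, is one application where the computation can be shown directly. A sketch of the normalized Banzhaf power index (a standard measure, counting how often each voter is critical to a winning coalition); the four-member body and its weights are a hypothetical example:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index: each voter's share of 'swings'
    (winning coalitions that would lose without that voter)."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:  # a winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # voter i is critical
                        swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]

# Hypothetical body: weights 4, 2, 1, 1 and a quota of 5 votes to pass.
print(banzhaf([4, 2, 1, 1], 5))  # [0.7, 0.1, 0.1, 0.1]
```

Note how power need not be proportional to weight: the largest member holds 40 percent of the votes but 70 percent of the voting power, the kind of disparity such analyses are designed to expose.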

As game-theoretic models have become more prominent in political science, they have, at the same time, created a good deal of controversy. Some critics charge that they abstract too much from strategic situations, reducing actors to hyperrational players or bloodless automatons that do not reflect the emotions or the social circumstances of people caught up in conflicts. Moreover, critics contend, game-theoretic models are difficult to test empirically, in part because they depend on counterfactuals that are never observed. That is, they assume that players take into account contingencies that are hard to reconstruct, much less model precisely.

But proponents of game theory counter that the theory brings rigor to the study of strategic choices that no other theory can match. Furthermore, they argue that actors *are*, by and large, rational—they choose better over worse means, even if the goals that they seek to advance are not always apparent.

When information is incomplete, so-called Bayesian calculations can be made that take account of this incompleteness. The different possible goals that players may have can also be analyzed and their consequences assessed.

Because such reconstruction is often difficult to do in real-life settings, laboratory experiments—in which conditions can be better controlled—are more and more frequently conducted. In fact, experiments that test theories of bargaining, voting, and other political-economic processes have become commonplace in economics and political science. Although they are less common in the other social sciences, social psychology has long used experiments to investigate the choices of subjects in games like prisoners’ dilemma. This infamous game captures a situation in which two players have dominant strategies of not cooperating, as exemplified by an arms race or a price war. But doing so results in an outcome worse for both than had they cooperated. Because mutual cooperation is not a *Nash equilibrium*, however, each player has an incentive to defect from cooperation.

Equally vexing problems confront the players in another well-known game, chicken. Not only is cooperation unstable, but noncooperation leads to a disastrous outcome. It turns out that each player should defect if and only if the other player cooperates, but anticipating when an opponent will do so is no mean feat.
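The contrast between prisoners' dilemma and chicken can be checked mechanically: enumerate the strategy pairs and keep those from which neither player gains by a unilateral switch (a pure-strategy Nash equilibrium). A sketch using conventional textbook payoffs (the specific numbers are illustrative, not from the text):

```python
# Payoffs as (row player, column player); C = cooperate, D = defect.
prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
chicken = {
    ("C", "C"): (3, 3), ("C", "D"): (2, 4),
    ("D", "C"): (4, 2), ("D", "D"): (0, 0),
}

def pure_nash(game, moves=("C", "D")):
    """Strategy pairs where neither player benefits from deviating alone."""
    equilibria = []
    for r, c in game:
        row_ok = all(game[(r, c)][0] >= game[(alt, c)][0] for alt in moves)
        col_ok = all(game[(r, c)][1] >= game[(r, alt)][1] for alt in moves)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(prisoners_dilemma))  # [('D', 'D')]: mutual defection, though (C, C) pays both more
print(pure_nash(chicken))            # [('C', 'D'), ('D', 'C')]: defect only if the other cooperates
```

The output mirrors the text: in prisoners' dilemma the unique equilibrium is mutual defection, while in chicken the two equilibria have exactly one player defecting, which is why anticipating the opponent is so difficult.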

Since the invention of game theory in the mid-1940s, its development has been remarkable. Two Nobel prizes in economics were awarded to a total of five game theorists in 1994 and 2005 (including John Nash of the film *A Beautiful Mind* fame), but many other recipients of this prize have used game theory extensively. In addition, game-theoretic modeling has progressed rapidly in political science—and, to a lesser extent, in the other social sciences—as well as in a variety of other disciplines, including biology, business, and law.

**SEE ALSO** *Arms Control and Arms Race; Cold War; Deterrence, Mutual; Nash Equilibrium; Political Economy; Prisoner’s Dilemma (Economics)*

## BIBLIOGRAPHY

Aumann, Robert J., and Sergiu Hart, eds. 1992–2002. *Handbook of Game Theory with Economic Applications*. 3 vols. Amsterdam: Elsevier.

Brams, Steven J. 1994. *Theory of Moves*. New York: Cambridge University Press.

Dixit, Avinash, and Susan Skeath. 2005. *Games of Strategy*. 2nd ed. New York: Norton.

Nasar, Sylvia. 1998. *A Beautiful Mind: A Biography of John Forbes Nash Jr., Winner of the Nobel Prize in Economics, 1994*. New York: Simon & Schuster.

Osborne, Martin J. 2004. *An Introduction to Game Theory*. New York: Oxford University Press.

von Neumann, John, and Oskar Morgenstern. 1953. *Theory of Games and Economic Behavior*. 3rd ed. Princeton, NJ: Princeton University Press.

*Steven J. Brams*

## Game Theory

# Game Theory


Game theory is a branch of mathematics concerned with the analysis of conflict situations. It involves determining a strategy for a given situation and the costs or benefits realized by using the strategy. First developed in the early twentieth century, it was originally applied to parlor games such as bridge, chess, and poker. Now, it is applied to a wide range of subjects such as economics, behavioral sciences, sociology, military science, and political science.

The notion of game theory was first suggested by mathematician John von Neumann in 1928. The theory received little attention until 1944, when von Neumann and economist Oskar Morgenstern wrote the classic treatise *Theory of Games and Economic Behavior*. Since then, many economists, biologists, political scientists, military strategists, and operational research scientists have expanded and applied the theory.

## Characteristics of games

An essential feature of any game is conflict between two or more players resulting in a win for some and a loss for others. Additionally, games have other characteristics that make them playable. There is a way to start the game. There are defined choices players can make for any situation that can arise in the game. During each move, single players are forced to make choices or the choices are assigned by random devices (such as dice). Finally, the game ends after a set number of moves and a winner is declared. Obviously, games such as chess or checkers have these characteristics, but other situations such as military battles or animal behavior also exhibit similar traits.

During any game, players make choices based on the information available. Games are, therefore, classified by the type of information that players have available when making choices. A game such as checkers or chess is called a “game of perfect information.” In these games, each player makes choices with the full knowledge of every move made previously during the game, whether by herself or her opponent. Also, for these games there theoretically exists one optimal pure strategy for each player that guarantees the best outcome regardless of the strategy employed by the opponent. A game like poker is a “game of imperfect information” because players make their decisions without knowing which cards are left in the deck. The best play in these types of games relies upon a probabilistic strategy and, as such, the outcome cannot be guaranteed.

## Analysis of zero-sum, two-player games

In some games there are only two players and in the end, one wins while the other loses. This also means that the amount gained by the winner will be equal to the amount lost by the loser. The strategies suggested by game theory are particularly applicable to games such as these, known as zero-sum, two-player games.

Consider the game of matching pennies. Two players put down a penny each, either head or tail up, covered with their hands so the orientation remains unknown to their opponent. Then they simultaneously reveal their pennies and pay off accordingly; player A wins both pennies if the coins show the same side up, otherwise player B wins. This is a zero-sum, two-player game because each time A wins a penny, B loses a penny, and vice versa.

To determine the best strategy for both players, it is convenient to construct a game payoff matrix, which shows all of the possible payments player A receives for any outcome of a play. Where outcomes match, player A gains a penny, and where they do not, player A loses a penny. In this game it is impossible for either player to choose a move that guarantees a win, unless they know their opponent’s move. For example, if B always played heads, then A could guarantee a win by also always playing heads. If this kept up, B might change her play to tails and begin winning. Player A could counter by playing tails, and the game could cycle like this endlessly with neither player gaining an advantage. To improve their chances of winning, players can devise a probabilistic (mixed) strategy: each decides in advance the percentage of times to play heads or tails, and then chooses randomly according to those proportions.

According to the minimax theorem of game theory, in any zero-sum, two-player game there is an optimal probabilistic strategy for both players. By following the optimal strategy, each player can guarantee at least a certain average payoff regardless of the strategy employed by the opponent. This guaranteed average payoff is known as the minimax value, and the optimal strategy is known as the solution. In the matching pennies game, the optimal strategy for both players is to randomly select heads or tails 50% of the time. The expected payoff for both players would be 0.
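The minimax claim for matching pennies can be verified by brute force: for any probability p that player A shows heads, player B's best reply holds A down to the worse of two expected payoffs, and that guaranteed amount peaks at p = 0.5, where it equals the game's value, 0. A short sketch:

```python
# Player A wins 1 when the pennies match, loses 1 otherwise.
def guaranteed_payoff(p):
    """A's worst-case expected payoff when A shows heads with probability p."""
    vs_heads = p * 1 + (1 - p) * (-1)   # B shows heads: expected payoff 2p - 1
    vs_tails = p * (-1) + (1 - p) * 1   # B shows tails: expected payoff 1 - 2p
    return min(vs_heads, vs_tails)       # B picks whichever hurts A more

candidates = (0.0, 0.25, 0.5, 0.75, 1.0)
for p in candidates:
    print(p, guaranteed_payoff(p))

best = max(candidates, key=guaranteed_payoff)
print(best)  # 0.5: the only mixture whose guarantee is the value 0
```

Any lean toward heads or tails gives the opponent something to exploit (for instance, p = 0.25 guarantees only −0.5), which is why the 50–50 mixture is the solution.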

## Nonzero-sum games

Most conflict situations are not zero-sum games or limited to two players. A nonzero-sum game is one in which the amount won by the victor is not equal to the amount lost by the loser. The minimax theorem does not apply to either of these types of games, but various weaker forms of a solution have been proposed, including noncooperative and cooperative solutions.

When more than two people are involved in a conflict, players often agree to form a coalition. These players act together, behaving as a single player in the game. There are two extremes of coalition formation: no formation and complete formation. When no coalitions are formed, games are said to be noncooperative. In these games, each player is solely interested in her own payoff. A proposed solution to these types of conflicts is known as a noncooperative equilibrium: a point at which no player can gain an advantage by changing strategy. When complete coalitions are formed, games are described as cooperative. Here, players join together to maximize the total payoff for the group. Various solutions have also been suggested for these cooperative games.

## Application of game theory

Game theory is a powerful tool that can suggest the best strategy or outcome in many different situations.

### KEY TERMS

**Coalition** —A situation in a multiple player game in which two or more players join together and act as one.

**Game** —A situation in which a conflict arises between two or more players.

**Game payoff matrix** —A mathematical tool that indicates the relationship between a player’s payoff and the outcomes of a game.

**Minimax theorem** —The central theorem of game theory. It states that for any zero-sum two-player game there is a strategy which leads to a solution.

**Nonzero-sum game** —A game in which the amount lost by all players is not equal to the amount won by all other players.

**Optimal pure strategy** —A definite set of choices which leads to the solution of a game.

**Probabilistic (mixed) strategy** —A set of choices which depends on randomness to find the solution of a game.

**Zero-sum, two-player games** —A game in which the amount lost by all players is equal to the amount won by all other players.

Economists, political scientists, the military, and sociologists have all used it to describe situations in their various fields. A recent application of game theory has been in the study of the behavior of animals in nature. Here, researchers are applying the notions of game theory to describe the effectiveness of many aspects of animal behavior, including aggression, cooperation, and hunting. Data collected from these studies may someday result in a better understanding of our own human behaviors.

## Resources

### BOOKS

Kaplow, Louis and Steven Shavell. *Decision Analysis, Game Theory, and Information*. New York: Foundation Press, 2004.

Vincent, Thomas L. and Joel S. Brown. *Evolutionary Game Theory, Natural Selection, and Darwinian Dynamics*. Cambridge, UK: Cambridge University Press, 2005.

Von Neumann, John, et al. *Theory of Games and Economic Behavior.* Princeton, NJ: Princeton University Press, 2004.

Perry Romanowski

## Game Theory

# GAME THEORY

Game theory is a way of reasoning through problems. Although its use can be found throughout history, it was only formally stylized by the economists John von Neumann and Oskar Morgenstern in the 1940s. Game theory takes the logic behind complex strategic situations and simplifies it into models that can be used to explain how individuals reach decisions to act in the real world. Game theory models attempt to abstract from the personal, interpersonal, and institutional details of problems how individuals or groups may behave under a given set of conditions. This modeling allows a researcher or planner to get at the root of complex human interactions. The major assumption underlying most game theory is that people and groups tend to work toward goals that benefit them. That is, they have ends in mind when they take actions.

The most important application of game theory to public health occurs when the actions of individuals or groups affect the health of others. On some occasions, individual or group strategies for betterment lead to inferior outcomes for the greater population.

Using game theory to model public health problems is not different from using it to model any other type of problem or decision-making scenario. One particularly illustrative game is called the Prisoners' Dilemma, illustrated below. This game is often used to show the need for public resources and services. That is, sometimes individuals who choose certain strategies end up with an inferior outcome because of the incentives they were presented with. In public health, the problem becomes apparent quickly.

In order to place these events into a context in which game theory can be employed, four commonly defined criteria are used:

- *Players* are the decision makers in the game; a player can be an individual, group, or population that must decide how to use the resources available within given constraints.
- *Rules* are the constraints; all activity is defined by rules, which give the model an analytical credence to be tested for validity in the real world.
- *Strategies* are the courses of action open to the players in a game; players may choose their action dependent upon the different situations they are presented with.
- *Payoffs* are the final returns to players, which are usually stated in terms that are objectively understood by each player of the game.

Consider a situation in which two groups of people border a malarial swamp. One group is named Alpha and the other is Beta. The swamp causes both groups to be plagued by malaria and other diseases. The problem could easily be remedied by draining the swampland. However, neither group is willing to act first because no incentives exist to take on the hard labor of draining the swamp alone. The greater utility that would be conveyed to both groups is lost because there is no incentive for either individual group to act.

## THE SWAMP: A PRISONERS' DILEMMA

The game called Prisoners' Dilemma can be modeled using game theory. The game matrix shown in Table 1 is an example of a common tool in game theory modeling. The players are named in the outer boxes, the rule is that the players may not communicate before simultaneously acting, the strategies are to contribute or not contribute, and the payoffs are in the innermost boxes.

*Table 1. The Swamp: A Prisoners' Dilemma* (each cell lists Alpha's payoff, then Beta's; source: Courtesy of author)

|  | Beta: Contribute | Beta: Not Contribute |
| --- | --- | --- |
| **Alpha: Contribute** | Alpha 1, Beta 1 | Alpha -1, Beta 2 |
| **Alpha: Not Contribute** | Alpha 2, Beta -1 | Alpha 0, Beta 0 |

Look at the situation as it is presented to the Alpha group. They realize that the outcome depends on the action the Beta group takes. If Beta contributes, it pays Alpha to avoid contributing, for in that instance, Alpha will benefit twice as much as if they worked with Beta to drain the swamp (2 points rather than 1). The reason the payoff for not contributing is greater is that Alpha will receive the benefit of draining the swamp without doing any of the work. However, if Beta does not contribute, Alpha still benefits by not contributing rather than contributing alone (the payoff is 0 instead of −1). That is, Alpha will choose not to bear the costs of draining the swamp alone.

The Alpha group reasons that regardless of Beta's action, their own best action is to not help drain the swamp. Because Beta's options are symmetric to Alpha's, they also reason that they benefit most through inaction. As a result, the swamp does not get drained, and both groups end up with an inferior outcome. This outcome is a special kind of equilibrium called a Nash equilibrium: given the other group's choice, neither group can improve its payoff by changing its own strategy alone.
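The reasoning above can be sketched as a dominance check over the payoffs in Table 1: whatever Beta does, Alpha's payoff from not contributing beats its payoff from contributing, so both symmetric groups end at (0, 0).

```python
# Payoffs (Alpha, Beta) from Table 1; C = Contribute, N = Not Contribute.
swamp = {
    ("C", "C"): (1, 1),  ("C", "N"): (-1, 2),
    ("N", "C"): (2, -1), ("N", "N"): (0, 0),
}

# Whatever Beta does, Alpha does better by not contributing:
assert swamp[("N", "C")][0] > swamp[("C", "C")][0]   # 2 > 1 if Beta contributes
assert swamp[("N", "N")][0] > swamp[("C", "N")][0]   # 0 > -1 if Beta does not

# The game is symmetric, so Beta reasons identically; both choose N,
# landing at (0, 0) even though mutual contribution would give each 1.
print(swamp[("N", "N")])  # (0, 0)
```

The check makes the public health point explicit: individually rational choices leave both groups at 0 when 1 each was attainable, which is the opening a tax or other intervention is meant to close.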

## PUBLIC HEALTH IMPLICATIONS

The implication for public health is that the best strategies for individuals or groups are sometimes not the best strategies for everyone taken as a whole. Public health professionals need to be vigilant to these special circumstances and use interventions to create better incentive systems. For example, Alpha and Beta could each be levied a tax, by some authority over both, to pay for the draining of the swamp. The disincentives for progress would then be circumvented and both groups would benefit.

Game theory has been used to model a number of subjects important to public health, including organ donation, ethics, and the patient-provider relationship. Game theory provides a strong modeling device for public health professionals and illustrates the need for public intervention when the incentives of individuals impede progress for the group.

Peter S. Meyer

Nancy L. Atkinson

Robert S. Gold

(see also: *Community Health; Community Organization; Ethics of Public Health* )

## Bibliography

Hirshleifer, J., and Glazer, A. (1992). *Price Theory and Applications.* Englewood Cliffs, NJ: Prentice Hall.

Nash, J. (1951). "Non-Cooperative Games." *Annals of Mathematics* 54:286–295.

Nicholson, E. (1998). *Microeconomic Theory.* Fort Worth, TX: Harcourt Brace.

O'Brien, B. J. (1988). "A Game-Theoretic Approach to Donor Kidney Sharing." *Social Science and Medicine* 26(11):1109–1116.

Parkin, M. (1990). *Microeconomics.* New York: Addison-Wesley.

Schneiderman, K. J.; Jecker, N. S.; Rozance, C.; Klotzko, A. J.; and Friedl, B. (1995). "A Different Kind of Prisoner's Dilemma." *Cambridge Quarterly of Healthcare Ethics* 4(4):530–545.

Von Neumann, J., and Morgenstern, O. (1944). *Theory of Games and Economic Behavior.* Princeton, NJ: Princeton University Press.

Wynia, M. K. (1997). "Economic Analyses, the Medical Commons, and Patients' Dilemmas: What Is the Physician's Role?" *Journal of Investigative Medicine* 45(2):35–43.

## Game Theory

# Game theory

Game theory is a branch of **mathematics** concerned with the analysis of conflict situations. It involves determining a strategy for a given situation and the costs or benefits realized by using the strategy. First developed in the early twentieth century, it was originally applied to parlor games such as bridge, chess, and poker. Now, game theory is applied to a wide range of subjects such as economics, behavioral sciences, sociology, military science, and political science.

The notion of game theory was first suggested by mathematician John von Neumann in 1928. The theory received little attention until 1944 when Neumann and economist Oskar Morgenstern wrote the classic treatise *Theory of Games and Economic Behavior*. Since then, many economists and operational research scientists have expanded and applied the theory.

## Characteristics of games

An essential feature of any game is conflict between two or more players resulting in a win for some and a loss for others. Additionally, games have other characteristics which make them playable. There is a way to start the game. There are defined choices players can make for any situation that can arise in the game. During each move, single players are forced to make choices or the choices are assigned by **random** devices (such as dice). Finally, the game ends after a set number of moves and a winner is declared. Obviously, games such as chess or checkers have these characteristics, but other situations such as military battles or animal **behavior** also exhibit similar traits.

During any game, players make choices based on the information available. Games are therefore classified by the type of information that players have available when making choices. A game such as checkers or chess is called a "game of perfect information." In these games, each player makes choices with the full knowledge of every move made previously during the game, whether by herself or her opponent. Also, for these games there theoretically exists one optimal pure strategy for each player which guarantees the best outcome regardless of the strategy employed by the opponent. A game like poker is a "game of imperfect knowledge" because players make their decisions without knowing which cards are left in the deck. The best play in these types of games relies upon a probabilistic strategy and, as such, the outcome can not be guaranteed.

## Analysis of zero-sum, two-player games

In some games there are only two players and in the end, one wins while the other loses. This also means that the amount gained by the winner will be equal to the amount lost by the loser. The strategies suggested by game theory are particularly applicable to games such as these, known as zero-sum, two-player games.

Consider the game of matching pennies. Two players put down a penny each, either head or tail up, covered with their hands so the orientation remains unknown to their opponent. Then they simultaneously reveal their pennies and pay off accordingly; player A wins both pennies if the coins show the same side up, otherwise player B wins. This is a zero-sum, two-player game because each time A wins a penny, B loses a penny and vice versa.

To determine the best strategy for both players, it is convenient to construct a game payoff **matrix** , which shows all of the possible payments player A receives for any outcome of a play. Where outcomes match, player A gains a penny and where they do not, player A loses a penny. In this game it is impossible for either player to choose a move which guarantees a win, unless they know their opponent's move. For example, if B always played heads, then A could guarantee a win by also always playing heads. If this kept up, B might change her play to tails and begin winning. Player A could counter by playing tails, and the game could cycle like this endlessly with neither player gaining an advantage. To improve their chances of winning, each player can devise a probabilistic (mixed) strategy: each decides in advance the percentage of times to play heads or tails, and then chooses randomly according to those percentages.
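The payoff matrix and the expected value of a mixed strategy can be sketched in a few lines of Python. The function name and labels here are illustrative, not part of any standard library:

```python
# Matching pennies, from player A's point of view.
# Rows: A plays heads/tails; columns: B plays heads/tails.
# A wins a penny (+1) when the coins match, loses one (-1) otherwise.
payoff_A = [[1, -1],
            [-1, 1]]

def expected_payoff(p_heads_A, p_heads_B, matrix):
    """Expected payoff to A when each player independently
    plays heads with the given probability."""
    probs_A = [p_heads_A, 1 - p_heads_A]
    probs_B = [p_heads_B, 1 - p_heads_B]
    return sum(probs_A[i] * probs_B[j] * matrix[i][j]
               for i in range(2) for j in range(2))

# If B always plays heads, A can exploit this by always playing heads:
print(expected_payoff(1.0, 1.0, payoff_A))  # 1.0
# With both players mixing 50/50, the expected payoff is 0:
print(expected_payoff(0.5, 0.5, payoff_A))  # 0.0
```

The exploitability of any pure strategy (the first call) is exactly why the mixed 50/50 strategy (the second call) is the stable choice.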

According to the minimax **theorem** of game theory, in any zero-sum, two-player game there is an optimal probabilistic strategy for both players. By following the optimal strategy, each player can guarantee a certain minimum average payoff regardless of the strategy employed by their opponent. This guaranteed average payoff is known as the minimax value, and the optimal strategy is known as the solution. In the matching pennies game, the optimal strategy for both players is to randomly select heads or tails 50% of the time. The expected payoff for both players would be 0.
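For a 2 × 2 zero-sum game with no saddle point (neither player has a single best pure strategy), the solution has a well-known closed form, which can be sketched as follows. The function name is illustrative:

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Optimal mixed strategy for the row player of the 2x2
    zero-sum game [[a, b], [c, d]], assuming no saddle point
    (neither player has a dominant pure strategy)."""
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 1
    value = (a * d - b * c) / denom  # minimax value of the game
    return p, value

# Matching pennies: A's payoffs are [[1, -1], [-1, 1]].
p, value = solve_2x2_zero_sum(1, -1, -1, 1)
print(p, value)  # 0.5 0.0
```

For matching pennies this recovers the 50/50 mix and the minimax value of 0 stated above.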

## Nonzero-sum games

Most conflict situations are not zero-sum games or limited to two players. A nonzero-sum game is one in which the amount won by the victor is not equal to the amount lost by the loser. The minimax theorem does not apply to either of these types of games, but various weaker forms of a solution have been proposed, including noncooperative and cooperative solutions.

When more than two people are involved in a conflict, oftentimes players agree to form a coalition. These players act together, behaving as a single player in the game. There are two extremes of coalition formation; no formation and complete formation. When no coalitions are formed, games are said to be non-cooperative. In these games, each player is solely interested in her own payoff. A proposed solution to these types of conflicts is known as a non-cooperative equilibrium. This solution suggests that there is a point at which no player can gain an advantage by changing strategy. In a game when complete coalitions are formed, games are described as cooperative. Here, players join together to maximize the total payoff for the group. Various solutions have also been suggested for these cooperative games.
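A noncooperative equilibrium can be checked by brute force: a pair of pure strategies is an equilibrium when neither player can gain by switching strategy alone. A minimal sketch, using hypothetical payoffs chosen purely for illustration:

```python
# Brute-force check for a noncooperative equilibrium in pure strategies.
# The payoff numbers below are hypothetical, chosen for illustration.
payoffs = {  # (A's choice, B's choice) -> (A's payoff, B's payoff)
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def is_equilibrium(a, b):
    """True if neither player gains by unilaterally changing strategy."""
    a_payoff, b_payoff = payoffs[(a, b)]
    best_for_A = all(payoffs[(alt, b)][0] <= a_payoff for alt in (0, 1))
    best_for_B = all(payoffs[(a, alt)][1] <= b_payoff for alt in (0, 1))
    return best_for_A and best_for_B

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_equilibrium(a, b)]
print(equilibria)  # [(1, 1)]
```

Note that in this example the equilibrium (1, 1) gives each player less than the outcome (0, 0) would, which is precisely why cooperative play, where players coordinate to maximize the group total, can differ from noncooperative play.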

## Application of game theory

Game theory is a powerful tool that can suggest the best strategy or outcome in many different situations. Economists, political scientists, the military, and sociologists have all used it to describe situations in their various fields. A recent application of game theory has been in the study of the behavior of animals in nature. Here, researchers are applying the notions of game theory to describe the effectiveness of many aspects of animal behavior including aggression, cooperation, hunting and many more. Data collected from these studies may someday result in a better understanding of our own human behaviors.

## Resources

### books

Beasley, John D. *The Mathematics of Games.* Oxford: Oxford University Press, 1990.

Hoffman, Paul. *Archimedes' Revenge: The Joys and Perils of Mathematics.* New York: Fawcett Crest, 1988.

Newman, James R., ed. *The World of Mathematics.* New York: Simon and Schuster, 1956.

Paulos, John Allen. *Beyond Numeracy.* New York: Alfred A. Knopf Inc, 1991.

Perry Romanowski

## KEY TERMS

**Coalition**—A situation in a multiple-player game in which two or more players join together and act as one.

**Game**—A situation in which a conflict arises between two or more players.

**Game payoff matrix**—A mathematical tool which indicates the relationship between a player's payoff and the outcomes of a game.

**Minimax theorem**—The central theorem of game theory. It states that for any zero-sum two-player game there is a strategy which leads to a solution.

**Nonzero-sum game**—A game in which the amount lost by all players is not equal to the amount won by all other players.

**Optimal pure strategy**—A definite set of choices which leads to the solution of a game.

**Probabilistic (mixed) strategy**—A set of choices which depends on randomness to find the solution of a game.

**Zero-sum, two-player games**—A game in which the amount lost by all players is equal to the amount won by all other players.

## Game Theory

# Game theory

Game theory is a branch of mathematics concerned with the analysis of conflict situations. The term conflict situation refers to a condition involving two or more people or groups of people trying to achieve some goal. A simple example of a conflict situation is the game of tic-tac-toe. In this game, two people take turns making Xs or Os in a #-shaped grid. The first person to get three Xs or Os in a straight line wins the game. It is possible, however, that neither person is able to achieve this goal, and the game then ends in a tie or a stand-off.

The variety of conditions described by the term conflict situation is enormous. They range from board and card games such as poker, bridge, chess and checkers; to political contests such as elections; to armed conflicts such as battles and wars.

Mathematicians have long been intrigued by games and other kinds of conflict situations. Is there some mathematical system for winning at bridge? at poker? in a war? One of the earliest attempts to answer this question was probability theory, developed by French mathematician and physicist (one who studies the science of matter and energy) Blaise Pascal (1623–1662) and his colleague Pierre de Fermat (1601–1665). At the request of a gentleman gambler, Pascal and Fermat explored ways to predict the likelihood of drawing certain kinds of hands (a straight, a flush, or three-of-a-kind, for example) in a poker game. In their attempts to answer such questions, Pascal and Fermat created a whole new branch of mathematics.

## Words to Know

**Game:** A situation in which a conflict arises between two or more players.

**Nonzero-sum game:** A game in which the amount lost by all players is not equal to the amount won by all other players.

**Zero-sum, two-player games:** A game in which the amount lost by one player is equal to the amount won by the other player.

The basic principles of game theory were first suggested by Hungarian American mathematician and physicist John von Neumann (1903–1957) in 1928. The theory received little attention until 1944, when von Neumann and economist Oskar Morgenstern (1902–1977) wrote the classic treatise *Theory of Games and Economic Behavior.* Since then, many economists and scientists have expanded and applied the theory.

## Characteristics of games

The mathematical analysis of games begins by recognizing certain basic characteristics of all conflict situations. First, games always involve at least two people or two groups of people. In most cases, the game results in a win for one side of the game and a loss for the other side. Second, games always begin with certain set conditions, such as the dealing of cards or the placement of soldiers on a battlefield. Third, choices always have to be made. Some choices are made by the players themselves ("Where shall I place my next X?") and some choices are made by chance (such as rolling dice). Finally, the game ends after a set number of moves and a winner is declared.

## Types of games

Games can be classified in a variety of ways. One method of classification depends on the amount of information players have. In checkers and chess, for example, both players know exactly where all the pieces are located and what moves they can make. There is no hidden information that neither player knows about. Games such as these are known as games of perfect information.

The same cannot be said for other games. In poker, for example, players generally do not know what cards their opponents are holding, and they do not know what cards remain to be dealt. Games like poker are known as games of imperfect knowledge. The mathematical rules for dealing with these two kinds of games are very different. In one case, one can calculate all possible moves because everything is known about a situation. In the other case, one can only make guesses based on probability as to what might happen next. Nonetheless, both types of games can be analyzed mathematically and useful predictions about future moves can be made.

Games also can be classified as zero-sum or nonzero-sum games. A zero-sum game is a game in which one player's winnings exactly equal the other player's losses: everything lost by the loser is given to the winner. For example, suppose that two players decide to match pennies. The rule is that each player flips a penny. If both pennies come up the same (both heads or both tails), player A wins both pennies. If both pennies come up opposite (one head and one tail), player B wins both pennies. This game is a zero-sum game because one player wins everything (both pennies) on each flip, while the other player loses everything. Game theory often begins with the analysis of zero-sum games between two players because they are the simplest type of conflict situation to analyze.
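The penny-matching game above can be simulated to illustrate the zero-sum property; this is a sketch in which both players flip at random:

```python
import random

# Simulate the penny-matching game: each round, both players flip,
# and the winner's gain exactly equals the loser's loss, so the two
# players' total winnings always sum to zero.
random.seed(0)
total_A = total_B = 0
for _ in range(1000):
    match = random.choice([True, False]) == random.choice([True, False])
    if match:          # pennies show the same side: A wins
        total_A += 1
        total_B -= 1
    else:              # pennies show opposite sides: B wins
        total_A -= 1
        total_B += 1
print(total_A + total_B)  # 0
```

However the individual totals come out, their sum is zero after every round, which is exactly what "zero-sum" means.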

Most conflict situations in real life are not zero-sum games. At the end of a game of Monopoly™, for example, one player may have most of the property, but a second player may still own some property on the board. Also, the game may involve more than two people with almost any type of property distribution.

## Application of game theory

Game theory is a powerful tool that can suggest the best strategy or outcome in many different situations. Economists, political scientists, the military, and sociologists have all used it to describe situations in their various fields. A recent application of game theory has been in the study of the behavior of animals in nature. Here, researchers are applying the notions of game theory to describe the many aspects of animal behavior including aggression, cooperation, and hunting methods. Data collected from these studies may someday result in a better understanding of our own human behaviors.

## Game Theory

**Game Theory.** Within national security analysis, *Game theory* deals with parties making choices that influence each other's interests, where they all know that they are making such choices. Using mathematics, it analyzes the think/doublethink logic of how each adversary sees the other, sees the other's view of it, and so on. Unlike *war gaming*, where real players assume roles, it involves only mathematical calculations.

John von Neumann and Oskar Morgenstern laid the foundation of game theory in the 1940s. Its application to military problems has been limited but interesting. One World War II example involved submarine warfare. A submarine is passing through a corridor patrolled by submarine‐hunting planes. The submarine must spend some time traveling on the surface to recharge its batteries. The corridor widens and narrows, and the submarine is easier to detect in the narrower parts, with less sea for the hunters to scan. Where should the submarine surface? Where should the hunters focus their effort? The premise that the wide part is the one logical place is self‐refuting. If it were true, the hunters would deduce that, head there, and leave the narrower part alone, making the narrower part better. Choosing the narrow part likewise leads to a contradiction. Game theory advises a “mixed” strategy—do one or the other unpredictably, using exact probabilities calculated from the ease of detection in each section.
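The "exact probabilities" in this search game follow from making each side indifferent between the two sections. A sketch with illustrative detection probabilities (not the historical figures):

```python
# Submarine-corridor search game (a sketch). If the hunters search the
# section where the submarine surfaces, they detect it with probability
# p_wide or p_narrow; these numbers are illustrative assumptions.
def mixed_strategy(p_wide, p_narrow):
    """Equilibrium probability of choosing the wide section, for both
    the submarine and the hunters, in this simple 2x2 search game.
    Derived from the indifference condition q*p_wide = (1-q)*p_narrow."""
    q_wide = p_narrow / (p_wide + p_narrow)
    detection = p_wide * p_narrow / (p_wide + p_narrow)  # value of the game
    return q_wide, detection

q, v = mixed_strategy(p_wide=0.2, p_narrow=0.6)
print(round(q, 3), round(v, 3))  # 0.75 0.15
```

With detection three times as likely in the narrow section, both sides choose the wide section 75% of the time: the harder a section is to search, the more often it is used, but never exclusively.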

Other applications have addressed the problems of when an interceptor aircraft closing on a bomber should open fire, how to allocate antimissile defenses to targets of varying value, and when to fire intercontinental missiles to avoid Soviet nuclear explosions in the stratosphere.

These problems involved specific wartime encounters. Another area is broad strategy. A prevalent misconception is that game theory set the principles of nuclear strategy. In the 1940s, planners hoped that the new mathematics would do this, but strategic problems proved too complex. It was hard even to specify each side's goals. Game theory has not given exact strategic advice, but it has clarified general principles. In a model of crisis confrontation, for example, one side wants to show the adversary that it values winning very highly, to induce the other side to back down. It uses the tactic of sacrifice‐to‐show‐resolve—make some costly military deployment so the adversary will conclude that only a determined government would pay such a cost to prove its determination. The model precisely illustrates the skeletal structure of strategic concepts such as showing resolve or enhancing credibility. By the 1990s, a sophisticated body of academic work had addressed deterrence, escalation, war alliances, and the verification of arms treaties.

[See also Disciplinary Views of War: Political Science and International Relations; Operations Research; Strategy; War Plans.]

Bibliography

Melvin Dresher, *Games of Strategy: Theory and Applications*, 1961.

Barry O'Neill, "A Survey of Game Theory Studies of Peace and War," in Robert Aumann and Sergiu Hart, eds., *Handbook of Game Theory*, 1994.

Barry O'Neill