Strategic games model conflict and cooperation, with the payoff to any player depending on the choice of strategy (a rule for selecting an action given each possible information set) not only by that player, but by all players. Strategic games can be either cooperative games, where some external authority exists that could enforce an agreement among the players, or noncooperative games, where no such external enforcement of agreements is available. Only self-enforcing agreements are possible in noncooperative games. Because binding agreements cannot be made, players in a noncooperative game may end up in a Pareto-inferior outcome, as in prisoner’s dilemma (q.v.), because a strategy combination that would produce a better outcome for all players would leave at least one player with an incentive to deviate. Most game theory emphasizes noncooperative games, because there is no consensus about how to choose among the various solution concepts proposed for cooperative games (such as the core, kernel, nucleolus, and Shapley value). Noncooperative game theory builds primarily upon refinements of one solution concept, Nash equilibrium.
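The deviation logic of the prisoner’s dilemma can be sketched in a few lines of Python (the payoff numbers below are illustrative assumptions, not taken from the text): a strategy profile is a Nash equilibrium only if no player gains by being the only one to switch.

```python
# Prisoner's dilemma with illustrative payoffs (row player, column player).
# Strategies: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3),  # mutual cooperation: better for both players
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # mutual defection: Pareto-inferior
}

def is_nash(cell):
    """True if no player gains by being the only one to switch strategies."""
    r, c = cell
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(rr, c)][0] for rr in (0, 1))
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, cc)][1] for cc in (0, 1))
    return row_ok and col_ok

# Mutual cooperation is better for both, yet each player can gain by
# unilaterally defecting, so it is not self-enforcing:
assert not is_nash((0, 0))
# Mutual defection is the unique (Pareto-inferior) Nash equilibrium:
assert [cell for cell in payoffs if is_nash(cell)] == [(1, 1)]
```

The check makes the text’s point concrete: the strategy combination that would be better for all players leaves each with an incentive to deviate, so only the Pareto-inferior outcome is self-enforcing.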
A Nash equilibrium is a strategy combination for which no player has an incentive to be the only player to switch to another strategy. John Nash, in articles published in 1950 and 1951 (reprinted in his 1996 volume of essays), proved that any strategic game with finitely many players, each having a finite set of pure strategies, has at least one equilibrium point, provided that players are allowed to choose mixed strategies (strategies that assign probabilities to the possible pure strategies, so that a player’s action at a particular decision node cannot be predicted with certainty). Nash equilibrium has been interpreted as a generalization of A. Cournot’s 1838 analysis of duopoly, in which each of two mineral water suppliers chooses its profit-maximizing output as a best response to the other’s output, taking the other firm’s quantity as given; equilibrium occurs where their reaction functions intersect, so that neither firm can profit by being the only one to change its quantity produced. However, Robert Leonard, in his 1994 article, suggests that this view reads too much into Cournot. The minimax mixed-strategy solution for two-person zero-sum games, whose existence was proved by John von Neumann in 1928, is a special case of Nash equilibrium for n-person, general-sum games. Nash equilibrium need not be unique, and so refinements have been introduced to eliminate as unreasonable some of the Nash equilibria in a game with multiple equilibria, such as admitting as reasonable only those Nash equilibria that are subgame perfect. A subgame perfect Nash equilibrium is a strategy combination that would still be a Nash equilibrium if the game were started at any decision node (even one that would never be reached in equilibrium), so that players make only credible threats (that is, a player does not adopt a strategy implying that, if he or she were ever at a particular off-equilibrium point, the player would do something that would decrease the player’s expected payoff).
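Cournot’s duopoly can be illustrated with a minimal sketch, assuming a linear inverse demand P = a − (q1 + q2) and zero marginal cost (both assumptions are illustrative, not from the text). Each firm’s reaction function gives its best response to the other’s quantity, and iterating the two reaction functions converges to their intersection.

```python
# Cournot duopoly sketch under an assumed linear inverse demand
# P = a - (q1 + q2) with zero marginal cost (illustrative parameters).
a = 12.0

def best_response(q_other):
    """Profit q * (a - q - q_other) is maximized at q = (a - q_other) / 2."""
    return max((a - q_other) / 2, 0.0)

# Iterate the reaction functions until they (numerically) intersect.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

# At the intersection q1 = q2 = a / 3, each quantity is a best response to
# the other, so neither firm can profit by being the only one to change.
assert abs(q1 - a / 3) < 1e-6 and abs(q2 - a / 3) < 1e-6
```

With this linear specification the best-response dynamics contract toward the fixed point, which is exactly the intersection of the reaction functions described in the text.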
For a game with multiple subgame perfect Nash equilibria, the concept of trembling-hand perfect equilibrium, in which each player attaches a small probability to any other player making a mistake, permits further restriction of the equilibria admitted as reasonable.
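A standard way to see how subgame perfection rules out non-credible threats is backward induction in a simple entry-deterrence game; the game and its payoffs below are illustrative assumptions, not from the text.

```python
# Entry-deterrence sketch (illustrative payoffs): the entrant moves first
# (enter / stay out); if entry occurs, the incumbent chooses fight or
# accommodate. Payoff tuples are (entrant, incumbent).
payoff = {
    ("out", None): (0, 4),
    ("enter", "fight"): (-1, 1),
    ("enter", "accommodate"): (2, 2),
}

# Backward induction: at the node after entry (off the equilibrium path of
# a "stay out" profile), the incumbent compares 1 (fight) with 2
# (accommodate) -- the threat to fight would lower its own payoff.
incumbent_choice = max(("fight", "accommodate"),
                       key=lambda action: payoff[("enter", action)][1])
assert incumbent_choice == "accommodate"

# Anticipating this, the entrant compares 0 (stay out) with 2 (enter):
entrant_choice = max(
    ("out", "enter"),
    key=lambda m: payoff[(m, incumbent_choice if m == "enter" else None)][0])
assert entrant_choice == "enter"
# (enter, accommodate) is the subgame perfect equilibrium; the profile
# (out, fight) is also a Nash equilibrium, but it rests on the non-credible
# threat to fight and so is eliminated by subgame perfection.
```

This is the sense in which a subgame perfect equilibrium must remain a Nash equilibrium even at decision nodes that are never reached in equilibrium.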
Because a Nash equilibrium is self-enforcing (no player can gain from being the only one to deviate), it is widely accepted as a plausible solution concept when there is preplay communication among players (especially if the Nash equilibrium is unique). As David G. Pearce’s 1984 article argues, if players cannot talk, or cannot reach agreement on which of multiple Nash equilibria to select, strategy combinations that are not Nash equilibria may nonetheless be rationalizable. A strategy is rationalizable if there exists a consistent set of beliefs about the strategies and beliefs of all the other players for which that strategy is optimal, with each player maximizing his or her expected payoff subject to his or her subjective beliefs. However, rationalizability greatly extends the range of admissible outcomes (as the possibility of binding agreement does for cooperative games), so that players, and game theorists analyzing the games they play, may fall back on Nash equilibrium as a focal point. Nash equilibrium, together with its refinements (especially subgame perfection in multistage games), remains the workhorse of noncooperative game theory, which in turn is the most developed and most widely influential form of game theory, spreading across disciplinary boundaries.
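Pearce’s point that rationalizable play need not be Nash play can be illustrated with a battle-of-the-sexes sketch (the payoffs are illustrative assumptions, not from the text): each pure strategy is a best response to some belief about the opponent, so even miscoordinated profiles are rationalizable, yet only the coordinated profiles are Nash equilibria.

```python
# Battle-of-the-sexes payoffs (illustrative): each player prefers
# coordinating, but they rank the two coordinated outcomes differently.
A = [[2, 0], [0, 1]]  # row player's payoffs
B = [[1, 0], [0, 2]]  # column player's payoffs

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a 2x2 bimatrix game."""
    eqs = []
    for r in range(2):
        for c in range(2):
            if (A[r][c] >= max(A[rr][c] for rr in range(2))
                    and B[r][c] >= max(B[r][cc] for cc in range(2))):
                eqs.append((r, c))
    return eqs

# Row strategy 0 is optimal if row believes column plays 0; column strategy 1
# is optimal if column believes row plays 1. Each choice is supported by a
# consistent belief, so the miscoordinated profile (0, 1) is rationalizable --
# yet it is not a Nash equilibrium:
assert (0, 1) not in pure_nash(A, B)
assert pure_nash(A, B) == [(0, 0), (1, 1)]
```

Without communication to settle which of the two Nash equilibria to play, mutually rational players holding mismatched beliefs can end up at a rationalizable but non-Nash outcome, which is why rationalizability admits so much more than Nash equilibrium does.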
SEE ALSO Evolutionary Games; Game Theory; Nash Equilibrium; Nash, John; Prisoner’s Dilemma (Economics); Strategy and Voting Games; Subgame Perfection
Dimand, Mary Ann, and Robert W. Dimand. 1996. From the Beginnings to 1945. Vol. 1 of A History of Game Theory. London and New York: Routledge.
Dimand, Mary Ann, and Robert W. Dimand, eds. 1997. The Foundations of Game Theory. 3 vols. Cheltenham, U.K., and Lyme, NH: Edward Elgar Publishing.
Leonard, Robert J. 1994. Reading Cournot, Reading Nash: The Creation and Stabilisation of the Nash Equilibrium. Economic Journal 104 (424): 492–511.
Nash, John F., Jr. 1950. Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences 36 (1): 48–49.
Nash, John F., Jr. 1951. Non-Cooperative Games. Annals of Mathematics 54 (2): 286–295.
Nash, John F., Jr. 1996. Essays on Game Theory. Cheltenham, U.K., and Brookfield, VT: Edward Elgar Publishing.
Pearce, David G. 1984. Rationalizable Strategic Behavior and the Problem of Perfection. Econometrica 52 (4): 1029–1050.
Rasmusen, Eric. 2007. Games and Information. 4th ed. Malden, MA: Blackwell.
Von Neumann, John, and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press. Expanded 60th anniversary edition with contributions by Harold W. Kuhn, Ariel Rubinstein, et al., Princeton, NJ: Princeton University Press, 2004.
Weintraub, E. Roy, ed. 1992. Toward a History of Game Theory. Annual Supplement to Vol. 24 of History of Political Economy. Durham, NC: Duke University Press.
Robert W. Dimand