
Game Theory

I. Theoretical Aspects, by Oskar Morgenstern
II. Economic Applications, by Martin Shubik

I THEORETICAL ASPECTS

The theory of games is a mathematical discipline designed to treat rigorously the question of optimal behavior of participants in games of strategy and to determine the resulting equilibria. In such games each participant is striving for his greatest advantage in situations where the outcome depends not only on his actions alone, nor solely on those of nature, but also on those of other participants whose interests are sometimes opposed, sometimes parallel, to his own. Thus, in games of strategy there is conflict of interest as well as possible cooperation among the participants. There may be uncertainty for each participant because the actions of others may not be known with certainty. Such situations, often of extreme complexity, are found not only in games but also in business, politics, war, and other social activities. Therefore, the theory serves to interpret both games themselves and social phenomena with which certain games are strictly identical. The theory is normative in that it aims at giving advice to each player about his optimal behavior; it is descriptive when viewed as a model for analyzing empirically given occurrences. In analyzing games the theory does not assume rational behavior; rather, it attempts to determine what “rational” can mean when an individual is confronted with the problem of optimal behavior in games and equivalent situations.

The results of the interlocking individual actions are expressed by numbers, such as money or a numerically defined utility for each player transferable among all. Games of strategy include games of chance as a subcase; in games of chance the problem for the player is merely to determine and evaluate the probability of each possible outcome. In games of strategy the outcome for a player cannot be determined by mere probability calculations. Specifically, no player can make mere statistical assumptions about the behavior of the other players in order to decide on his own optimal strategy.

But nature, when interfering in a game through chance events, is assumed to be indifferent with regard to the player or players affected by chance events. Since the study of games of chance has given rise to the theory of probability, without which modern natural science could not exist, the expectation is that the understanding of the far more complicated games of strategy may gradually produce similar consequences for the social sciences.

History. In 1710 the German mathematician-philosopher Leibniz foresaw the need and possibility of a theory of games of strategy, and the notion of a minimax strategy (see section on “Two-person, zero-sum games,” below) was first formulated two years later by James Waldegrave. (See the letter from Waldegrave in the 1713 edition of Montmort 1708; see also Baumol & Goldfeld 1967.) The similarity between games of strategy and economic processes was occasionally mentioned, for example, by Edgeworth in his Mathematical Psychics (1881). Specialized theorems, such as Ernst Zermelo’s on chess, were stated for some games; and Emile Borel developed a limited minimax strategy, but he denied the possibility of a general theorem. It was not until John von Neumann (1928) proved the fundamental theorem that a true theory of games emerged (see section on “Two-person, zero-sum games,” below). In their Theory of Games and Economic Behavior, von Neumann and Morgenstern (1944) extended the theory, especially to games involving more than two players, and gave applications of the theory in economics. Since then, throughout the world a vast literature has arisen in which the main tenets of the theory have been widened and deepened and many new concepts and ideas introduced. The four-volume Contributions to the Theory of Games (Kuhn & Tucker 1950-1959) and Advances in Game Theory (Dresher, Shapley, & Tucker 1964) give evidence of this continuing movement. These works contain extensive bibliographies, but see especially Volume 4 of Contributions to the Theory of Games.

Game theory concepts

Games are described by specifying possible behavior within the rules of the game. The rules are in each case unambiguous; for example, certain moves are allowed for specific pieces in chess but are forbidden for others. The rules are also inviolate. When a social situation is viewed as a game, the rules are given by the physical and legal environment within which an individual’s actions may take place. (For example, in a market individuals are permitted to bargain, to threaten with boycotts, etc., but they are not permitted to use physical force to acquire an article or to attempt to change its price.) The concrete occasion of a game is called a play, which is described by specifying, out of all possible, allowable moves, the sequence of choices actually made by the players or participants. After the final move, the umpire determines the payments to each player. The players may act singly, or, if the rules of the game permit it and if it is advantageous, they may form coalitions. When a coalition forms, the distribution of the payments to the coalition among its members has to be established. All payments are stated in terms of money or a numerically defined utility that is transferable from one player to another. The payment function is generally assumed to be known to the players, although modifications of this assumption have been introduced, as have other modifications—for example, about the character of the utilities and even about the transferability of payments.

The “extensive” form of a game, given in terms of successive moves and countermoves, can be represented mathematically by a game tree, which describes the unfolding of the moves, the state of information of the players at the moment of each choice, and the alternatives for choices available to each player at each occasion. This description can, in a strict mathematical sense, be given equivalently in a “normalized” form: each player, uninformed about the choices made by any other player, chooses a single number that identifies a “strategy” from his given finite or infinite set of strategies. When all personal choices and a possible random choice are made (simultaneously), the umpire determines the payments. Each strategy is a complete plan of playing, allowing for all contingencies as represented by the choices and moves of all other players and of nature. The payoff for each player is then represented by his mathematical expectation of the outcome for himself. The final description of the game therefore involves only the players’ strategies and no further chance elements.
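As a minimal illustration of passing from the extensive to the normalized form, consider the following Python sketch; the tiny two-move game and its payoff numbers are hypothetical, not taken from the text. Each pure strategy of the second player is a complete plan specifying a reply to every move the first player might make, and the game tree collapses into a payoff array indexed only by the strategy choices.

```python
from itertools import product

# A hypothetical two-move game, payoffs to player A (zero-sum):
# A chooses L or R; B observes A's move and then chooses l or r.
outcome = {("L", "l"): 3, ("L", "r"): -1, ("R", "l"): 0, ("R", "r"): 2}

# A pure strategy for B is a complete plan: one choice for each of B's
# information sets (i.e., for each possible move by A).
b_strategies = list(product(["l", "r"], repeat=2))  # (reply to L, reply to R)
a_strategies = ["L", "R"]

# Normalized form: rows are A's strategies, columns are B's strategies.
for a in a_strategies:
    row = []
    for reply_to_L, reply_to_R in b_strategies:
        b_move = reply_to_L if a == "L" else reply_to_R
        row.append(outcome[(a, b_move)])
    print(a, row)
```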

The theory explicitly assumes that each player, besides being completely informed about the alternative payoffs due to all moves made or strategies chosen, can perform all necessary computations needed to determine his optimal behavior. (This assumption of complete information is also commonplace in current economic theory, although seldom stated explicitly.)

The payments made by all players may add up to zero, as in games played for entertainment. In this case the gains of some are exactly balanced by the losses of others. Such games are called zero-sum games. In other instances the sum of all payments may be a constant (different from zero) or may be a variable; in these cases all players may gain or lose. Applications of game theory to economic or political problems require the study of these games, since in a purchase, for example, both sides gain. An economy is normally productive so that the gains outweigh any losses, whereas in a war both sides may lose.

If a player chooses a particular strategy as identified by its number, he selects a pure strategy; if he allows a chance mechanism, specified by himself, to make this selection for him, he chooses a mixed or statistical strategy. The number of pure strategies for a player normally is finite, partly because the rules of games bring the play to an end after a finite number of moves, partly because the player is confronted with only a finite number of alternatives. However, it is possible to treat cases with infinitely many strategies as well as to consider even the borderline case of games with infinitely many players. These serve essentially to study pathological examples or to explore certain mathematical characteristics.

Game theory uses essentially combinatorial and set-theoretical concepts and tools, since no specific calculus has as yet evolved—as happened when differential and integral calculus were invented simultaneously with the establishment of classical mechanics. Differential calculus is designed to determine maxima and minima, but in games, as well as in politics, these are not defined, because the outcome of a player’s actions does not depend on his actions alone (plus nature). This applies to all players simultaneously. A maximum (or minimum) of a function can be achieved only when all variables on which the maximum (minimum) depends are under the complete control of the would-be maximizer. This is never the case in games of strategy. Therefore, in the equivalent business, political, or military operations there obtains no maximum (minimum) problem, whether with or without side conditions, as assumed in the classical literature of these fields; rather one is confronted there with an entirely different conceptual structure, which the theory of games analyzes.

Two-person, zero-sum games

The simplest game of strategy is a two-person, zero-sum game, in which players A and B each have a finite number of strategies and make their choices unknown to each other. Let P be the payoff to the first player, and let −P be the payoff to the second player. Then P is greater than, equal to, or less than 0, depending on whether A wins, draws, or loses. Let A1, A2, …, An be the strategies available to player A and B1, B2, …, Bm be the strategies available to player B. In the resulting n × m array of numbers, each row represents a pure strategy of A, each column a pure strategy of B. The intersections of the rows and columns show the payoffs to player A from player B. The first player wishes to maximize this payoff, while the second wishes to minimize it. This array of numbers is called the payoff matrix, an example of which is presented in Table 1, where payments go from B to A. Player A’s most desirable payoff is 8; B’s is −10. Should player A pick strategy A1, either of these two events may happen depending on B’s action. But if A picks A1, B in his own interest would want to pick B3, which would mean that A would have to pay 10 units to B instead of receiving 8. The row minima represent the worst that could happen to A for each of his strategies, and it is natural that he would want to make as great as possible the least gain he can expect from each; that is, he seeks the maximum of the row minima, or the maximin, which in Table 1 is −1 (strategy A3). Conversely, B will wish to minimize the column maxima—that is, seek the

Table 1 – Payoff matrix for a two-person, zero-sum game

A's strategy \ B's strategy     B1     B2     B3     Row minima
A1                               8     −3    −10        −10
A2                               0     −2      6         −2
A3                               4     −1      5         −1
Column maxima                    8     −1      6

minimax—which is also −1 (strategy B2). We would say that each player is using a minimax strategy—that is, each player selects the strategy that minimizes his maximum loss. Any deviation from the optimal strategies A3 and B2 is fraught with danger for the deviating player, so that each will choose the strategy that contains the so-called saddle point of the payoff function. The saddle point is defined as the point at which the maximin equals the minimax. At this point the least that A can secure for himself is equal to the most that B may have to part with. (In the above example A has to pay one unit to B.) If there is more than one saddle point in the payoff matrix, then they are all equal to each other. Games possessing saddle points in pure strategies are called specially strictly determined. In these games it is immaterial whether the choice of the pure strategy by either player is made openly before the other makes his choice. Games of perfect information—that is, games in which each player at each move is always informed about the entire previous history of the play, so that what is preliminary to his choice is also anterior to it—are always specially strictly determined. Chess belongs in this class; bridge does not, since each of the two players (one “player” being the north-south team, the other the east-west team) is not even completely informed about himself—for example, north does not know precisely what cards south holds.
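The maximin and minimax reasoning for Table 1 can be checked mechanically. The short Python sketch below reproduces the row minima, the column maxima, and the saddle point of value −1; only the payoff matrix of Table 1 is used.

```python
# Payoff matrix of Table 1 (payments from B to A); rows are A's strategies,
# columns are B's strategies.
payoff = [
    [8, -3, -10],   # A1
    [0, -2,   6],   # A2
    [4, -1,   5],   # A3
]

row_minima = [min(row) for row in payoff]
col_maxima = [max(col) for col in zip(*payoff)]

maximin = max(row_minima)   # best of the worst outcomes for A
minimax = min(col_maxima)   # best of the worst outcomes for B

print("row minima:", row_minima)      # [-10, -2, -1]
print("column maxima:", col_maxima)   # [8, -1, 6]
print("saddle point in pure strategies:", maximin == minimax)  # True, value -1
```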

Most games will have no saddle points in pure strategies; they are then said to be not strictly determined. The simplest case is matching pennies. The payoff matrix for this game is presented in Table 2. Here, if one player has to choose openly before the other does, he is sure to lose. Each player will therefore strive to prevent information about his choice from flowing to the other. This is accomplished by the player’s choice of a chance mechanism, which selects from among the available pure strategies with probabilities determined by the player. In matching pennies, the chance mechanism should select “heads” with probability ½ and “tails” with probability ½. This randomization may be achieved by tossing the coin before showing it. If there is a premium, say on matching heads over matching tails, the payoff matrix would reflect this, and the probabilities with which the two sides of the coin have to be played in order to prevent disclosure of a pattern of playing to the benefit of the opponent would no longer be ½ for heads and ½ for tails. Thus, when there is no saddle point in pure strategies a randomization by a chance mechanism is called for. The players are then said to be using mixed, or statistical, strategies. This does not transform

Table 2 – Payoff matrix for matching pennies

A's penny \ B's penny    Heads    Tails    Row minima
Heads                        1       −1            −1
Tails                       −1        1            −1
Column maxima                1        1

a game of strategy into a game of chance: the strategic decision is the specification of the randomization device and the assignment of the proper probabilities to each available pure strategy. Whether pure or mixed strategies are needed to assure a saddle point, the theory at no point requires that the players make assumptions about each other’s intelligence, guesses, and the like. The choice of the optimal strategy is independent of all such considerations. Strategies selected in this way are perfect from the defensive point of view. A theory of true offensive strategies requires new ideas and has not yet been developed.

Von Neumann proved that each matrix game can be made strictly determined by introducing mixed strategies. This is the fundamental theorem of game theory. It shows that each zero-sum, two-person game has a saddle point in mixed strategies and that optimal mixed strategies exist for each of the two players. The original proof of this theorem made use of rather complex properties of set theory, functional calculus, and combinatorics. Since the original proof was given, a number of alternative, simplified versions have been given by various authors. The numerical solution of a matrix game with m columns and n rows demands the solution of a system of linear inequalities of m + n + 1 unknowns, the m + n probabilities for the strategies of players A and B and the minimax value. There exist many techniques for solving such systems; notably, an equivalence with solving dual linear programs has proved to be of great importance [see Programming]. High-speed computers are needed to cope with the rapid rise in the number of required arithmetical operations. A more modest view of mixed strategies is the notion of behavioral strategies, which are the probability distributions over each player’s information sets in the extensive form of the game. For games such as chess, even the optimal pure strategy cannot be computed, although the existence of a saddle point in pure strategies can be proved and either white or black has a winning pure strategy no matter what the other does (or both have pure strategies that enforce a draw). The problems of finding further computational techniques are actively being investigated.
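The linear-programming equivalence can be sketched as follows, assuming the SciPy library is available. The row player chooses probabilities x and a value v, maximizing v subject to the requirement that every column of the payoff matrix yield at least v in expectation; the column player's problem is the dual. The matrix reused here is that of Table 1, for which the computed optimal strategy collapses to the pure strategy A3 with value −1.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix from Table 1 (payments from the column player to the row player).
A = np.array([[8, -3, -10],
              [0, -2,   6],
              [4, -1,   5]], dtype=float)
n, m = A.shape

# Variables: x_1..x_n (row player's mixed strategy) and v (the game value).
# Maximize v  subject to  (A^T x)_j >= v for every column j,  sum(x) = 1,  x >= 0.
c = np.zeros(n + 1)
c[-1] = -1.0                                  # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((m, 1))])     # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(m)
A_eq = np.hstack([np.ones((1, n)), [[0.0]]])  # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v unrestricted in sign

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n], res.x[-1]
print("optimal mixed strategy for the row player:", x.round(3))
print("value of the game:", round(v, 3))
```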

n-Person, zero-sum games

When the number of players increases to n ≥ 3, new phenomena arise even when the zero-sum restriction remains. It is now possible that cooperation will benefit the players. If this is not the case, the game is called inessential. In an essential game the players will try to form coalitions and act through these in order to secure their advantage. Different coalitions may have different strength. A winning coalition will have to divide its proceeds among its members, and each member must be satisfied with the division in order that a stable solution obtains [see Coalitions].

Any possible division of payments among all players is called an imputation, but only some of all possible imputations will be contained in a solution. An inessential game has precisely one imputation that is better than any other, that is, one that dominates all others. This unique imputation forms the solution, but this uniqueness is trivial and applies only to inessential games. There is no cooperation in inessential games.

A solution of an essential game is characteristically a nonempty set of several imputations with the following properties: (1) No imputation in the set is dominated by another imputation in the set. (2) All imputations not in the set are dominated by an imputation contained in the set. There may be an infinite number of imputations in a solution set, and there may be several solution sets, each of which has the above properties. Furthermore, it should be noted that every imputation in a solution set is dominated by some imputation not in that set, but property (2) assures that such a dominating imputation is, in turn, dominated by an imputation in the solution set.

To be considered as a member of a coalition, a player may have to offer compensations or side payments to other prospective members. A compensation or side payment may even take the form of giving up privileges that the rules of the game may attribute to a player. A player may be admitted to a coalition under terms less favorable than those obtained by the players who form the initial core of a coalition (this happens first when n = 4). Also, coalitions of different strength can be distinguished. Discrimination may occur; for example, some players may consider others “taboo”—that is, unworthy as coalition partners. This leads to the types of discriminatory solutions that already occur when n = 3. Yet discrimination is not necessarily as bad for the affected player as defeat is for a nondiscriminated player, because cooperation against the discriminated player may not be perfect. A player who by joining a coalition does not contribute more to it than what he can get by playing for himself merely has the role of a dummy.

The fundamental fact of cooperation is that the players in a coalition can each obtain more than they could obtain by playing alone. This expresses the nonadditivity—specifically, the superadditivity—of value, the explanation of which has long been recognized as a basic problem in economics and sociology. In spite of many efforts, no solution was found, but it is now adequately described by the characteristic function v(S), a numerical set function that states for any cooperative n-person game the proceeds of the coalition S, and an imputation that describes the distribution of all payments among all players (von Neumann & Morgenstern 1944, chapter 6).
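A small sketch may make the characteristic function concrete. The three-person game below is hypothetical; the code checks superadditivity, v(S ∪ T) ≥ v(S) + v(T) for disjoint coalitions S and T, and verifies that a candidate payment vector is an imputation (it exhausts v(N) and gives each player at least what he can secure alone).

```python
from itertools import combinations

# A hypothetical characteristic function v(S) for a three-person game,
# given as a dict keyed by frozensets of players.
players = {1, 2, 3}
v = {
    frozenset(): 0,
    frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60,
    frozenset({1, 2, 3}): 90,
}

def is_superadditive(v, players):
    """Check v(S | T) >= v(S) + v(T) for all disjoint coalitions S, T."""
    subsets = [frozenset(c) for r in range(len(players) + 1)
               for c in combinations(players, r)]
    return all(v[s | t] >= v[s] + v[t]
               for s in subsets for t in subsets if not (s & t))

print(is_superadditive(v, players))  # True: cooperation never hurts

# An imputation distributes v(N) among the players and gives each at least v({i}).
x = {1: 30, 2: 30, 3: 30}
print(sum(x.values()) == v[frozenset(players)],
      all(x[i] >= v[frozenset({i})] for i in players))
```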

Since there may be many solutions to a cooperative (essential) n-person game, the question arises as to which of them will in fact prevail. Each solution may correspond to a specific mode of behavior of the players or a specific form of social organization. This expresses the fact that in the same physical setting different types of social organization can be established, each one consistent in itself but in contradiction with other organizations. For example, we observe that the same technology allows the maintenance of varying economic systems, income distributions, and so on. If a stable standard of behavior exists (a mode of behavior accepted by society), then it can be argued that the only relevant solution is the one corresponding to this standard.

The choice of an imputation not in the solution set, while advantageous to each of those in the particular coalition that is able to enforce this imputation, cannot be maintained because another coalition can enforce another imputation, belonging to the solution set, that dominates the first one. Hence, a standard is set and proposals for imputations that are not in the solution will be rejected. The theory cannot state which imputation of all those belonging to the standard of behavior actually will be chosen—that is, which coalition will form. Work has been done to introduce new assumptions under which this may become feasible. No imputation contained in the solution set guarantees stability by itself, since each is necessarily dominated from the outside. But in turn each imputation is always protected against threats by another one within the solution set that dominates the imputation not in the solution set.

Since an imputation is a division of proceeds among the players, these conditions define a certain fairness, such that the classical problems of fair division (for example, cutting a cake) become amenable to game-theoretic analysis.

This conceptual structure is more complicated than the conventional view that society could be organized according to some simple principle of maximization. The conventional view would be valid only if there were inessentiality—that is, if there were no advantage in cooperation, or if cooperation were forbidden, or, finally, if a supreme authority were to do away with the entire imputation problem by simply assigning shares of income to the members of the society. Inessentiality would be the case for a strictly communistic society, which is formally equivalent to a Robinson Crusoe economy. This, in turn, is the only formal setup under which the classical notion of marginal utility is logically valid. Whether cooperation through formation of coalitions is advantageous to participants in a society, whether such cooperation, although advantageous, is forbidden, or whether compensations or side payments are ruled out by some authority although coalitions may be entered—these are clearly empirical questions. The theory should take care of all eventualities, and current investigations explore the different avenues. In economic life, mergers, labor unions, trade associations, cartels, etc., express the powerful tendencies toward cooperation. The cooperative case with side payments is the most comprehensive, and the theory was originally designed to deal with this case. Important results have been obtained for cooperative games without side payments (Aumann & Peleg 1961), and the fruitful idea of “bargaining sets” has been introduced (Aumann & Maschler 1964).

All indications point overwhelmingly to the benefits of cooperation of various forms and hence to the empirical irrelevance of those noncooperative, inessential games with uniquely determined solutions consisting only of one single imputation dominating all others (as described in the Lausanne school’s general economic equilibrium).

Cooperation may depend on a particular flow of information among the players. Since the required level may not in fact be attainable, noncooperative solutions become important. Economic markets in which players act independently and have no incentive to deviate from a given state have been studied (Nash 1950). Equilibrium points can be determined as those points for which unilateral changes in strategy are unprofitable to everyone. As Nash has shown, every finite game, in the domain of mixed strategies, has at least one equilibrium point. If there is more than one equilibrium point, an intermixture of strategy choices need not give another equilibrium point, nor is the payoff to players the same if the points differ from each other.
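For finite games in normal form, the equilibrium-point condition is easy to verify directly. The sketch below, with a hypothetical two-person, nonzero-sum payoff pair, lists the pure-strategy points at which no unilateral change of strategy is profitable; mixed-strategy equilibria, whose existence Nash proved, would require further computation.

```python
import numpy as np

# A hypothetical two-person, nonzero-sum game in normal form.
# A_pay[i, j] is the row player's payoff, B_pay[i, j] the column player's.
A_pay = np.array([[3, 0],
                  [5, 1]])
B_pay = np.array([[3, 5],
                  [0, 1]])

# A pair (i, j) is an equilibrium point if neither player gains by a
# unilateral change of strategy.
equilibria = [(i, j)
              for i in range(A_pay.shape[0])
              for j in range(A_pay.shape[1])
              if A_pay[i, j] >= A_pay[:, j].max()    # row player cannot improve
              and B_pay[i, j] >= B_pay[i, :].max()]  # column player cannot improve
print(equilibria)  # [(1, 1)] for this prisoner's-dilemma-like example
```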

There is no proof, as yet, that every cooperative n-person, zero-sum game for any n > 4 has a solution of the specified kind. However, every individual game investigated, even with arbitrarily large n, has been found to possess a solution. The indications are that the proof for the general case will eventually be given. Other definitions of solutions— still differing from that of the Lausanne-Robinson Crusoe convention—are possible and somewhat narrow the field of choices. They are inevitably based on further assumptions about the behavior of the participants in the game, which have to be justified from case to case.

Simple games

In certain n-person games the sole purpose is to form a majority coalition. These games are the “simple” games in which voting takes place. Ties in voting may occur, and weights may differ from one player to another; for example, the chairman of a committee may have more than one vote. A player’s presence may therefore mean the difference between victory and defeat. Games of this nature can be identified with classical cases of production, where the players represent factors of production. It has been proven that even in relatively simple cases, although complete substitutability among players may exist, substitution rates may be undetermined and values are attributed to the players (factors) only by virtue of their relation to each other and not by virtue of their individual contribution. Thus, contrary to current economic doctrine, substitutability does not necessarily guarantee equality as far as value is concerned.

Simple games are suited for interpretation of many political situations in that they allow the determination of the weights, or power, of participants in decision processes. A particular power index has been proposed by Shapley. It is based on the notion of the average contribution a player can make to the coalitions to which he may belong, even considering, where necessary, the order in which he joins them. The weight of a senator, a congressman, and the president in the legislative process has been calculated for the United States. The procedure is applicable to other political systems—for example, the Security Council of the United Nations (Shapley 1953).
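A minimal sketch of the Shapley power index for a weighted voting body follows; the weights and quota are hypothetical. The index counts, over all orderings in which a coalition might assemble, the fraction of orderings in which a given player is pivotal, that is, the player whose accession first turns the growing coalition into a winning one.

```python
from itertools import permutations
from math import factorial

# A hypothetical weighted majority game: player weights and a passing quota.
weights = {"A": 3, "B": 2, "C": 1, "D": 1}   # e.g., a chairman with extra votes
quota = 4                                     # votes needed to win

def shapley_shubik(weights, quota):
    """Fraction of orderings in which each player is pivotal (turns a losing
    coalition into a winning one)."""
    players = list(weights)
    counts = dict.fromkeys(players, 0)
    for order in permutations(players):
        running = 0
        for p in order:
            running += weights[p]
            if running >= quota:          # p is the pivotal voter in this order
                counts[p] += 1
                break
    n_orders = factorial(len(players))
    return {p: counts[p] / n_orders for p in players}

print(shapley_shubik(weights, quota))
# Player A receives index 0.5, more than its 3/7 share of the votes alone suggests.
```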

Composition of games

Every increase in the number of players brings new phenomena: with the increase from two to three players, coalitions become possible, from three to four, ties may occur among coalitions, etc. There is no guarantee that for very large n an asymptotic convergence of solutions will occur, since coalition formation always reduces large numbers of individual players to small numbers of coalitions acting upon each other. Thus, the increase in the number of players does not necessarily lead to a simplification, as in the case of an enlargement of the numbers of bodies in a physical system, which then allows the introduction of classical methods of statistical averages as a simplification. (When the game is inessential, the number of participants is irrelevant in any case.)

An effective extension of the theory by the enlargement of numbers can be achieved by viewing games played separately as one composite game and by introducing contributions to, or withdrawals from, the proceeds of a given game by a group of players outside the game under consideration. These more complicated notions involve constant-sum games and demonstrate, among other things, how the coalition formation, the degree of cooperation among players, and consequently the distribution of the proceeds among them are affected by the availability of amounts in excess of those due to their own strategies alone. Strategy is clearly greatly influenced by the availability of greater payments than those that can be made by only the other players. Thus, coalitions—namely, social structures—cannot be maintained if outside contributions become larger than specified amounts, such that as a consequence no coalition can exhaust the amounts offered. It can also be shown that the outside source, making contributions or withdrawals, can never be less than a group of three players.

These concepts and results are obviously of a rather complicated nature; they are not always directly accessible to intuition, as corresponds to a truly mathematical theory. When that level is reached, confidence in the mathematical results must override intuition, as the experience in the natural sciences shows. The fact that solutions of n-person games are not single numbers or single sets of numbers—but that the above-mentioned, more complicated structures emerge—is not an imperfection of the theory: it is a fundamental property of social organization that can be described only by game-theoretic methods.

Nonzero-sum games

Nonzero-sum games can be reduced to zero-sum games—which makes that entire theory applicable—by the introduction of a fictitious player, so that an n-person, nonzero-sum game becomes equivalent to an (n + 1)-person, zero-sum game. The fictitious player is either winning or losing, but since he is fictitious he can never become a member of a coalition. Yet he can be construed as proposing alternative imputations, thereby influencing the players’ strategies and thus the course of the play. He will lose according to the degree of cooperation among the players. If the players cooperate perfectly, the maximum social benefit will be attained. In these games there is an increased role of threats, and their costs to the threatening player, although threats already occur in the zero-sum case.
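The fictitious-player construction can be written out in a few lines. In the hypothetical two-person, nonzero-sum payoff tables below, the fictitious player's payoff is defined as the negative of the real players' joint payoff, so the augmented game is zero-sum by construction.

```python
# A minimal sketch of the fictitious-player construction: a two-person,
# nonzero-sum game is embedded in a three-person, zero-sum game by adding a
# fictitious player who absorbs the negative of the joint gain.
# The payoffs below are hypothetical.
A_pay = [[4, 0],
         [6, 2]]   # row player's payoff
B_pay = [[4, 6],
         [0, 2]]   # column player's payoff

fictitious = [[-(A_pay[i][j] + B_pay[i][j]) for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        total = A_pay[i][j] + B_pay[i][j] + fictitious[i][j]
        assert total == 0   # the augmented game is zero-sum by construction

print(fictitious)  # the fictitious player "loses" exactly what the real players gain
```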

The discriminatory solutions, first encountered for the three-person, zero-sum game, serve as instruments to approach these problems. Most applications to economics involve gains by the community—an economy being productive and there being no voluntary exchange unless both sides profit—while many other social phenomena fall under the domain of zero-sum games. The nonzero-sum theory is so far the part of game theory least developed in detail, although its foundations seem to be firmly established by the above procedure.

Applications

Game theory is applicable to the study of those social phenomena in which there are agents striving for their own advantage but not in control of all the variables on which the outcome depends. The wide range of situations of which this is true is obvious: they are economic, political, military, and strictly social in nature. Applications have been made in varying degree to all areas; some have led to experiments that have yielded important new insights into the theory itself and into special processes such as bargaining. Finally, the possibility of viewing the basic problem of statistics as a game against nature has given rise to modern statistical decision theory (Wald 1950). The influence of game theory is also evident in philosophy, information theory, cybernetics, and even biology.

Oskar Morgenstern

[See also the biography of Von Neumann.]

BIBLIOGRAPHY

Aumann, R. J.; and Peleg, B. 1961 Von Neumann-Morgenstern Solutions to Cooperative Games Without Side Payments. American Mathematical Society, Bulletin 66:173–179.

Aumann, R. J.; and Maschler, M. 1964 The Bargaining Set for Cooperative Games. Pages 443-476 in M. Dresher, L. S. Shapley, and A. W. Tucker (editors), Advances in Game Theory. Princeton Univ. Press.

Baumol, William J.; and Goldfeld, Stephen M. (editors) 1967 Precursors in Mathematical Economics. Unpublished manuscript. → To be published in 1967 or 1968 by the London School of Economics and Political Science. Contains the letter from Waldegrave to Remond de Montmort, first published in the second (1713) edition of Montmort (1708), describing his formulation, and a discussion by Harold W. Kuhn of the identity of Waldegrave.

Berge, Claude 1957 Théorie générale des jeux à n personnes. Paris: Gauthier-Villars.

Blackwell, David; and Girshick, M. A. 1954 Theory of Games and Statistical Decisions. New York: Wiley.

Braithwaite, Richard B. 1955 Theory of Games as a Tool for the Moral Philosopher. Cambridge Univ. Press.

Burger, Ewald (1959) 1963 Introduction to the Theory of Games. Englewood Cliffs, N.J.: Prentice-Hall. → First published in German.

Dresher, Melvin 1961 Games of Strategy: Theory and Applications. Englewood Cliffs, N.J.: Prentice-Hall.

Dresher, Melvin; Shapley, L. S.; and Tucker, A. W. (editors) 1964 Advances in Game Theory. Annals of Mathematics Studies, Vol. 32. Princeton Univ. Press.

Edgeworth, Francis Y. (1881) 1953 Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences. New York: Kelley.

Frechet, Maurice; and Von Neumann, John 1953 Commentary on the Three Notes of Emile Borel. Econometrica 21, no. 1:118–127.

Karlin, Samuel 1959 Mathematical Methods and Theory in Games, Programming and Economics. 2 vols. Reading, Mass.: Addison-Wesley.

Kuhn, Harold W.; and Tucker, A. W. (editors) 1950-1959 Contributions to the Theory of Games. 4 vols. Princeton Univ. Press.

Luce, R. Duncan; and Raiffa, Howard 1957 Games and Decisions: Introduction and Critical Survey. A Study of the Behavioral Models Project, Bureau of Applied Social Research, Columbia University. New York. → First published in 1954 as A Survey of the Theory of Games, Columbia University, Bureau of Applied Social Research, Technical Report No. 5.

McKinsey, John C. C. 1952 Introduction to the Theory of Games. New York: McGraw-Hill.

[Montmort, Pierre Remond DE] (1708) 1713 Essay d’analyse sur les jeux de hazard. 2d ed. Paris: Quillau. → Published anonymously.

Morgenstern, Oskar 1963 Spieltheorie und Wirtschaftswissenschaft. Vienna: Oldenbourg.

Nash, John F. Jr. 1950 Equilibrium in n-Person Games. National Academy of Sciences, Proceedings 36:48–49.

Princeton University Conference 1962 Recent Advances in Game Theory. Princeton, N.J.: The Conference.

Shapley, L. S. 1953 A Value for n-Person Games. Volume 2, pages 307-317 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press.

Shapley, L. S.; and Shubik, Martin 1954 A Method for Evaluating the Distribution of Power in a Committee System. American Political Science Review 48:787–792.

Shubik, Martin (editor) 1964 Game Theory and Related Approaches to Social Behavior: Selections. New York: Wiley.

Suzuki, Mitsuo 1959 Gemu no riron. Tokyo: Keisho Shobo.

Ville, Jean 1938 Sur la théorie générale des jeux où intervient l'habileté des joueurs. Pages 105-113 in Emile Borel (editor), Traité du calcul des probabilités et de ses applications. Volume 4: Applications diverses et conclusion. Paris: Gauthier-Villars.

Vogelsang, Rudolf 1963 Die mathematische Theorie der Spiele. Bonn: Dümmler.

Von Neumann, John (1928) 1959 On the Theory of Games of Strategy. Volume 4, pages 13-42 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press. → First published in German in Volume 100 of the Mathematische Annalen.

Von Neumann, John; and Morgenstern, Oskar (1944) 1964 Theory of Games and Economic Behavior. 3d ed. New York: Wiley.

Vorob'ev, N. N. (editor) 1961 Matrichnye igry. Moscow: Gosudarstvennoe Izdatel'stvo Fiziko-Matematicheskoi Literatury. → A collection of translations into Russian from foreign-language publications.

Wald, Abraham (1950) 1964 Statistical Decision Functions. New York: Wiley.

Williams, John D. 1954 The Compleat Strategyst: Being a Primer in the Theory of Games and Strategy. New York: McGraw-Hill.

II ECONOMIC APPLICATIONS

The major economic applications of game theory have been in oligopoly theory, bargaining theory, and general equilibrium theory. Several distinct branches of game theory exist and need to be identified before our attention is limited to economic behavior. John von Neumann and Oskar Morgenstern, who first explored in depth the role of game theory in economic analysis (1944), presented three aspects of game theory which are so fundamentally independent of one another that with a small amount of editing their opus could have been published as three independent books.

The first topic was the description of a game, or interdependent decision process, in extensive form. This provided a phraseology (“choice,” “decision tree,” “move,” “information,” “strategy,” and “payoff”) for the precise definition of terms, which has served as a basis for studying artificial intelligence, for developing the behavioral theory of the firm (Cyert & March 1963), and for considering statistical decision making [see Decision Theory]. The definition of “payoff” has been closely associated with developments in utility theory [see Utility].

The second topic was the description of the two-person, zero-sum game and the development of the mathematical theory based upon the concept of the minimax solution. This theory has formal mathematical connections with linear programming and has been applied successfully to the analysis of problems of pure conflict; however, its application to the social sciences has been limited because pure conflict of interests is the exception rather than the rule in social situations [see Programming].

The third subject to which von Neumann and Morgenstern directed their attention was the development of a static theory for the n-person (n ≥ 3), constant-sum game. They suggested a set of stability and domination conditions which should hold for a cooperative solution to an n-person game. It must be noted that the implications of this solution concept were developed on the assumption of the existence of a transferable, interpersonally comparable linear utility which provides a mechanism for side payments. Since the original work of von Neumann and Morgenstern, twenty to thirty alternative solution concepts for the n-person, non-constant-sum game have been suggested. Some have been of purely mathematical interest, but most have been based on considerations of bargaining, fair division, social stability, and other aspects of human affairs. Many of the solution concepts do not use the assumption of transferable utility.

Oligopoly and bargaining

Markets in which there are only a few sellers (oligopoly), two sellers (duopoly, a special case of oligopoly), one seller and one buyer (bilateral monopoly), and so on, lend themselves to game-theoretic analyses because the fate of each participant depends on the actions taken by the other participant or participants. The theory of games has provided a unifying basis for the mathematical and semimathematical works dealing with such situations and has also provided some new results. The methodology of game theory requires explicit and detailed definition of the strategies available to the players and of the payoffs associated with the strategies. This methodology has helped to clarify the different aspects of intent, behavior, and market structure in oligopolistic markets (Shubik 1957). So-called conjectural variations and lengthy statements regarding an oligopolist’s (or duopolist’s or bargainer’s) moves and countermoves can be investigated in a unified way when expressed in terms of strategies.

Oligopoly

Perhaps the most pervasive concept underlying the writings on oligopoly is that of a noncooperative equilibrium. A group of individuals is in a state of noncooperative equilibrium if, in the individual pursuit of his own self-interest, no one in the group is motivated to change his strategy. This concept is basic in the works of Cournot, Bertrand, Edgeworth, Chamberlin, von Stackelberg, and many others. Nash (1951) has presented a general theory of noncooperative games, based on the equilibrium-point solution. This theory is directly related to Chamberlin’s theory of monopolistic competition, among others.
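The noncooperative equilibrium can be illustrated with a Cournot duopoly sketch. The linear demand curve, unit cost, and starting outputs below are assumed for illustration; iterating best responses drives both outputs toward the equilibrium quantity (a − c)/3b, at which neither firm gains from a unilateral change.

```python
# A minimal sketch of a noncooperative (Cournot) equilibrium in a duopoly,
# under assumed linear demand p = a - b*(q1 + q2) and constant unit cost c.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    """Quantity that maximizes this firm's profit, taking the rival's output as given."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):                 # iterate best responses until they settle
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 2), round(q2, 2))    # both approach (a - c) / (3b) = 30
# Neither firm can gain by a unilateral change of output: an equilibrium point.
```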

The outcome given by a solution is called Pareto optimal if no participant can be made better off without some other participant’s being made worse off. Noncooperative solutions, whose outcomes need not be Pareto optimal, have been distinguished from cooperative solutions, whose outcomes must be Pareto optimal. Also, equilibrium points are distinguished on the basis of whether the oligopoly model studied is static or dynamic. In much of the literature on oligopoly, quasi-cooperative solutions have been advanced and quasi-dynamic models have been suggested. Thus, while the Chamberlin large-group equilibrium can be interpreted as the outcome of a static noncooperative game, the small-group equilibrium and the market resolution suggested by Fellner (1949) are cast in a quasi-dynamic, quasi-cooperative framework. A limited amount of development of games of survival (Milnor & Shapley 1957) and games of economic survival (Shubik & Thompson 1959) has provided a basis for the study of multiperiod situations and for an extension of the noncooperative equilibrium concept to include quasi-cooperative outcomes.

New results. The recasting of oligopoly situations into a game-theory context has produced some new results in oligopoly theory (see, for example, Mayberry, Nash, & Shubik 1953; Shubik 1959a). Nash (1953) and Shubik (1959a) have developed the definition of “optimum threat” in economic warfare. The kinky oligopoly demand curve and the more general problem of oligopolistic demand have been re-examined and interpreted. Other results concern stability and the Edgeworth cycle in price-variation oligopoly; duopoly with both price and quantity as independent variables; and the development of diverse concepts applicable to cartel behavior, such as blocking coalitions (Scarf 1965), discriminatory solutions, and decomposable games.

Selten (1965) has been concerned with the problem of calculating the noncooperative equilibria for various classes of oligopolistic markets. His work has focused on both the explicit calculation and the uniqueness of equilibrium points. Vickrey (1961), Griesmer and Shubik (1963), and others have studied a class of game models applicable to bidding and auction markets. Working from the viewpoint of marketing and operations research, Mills (1961) and others have constructed several noncooperative game-theoretic models of competition through advertising. Jacot (1963) has considered problems involving location and spatial competition.

Behavioristic findings. Game theory can be given both a normative and a behavioristic interpretation. The meaning of “rational behavior” in situations involving elements of conflict and cooperation is not well defined. No single set of normative criteria has been generally accepted, and no universal behavior has been validated. Closely related to and partially inspired by the developments in game theory, there has been a growth in experimental gaming, some of which has been in the context of economic bargaining (Siegel & Fouraker 1960) or in the simulated environment of an oligopolistic market (Hoggatt 1959). Where there is no verbal or face-to-face communication, there appears, under the appropriate circumstances, to be some evidence in favor of the noncooperative equilibrium.

Bargaining

The theory of bargaining has been of special interest to economists in the context of bilateral monopoly, which can involve two firms, a labor union and a firm, or two individuals engaged in barter in the market place or trying to settle a joint estate. Any two-person, nonconstant-sum situation, be it haggling in the market or international negotiations, can be formally described in the same game-theoretic framework. However, there are several substantive problems which limit application of this framework and which have resulted in the development of different approaches. In nonconstant-sum games communication between the players is of considerable importance, yet its role is exceedingly hard to define. In games such as chess and even in many oligopolistic markets, a move is a well-defined physical act—moving a pawn in a definite manner or changing a price or deciding upon a production rate; in bargaining it may be necessary to interpret a statement as a move. The problem of interpreting words as moves in negotiation is critical to the description and understanding of bargaining and negotiation processes. This “coding” problem has to be considered from the viewpoint of many other disciplines, as well as that of game theory.

A desirable property of a theoretical solution to a bargaining problem is that it predicts a unique outcome. In the context of economics this would be a unique distribution of resources (and unique prices, if prices exist at all). Unfortunately, there are few concepts of solution pertaining to economic affairs which have this property. The price system and distribution resulting from a competitive market may in general not be unique; Edgeworth’s solution to the bargaining problem was the contract curve, which merely predicts that the outcome will be some point among an infinite set of possibilities.

The contract curve has the property that any point on it is jointly optimal (both bargainers cannot improve their position simultaneously from a point on this curve) and individually rational (no point gives an individual less than he could obtain without trading). The Pareto-optimal surface is larger than the contract curve, for it is restricted only by the joint optimality condition. If it is assumed that a transferable comparable utility exists, then the Pareto-optimal surface (described in the space of the traders’ utilities) is flat; if not, it will generally be curved. Any point on the Pareto-optimal surface that is individually rational is called an imputation. In the two-person bargain the Edgeworth contract curve coincides with two game-theoretic solutions, the core and the stable set. The core consists of all undominated imputations (it may be empty). A stable set is a set of imputations which do not dominate each other but which together dominate all other imputations. An imputation, α, is said to dominate another imputation, β, if (1) there exists a coalition of players who, acting jointly but independently of the others, could guarantee for themselves at least the amounts they would receive if they accepted α, and (2) each player obtains more in α than in β. The core and stable-set solutions can be defined with or without the assumption of transferable utilities. Neither of these solution concepts predicts a unique outcome.
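With transferable utility the core condition can be checked directly: an imputation is undominated if no coalition S could obtain more on its own than its members jointly receive, that is, if the members of every S together receive at least v(S). The three-person characteristic function and candidate imputation below are hypothetical.

```python
from itertools import combinations

# A minimal sketch, assuming transferable utility.
players = [1, 2, 3]
v = {frozenset(): 0,
     frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 50, frozenset({1, 3}): 40, frozenset({2, 3}): 30,
     frozenset({1, 2, 3}): 100}

def in_core(x, v, players):
    """True if every coalition S receives at least v(S) under the imputation x."""
    coalitions = [frozenset(s) for r in range(1, len(players) + 1)
                  for s in combinations(players, r)]
    return all(sum(x[i] for i in s) >= v[s] for s in coalitions)

x = {1: 45, 2: 35, 3: 20}          # a candidate imputation (sums to v(N) = 100)
print(in_core(x, v, players))      # True: no coalition can enforce a better outcome
```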

One approach to bilateral monopoly has been to regard it as a “fair-division” problem, and several solution concepts, each one embodying a formalization of concepts of symmetry, justice, and equity, have been suggested (Nash 1953; Shapley 1953; Harsanyi 1956). These are generally known as value solutions, since they specify the amount that each participant should obtain. For the two-person case, some of the fair-division or arbitration schemes do predict unique outcomes. The Nash fair-division scheme assumes that utilities of the players are measurable, but it does not need assumptions of either comparability or transferability of utilities (Shubik 1966). Shapley’s scheme does utilize the last two assumptions. Other schemes have been suggested by Raiffa (1953), Braithwaite (1955), Kuhn (in Shubik 1967), and others.
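The Nash fair-division scheme can be sketched numerically: among the feasible outcomes it selects the one maximizing the product of the two players' utility gains over the disagreement point. The utilities below, a linear bargainer facing a risk-averse one dividing a single unit, are assumed for illustration; NumPy is used only for the grid search.

```python
import numpy as np

# A minimal sketch of the Nash fair-division scheme with hypothetical utilities.
d1, d2 = 0.0, 0.0                     # disagreement (no-trade) utilities

def u1(share):                        # player 1: linear utility of his share
    return share

def u2(share):                        # player 2: concave (risk-averse) utility
    return np.sqrt(share)

shares = np.linspace(0.0, 1.0, 100001)          # player 1's share of one unit
product = (u1(shares) - d1) * (u2(1.0 - shares) - d2)
best = shares[np.argmax(product)]

print(round(best, 3), round(1.0 - best, 3))     # about 0.667 and 0.333
# The risk-averse bargainer receives the smaller share at the Nash point.
```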

Another approach to bargaining is to treat it in the extensive form, describing each move explicitly and showing the time path taken to the settlement point. This involves attempting to parametrize qualities such as “toughness,” “flexibility,” etc. Most of the attempts to apply game theory in this manner belong to studies in social psychology, political science, and experimental gaming. However, it has been shown (Harsanyi 1956) that the dynamic process suggested by Zeuthen (1930) is equivalent to the Nash fair-division scheme.

General equilibrium

Game theory methods have provided several new insights in general equilibrium economics. Under the appropriate conditions on preferences and production, it has been proved that a price system that clears the market will exist, provided that each individual acts as an independent maximizer. This result holds true independently of the number of participants in the market; hence, it cannot be interpreted as a limiting phenomenon as the number of participants increases. Yet, in verbal discussions contrasting the competitive market with bilateral monopoly, the difference generally stressed is that between the market with many participants, each with little if any control over price, and the market with few participants, where the interactions of each with all the others are of maximum importance.

The competitive equilibrium best reflects the spirit of “the invisible hand” and of decentralization. The use of the word “competitive” is counter to both game-theoretic and common-language implications. It refers to the case in which, if each individual considers himself an isolated maximizer operating in an environment over which he has no control, the results will be jointly optimal.

Game-theoretic solutions

The power and appeal of the concept of competitive equilibrium appears to be far greater than that of mere decentralization. This is reflected in the finding that under the appropriate conditions the competitive equilibrium may be regarded as the limit solution for several conceptually extremely different game-theoretic solutions.

Convergence of the core. It has been noted that for bilateral monopoly the Edgeworth contract curve is the core. Edgeworth had suggested and presented an argument to show that if the number of traders is increased on both sides of the market, the contract curve would shrink (interpreted appropriately, given the change in dimensions). Shubik (1959b) observed the connection between the work of Edgeworth and the core; he proved the convergence of the core to the competitive equilibrium in the special case of the two-sided market with transferable utility and conjectured that the result would be generally true for any number of markets without transferable utility. This result was proved by Scarf (the proof, although achieved earlier, is described in Scarf 1965); Debreu and Scarf improved upon it (1963). Using the concept of a continuum of players (rather than considering a limit by replicating the finite number of players in each category, as was done by Shubik, Scarf, and Debreu), Aumann (1966) proved the convergence of the core under somewhat different conditions. When transferable utility is assumed, the core converges to a single point and the competitive equilibrium is unique. Otherwise it may split and converge to the set of competitive equilibria.

The convergence of the core establishes the existence of a price system as a result of a theory which makes no mention of prices. The theory’s prime concern is with the power of coalitions. It may be looked upon as a formalization of countervailing power, inasmuch as it rules out imputations which can be dominated by any group in the society.

Shapley and Shubik (1966) have shown the convergence of the value in the two-sided market with transferable utility. In unpublished work Shapley has proved a more general result for any number of markets, and Shapley and Aumann have worked on the convergence of a nontransferable utility value recently defined by Shapley. Harsanyi (1959) was able to define a value that generalized the Nash two-person fair-division scheme to situations involving many individuals whose utilities are not transferable. This preceded and is related to the new value of Shapley, and its convergence has not been proved.

There are several other value concepts (Selten 1964), all of which make use of symmetry axioms and are based upon some type of averaging of the contributions of an individual to all coalitions.

If one is willing to accept the value as reflecting certain concepts of symmetry and fairness, then in an economy with many individuals in all walks of life, and with the conditions which are required for the existence of a competitive equilibrium satisfied, the competitive equilibria will also satisfy these symmetry and fairness criteria.

Noncooperative equilibrium. One of the important open problems has been the reconciliation of the various noncooperative theories of oligopolistic competition with general equilibrium theory. The major difficulty is that the oligopoly models are open in the sense that the customers are usually not considered as players with strategic freedom, while the general equilibrium model considers every individual in the same manner, regardless of his position in the economy. Since the firms are players in the oligopoly models, it is necessary to specify the domain of the strategies they control and their payoffs under all circumstances. In a general equilibrium model no individual is considered a player; all are regarded as individual maximizers. Walras’ law is assumed to hold, and supply is assumed to equal demand.

When an attempt is made to consider a closed economic model as a noncooperative game, considerable difficulties are encountered in describing the strategies of the players. This can be seen immediately by considering the bilateral monopoly problem; each individual does not really know what he is in a position to buy until he finds out what he can sell. In order to model this type of situation as a game, it may be necessary to consider strategies which do not clear the market and which may cause a player to become bankrupt—i.e., unable to meet his commitments. Shapley and Shubik (in Shubik 1967) have successfully modeled the closed two-sided two-commodity market without side payments and have shown that the noncooperative equilibrium point converges from below the Pareto-optimal surface to the competitive equilibrium point. They also have considered more goods and markets on the assumption of the existence of a transferable (but not necessarily comparable) utility.

When there are more than two commodities and one market, the existence of a unique competitive equilibrium point appears to be indispensable in defining the strategies and payoffs of players in a noncooperative game. No one has succeeded in constructing a satisfactory general market model as a noncooperative game without using a side-payment mechanism. The important role played by the side-payment commodity is that of a strategy decoupler. It means that a player with a supply of this type of “money” can decide what to buy even though he does not know what he will sell.

In summary, it appears that, in the limit, at least three considerably different game-theoretic solutions are coincidental with the competitive equilibrium solution. This means that by considering different solutions we may interpret the competitive market in terms of decentralization, fair division, the power of groups, and the attenuation of power of the individual.

The stable-set solution of von Neumann and Morgenstern, the bargaining set of Aumann and Maschler (1964), the “self-policing” properties of certain imputation sets of Vickrey (1959), and several other related cooperative solutions appear to be more applicable to sociology, and possibly anthropology, than to economics. There has been no indication of a limiting behavior for these solutions as numbers grow; on the contrary, it is conjectured that in general the solutions proliferate. When, however, numbers are few, as in cartel arrangements and in international trade, these other solutions provide insights, as Nyblen has shown in his work dealing with stable sets (1951).

Nonexistence of competitive equilibrium

When conditions other than those needed for the existence of a competitive equilibrium hold, such as external economies or diseconomies, joint ownership, increasing returns to scale, and interlinked tastes, then the different solutions in general do not converge. There may be no competitive equilibrium; the core may be empty; and the definition of a noncooperative game when joint property is at stake will call for a statement of the laws concerning damages and threats. (Similarly, even though the conditions for the existence of a competitive equilibrium are satisfied, the various solutions will be different if there are few participants.) When the competitive equilibrium does not exist, we must seek another criterion to solve the problem of distribution or, if possible, change the laws to reintroduce the competitive equilibrium. The other solutions provide different criteria. However, if a society desires, for example, to have its distribution system satisfy conditions of decentralization and fair division, or of fair division and limits on power of groups, it may be logically impossible to do so.

Davis and Whinston (1962), Scarf (1964), and Shapley and Shubik (1964) have investigated applications of game theory to external economies, to increasing returns to scale, and to joint ownership. In the case of joint ownership the relation between economics and politics as mechanisms for the distribution of the proceeds from jointly owned resources is evident.

It must be noted that the “many solutions” approach to distribution is in contrast to the type of welfare economics that considers a community welfare function or social preferences, which are not necessarily constructed from individual preferences.

Other applications

Leaving aside questions of transferable utility, there is a considerable difference between an economy in which there is only barter or a passive shadow price system and one in which the government, and possibly others, have important monetary strategies. Faxen (1957) has considered financial policy from a game-theoretic viewpoint.

There have been some diverse applications of game theory to budgeting and to management science, as can be seen in the articles by Bennion (1956) and Shubik (1955).

Nyblen (1951) has attempted to apply the von Neumann and Morgenstern concept of stable set to problems of macroeconomics. He notes that the Walrasian system bypasses the problem of individual power by assuming it away. He observes that in game theory certain simple aggregation procedures do not hold; thus, the solutions to a four-person game obtained by aggregating two players in a five-person game may have little in common with the solutions to the original five-person game. He outlines an institutional theory of the rate of interest based upon a standard of behavior and (primarily at a descriptive level) links the concepts of discriminatory solution and excess to inflation and international trade.

Martin Shubik

[The reader who is not familiar with oligopoly theory and general equilibrium theory should consult Economic Equilibrium; Oligopoly; Welfare Economics.]

BIBLIOGRAPHY

Aumann, Robert J. 1966 Existence of Competitive Equilibria in Markets With a Continuum of Traders. Econometrica 34:1–17.

Aumann, R. J.; and Maschler, M. 1964 The Bargaining Set for Cooperative Games. Pages 443-476 in M. Dresher, Lloyd S. Shapley, and A. W. Tucker (editors), Advances in Game Theory. Princeton Univ. Press.

Bennion, E. G. 1956 Capital Budgeting and Game Theory. Harvard Business Review 34:115–123.

Braithwaite, Richard B. 1955 Theory of Games as a Tool for the Moral Philosopher. Cambridge Univ. Press.

Cyert, Richard M.; and March, James G. 1963 A Behavioral Theory of the Firm. Englewood Cliffs, N.J.: Prentice-Hall.

Davis, Otto A.; and Whinston, A. 1962 Externalities, Welfare, and the Theory of Games. Journal of Political Economy 70:241–262.

Debreu, Gerard; and Scarf, Herbert 1963 A Limit Theorem on the Core of an Economy. International Economic Review 4:235–246.

Faxen, Karl O. 1957 Monetary and Fiscal Policy Under Uncertainty. Stockholm: Almqvist & Wiksell.

Fellner, William J. 1949 Competition Among the Few: Oligopoly and Similar Market Structures. New York: Knopf.

Griesmer, James H.; and Shubik, Martin 1963 Towards a Study of Bidding Processes. Naval Research Logistics Quarterly 10:11–21, 151–173, 199–217.

Harsanyi, John C. 1956 Approaches to the Bargaining Problem Before and After the Theory of Games. Econometrica 24:144–157.

Harsanyi, John C. 1959 A Bargaining Model for the Cooperative n-Person Game. Volume 4, pages 325–356 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Hoggatt, A. C. 1959 An Experimental Business Game. Behavioral Science 4:192–203.

Jacot, Simon-Pierre 1963 Stratégie et concurrence: De l'application de la théorie des jeux à l'analyse de la concurrence spatiale. Paris: SEDES.

Mayberry, J. P.; Nash, J. F.; and Shubik, Martin 1953 A Comparison of Treatments of a Duopoly Situation. Econometrica 21:141–154.

Mills, H. D. 1961 A Study in Promotional Competition. Pages 245-301 in Frank M. Bass et al. (editors), Mathematical Models and Methods in Marketing. Homewood, Ill.: Irwin.

Milnor, John W.; and Shapley, Lloyd S. 1957 On Games of Survival. Volume 3, pages 15-45 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press. → Volume 3 was edited by M. Dresher, A. W. Tucker, and P. Wolfe.

Nash, John F. Jr. 1951 Non-cooperative Games. Annals of Mathematics 54:286–295.

Nash, John F. Jr. 1953 Two-person Cooperative Games. Econometrica 21:128–140.

Nyblen, Goren 1951 The Problem of Summation in Economic Sciences. Lund (Sweden): Gleerup.

Raiffa, Howard 1953 Arbitration Schemes for Generalized Two-person Games. Volume 2, pages 361-387 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press.

Scarf, H. 1964 Notes on the Core of a Productive Economy. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Scarf, H. 1965 The Core of an n-Person Game. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Selten, Reinhard 1964 Valuation of n-Person Games. Pages 577-626 in M. Dresher, Lloyd S. Shapley, and A. W. Tucker (editors), Advances in Game Theory. Princeton Univ. Press.

Selten, Reinhard 1965 Value of the n-Person Game. → Paper presented at the First International Game Theory Workshop, Hebrew University of Jerusalem.

Shapley, Lloyd S. 1953 A Value for n-Person Games. Volume 2, pages 307-317 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press.

Shapley, Lloyd S.; and Shubik, Martin 1964 Ownership and the Production Function. RAND Corporation Research Memorandum, RM-4053-PR. Santa Monica, Calif.: The Corporation.

Shapley, Lloyd S.; and Shubik, Martin 1966 Pure Competition, Coalition Power and Fair Division. RAND Corporation Research Memorandum, RM-4917. Santa Monica, Calif.: The Corporation.

Shubik, Martin 1955 The Uses of Game Theory in Management Science. Management Science 2:40–54.

Shubik, Martin 1957 Market Form, Intent of the Firm and Market Behavior. Zeitschrift für Nationalökonomie 17:186–196.

Shubik, Martin 1959a Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games. New York: Wiley.

Shubik, Martin 1959b Edgeworth Market Games. Volume 4, pages 267-278 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Shubik, Martin 1966 Measurable, Transferable, Comparable Utility and Money. Unpublished manuscript, Yale Univ., Cowles Foundation for Research in Economics.

Shubik, Martin (editor) 1967 Essays in Mathematical Economics in Honor of Oskar Morgenstern. Princeton Univ. Press. See especially Harold W. Kuhn, "On Games of Fair Division" and Lloyd S. Shapley and Martin Shubik, "Concept and Theories of Pure Competition."

Shubik, Martin; and Thompson, Gerald L. 1959 Games of Economic Survival. Naval Research Logistics Quarterly 6:111–123.

Siegel, S.; and Fouraker, L. E. 1960 Bargaining and Group Decision Making: Experiments in Bilateral Monopoly. New York: McGraw-Hill.

Vickrey, William 1959 Self-policing Properties of Certain Imputation Sets. Volume 4, pages 213-246 in Harold W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games. Princeton Univ. Press. → Volume 4 was edited by A. W. Tucker and R. Duncan Luce.

Vickrey, William 1961 Counterspeculation, Auctions and Competitive Sealed Tenders. Journal of Finance 16:8–37.

Von Neumann, John; and Morgenstern, Oskar (1944) 1964 Theory of Games and Economic Behavior. 3d ed. New York: Wiley.

Zeuthen, F. 1930 Problems of Monopoly and Economic Warfare. London: Routledge.

Game Theory

GAME THEORY.

Game theory, the formal analysis of conflict and cooperation, has pervaded every area of economics and the study of business strategy in the past quarter-century and exerts increasing influence in evolutionary biology, international relations, and political science, where the rational-choice approach to politics has been highly controversial. In a strategic game, each player chooses a strategy (a rule specifying what action to take for each possible information set) to maximize his or her expected payoff, taking into account that each of the other players is also making a rational strategic choice. In contrast to economic theories of competitive equilibrium, the focus of game theory is on strategic interaction and on what information is available to a player to predict the actions that the other players will take.

The Origins of Game Theory

Writings by several nineteenth-century economists, such as A. A. Cournot and Joseph Bertrand on duopoly and F. Y. Edgeworth on bilateral monopoly, and later work in the 1930s by F. Zeuthen on bargaining and H. von Stackelberg on oligopoly, were later reinterpreted in game-theoretic terms, sometimes in problematic ways (Leonard, 1994; Dimand and Dimand). Game theory emerged as a distinct subdiscipline of applied mathematics, economics, and social science with the publication in 1944 of Theory of Games and Economic Behavior, a work of more than six hundred pages written in Princeton by two Continental European emigrés, John von Neumann, a Hungarian mathematician and physicist who was a pioneer in fields from quantum mechanics to computers, and Oskar Morgenstern, a former director of the Austrian Institute for Economic Research. They built upon analyses of two-person, zero-sum games published in the 1920s.

In a series of notes from 1921 to 1927 (three of which were translated into English in Econometrica in 1953), the French mathematician and probability theorist Emile Borel developed the concept of a mixed strategy (assigning a probability to each feasible strategy rather than a pure strategy selecting with certainty a single action that the opponent could then predict) and showed that for some particular games with small numbers of possible pure strategies, rational choices by the two players would lead to a minimax solution. Each player would choose the mixed strategy that would minimize the maximum payoff that the other player could be sure of achieving. The young John von Neumann provided the first proof that this minimax solution held for all two-person, constant-sum games (strictly competitive games) in 1928, although the proof of the minimax theorem used by von Neumann and Morgenstern in 1944 was based on the first elementary (that is, nontopological) proof of the existence of a minimax solution, proved by Borel's student Jean Ville in 1938 (Weintraub; Leonard, 1995; Dimand and Dimand). For games with variable sums and more players, where coalitions among players are possible, von Neumann and Morgenstern proposed a more general solution concept, the stable set, but could not prove its existence. In the 1960s, William Lucas proved by counterexample that existence of the stable set solution could not be proved because it was not true in general.

Although von Neumann's and Morgenstern's work was the subject of long and extensive review articles in economics journals, some of which predicted widespread and rapid application, game theory was developed in the 1950s primarily by A. W. Tucker and his students in Princeton's mathematics department (see Shubik's recollections in Weintraub) and at the RAND Corporation, a nonprofit corporation based in Santa Monica, California, whose only client was the U.S. Air Force (Nasar). Expecting that the theory of strategic games would be as relevant to military and naval strategy as contemporary developments in operations research were, the U.S. Office of Naval Research supported much of the basic research, and Morgenstern was named as an editor of the Naval Research Logistics Quarterly.

Much has been written about the influence of game theory and related forms of rational-choice theory such as systems analysis on nuclear strategy (although General Curtis LeMay complained that RAND stood for Research And No Development) and of how the Cold War context and military funding helped shape game theory and economics (Heims; Poundstone; Mirowski), mirrored by the shaping of similar mathematical techniques into "planometrics" on the other side of the Cold War (Campbell). Researchers in peace studies, publishing largely in the Journal of Conflict Resolution in the late 1950s and the 1960s, drew on Prisoner's Dilemma games to analyze the Cold War (see Schelling), while from 1965 to 1968 (while ratification of the Nuclear Non-Proliferation Treaty was pending) the U.S. Arms Control and Disarmament Agency sponsored important research on bargaining games with incomplete information and their application to arms races and disarmament (later declassified and published as Mayberry with Harsanyi, Scarf, and Selten; and Aumann and Maschler with Stearns).

Nash Equilibrium, the Nash Bargaining Solution, and the Shapley Value

John Nash, the outstanding figure among the Princeton and RAND game theorists (Nasar; Giocoli), developed, in articles from his dissertation, both the Nash equilibrium for noncooperative games, where the players cannot make binding agreements enforced by an outside agency, and the Nash bargaining solution for cooperative games where such binding agreements are possible (Nash). Nash equilibrium, by far the most widely influential solution concept in game theory, applied to games with any number of players and with payoffs whose sum varied with the combination of strategies chosen by the players, while von Neumann's minimax solution was limited to two-person, constant-sum games. A Nash equilibrium is a strategy combination in which each player's chosen strategy is a best response to the strategies of the other players, so that no player can get a higher expected payoff by changing strategy as long as the strategies of the other players stay the same. No player has an incentive to be the first to deviate from a Nash equilibrium.
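
The best-response condition that defines a Nash equilibrium can be checked mechanically. The following short sketch (in Python; the 2-by-2 payoff numbers are invented for illustration and are not taken from any game discussed here) tests every pure-strategy profile of a two-player game and keeps those from which neither player can profitably deviate on his own.

    # Minimal sketch: testing the best-response condition for a Nash equilibrium
    # in a two-player game.  A holds the row player's payoffs, B the column
    # player's; the numbers are illustrative only.
    A = [[3, 0],
         [5, 1]]
    B = [[3, 5],
         [0, 1]]

    def is_nash(i, j):
        """True if (row i, column j) is a pure-strategy Nash equilibrium."""
        row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
        col_best = all(B[i][j] >= B[i][k] for k in range(len(B[0])))
        return row_best and col_best

    print([(i, j) for i in range(2) for j in range(2) if is_nash(i, j)])
    # -> [(1, 1)]: neither player can gain by changing strategy unilaterally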

Nash proved the existence of equilibrium but not uniqueness: a game will have at least one strategy combination that is a Nash equilibrium, but it may have many or even an infinity of Nash equilibria (especially if the choice of action involves picking a value for a continuous variable). Cournot's 1838 analysis of duopoly has been interpreted in retrospect as a special case of Nash equilibrium, just as Harsanyi perceived the congruity of Zeuthen's 1930 discussion of bargaining and the Nash bargaining solution. Refinements of Nash equilibrium, which serve to rule out some of the possible equilibria, include the concept of a subgame perfect equilibrium (see Harsanyi and Selten), which is a Nash equilibrium both for an entire extended game (a game in which actions must be chosen at several decision nodes in a game tree) and for any game starting from any decision node in the game tree, including points that would never be reached in equilibrium, so that any threats to take certain actions if another player were to deviate from the equilibrium path would be credible (rational in terms of self-interest once that point in the game had been reached). A further refinement rules out some subgame perfect Nash equilibria by allowing for the possibility of a "trembling hand," that is, a small probability that an opposing player, although rational, may make mistakes (Harsanyi and Selten). Thomas Schelling has suggested that if there is some clue that would lead players to regard one Nash equilibrium as more likely than others, that equilibrium will be a focal point.
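
A small backward-induction sketch can make the credibility point concrete. The entry game below is a stock textbook example rather than one discussed in this article, and its payoffs are invented: the incumbent's threat to fight entry supports a Nash equilibrium of the whole game but fails once the entry subgame is actually reached.

    # Illustrative sketch (invented payoffs): backward induction in an entry game.
    # If the entrant stays out, payoffs (entrant, incumbent) are (0, 4).
    # If the entrant enters, the incumbent chooses:
    #   accommodate -> (2, 2)     fight -> (-1, 1)

    def incumbent_after_entry():
        # Once entry has happened, accommodating (payoff 2) beats fighting (1).
        return "accommodate"

    def entrant_move():
        # Anticipating accommodation, entering (payoff 2) beats staying out (0).
        return "enter" if incumbent_after_entry() == "accommodate" else "stay out"

    print(entrant_move(), incumbent_after_entry())
    # -> enter accommodate: the unique subgame perfect equilibrium.  The profile
    # (stay out, fight-if-entered) is also a Nash equilibrium of the whole game,
    # but the threat to fight is not credible in the subgame that follows entry.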

Nash equilibrium, with its refinements, remains at the heart of noncooperative game theory. Applied to the study of market structure by Martin Shubik (1959), this approach has come to dominate the field of industrial organization, as indicated by Jean Tirole (1988) in a book widely accepted as the standard economics textbook on industrial organization and as a model for subsequent texts. More recently, noncooperative game theory has found economic applications ranging from strategic trade policy in international trade to the credibility of anti-inflationary monetary policy and the design of auctions for broadcast frequencies. From economics, noncooperative game theory based on refinements of Nash equilibrium has spread to business school courses on business strategy (see Ghemawat, applying game theory in six Harvard Business School cases for MBA students). Some economists view business strategy as an application of game theory, with ideas flowing in one direction, rather than as a distinct field (Shapiro).

However, scholars of strategic management remain sharply divided over whether game theory provides useful insights or just a rationalization for any conceivable observed behavior (see the papers by Barney, Saloner, Camerer, and Postrel in Rumelt, Schendel, and Teece, especially Postrel's paper, which verifies Rumelt's Flaming Trousers Conjecture by constructing a game-theoretic model with a subgame perfect Bayesian Nash equilibrium in which bank presidents publicly set their pants on fire, a form of costly signaling that is profitable only for a bank that can get repeat business, that is, a high-quality bank).

Nash proposed the Nash bargaining solution for two-person cooperative games, that the players maximize the product of their gains over what each would receive at the threat point (the Nash equilibrium of the noncooperative game that they would play if they failed to reach agreement on how to divide the gains), and showed it to be the only solution possessing all of a particular set of intuitively appealing properties (efficiency, symmetry, independence of unit changes, independence of irrelevant alternatives). Feminist economists such as Marjorie McElroy and Notburga Ott have begun to apply bargaining models whose outcome depends critically on the threat point (the outcome of the noncooperative game that would be played if bargaining does not lead to agreement), as well as Prisoner's Dilemma games, to bargaining within the household (see Seiz for a survey).
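
As a numerical sketch (the surplus of 10 and the threat point (2, 0) are invented for illustration), the Nash bargaining solution can be found by maximizing the product of the players' gains over their threat payoffs.

    # Sketch with invented numbers: split a surplus of 10 when the threat point
    # (what each player gets if bargaining fails) is d = (2, 0).
    # The Nash solution maximizes (u1 - 2) * (u2 - 0) subject to u1 + u2 = 10.
    candidates = [(x / 100, 10 - x / 100) for x in range(0, 1001)]
    u1, u2 = max(candidates, key=lambda u: (u[0] - 2) * (u[1] - 0))
    print(u1, u2)   # -> 6.0 4.0: each side keeps its threat payoff and the
                    # remaining surplus of 8 is divided equally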

Another influential solution concept for cooperative games, the Shapley value for n-person games (Shapley), allots to each player the average of that player's marginal contribution to the payoff each possible coalition would receive and, for a class of games with large numbers of players, coincides with the core of a market (the set of undominated imputations or allocations), yet another solution concept discovered by graduate students at Princeton in the early 1950s (in this case, Shapley and D. B. Gillies) and then rediscovered by Shubik in Edgeworth's 1881 analysis. There is a large literature in accounting applying the Shapley value to cost allocation (Roth and Verrecchia).
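
The averaging of marginal contributions can be written out directly. The three-player characteristic function below is invented purely for illustration; the sketch computes each player's Shapley value by averaging that player's marginal contribution over all orderings in which the players might join the coalition.

    # Minimal sketch (hypothetical 3-player game): the Shapley value as the
    # average marginal contribution over all orderings of the players.
    from itertools import permutations

    v = {frozenset(): 0,
         frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
         frozenset({1, 2, 3}): 120}

    players = [1, 2, 3]
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            value[p] += v[frozenset(coalition | {p})] - v[frozenset(coalition)]
            coalition.add(p)
    value = {p: value[p] / len(orders) for p in players}
    print(value)   # {1: 45.0, 2: 40.0, 3: 35.0}; the values sum to v(N) = 120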

Applications of Game Theory

Lloyd Shapley and Shubik (1954), two Princeton contemporaries of Nash, began the application of game theory to political science, drawing on Shapley's 1953 publication to devise an index for voting power in a committee system. William Riker and his students at the University of Rochester took the lead in recasting political science in terms of strategic interaction of rational, self-interested players (see Riker and Ordeshook; Shubik, 1984; Riker in Weintraub), and there is now a specialized market for game-theory textbooks for political science students (Morrow). Donald Green and Ian Shapiro (1994) criticize recent applications to politics of game theory and related forms of rational-choice theory as viewing political behavior as too exclusively rational and self-interested to the exclusion of ideologies, values, and social norms (see Friedman for the ensuing controversy).
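
The Shapley-Shubik index counts how often each voter is pivotal, that is, the one whose weight first brings a growing coalition up to the quota, across all orderings of the voters. The weights and quota below are invented for illustration; the familiar lesson of such examples is that voting weight and voting power can diverge sharply.

    # Illustrative sketch: Shapley-Shubik power index for an invented weighted
    # voting body with weights 50, 49, 1 and a quota of 51 votes.
    from itertools import permutations

    weights = {"A": 50, "B": 49, "C": 1}
    quota = 51

    pivots = {p: 0 for p in weights}
    orders = list(permutations(weights))
    for order in orders:
        total = 0
        for p in order:
            total += weights[p]
            if total >= quota:          # p is the pivotal voter in this ordering
                pivots[p] += 1
                break

    print({p: pivots[p] / len(orders) for p in weights})
    # -> roughly {'A': 0.667, 'B': 0.167, 'C': 0.167}: B's 49 votes
    # confer no more power than C's single vote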

Recasting Marxism in terms of rational choice and analyzing class struggle as a strategic game is especially controversial (Carver and Thomas). Conflict and cooperation (whether in the form of coalitions or contracts) are at the heart of law, as of politics. Douglas Baird, Robert Gertner, and Randal Picker (1994), among others, treat such legal topics as tort, procedure, and contracts as examples of strategic interaction, as the growing sub-discipline of law and economics increasingly reasons in terms of game theory. As a counterpart at a more "macro" level to game-theoretic analysis of political and legal conflict and cooperation, Andrew Schotter (1981) and Shubik (1984) propose a "mathematical institutional economics" to explain the evolution of social institutions such as contract law, money, trust, and customs, norms, and conventions ("the rules of the game") as the outcome of strategic interaction by rational agents. This approach shows promise, but has been received skeptically by economists such as Ronald Coase who rely on less mathematical neoclassical techniques to develop a "New Institutional Economics," and with even less enthusiasm by economists outside the neoclassical mainstream, such as Philip Mirowski. Going beyond the explanation of merely mundane institutions, Steven Brams (1983) uses game theory to explore questions of theology.

Prisoner's Dilemma

Game theorists and social scientists have been fascinated by Prisoner's Dilemma, a two-by-two game (two players, each with two possible pure strategies) with a particular payoff matrix (Rapoport and Chammah; Poundstone). The game's nickname and the accompanying story were provided by A. W. Tucker. Suppose that two prisoners, accused of jointly committing a serious crime, are interrogated separately. The prosecutor has sufficient evidence to convict them of a lesser crime without any confession, but can get a conviction on the more serious charge only with a confession. If neither prisoner confesses, they will each be sentenced to two years for the lesser charge. If both confess, each will receive a sentence of five years. However, if only one prisoner confesses, that prisoner will be sentenced to only one year, while the other prisoner will get ten years. In the absence of any external authority to enforce an agreement to deny the charges, each player has a dominant strategy of confessing (given that the other player has denied the charges, one year is a lighter sentence than two years; given that the other player has confessed, five years is a lighter sentence than ten years). The unique Nash equilibrium is for both players to confess (defect from any agreement to cooperate) and receive sentences of five years, even though both would be better off if both denied the charges (cooperated).
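
The story translates into the payoff matrix below (entries are years in prison, so smaller is better). A short check, using only the numbers given above, confirms that confessing dominates denying for each prisoner.

    # The matrix implied by the story above, in years of prison (lower is better).
    # Entries are (prisoner 1, prisoner 2); each prisoner chooses deny or confess.
    years = {("deny", "deny"): (2, 2),     ("deny", "confess"): (10, 1),
             ("confess", "deny"): (1, 10), ("confess", "confess"): (5, 5)}

    # Confessing is a dominant strategy for prisoner 1 ...
    for other in ("deny", "confess"):
        assert years[("confess", other)][0] < years[("deny", other)][0]
    # ... and by symmetry for prisoner 2, so (confess, confess) is the unique
    # Nash equilibrium, even though (deny, deny) would leave both better off.
    print(years[("confess", "confess")], years[("deny", "deny")])   # (5, 5) (2, 2)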

This game has been used as an explanation of how individually rational behavior can lead to undesirable outcomes ranging from arms races to overuse of natural resources ("the tragedy of the commons," a generalization to more than two players). If the game is repeated a known finite number of times, however large, the predicted result is the same: both players will confess (defect) on the last play, since there would be no opportunity of future punishment for defection or reward for cooperation; therefore both will also confess (defect) on the next-to-last play, since the last play is determined, and so on, with mutual defection on each round as the only sub-game perfect Nash equilibrium. However, the "folk theorem" states that for infinitely repeated games, even with discounting of future benefits or a constant probability of the game ending on any particular round (provided that the discount rate and the probability of the game ending on the next round are sufficiently small and that the dimensionality of payoffs allows for the possibility of retaliation), any sequence of actions can be rationalized as a subgame perfect Nash equilibrium. (The folk theorem owes its name to its untraceable origin.)

However, players do not generally behave in accordance with Nash's prediction. Frequent cooperation in one-shot or finitely repeated Prisoner's Dilemma has been observed ever since it was first played. The first Prisoner's Dilemma experiment, conducted at RAND by Merrill Flood and Melvin Dresher in January 1950, involved one hundred repetitions with two sophisticated players, the economist Armen Alchian from the University of California, Los Angeles, and the game theorist John Williams, head of RAND's mathematics department. Alchian and Williams succeeded in cooperating on sixty plays, and mutual defection, the Nash equilibrium, occurred only fourteen times (Poundstone, pp. 107–116). Robert Axelrod (1984) conducted a computer tournament for iterated Prisoner's Dilemma, finding that Rapoport's simple "tit for tat" strategy (cooperate on the first round, then do whatever the other player did on the previous round) yielded the highest payoff.
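
A few lines of simulation reproduce the flavor of Axelrod's tournament. The payoff numbers below (temptation 5, reward 3, punishment 1, sucker 0) are commonly used illustrative values, not figures taken from the experiments described above; the sketch plays "tit for tat" against itself and against "always defect."

    # Sketch of an iterated Prisoner's Dilemma with conventional illustrative payoffs.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(own, opp):      # cooperate first, then copy the opponent
        return "C" if not opp else opp[-1]

    def always_defect(own, opp):
        return "D"

    def play(p1, p2, rounds=10):
        m1, m2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            a, b = p1(m1, m2), p2(m2, m1)
            s1, s2 = s1 + PAYOFF[(a, b)][0], s2 + PAYOFF[(a, b)][1]
            m1.append(a)
            m2.append(b)
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained cooperation
    print(play(tit_for_tat, always_defect))    # (9, 14): exploited only on round one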

One way to explain the observed extent of cooperation in experimental games and in life is to recognize that humans are only boundedly rational, relying on rules of thumb and conventions, and making choices about what to know because information is costly to acquire and process. Assumptions about rationality in game theory, such as common knowledge, can be very strong: "An event is common knowledge among a group of agents if each one knows it, if each one knows the others know it, if each one knows that each one knows that the others know it, and so on, in the limit of a potentially infinite chain of reasoning about knowledge" (Geanakoplos, p. 54). Ariel Rubinstein (1998) sketches techniques for explicitly incorporating computability constraints and the process of choice in models of procedural rationality. Alternatively, evolutionary game theory, surveyed by Larry Samuelson (2002), emphasizes adaptation and evolution to explain behavior, rather than fully conscious rational choice, returning to human behavior the extension of game theory to evolutionarily stable strategies for animal behavior (Maynard Smith; Dugatkin and Reeve).

Conclusion

The award of the Royal Bank of Sweden Prize in Economic Science in Memory of Alfred Nobel to John Nash, John Harsanyi, and Reinhard Selten in 1994 recognized the impact of game theory (and a film biography of Nash, based on Nasar's 1998 book, subsequently won Academy Awards for best picture and best actor), while the multivolume Handbook of Game Theory, edited by Robert Aumann and Sergiu Hart (1992–2002), presents a comprehensive overview. Reflecting on what has been achieved, David Kreps concludes that

Non-cooperative game theory has brought a fairly flexible language to many issues, together with a collection of notions of "similarity" that has allowed economists to move insights from one context to another and to probe the reach of these insights. But too often it, and in particular equilibrium analysis, gets taken too seriously at levels where its current behavioural assumptions are inappropriate. We (economic theorists and economists more broadly) need to keep a better sense of proportion about when and how to use it. And we (economic and game theorists) would do well to see what can be done about developing formally that sense of proportion. (p. 184)

Strategic interaction has proved to be a powerful idea, and, although its application, especially beyond economics, remains controversial, it has proven fruitful in suggesting new perspectives and new ways of formalizing older insights.

See also Economics; Mathematics; Probability; Rational Choice.

bibliography

Aumann, Robert J., and Sergiu Hart, eds. Handbook of Game Theory with Economic Applications. 3 vols. Amsterdam: North-Holland, 1992–2002.

Aumann, Robert J., and Michael B. Maschler, with the collaboration of Richard E. Stearns. Repeated Games with Incomplete Information. Cambridge, Mass.: MIT Press, 1995.

Axelrod, Robert. The Evolution of Cooperation. New York: Basic Books, 1984.

Baird, Douglas, Robert H. Gertner, and Randal C. Picker. Game Theory and the Law. Cambridge, Mass.: Harvard University Press, 1994.

Brams, Steven J. Superior Beings: If They Exist, How Would We Know? Game-Theoretic Implications of Omniscience, Omnipotence, Immortality, and Incomprehensibility. New York: Springer-Verlag, 1983.

Campbell, Robert W. "Marx, Kantorovich, and Novozhilov: Stoimost' versus Reality." Slavic Review 20 (1961): 402–418.

Carver, Terrell, and Paul Thomas, eds. Rational Choice Marxism. Houndmills, U.K.: Macmillan, 1995.

Dimand, Mary Ann, and Robert W. Dimand. The History of Game Theory, Vol. 1: From the Beginnings to 1945. London and New York: Routledge, 1996.

Dugatkin, Lee Alan, and Hudson Kern Reeve, eds. Game Theory and Animal Behavior. New York: Oxford University Press, 1998.

Friedman, Jeffrey, ed. "Rational Choice Theory." Critical Review 9 (1995).

Geanakoplos, John. "Common Knowledge." Journal of Economic Perspectives 6 (1992): 53–82.

Ghemawat, Pankaj. Games Businesses Play: Cases and Models. Cambridge, Mass.: MIT Press, 1997.

Giocoli, Nicola. Modeling Rational Agents: From Interwar Economics to Early Modern Game Theory. Cheltenham, U.K., and Northampton, Mass.: Edward Elgar, 2003.

Green, Donald P., and Ian Shapiro. Pathologies of Rational Choice Theory: A Critique of Applications in Political Science. New Haven, Conn.: Yale University Press, 1994.

Harsanyi, John C., and Reinhard Selten. A General Theory of Equilibrium Selection in Games. Cambridge, Mass.: MIT Press, 1988.

Heims, Steve J. John Von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death. Cambridge, Mass.: MIT Press, 1980.

Kreps, David M. Game Theory and Economic Modelling. Oxford: Clarendon Press, 1990.

Leonard, Robert J. "From Parlor Games to Social Science: Von Neumann, Morgenstern, and the Creation of Game Theory, 1928–1944." Journal of Economic Literature 33 (1995): 730–761.

. "Reading Cournot, Reading Nash: The Creation and Stabilisation of the Nash Equilibrium," Economic Journal 104 (1994): 492511.

Mayberry, John P., with John C. Harsanyi, Herbert E. Scarf, and Reinhard Selten. Game-Theoretic Models of Cooperation and Conflict. Boulder, Colo.: Westview, 1992.

Maynard Smith, John. Evolution and the Theory of Games. Cambridge, U.K.: Cambridge University Press, 1982.

Mirowski, Philip. Machine Dreams: Economics Becomes a Cyborg Science. Cambridge, U.K.: Cambridge University Press, 2002.

Morrow, James D. Game Theory for Political Scientists. Princeton, N.J.: Princeton University Press, 1994.

Nasar, Sylvia. A Beautiful Mind: A Biography of John Forbes Nash, Jr., Winner of the Nobel Prize in Economics, 1994. New York: Simon and Schuster, 1998.

Nash, John F., Jr. Essays on Game Theory. Cheltenham, U.K., and Brookfield, Vt.: Edward Elgar, 1996.

Poundstone, William. Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb. New York: Doubleday, 1992.

Rapoport, Anatol, and Albert M. Chammah. Prisoner's Dilemma: A Study in Conflict and Cooperation. Ann Arbor: University of Michigan Press, 1965.

Riker, William H., and Peter C. Ordeshook. An Introduction to Positive Political Theory. Englewood Cliffs, N.J.: Prentice Hall, 1973.

Roth, A. E., and R. E. Verrecchia. "The Shapley Value as Applied to Cost Allocation: A Reinterpretation." Journal of Accounting Research 17 (1979): 295–303.

Rubinstein, Ariel. Modeling Bounded Rationality. Cambridge, Mass.: MIT Press, 1998.

Rumelt, Richard P., Dan E. Schendel, and David J. Teece, eds. Fundamental Issues in Strategy: A Research Agenda. Boston: Harvard Business School Press, 1994.

Samuelson, Larry. "Evolution and Game Theory." Journal of Economic Perspectives 16 (2002): 47–66.

Schelling, Thomas. The Strategy of Conflict. Cambridge, Mass.: Harvard University Press, 1960.

Schotter, Andrew. The Economic Theory of Social Institutions. Cambridge, U.K.: Cambridge University Press, 1981.

Seiz, Janet A. "Game Theory and Bargaining Models." In The Elgar Companion to Feminist Economics, edited by Janice Peterson and Margaret Lewis. Cheltenham, U.K., and Northampton, Mass.: Edward Elgar, 1999.

Shapiro, Carl. "The Theory of Business Strategy." RAND Journal of Economics 20 (1989): 125–137.

Shapley, Lloyd S. "A Value for n-Person Games." In Harold Kuhn and Albert W. Tucker, eds., Contributions to the Theory of Games, Vol. 2, Annals of Mathematics Studies, no. 28. Princeton, N.J.: Princeton University Press, 1953.

Shapley, Lloyd S., and Martin Shubik. "A Method for Evaluating the Distribution of Power in a Committee System." American Political Science Review 48 (1954): 787–792.

Shubik, Martin. A Game-Theoretic Approach to Political Economy. Vol. 2 of Game Theory in the Social Sciences. Cambridge, Mass.: MIT Press, 1984.

———. Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games. New York: Wiley, 1959.

Tirole, Jean. The Theory of Industrial Organization. Cambridge, Mass.: MIT Press, 1988.

Von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, N.J.: Princeton University Press, 1944; 3rd ed. 1953.

Weintraub, E. Roy, ed. Toward a History of Game Theory. Durham, N.C.: Duke University Press, 1992. Annual supplement to History of Political Economy.

Robert W. Dimand

Game Theory


Game theory is a branch of mathematics used to analyze competitive situations whose outcomes depend not only on one's own choices, and perhaps chance, but also on the choices made by other parties, or players. Because the outcome of a game is dependent on what all players do, each player tries to anticipate the choices of other players in order to determine his own best choice. How these interdependent strategic calculations are made is the subject of the theory. Game theory was created in practically one stroke with the publication of Theory of Games and Economic Behavior in 1944 by mathematician John von Neumann (1903–1957) and economist Oskar Morgenstern (1902–1977). This work was a monumental intellectual achievement and has given rise to hundreds of books and thousands of articles in a variety of disciplines.

The theory has several major divisions, the following being the most important:

Two-person versus n-person. The two-person theory deals with the optimal strategic choices of two players, whereas the n-person theory (n > 2) mostly concerns what coalitions, or subsets of players, will form and be stable, and what constitutes reasonable payments to their members.

Zero-sum versus nonzero-sum. The payoffs to all players sum to zero (or some other constant) at each outcome in zero-sum games but not in nonzero-sum games, wherein the sums are variable; zero-sum games are games of total conflict, in which what one player gains the others lose, whereas nonzero-sum games permit the players to gain or lose together.

Cooperative versus noncooperative. Cooperative games are those in which players can make binding and enforceable agreements, whereas noncooperative games may or may not allow for communication among the players but do assume that any agreement reached must be in equilibrium; that is, it is rational for a player not to violate it if other players do not, because the player would be worse off if he did.

Games can be described by several different forms, the three most important being:

  1. Extensive (game tree) indicates sequences of choices that players (and possibly chance, according to nature or some random device) can make, with payoffs defined at the end of each sequence of choices.
  2. Normal/strategic (payoff matrix) indicates strategies, or complete plans contingent on other players' choices, for each player, with payoffs defined at the intersection of each set of strategies in a matrix.
  3. Characteristic function indicates values that all possible coalitions (subsets) of players can ensure for their members, whatever the other players do.

These different game forms, or representations, give less and less detailed information about a game (with the sequences in form 1 dropped from form 2, and the strategies to implement particular outcomes in form 2 dropped from form 3) to highlight different aspects of a strategic situation.

Common to all areas of game theory is the assumption that players are rational: They have goals, can rank outcomes (or, more stringently, attach utilities, or values, to them), and choose better over worse outcomes. Complications arise from the fact that there is generally no dominant, or unconditionally best, strategy for a player because of the interdependency of player choices. (Games in which there is only one player are sometimes called games against nature and are the subject of decision theory.)

A game is sometimes defined as the sum-total of its rules. Common parlor games, like chess or poker, have well-specified rules and are generally zero-sum games, making cooperation with the other player(s) unprofitable. Poker differs from chess in being not only an n-person game (though two players can also play it) but also a game of incomplete information, because the players do not have full knowledge of each other's hands, which depend in part on chance.

The rules of most real-life games are equivocal; indeed, the game may be about the rules to be used (or abrogated). In economics, rules are generally better known and followed than in politics, which is why game theory has become the theoretical foundation of economics, especially microeconomics. But game-theoretic models also play a major role in other subfields of economics, including industrial organization, public economics, and international economics. Even in macroeconomics, in which fiscal and monetary policies are studied, questions about setting interest rates and determining the money supply have a strong strategic component, especially with respect to the timing of such actions. It is little wonder that economics, more than any of the other social sciences, uses game theory at all levels.

Game-theoretic modeling has made major headway in political science, including international relations, in the last generation. While international politics is considered to be quite anarchistic, there is certainly some constancy in the way conflicts develop and may, or may not, be resolved. Arms races, for instance, are almost always nonzero-sum games in which two nations can benefit if they reach some agreement on limiting weapons, but such agreements are often hard to verify or enforce and, consequently, may be unstable.

Since the demise of the superpower conflict around 1990, interest has shifted to whether a new balance of power, reminiscent of the political juggling acts of European countries in the nineteenth and early twentieth century, may emerge in different regions or even worldwide. For example, will China, as it becomes more and more a superpower in Asia, align itself with other major Asian countries, like India and Japan, or will it side more with Western powers to compete against its Asian rivals? Game theory offers tools for studying the stability of new alignments, including those that might develop on political-economy issues.

Consider, for example, the World Trade Organization (WTO), whose durability is now being tested by regional trading agreements that have sprung up among countries in the Americas, Europe, and Asia. The rationality of supporting the WTO, or joining a regional trading bloc, is very much a strategic question that can be illuminated by game theory. Game theory also provides insight into how the domestic politics of a country impinges on its foreign policy, and vice versa, which has led to a renewed interest in the interconnections between these two levels of politics.

Other applications of game theory in political science have been made to strategic voting in committees and elections, the formation and disintegration of parliamentary coalitions, and the distribution of power in weighted voting bodies. On the normative side, electoral reforms have been proposed to lessen the power of certain parties (e.g., the religious parties in Israel), based on game-theoretic analysis. Similarly, the voting weights of members of the European Union Council of Ministers, and its decision rules for taking action (e.g., simple majority or qualified majority), have been studied with an eye to making the body both representative of individual members' interests and capable of taking collective action.

As game-theoretic models have become more prominent in political science, they have, at the same time, created a good deal of controversy. Some critics charge that they abstract too much from strategic situations, reducing actors to hyperrational players or bloodless automatons that do not reflect the emotions or the social circumstances of people caught up in conflicts. Moreover, critics contend, game-theoretic models are difficult to test empirically, in part because they depend on counterfactuals that are never observed. That is, they assume that players take into account contingencies that are hard to reconstruct, much less model precisely.

But proponents of game theory counter that the theory brings rigor to the study of strategic choices that no other theory can match. Furthermore, they argue that actors are, by and large, rational: they choose better over worse means, even if the goals that they seek to advance are not always apparent.

When information is incomplete, so-called Bayesian calculations can be made that take account of this incompleteness. The different possible goals that players may have can also be analyzed and their consequences assessed.

Because such reconstruction is often difficult to do in real-life settings, laboratory experiments, in which conditions can be better controlled, are more and more frequently conducted. In fact, experiments that test theories of bargaining, voting, and other political-economic processes have become commonplace in economics and political science. Although they are less common in the other social sciences, social psychology has long used experiments to investigate the choices of subjects in games like prisoner's dilemma. This infamous game captures a situation in which two players have dominant strategies of not cooperating, as exemplified by an arms race or a price war. But doing so results in an outcome worse for both than had they cooperated. Because mutual cooperation is not a Nash equilibrium, however, each player has an incentive to defect from cooperation.

Equally vexing problems confront the players in another well-known game, chicken. Not only is cooperation unstable, but noncooperation leads to a disastrous outcome. It turns out that each player should defect if and only if the other player cooperates, but anticipating when an opponent will do so is no mean feat.

Since the invention of game theory in the mid-1940s, its development has been remarkable. Two Nobel prizes in economics were awarded to a total of five game theorists in 1994 and 2005 (including John Nash of the film A Beautiful Mind fame), but many other recipients of this prize have used game theory extensively. In addition, game-theoretic modeling has progressed rapidly in political scienceand, to a lesser extent, in the other social sciencesas well as in a variety of other disciplines, including biology, business, and law.

SEE ALSO Arms Control and Arms Race; Cold War; Deterrence, Mutual; Nash Equilibrium; Political Economy; Prisoner's Dilemma (Economics)

BIBLIOGRAPHY

Aumann, Robert J., and Sergiu Hart, eds. 1992–2002. Handbook of Game Theory with Economic Applications. 3 vols. Amsterdam, NY: Elsevier.

Brams, Steven J. 1994. Theory of Moves. New York: Cambridge University Press.

Dixit, Avinash, and Susan Skeath. 2005. Games of Strategy. 2nd ed. New York: Norton.

Nasar, Sylvia. 1998. A Beautiful Mind: A Biography of John Forbes Nash Jr., Winner of the Nobel Prize in Economics, 1994. New York: Simon & Schuster.

Osborne, Martin J. 2004. An Introduction to Game Theory. New York: Oxford University Press.

von Neumann, John, and Oskar Morgenstern. 1953. Theory of Games and Economic Behavior. 3rd ed. Princeton, NJ: Princeton University Press.

Steven J. Brams

Game Theory

GAME THEORY

Game theory is a way of reasoning through problems. Although its use can be found throughout history, it was only formalized by the mathematician John von Neumann and the economist Oskar Morgenstern in the 1940s. Game theory takes the logic behind complex strategic situations and simplifies it into models that can be used to explain how individuals reach decisions to act in the real world. Game theory models abstract away from the personal, interpersonal, and institutional details of problems to show how individuals or groups may behave under a given set of conditions. This modeling allows a researcher or planner to get at the root of complex human interactions. The major assumption underlying most game theory is that people and groups tend to work toward goals that benefit them. That is, they have ends in mind when they take actions.

The most important application of game theory to public health occurs when the actions of individuals or groups affect the health of others. On some occasions, individual or group strategies for betterment lead to inferior outcomes for the greater population.

Using game theory to model public health problems is not different from using it to model any other type of problem or decision-making scenario. One particularly illustrative game is called the Prisoners' Dilemma, illustrated below. This game is often used to show the need for public resources and services. That is, sometimes individuals who choose certain strategies end up with an inferior outcome because of the incentives they were presented with. In public health, the problem becomes apparent quickly.

In order to place these events into a context in which game theory can be employed, four commonly defined criteria are used:

  • Players are the decision makers in the game; a player can be an individual, group, or population that must decide how to use the resources available within given constraints.
  • Rules are the constraints; all activity in the game is defined by the rules, which give the model an analytical structure whose validity can be tested in the real world.
  • Strategies are the courses of action open to the players in a game; players may choose their action dependent upon different situations they are presented with.
  • Payoffs are the final returns to players, which are usually stated in terms that are objectively understood by each player of the game.

Consider a situation in which two groups of people border a malarial swamp. One group is named Alpha and the other is Beta. The swamp causes both groups to be plagued by malaria and other diseases. The problem could easily be remedied by draining the swampland. However, neither group is willing to act first because no incentives exist to take on the hard labor of draining the swamp alone. The greater utility that would be conveyed to both groups is lost because there is no incentive for either individual group to act.

THE SWAMP: A PRISONERS' DILEMMA

The game called Prisoners' Dilemma can be modeled using game theory. The game matrix shown in Table 1 is an example of a common tool in game theory modeling. The players are named in the outer boxes, the rule is that the players may not communicate before simultaneously acting, the strategies are to contribute or not contribute, and the payoffs are in the innermost boxes.

Table 1

The Swamp: A Prisoner's Dilemma (payoffs listed as Alpha, Beta)

                                Beta
                      Contribute        Not contribute
  Alpha
    Contribute           1, 1               -1, 2
    Not contribute       2, -1               0, 0

source: Courtesy of author.

Look at the situation as it is presented to the Alpha group. They realize that the outcome depends on the action the Beta group takes. If Beta contributes, it pays Alpha to avoid contributing, for in that instance, Alpha will benefit twice as much as if they worked with Beta to drain the swamp (2 points rather than 1). The reason the payoff for not contributing is greater is that Alpha will receive the benefit of draining the swamp without doing any of the work. However, if Beta does not contribute, Alpha still benefits by not contributing rather than contributing alone (the payoff is 0 rather than -1). That is, Alpha will choose not to bear the costs of draining the swamp alone.

The Alpha group reasons that regardless of Beta's action, their own best action is to not help drain the swamp. Because Beta's options are symmetric to Alpha's, they also reason that they benefit most through inaction. As a result, the swamp does not get drained, and both groups end up with an inferior outcome. This outcome is a special kind of equilibrium called a Nash equilibrium: given the strategy chosen by the opposing group, neither group can improve its own payoff by changing its strategy.
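
A direct check of the matrix in Table 1 confirms this reasoning; the short sketch below tests each pair of strategies and keeps only those from which neither group can improve its payoff by acting differently on its own.

    # Verifying the reasoning above against the Table 1 payoffs
    # (entries are (Alpha, Beta); C = contribute, N = not contribute).
    payoff = {("C", "C"): (1, 1),  ("C", "N"): (-1, 2),
              ("N", "C"): (2, -1), ("N", "N"): (0, 0)}

    def is_equilibrium(a, b):
        alpha_ok = all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in ("C", "N"))
        beta_ok = all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in ("C", "N"))
        return alpha_ok and beta_ok

    print([s for s in payoff if is_equilibrium(*s)])
    # -> [('N', 'N')]: neither group contributes and the swamp is never drained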

PUBLIC HEALTH IMPLICATIONS

The implication for public health is that the best strategies for individuals or groups are sometimes not the best strategies for everyone taken as a whole. Public health professionals need to be vigilant to these special circumstances and use interventions to create better incentive systems. For example, Alpha and Beta could each be levied a tax, by some authority over both, to pay for the draining of the swamp. The disincentives for progress would then be circumvented and both groups would benefit.

Game theory has been used to model a number of subjects important to public health, including organ donation, ethics, and the patient-provider relationship. Game theory provides a strong modeling device for public health professionals and illustrates the need of public intervention when the incentives of individuals impede progress for the group.

Peter S. Meyer

Nancy L. Atkinson

Robert S. Gold

(see also: Community Health; Community Organization; Ethics of Public Health)

Bibliography

Hirshleifer, J., and Glazer, A. (1992). Price Theory and Applications. Englewood Cliffs, NJ: Prentice Hall.

Nash, J. (1951). "Non-Cooperative Games." Annals of Mathematics 54:286–295.

Nicholson, E. (1998). Microeconomic Theory. Fort Worth, TX: Harcourt Brace.

O'Brien, B. J. (1988). "A Game-Theoretic Approach to Donor Kidney Sharing." Social Science and Medicine 26(11):1109–1116.

Parkin, M. (1990). Microeconomics. New York: Addison-Wesley.

Schneiderman, K. J.; Jecker, N. S.; Rozance, C.; Klotzko, A. J.; and Friedl, B. (1995). "A Different Kind of Prisoner's Dilemma." Cambridge Quarterly of Healthcare Ethics 4(4):530–545.

Von Neumann, J., and Morgenstern, O. (1944). Theory of Games and Economic Behavior. New York: Wiley.

Wynia, M. K. (1997). "Economic Analyses, the Medical Commons, and Patients' Dilemmas: What Is the Physician's Role?" Journal of Investigative Medicine 45(2):35–43.

Game Theory

Game theory

Game theory is a branch of mathematics concerned with the analysis of conflict situations. The term conflict situation refers to a condition involving two or more people or groups of people trying to achieve some goal. A simple example of a conflict situation is the game of tic-tac-toe. In this game, two people take turns making Xs or Os in a #-shaped grid. The first person to get three Xs or Os in a straight line wins the game. It is possible, however, that neither person is able to achieve this goal, and the game then ends in a tie or a stand-off.

The variety of conditions described by the term conflict situation is enormous. They range from board and card games such as poker, bridge, chess and checkers; to political contests such as elections; to armed conflicts such as battles and wars.

Mathematicians have long been intrigued by games and other kinds of conflict situations. Is there some mathematical system for winning at bridge? at poker? in a war? One of the earliest attempts to answer this question was probability theory, developed by French mathematician and physicist (one who studies the science of matter and energy) Blaise Pascal (1623–1662) and his colleague Pierre de Fermat (1601–1665). At the request of a gentleman gambler, Pascal and Fermat explored the way to predict the likelihood of drawing certain kinds of hands (a straight, a flush, or three-of-a-kind, for example) in a poker game. In their attempts to answer such questions, Pascal and Fermat created a whole new branch of mathematics.

Words to Know

Game: A situation in which a conflict arises between two or more players.

Nonzero-sum game: A game in which the amount lost by all players is not equal to the amount won by all other players.

Zero-sum, two-player games: A game in which the amount lost by one player is equal to the amount won by the other player.

The basic principles of game theory were first suggested by Hungarian American mathematician and physicist John von Neumann (1903–1957) in 1928. The theory received little attention until 1944, when von Neumann and economist Oskar Morgenstern (1902–1977) wrote the classic treatise Theory of Games and Economic Behavior. Since then, many economists and scientists have expanded and applied the theory.

Characteristics of games

The mathematical analysis of games begins by recognizing certain basic characteristics of all conflict situations. First, games always involve at least two people or two groups of people. In most cases, the game results in a win for one side of the game and a loss for the other side. Second, games always begin with certain set conditions, such as the dealing of cards or the placement of soldiers on a battlefield. Third, choices always have to be made. Some choices are made by the players themselves ("Where shall I place my next X?") and some choices are made by chance (such as rolling dice). Finally, the game ends after a set number of moves and a winner is declared.

Types of games

Games can be classified in a variety of ways. One method of classification depends on the amount of information players have. In checkers and chess, for example, both players know exactly where all the pieces are located and what moves they can make. There is no hidden information that neither player knows about. Games such as these are known as games of perfect information.

The same cannot be said for other games. In poker, for example, players generally do not know what cards their opponents are holding, and they do not know what cards remain to be dealt. Games like poker are known as games of imperfect knowledge. The mathematical rules for dealing with these two kinds of games are very different. In one case, one can calculate all possible moves because everything is known about a situation. In the other case, one can only make guesses based on probability as to what might happen next. Nonetheless, both types of games can be analyzed mathematically and useful predictions about future moves can be made.

Games also can be classified as zero-sum or nonzero-sum games. A zero-sum game is one in which whatever one player wins, the other player loses, so the total of gains and losses is always zero. For example, suppose that two players decide to match pennies. The rule is that each player flips a penny. If both pennies come up the same (both heads or both tails), player A wins both pennies. If the pennies come up different (one head and one tail), player B wins both pennies. This game is a zero-sum game because one player wins everything (both pennies) on each flip, while the other player loses everything. Game theory often begins with the analysis of zero-sum games between two players because they are the simplest type of conflict situation to analyze.
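As a rough illustration, the penny-matching game just described can be written out as a small payoff table and examined by brute force. The following Python sketch is my own illustration, not part of the encyclopedia entry; the payoff values simply restate the rules given above.

# Payoff to player A, in pennies, for each pair of choices (the rules above).
payoff_to_A = {
    ("heads", "heads"): +1,   # pennies match: A takes both
    ("tails", "tails"): +1,
    ("heads", "tails"): -1,   # pennies differ: B takes both
    ("tails", "heads"): -1,
}

def expected_payoff(p_a, p_b):
    """Expected gain to A when A and B each show heads with probability p_a and p_b."""
    total = 0.0
    for (a, b), value in payoff_to_A.items():
        prob_a = p_a if a == "heads" else 1 - p_a
        prob_b = p_b if b == "heads" else 1 - p_b
        total += prob_a * prob_b * value
    return total

# Showing heads half the time guarantees A an expected gain of exactly 0,
# no matter what B does; that is the best A can guarantee in this zero-sum game.
for p_b in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p_b, expected_payoff(0.5, p_b))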

Most conflict situations in real life are not zero-sum games. At the end of a game of Monopoly, for example, one player may have most of the property, but a second player may still own some property on the board. Also, the game may involve more than two people with almost any type of property distribution.

Application of game theory

Game theory is a powerful tool that can suggest the best strategy or outcome in many different situations. Economists, political scientists, military planners, and sociologists have all used it to describe situations in their various fields. A more recent application of game theory has been to the study of animal behavior in nature. Here, researchers apply the notions of game theory to describe many aspects of animal behavior, including aggression, cooperation, and hunting methods. Data collected from these studies may someday lead to a better understanding of our own behavior.

"Game Theory." UXL Encyclopedia of Science. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/game-theory-2

Game Theory

Game Theory. Within national security analysis, game theory deals with parties making choices that influence one another's interests, where all of them know that such choices are being made. Using mathematics, it analyzes the think/doublethink logic of how each adversary sees the other, sees the other's view of it, and so on. Unlike war gaming, in which real players assume roles, it involves only mathematical calculations.

John von Neumann and Oskar Morgenstern laid the foundation of game theory in the 1940s. Its application to military problems has been limited but interesting. One World War II example involved submarine warfare. A submarine is passing through a corridor patrolled by submarine‐hunting planes. The submarine must spend some time traveling on the surface to recharge its batteries. The corridor widens and narrows, and the submarine is easier to detect in the narrower parts, with less sea for the hunters to scan. Where should the submarine surface? Where should the hunters focus their effort? The premise that the wide part is the one logical place is self‐refuting. If it were true, the hunters would deduce that, would head there and leave the narrower part alone, making the narrower part better. Choosing the narrow part likewise leads to a contradiction. Game theory advises a “mixed” strategy—do one or the other unpredictably, using exact probabilities calculated from the ease of detection in each section.
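The calculation behind that "mixed" strategy can be sketched in a few lines. The detection probabilities below are invented for illustration (the entry gives no figures), and the sketch assumes the submarine is effectively safe whenever the hunters patrol the other section.

# Probability the submarine is detected, for each combination of choices.
# These numbers are assumptions made up for the example.
d_narrow = 0.8   # both choose the narrow section: easy to spot
d_wide = 0.3     # both choose the wide section: harder to spot
# If the hunters patrol one section while the submarine surfaces in the other,
# the detection probability is taken to be 0.

# The optimal mix makes the opponent indifferent between the two sections:
#   d_narrow * p = d_wide * (1 - p)
p_narrow = d_wide / (d_narrow + d_wide)

print(f"Choose the narrow section with probability {p_narrow:.2f}")             # about 0.27
print(f"Value of the game (detection probability): {d_narrow * p_narrow:.2f}")  # about 0.22

With these particular numbers both sides happen to use the same probability, because the detection chance is assumed to be zero whenever they pick different sections; with other assumptions the two probabilities would differ.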

Other applications have addressed the problems of when an interceptor aircraft closing on a bomber should open fire, how to allocate antimissile defenses to targets of varying value, and when to fire intercontinental missiles to avoid Soviet nuclear explosions in the stratosphere.

These problems involved specific wartime encounters. Another area is broad strategy. A prevalent misconception is that game theory set the principles of nuclear strategy. In the 1940s, planners hoped that the new mathematics would do this, but strategic problems proved too complex. It was hard even to specify each side's goals. Game theory has not given exact strategic advice, but it has clarified general principles. In a model of crisis confrontation, for example, one side wants to show the adversary that it values winning very highly, to induce the other side to back down. It uses the tactic of sacrifice‐to‐show‐resolve—make some costly military deployment so the adversary will conclude that only a determined government would pay such a cost to prove its determination. The model precisely illustrates the skeletal structure of strategic concepts such as showing resolve or enhancing credibility. By the 1990s, a sophisticated body of academic work had addressed deterrence, escalation, war alliances, and the verification of arms treaties.
[See also Disciplinary Views of War: Political Science and International Relations; Operations Research; Strategy; War Plans.]

Bibliography

Melvin Dresher, Games of Strategy: Theory and Applications, 1961.
Barry O'Neill, "A Survey of Game Theory Studies of Peace and War," in Robert Aumann and Sergiu Hart, eds., Handbook of Game Theory, 1994.

Barry O'Neill

"Game Theory." The Oxford Companion to American Military History. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/game-theory

game theory

game theory The general theory of the rational behaviour of two or more people in circumstances where their interests are, at least in part, conflicting. In Theory of Games and Economic Behavior (2nd ed., 1947), John von Neumann and Oskar Morgenstern attempted to develop a theory covering both zero-sum games and non-zero-sum games. In this context a ‘game’ is any social situation where interaction occurs between at least two ‘players’ who are competing with each other at least some of the time. Such situations might include marriage, war, rivalry between political parties, the labour-market, and more specifically employer–worker negotiations. The key contribution of game theory is to provide an abstract mathematical theory to model what choices are possible, or likely, in situations with certain common features (such as the number of participants, or players, and whether the ‘prize’ is of fixed size or is variable).

Zero-sum games represent circumstances in which the gain of one participant is the loss of another; that is, situations where the size of the ‘cake’ is fixed, and everyone seeks to get as large a slice of it as possible. Two-person zero-sum games were the first to be studied by von Neumann, who showed that in certain cases there is a relatively stable equilibrium point (or minimax-maximin combination), at which one player's optimum choice meets the other's.
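In matrix form, such an equilibrium point is a "saddle point": an entry that is at once the minimum of its row and the maximum of its column, so that neither player can do better by switching. A minimal Python sketch (my illustration, not from the dictionary entry):

def saddle_points(payoffs):
    """Cells of a zero-sum payoff matrix (row player's gains) that are
    the minimum of their row and the maximum of their column."""
    points = []
    for i, row in enumerate(payoffs):
        for j, value in enumerate(row):
            column = [payoffs[k][j] for k in range(len(payoffs))]
            if value == min(row) and value == max(column):
                points.append((i, j))
    return points

# Cell (1, 1) is a saddle point with value 2: the row player's maximin
# and the column player's minimax coincide there.
print(saddle_points([[4, 1, 3],
                     [2, 2, 5]]))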

In non-zero-sum or non-constant-sum games, it may pay all or some of the participants to co-operate actively to increase the total benefits achieved, so analysis focuses on the formation of coalitions and their outcomes. In effect, collaboration increases the size of the cake, but participants cannot always predict their rivals' choices. The most famous examples are the well-known Prisoner's Dilemma and the more recent Problem (or Tragedy) of the Commons, both of which clearly capture situations in which choices that maximize each individual's self-interest produce the worst possible outcome overall. Only if each participant chooses what is in the collective interest, rather than narrow self-interest, will the collective optimum be achieved. In most laboratory experiments based on these games, nearly two-thirds of all subjects make the selfish, or distrustful, choice; the co-operative outcome is achieved in only a small minority of cases. However, these games have also been run on a vast scale using computer simulations to assess the effectiveness of various strategies pitted against each other; and, on this longer time-horizon, co-operation was found to evolve even in a society of completely self-interested individuals.
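The simulations mentioned above repeat the Prisoner's Dilemma many times and let strategies play against one another. The Python sketch below is an illustration with assumed payoff values, not a reproduction of any published experiment; it shows why co-operation can pay over a long horizon: two reciprocating "tit-for-tat" players each end up far better off than two habitual defectors, even though defection narrowly wins any single head-to-head pairing.

# Payoffs per round: (row player, column player); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        gain_a, gain_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))      # (600, 600)
print("defect vs defect:          ", play(always_defect, always_defect))  # (200, 200)
print("tit-for-tat vs defect:     ", play(tit_for_tat, always_defect))    # (199, 204)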

While few social scientists use the mathematical models of game theory, the general theory and concepts have already had a profound effect on all the social sciences which study situations of conflict, competition, and potential co-operation (notably, for example, in studies of the military and of markets). See Robert Gibbons, A Primer in Game Theory (1992) and Kenneth G. Binmore, Fun and Games (1992).

"game theory." A Dictionary of Sociology. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/game-theory

games, theory of

theory of games, group of mathematical theories first developed by John von Neumann and Oskar Morgenstern. A game consists of a set of rules governing a competitive situation in which from two to n individuals or groups of individuals choose strategies designed to maximize their own winnings or to minimize their opponents' winnings; the rules specify the possible actions for each player, the amount of information received by each as play progresses, and the amounts won or lost in various situations. Von Neumann and Morgenstern restricted their attention to zero-sum games, that is, to games in which no player can gain except at another's expense.

This restriction was overcome by the work of John F. Nash during the early 1950s. Nash mathematically clarified the distinction between cooperative and noncooperative games. In noncooperative games, unlike cooperative ones, no outside authority assures that players stick to the same predetermined rules, and binding agreements are not feasible. Further, he recognized that in noncooperative games there exist sets of optimal strategies (so-called Nash equilibria) such that no player can benefit by unilaterally changing his or her strategy if the strategies of the other players remain unchanged. Because noncooperative games are common in the real world, the discovery revolutionized game theory. Nash further recognized that such an equilibrium solution would be optimal in cooperative games as well. He suggested approaching the study of cooperative games via their reduction to noncooperative form and proposed a methodology, called the Nash program, for doing so. Nash also introduced the concept of bargaining, in which two or more players collude to produce a situation where failure to collude would make each of them worse off.
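The defining property of a Nash equilibrium, that no player gains by changing strategy alone, is easy to check mechanically for a small game given as payoff tables. The following is a minimal sketch of my own, using the familiar Prisoner's Dilemma payoffs as the example.

# Check whether a pair of pure strategies (row, col) is a Nash equilibrium:
# neither player can improve by deviating while the other stays put.
def is_nash_equilibrium(payoff_a, payoff_b, row, col):
    best_row = all(payoff_a[row][col] >= payoff_a[r][col] for r in range(len(payoff_a)))
    best_col = all(payoff_b[row][col] >= payoff_b[row][c] for c in range(len(payoff_b[0])))
    return best_row and best_col

# Prisoner's Dilemma payoffs (index 0 = cooperate, 1 = defect).
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
for r in range(2):
    for c in range(2):
        print((r, c), is_nash_equilibrium(A, B, r, c))   # only (1, 1), defect/defect, qualifies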

The theory of games applies statistical logic to the choice of strategies. It is applicable to many fields, including military problems and economics. The Nobel Memorial Prize in Economic Sciences was awarded to Nash, John Harsanyi, and Reinhard Selten (1994), to Robert J. Aumann and Thomas C. Schelling (2005), and to Lloyd Shapley and Alvin Roth (2012) for work in applying game theory to aspects of economics.

See J. Von Neumann and O. Morgenstern, Theory of Games and Economic Behavior (3d ed. 1953); D. Fudenberg and J. Tirole, Game Theory (1994); M. D. Davis, Game Theory: A Nontechnical Introduction (1997); R. B. Myerson, Game Theory: Analysis of Conflict (1997); J. F. Nash, Jr., Essays on Game Theory (1997); A. Rapoport, Two-Person Game Theory (1999).

"games, theory of." The Columbia Encyclopedia, 6th ed. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/games-theory

game theory

game theory A mathematical theory of decision-making by participants with conflicting interests in a competitive situation, originated by Émile Borel in 1921 and rigorously established by John von Neumann in 1928. The theory attempts to gain insight into economic and other competitive situations by isolating their essential strategic aspects, which occur in their simplest form in games of strategy.

In a two-player game, as defined by the theory, each participant has a choice of plays for which there are several possible outcomes, gains or losses, depending on the opponent's choice. An optimum strategy states the relative frequency with which a player's choices should be used, so as to maximize his average gain (or minimize his average loss). The problem of determining the optimum strategy can be formulated as a problem in linear programming. Generalizations to n-person games are included in the theory.
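The linear-programming formulation mentioned above can be carried out with an off-the-shelf solver. The sketch below uses scipy.optimize.linprog purely as an illustration (the dictionary entry names no particular method or software): the row player's mixed strategy and the value of the game are found by maximizing the guaranteed average gain v.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy for the row player of payoff matrix A, and the game value."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (strategy probabilities) and v (game value); maximize v.
    c = np.zeros(m + 1); c[-1] = -1.0                      # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - sum_i A[i][j] x_i <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]              # probabilities nonnegative, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the optimal mix is to play each side half the time, and the value is 0.
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
print(strategy, value)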

"game theory." A Dictionary of Computing. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/computing/dictionaries-thesauruses-pictures-and-press-releases/game-theory

game theory

game theory The theory that relationships within a community (of organisms or of traits possessed by those organisms) can be regarded as a contest (i.e. a game) in which each participant seeks to secure some advantage. Numerical values can be attached to the gains and losses involved, allowing the contest to be simulated mathematically, usually by computer modelling. The application of game theory has produced many insights into ecological relationships and the significance of particular aspects of animal behaviour.
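One standard way of attaching numerical values to such contests is the hawk-dove game, used here purely as an illustration (the dictionary entry does not name it). The Python sketch below simulates, with made-up values for the contested resource and the cost of fighting, how the mix of aggressive and non-aggressive behaviour settles down under a simple replicator-style update.

# Assumed, illustrative values: V = value of the contested resource, C = cost of a fight.
V, C = 4.0, 6.0
# Average payoffs: rows = focal animal (hawk, dove), columns = opponent (hawk, dove).
PAYOFF = [[(V - C) / 2, V],
          [0.0,         V / 2]]

p = 0.1   # initial fraction of hawks in the population
for generation in range(1000):
    fitness_hawk = p * PAYOFF[0][0] + (1 - p) * PAYOFF[0][1]
    fitness_dove = p * PAYOFF[1][0] + (1 - p) * PAYOFF[1][1]
    mean_fitness = p * fitness_hawk + (1 - p) * fitness_dove
    p += 0.1 * p * (fitness_hawk - mean_fitness)   # small replicator step

print(round(p, 2))   # settles near V / C, about 0.67: the stable mix of hawks and doves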

"game theory." A Dictionary of Ecology. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/game-theory


game theory

game theory In mathematics, analysis of problems involving conflict. Initially it was based on the assumption that participants in conflict adopt strategies that maximize personal gain and minimize loss. Later, more complex motivations, such as morality, were included. Applications of game theory include business management, sociology, economics and military strategy. The theory was introduced by Émile Borel and developed by John von Neumann in 1928.

"game theory." World Encyclopedia. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/game-theory

game theory

game the·o·ry (also games the·o·ry) • n. the branch of mathematics concerned with the analysis of strategies for dealing with competitive situations where the outcome of a participant's choice of action depends critically on the actions of other participants. Game theory has been applied to contexts in war, business, and biology. Compare with decision theory.

"game theory." The Oxford Pocket Dictionary of Current English. Encyclopedia.com, July 27, 2017. http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/game-theory