# Probability, Subjective

The *subjective* or *personalist* theory of probability views probability as the likelihood that a particular individual attaches to the occurrence of an event or the truth of a proposition, rather than as the frequency with which a particular observation would occur in a long sequence of repetitions.

In his *Treatise on Probability*, published in 1921, but written as a Cambridge fellowship dissertation before World War I (1914–1918), John Maynard Keynes (1883–1946) distinguished a probability distribution over possible outcomes from an individual’s degree of belief that a particular probability distribution was in fact the true probability distribution, with the degree of belief reflecting the weight of available, relevant evidence. Keynes viewed probability as an objective relation that would be perceived the same way by any rational person with the same information. Émile Borel (1871–1956) and Frank P. Ramsey (1903–1930) responded by arguing for the more subjective interpretation that a person can have any degree of belief in any given statement on any evidence, and those beliefs will still be consistent and coherent, provided only that the person’s subjective probabilities attached to all possible outcomes sum to one and are bounded by zero and one, with a probability of *p* attached to statement or event *S* implying a probability of 1–*p* attached to the denial of *S*. (The relevant essays by Borel and Ramsey are reprinted in Kyburg and Smokler [1964]; see also Bruno de Finetti’s 1937 monograph and Bernard O. Koopman’s 1940 article in that volume.) In his memorial article on his friend Ramsey, Keynes accepted Ramsey’s criticism on the issue of the subjectivity of degrees of belief.

Like Frank Knight (1885–1972) in *Risk, Uncertainty, and Profit* (1921), Keynes distinguished between *risk* (where outcomes are random, but the probability distribution of outcomes is known) and *uncertainty* (where the probability distribution is not known, and even a complete list of possible outcomes may not be possible). Risk is insurable, but uncertainty is not. Knight saw entrepreneurial profit as the reward for bearing uncertainty. Knight (1921, pp. 250–251) held that insurance was feasible even in situations with little objective data provided that professionals in the relevant field could make “conservative and competent” estimates (even if insurance was imperfect in such situations because of moral hazard), but that there remained some uncertainty that was “uninsurable (because unmeasurable and this because unclassifiable),” when it was not possible for businessmen to even list all the possible outcomes. According to Keynes (1921), not all degrees of belief are numerically measurable or even comparable. Writing on long-period expectations in chapter 12 of his *General Theory of Employment, Interest, and Money* (1936), Keynes invoked fundamental, uninsurable uncertainty about the prospects of another world war, technological breakthroughs, or the position of property owners in the future social order to explain the volatility of private investment spending: expectations of the profitability of investment projects are guesses about an unknown and unknowable future, and are subject to drastic revision as scraps of new information become available. Under nonergodic conditions of true uncertainty, past observed frequencies are highly imperfect guides to future events, and a universe of discoverable regularities that can be expected to continue is a misleading analogy (Davidson 1991). Other chapters of Keynes’s *General Theory*, however, proceeded as though discoverable regularities exist.

Leonard J. Savage (1917–1971) and I. J. Good developed the personalist view of probability advanced by Ramsey, de Finetti, and Koopman, and explored its implications for statistics. In this view, probability is no more than an index of a person’s degree of belief in a statement (or in the occurrence of a future event), and reflects the limitations of a person’s information, which may or may not reflect any inherent randomness in the world. Good and Savage emphasized Thomas Bayes’s (d. 1761) theorem or rule as the way to update one’s belief in the probability of statement *S* in light of some observed data. The probability that the data is observed and that *S* is true can be expressed either as the probability of the data being observed given that *S* is true multiplied by the prior probability that *S* is true, Pr(data|S)Pr(S), or as the probability of the data being observed multiplied by the probability of *S* being true given that the data have been observed, Pr(data)Pr(S|data). The posterior probability that *S* is true given that the data have been observed, Pr(S|data), can be solved for as Pr(data|S)Pr(S)/Pr(data). The approach pioneered by Good and Savage is known as Bayesianism (see Joyce 2004) and views rational behavior as the maximization of subjective expected utility (with expected utility being linear in probabilities) subject to probabilistic beliefs that have been updated according to Bayes’s theorem. To the Bayesian, there is no distinction between uncertainty and risk: using available evidence in forming updated, posterior probabilities (over all possible, mutually exclusive outcomes, with “any other outcome” as one of the possibilities, so that the list is exhaustive) does not presume that a true, objective probability distribution will ever be achieved.
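The updating rule described above can be illustrated with a minimal numerical sketch. The prior and likelihood values here are hypothetical, chosen only to make the arithmetic of Bayes’s theorem concrete:

```python
# Hypothetical numbers: a prior degree of belief Pr(S) = 0.5 that
# statement S is true, with likelihoods Pr(data|S) = 0.8 and
# Pr(data|not-S) = 0.2 for the observed data.
prior_S = 0.5
p_data_given_S = 0.8
p_data_given_not_S = 0.2

# Total probability of observing the data, summed over S and not-S:
p_data = p_data_given_S * prior_S + p_data_given_not_S * (1 - prior_S)

# Posterior degree of belief: Pr(S|data) = Pr(data|S) Pr(S) / Pr(data)
posterior_S = p_data_given_S * prior_S / p_data
print(posterior_S)  # 0.8
```

Observing data four times more likely under *S* than under its denial raises the degree of belief in *S* from 0.5 to 0.8; the posterior and its complement still sum to one, as coherence requires.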

Various paradoxes, such as the Allais paradox and Ellsberg paradox, have been observed, in which people make choices in ways that violate Savage’s axioms for rationality in the sense of maximization of expected utility given coherent and consistent beliefs about probabilities (Machina [1987] and the extensive references given there, as well as Jallais and Pradier [2005]). Experiments conducted by the psychologists Daniel Kahneman and Amos Tversky (1937–1996), for which Kahneman received the 2002 Nobel Prize in economics, revealed framing effects, in which choices made by subjects depend on how questions are put: in particular, someone may assign probability *p* to a statement (or event) *S*, yet assign some probability other than 1–*p* to not- *S*.

Daniel Ellsberg (1961) showed experimental subjects two urns, one with fifty red balls and fifty black balls and the other with one hundred balls, an unknown number red and the rest black. Offered a prize for drawing a red ball, subjects strictly preferred to draw from the first urn, yet offered a prize for drawing a black ball, they again strictly preferred to draw from the first urn, a result not consistent with any subjective probability assigned to drawing a red ball from the second urn. Several commentators have interpreted such results as displaying aversion to uncertainty or ambiguity. Savage responded to Maurice Allais’s counterexample (in which Savage himself answered twenty questions from Allais in ways that violated his own axioms) by reinterpreting his axioms as a normative theory, which should convince anyone to whom it was explained suitably, rather than as a positive theory of rational behavior (Jallais and Pradier 2005). Peter Fishburn, David Schmeidler, and Robert Sugden, among others, have dealt with the observed paradoxes of choices involving subjective probability by generalizing expected utility theory, dropping the standard additivity or compounding rules of probability theory (Machina 1987).
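The inconsistency in the urn experiment can be made explicit with a short sketch. If a subject assigns some subjective probability *q* to drawing red from the ambiguous urn, strictly preferring the 50/50 urn for a red prize requires *q* < 0.5, while strictly preferring it again for a black prize requires 1 − *q* < 0.5; no *q* satisfies both. The grid search below is only an illustrative check of that algebra:

```python
# Sketch: no subjective probability q of drawing red from the
# ambiguous urn rationalizes both of Ellsberg's observed choices.
# Preferring the 50/50 urn for a red prize implies q < 0.5;
# preferring it again for a black prize implies 1 - q < 0.5.
def consistent(q):
    prefers_urn1_for_red = q < 0.5
    prefers_urn1_for_black = (1 - q) < 0.5
    return prefers_urn1_for_red and prefers_urn1_for_black

# Scan q over [0, 1] in steps of 0.01: no value satisfies both.
print(any(consistent(q / 100) for q in range(101)))  # False
```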

## BIBLIOGRAPHY

Bayes, Thomas. 1763. An Essay Towards Solving a Problem in the Doctrine of Chances, with Richard Price’s Foreword and Discussions. *Philosophical Transactions of the Royal Society of London* 53: 370–418. Reprinted with a biographical note by G. A. Barnard in *Biometrika* 45 (1958): 293–315.

Davidson, Paul. 1991. Is Probability Theory Relevant for Uncertainty? A Post Keynesian Perspective. *Journal of Economic Perspectives* 5 (1): 129–143.

Ellsberg, Daniel. 1961. Risk, Ambiguity, and the Savage Axioms. *Quarterly Journal of Economics* 75 (4): 643–669.

Jallais, Sophie, and Pierre-Charles Pradier. 2005. The Allais Paradox and Its Immediate Consequences for Expected Utility Theory. In *The Experiment in the History of Economics*, ed. Philippe Fontaine and Robert Leonard, 25–49. London and New York: Routledge.

Joyce, James M. 2004. Bayesianism. In *The Oxford Handbook of Rationality*, ed. Alfred R. Mele and Piers Rawling, 132–155. Oxford: Oxford University Press.

Keynes, John Maynard. 1921. *A Treatise on Probability*. London: Macmillan.

Keynes, John Maynard. 1936. *The General Theory of Employment, Interest, and Money*. London: Macmillan.

Knight, Frank H. 1921. *Risk, Uncertainty, and Profit*. Boston: Houghton Mifflin.

Kyburg, Henry E., Jr., and Howard E. Smokler, eds. 1964. *Studies in Subjective Probability*. New York: Wiley.

Machina, Mark J. 1987. Choice Under Uncertainty: Problems Solved and Unsolved. *Journal of Economic Perspectives* 1 (1): 121–154.

Savage, Leonard J. 1972. *The Foundations of Statistics*. 2nd ed. New York: Dover.

*Robert W. Dimand*
