Uncertainty

The privative concept of uncertainty is more important in science, technology, and ethics than its positive root, certainty. (There is no entry in the encyclopedia on certainty.) This is the case for two reasons: Uncertainty is more common than certainty, and the implications of uncertainty for human action are more problematic than those of certainty. Uncertainty in science or engineering appears to call for an ethical assessment; uncertainty in ethics is a cause for moral concern. Nevertheless, before discussing uncertainty, it is useful to begin with some consideration of certainty, the positive notion from which it is derived.


Certainty and Uncertainty in History

Concern for certainty as a distinct issue emerges at the same time as modern natural science. In premodern philosophy and science, it is difficult to find any term or concept that is strictly analogous. The Latin certus, the etymological root of certainty, is from the verb cernere, meaning to decide or determine; the Greek cognate krinein means to separate, pick out, decide, or judge. This sense remains in English when speaking of a certain X, indicating one item picked out from a group.

The concept of certainty in something approaching the modern sense is first given extended analysis in relation to religious faith. Faith, according to Augustine, is more certain than other forms of knowledge. Thomas Aquinas replies (Quaestiones disputatae de veritate, q. 14) that faith is psychologically but not epistemologically more certain than knowledge. Falling between knowledge and opinion in its degree of certainty, faith is defined as "an act of the intellect assenting to divine truth at the command of the will moved by the grace of God" (Summa theologiae II-II, q. 1). Moreover the certainty of faith provides a basis for moral judgment that is more secure than any provided by natural knowledge. Through faith, ethics takes on obligations of a stronger character than would otherwise be possible.

From theology, certainty becomes an issue for science when philosophers such as Francis Bacon and René Descartes argue for seeking cognitive certainty not through faith but through new methodologies. As interpreted by John Dewey in The Quest for Certainty (1929), "The quest for certainty is a quest for a peace which is assured, an object which is unqualified by risk and the shadow of fear which action casts" (Dewey, p. 7). But the certainty and security originally sought through religious acceptance or propitiation of the gods are, in the early twenty-first century, commonly pursued by means of technology and science. Extending Dewey, it is noteworthy that significant worries about the lack of certainty became prominent only as the new methods began to succeed, raising expectations of still further achievement. The pursuit of certainty through science and technology has thus acquired a sense of ethical obligation.

The quest for certainty implies the presence of uncertainty. Although this could not have been said prior to the modern period, it is now common to describe all human action as taken in the context of uncertainty, and insofar as this is the case, uncertainty is a locus of ethical discourse and conflict. Two modern senses of uncertainty are basic: Although most often treated as incomplete knowledge and applied to propositions, uncertainty can also be a psychological state. This distinction is important because perceived uncertainties may or may not reflect the actual state of incompleteness in knowledge. Perceptions of uncertainty may themselves be uncertain.


Uncertainty in Science

Characterizing and quantifying uncertainty is a core activity of science. Uncertainty emerges from research methodologies themselves, from the inherent characteristics of the processes and phenomena being studied, from incomplete or imperfect understanding, and from the contexts within which human beings seek to understand their surroundings. These sources of uncertainty may be understood, but they can never be eliminated. Uncertainty is always present to some degree in scientific knowledge, and in our formal knowledge of the world. This phenomenon is most famously embodied in Heisenberg's Uncertainty Principle, which states that the location and momentum of subatomic particles—the fundamental components of existence—can never simultaneously be known with complete accuracy.
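In standard notation (added here for clarity; the entry states the principle only in words), the principle bounds the product of the uncertainties in position and momentum by Planck's reduced constant:

```latex
% Heisenberg uncertainty relation: position-momentum form
\[ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} \]
```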

Uncertainty is conceptually and practically distinct from fallibilism, or the notion that all scientific knowledge may turn out to be false. While both uncertainty and fallibility are attributes of knowledge, uncertainty refers to the accuracy of knowledge; fallibility to the provisional nature of knowledge. As Heisenberg's Uncertainty Principle illustrates, even if some knowledge (in this case, the uncertainty principle itself) were not provisional, uncertainty would still exist.

If, by contrast, the world were largely deterministic—that is, if its behavior could be explained through comprehensible and invariant cause-and-effect relations—then uncertainty could be eliminated, at least in theory. In practice, determinism can be approximated in some important human activities. Engineered systems, for example, can be designed as closed systems whose functional behavior is dictated by well-tested scientific laws (laws of gravity, thermodynamics, and the like), tested in laboratories, and supported by experience. Thus a bridge, electronic circuit, or nuclear reactor may operate with high reliability for decades. Eventually, however, the apparently closed system is breached—by corrosion, contamination, earthquake, or terrorism, among other causes—and the behavior of the system can no longer be thought of as deterministic or certain. The embeddedness of all engineered systems in larger social and natural systems dictates that uncertainty will eventually be introduced into engineering.

Uncertainties can be known with accuracy in closed systems that display random, or aleatory, behavior. Once the laws governing such system behavior are well elucidated, aleatory uncertainties cannot be further reduced. The obvious example is a game of dice or cards, where probabilities of particular outcomes can be determined from relatively simple statistical methods due to the known behavior of six-sided dice or fifty-two-card decks. Random behavior, and thus aleatory uncertainty, also exists in nature (for example, radioactive decay, Brownian motion), and can be approximated by some living systems (such as growth of bacteria in a medium) over limited periods of time, and often described by simple mathematical relations. Aleatory uncertainty is a property of random behavior in closed systems; it is inherent in the system itself.
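As an illustration of aleatory uncertainty (a minimal sketch added here, not part of the original entry), the probabilities for the sum of two fair six-sided dice can be enumerated exactly. The distribution is fully known, yet each roll remains random:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair six-sided dice.
outcomes = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# Exact aleatory probabilities: knowing them does not reduce the randomness.
probabilities = {total: Fraction(count, 36) for total, count in outcomes.items()}

for total in sorted(probabilities):
    print(f"P(sum = {total:2d}) = {probabilities[total]}")
```

No amount of further study sharpens these numbers; the uncertainty is a property of the dice themselves.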

For open systems whose governing laws cannot be fully elucidated, which includes all social and many technological and natural systems, uncertainty is said to be epistemic—a consequence of incomplete knowledge about cause-and-effect relations. In such cases—that is, most of the real world—uncertainty is a characteristic both of the system itself and of the psychological state of those who are assessing it. Most problems at the interface of science, uncertainty, and ethics are problems of epistemic uncertainty.

Epistemic uncertainties are most typically measured and expressed in probabilistic terms. Probabilities may be determined through frequentist approaches based on statistical analysis of past events or phenomena, or through subjectivist approaches, such as eliciting expert opinions, or surveying the scientific literature on a given subject. It is important to keep in mind that probability distributions derived from subjectivist approaches are distributions of beliefs about events, not of actual event occurrences.
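A minimal sketch may make the contrast concrete (the event record and expert values below are hypothetical, invented for illustration):

```python
# Frequentist: estimate P(event) as the relative frequency in a past record.
past_years = [True, False, False, True, False, False, False, True, False, False]
freq_estimate = sum(past_years) / len(past_years)  # 3 events in 10 years -> 0.3

# Subjectivist: elicit probabilities from experts and summarize the beliefs.
expert_beliefs = [0.2, 0.35, 0.5, 0.25, 0.4]
mean_belief = sum(expert_beliefs) / len(expert_beliefs)
belief_spread = max(expert_beliefs) - min(expert_beliefs)

print(f"Frequentist estimate from event counts: {freq_estimate:.2f}")
# The second distribution describes the experts' opinions, not event occurrences.
print(f"Mean elicited belief: {mean_belief:.2f} (spread {belief_spread:.2f})")
```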

Epistemic uncertainties also may be expressed in qualitative terms (such as likely, unlikely, and doubtful), or nonprobabilistically as ranges in values (for example, as error bars on a graph). Quantitative, nonprobabilistic uncertainties can also be derived from a comparison of the differences among outputs from different mathematical models ("model uncertainty").
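A sketch of model uncertainty under the same caveat (the three models and their outputs are hypothetical): the spread among outputs of different models of the same quantity is reported as a range, not as a probability:

```python
# Hypothetical predictions of the same quantity from three different models.
model_outputs = {"model_a": 2.1, "model_b": 2.8, "model_c": 1.6}

low = min(model_outputs.values())
high = max(model_outputs.values())
central = sum(model_outputs.values()) / len(model_outputs)

# Model uncertainty expressed nonprobabilistically, like error bars on a graph.
print(f"Estimate: {central:.2f} (model range {low:.1f} to {high:.1f})")
```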

Uncertainty in some complex systems or problems can be successfully addressed with frequentist approaches, because observational experience is sufficient to allow rigorous statistical treatment. Insurance companies, for example, set premiums using population-based data on life expectancy, morbidity, and the frequency of auto accidents, among other variables. Engineers use data from tests and historical performance to estimate probabilities of failure in technological systems. Weather forecasts take advantage of a long history of careful observation of meteorological events. In such cases, uncertainty estimates can be refined and sometimes reduced on the basis of ongoing experience. It is important to recognize, however, that frequentist estimates of uncertainty are not necessarily accurate indicators of future probabilities, because in open systems past behavior, however well documented, does not necessarily foretell future behavior. For example, 100-year flood levels, which are based on historical records and used in the United States for planning and insurance purposes, derive from the false assumption that climate behavior does not vary on time scales of more than a century (Pielke 1999).
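The 100-year flood calculation can be sketched as follows (a hedged illustration with synthetic data; a Gumbel distribution fit by the method of moments is one common frequentist choice, assumed here). The result inherits exactly the stationarity assumption the entry criticizes:

```python
import math
import random
import statistics

random.seed(1)
# Synthetic record of annual peak flows (m^3/s); real work would use gauge data.
annual_maxima = [random.gauss(500, 80) for _ in range(100)]

# Method-of-moments fit of a Gumbel distribution, often used for annual maxima.
scale = statistics.stdev(annual_maxima) * math.sqrt(6) / math.pi
loc = statistics.mean(annual_maxima) - 0.5772 * scale  # Euler-Mascheroni constant

# 100-year flood: the level exceeded with probability 1/100 in any given year,
# valid only if the climate producing the record is assumed not to change.
T = 100
return_level = loc - scale * math.log(-math.log(1 - 1 / T))
print(f"Estimated {T}-year flood level: {return_level:.0f} m^3/s")
```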


Contextual Origins of Uncertainty

Uncertainty is a crucial concept in human affairs because knowledge of the future is always imperfect, and decisions are therefore always made in the face of uncertainty about their outcomes. From this perspective, the word uncertainty refers most generally to the disparity between what is known and what actually is or will be. Uncertainty, that is, reflects an incomplete and imperfect characterization of current conditions relevant to a decision, and the incomplete and imperfect knowledge of the future consequences of the decision. Logically, then, one way to improve the success of a decision should be to characterize, and if possible reduce, the uncertainty relevant to that decision, and considerable resources in science are devoted to this task. But significant obstacles stand in the way of this goal.

Many, perhaps most, of the important decisions faced by society have one or more of the following attributes: (1) the problem cannot be characterized in terms of easily measured outcomes in a well-defined population; (2) sufficient or relevant historical data are not available to allow frequentist approaches; (3) the dynamics of system behavior are incompletely and imperfectly understood; (4) the system is open; (5) numerous disciplines can contribute relevant understanding; and (6) different interests or values define the problem in different ways. For these reasons, most uncertainties in human affairs are epistemic, and most must be assessed through subjectivist methods. In all such cases, estimates of uncertainty are themselves both uncertain and strongly conditioned by the social context within which they are generated and used.

Less uncertainty can, paradoxically, be an attribute of less knowledge. Continual research into and experience with complex, open systems should be expected to reveal new questions and new intricacies that may add to uncertainty over time. New knowledge does not necessarily translate into a greater ability to make well-constrained statements about cause-and-effect relations relevant to human decisions. The archetypal example of this phenomenon is the climate change controversy, where ongoing research into the operations of the earth system and its interactions with human activities is continually introducing new variables and parameters, new appreciation of existing complexities, and new areas of scientific disagreement. While the observation of global warming is robust, and the rising impact of climate on society well documented, continued investigation into the causal relations between these two observations yields an ever-expanding array of possible causal agents, and growing intricacy in the relations among those agents.

A conventional view of this problem describes a cascade of uncertainty, where the more modest uncertainties embodied in the understanding of relatively simple systems or phenomena are introduced into and magnified at the next level of complexity, which in turn introduces its own, perhaps greater, uncertainties (Schneider and Kuntz-Duriseti 2002). The importance of this notion lies especially in the fact that simpler systems are generally farther away from real world problems. Thus it is hard enough to understand and reduce the uncertainties surrounding greenhouse gas behavior in the atmosphere, but if the concern is the impacts of those gases on society via changes in regional climate, then uncertainties cascade beyond comprehension or control.

This view of the problem locates uncertainty in the complexity of natural and social systems being studied, but uncertainty also arises from the conduct of these studies. Science is not a unitary activity; multiple disciplinary approaches often yield multiple perspectives that do not fit together to yield a seamless picture of nature, but rather create multiple and sometimes even conflicting pictures (Dupré 1993). For example, plant geneticists and those in related fields commonly evince greater certainty than ecologists that genetically modified crops will be beneficial to humanity and the environment. These differences derive in part from different ways of understanding nature. Plant geneticists, employing reductionist approaches to crop engineering, are thus confident about their ability to control crop behavior. Ecologists, in contrast, study complex systems where small variations in conditions are often seen to have large and unpredictable impacts.

Lying beneath these epistemological differences are likely to be ethical tensions between one worldview where control of nature yields human benefit and another where pretensions to control can be futile and dangerous. For complex issues where relevant knowledge comes from multiple disciplines, estimates of uncertainty may thus partly be a reflection of competing disciplinary perspectives, and the ethical commitments entailed in those perspectives. These relations are likely to be reinforced by behavioral attributes of scientists. In particular, experts typically underestimate uncertainty in their own area of expertise (Kahneman et al. 1982) while locating the sources of uncertainty in disciplines other than their own (Pinch 1981).

Uncertainty estimates may strongly reflect institutional and political context. Consider, for example, that the U.S. National Aeronautics and Space Administration (NASA) initially estimated the reliability of its space shuttle fleet at 0.9997, or one failure every 3,333 launches (Pielke 1993). Since then two shuttles out of 112 total launches have been destroyed in flight, yielding a historical reliability of about 0.98—a failure rate roughly sixty times the initial estimate. High certainty about shuttle reliability could exist when experience with shuttle flights was small and knowledge was limited. Yet high certainty was also consistent with the political interests of NASA, and with the institutional incentives in the agency, which rewarded launching shuttles, not grounding them. Another illustration comes from medical science, where a number of studies have shown that clinical trials directly or indirectly supported by pharmaceutical companies often yield more favorable assessments of new therapies—greater certainty about positive results—than trials with no ties to the private sector (Angell 2000). The point here is not that scientists engage in fraudulent research to bolster desired conclusions, but that experimental design and interpretation of data are partly matters of judgment, and judgment may be influenced by the incentives, priorities, and culture of one's work environment.
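The shuttle arithmetic can be checked in a few lines (a sketch; the Laplace smoothing at the end is an added illustration for sparse failure data, not part of Pielke's analysis):

```python
# Initial NASA estimate: reliability 0.9997, i.e., one failure per 3,333 launches.
claimed_failure_rate = 1 - 0.9997             # 0.0003

# Observed record cited in the entry: 2 losses in 112 launches.
failures, launches = 2, 112
observed_failure_rate = failures / launches   # ~0.018, reliability ~0.982

print(f"Observed reliability: {1 - observed_failure_rate:.3f}")
print(f"Failure rate vs. claim: {observed_failure_rate / claimed_failure_rate:.0f}x higher")

# Laplace's rule of succession: a smoothed estimate from few observations.
smoothed_rate = (failures + 1) / (launches + 2)
print(f"Smoothed failure probability: {smoothed_rate:.3f}")
```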

Additional examples from such areas as climate change science (van der Sluijs et al. 1998), earthquake prediction (Nigg 2000), oil and gas reserve estimates (Gautier 2000), and nuclear waste disposal (Metlay 2000) show that uncertainty estimates are strongly dependent on institutional and political context, and that opening up the research process to additional scientific and institutional perspectives often leads to significant changes in perceived uncertainty.


Uncertainty and Values

Important decisions in human affairs create winners and losers relative to the status quo ante, and thus implicate competing interests and values. In areas of decision making that include a significant scientific component, such as the environment, public health, and technological risk, uncertainty provides the space for disputes between competing interests and values to play out, because those who hold contesting positions can make conflicting or disparate science-based claims about the consequences of particular courses of action. Thus, for example, supporters of genetically modified foods can point to the potential for gains in crop productivity, and opponents can point to the threat of diminished crop genetic diversity. This is a self-reinforcing process: As value disputes grow more heated, they bring out the latent uncertainties associated with a problem or decision by expanding the realm of phenomena, disciplinary perspectives, and institutional and political players relevant to the problem. These relations are schematically illustrated in Figure 1.

So long as uncertainty is understood simply in terms of the incomplete but ever-improving knowledge of the world, reduction of uncertainty will be prescribed as a path toward resolving political disputes. But when uncertainty is also recognized as an outgrowth of the contexts within which scientific inquiry is structured and carried out, the path begins to look Sisyphean. Indeed the contextual diversity of science is the manifestation of, not the solution to, the conflicting values that underlie political debate. These observations suggest that the taming of uncertainty must depend not on the capacity of science to characterize and reduce uncertainty, but on the capacity of political processes to successfully resolve value disputes that underlie the choices that humans face.


DANIEL SAREWITZ
CARL MITCHAM

SEE ALSO Precautionary Principle; Reliability of Technology: Technical and Social Dimensions; Risk; Unintended Consequences.

BIBLIOGRAPHY

Angell, Marcia. (2000). "Is Academic Medicine for Sale?" New England Journal of Medicine 342: 1516–1518. How interests and uncertainty interact in clinical medicine.

Dewey, John. (1984). John Dewey: The Later Works, 1925–1953; vol. 4: (1929) The Quest for Certainty: A Study of the Relation of Knowledge and Action. Carbondale: Southern Illinois University Press.

Dupré, John. (1993). The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press. Why science does not provide a unified explanation of nature.

Gautier, Donald L. (2000). "Oil and Gas Resource Appraisal: Diminishing Reserves, Increasing Supplies." In Prediction: Science, Decision Making, and the Future of Nature, eds. Daniel Sarewitz; Roger Pielke Jr.; and Radford Byerly Jr. Covelo, CA: Island Press. Uncertainty and predictions of hydrocarbon reserves.

Kahneman, Daniel; Paul Slovic; and Amos Tversky. (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press. Classic treatment of uncertainty and human decision making.

Metlay, Daniel. (2000). "From Tin Roof to Torn Wet Blanket: Predicting and Observing Groundwater Movement at a Proposed Nuclear Waste Site." In Prediction: Science, Decision Making, and the Future of Nature, eds. Daniel Sarewitz; Roger Pielke Jr.; and Radford Byerly Jr. Covelo, CA: Island Press. Uncertainty and nuclear waste storage.

Nigg, Joanne. (2000). "Predicting Earthquakes: Science, Pseudoscience, and Public Policy Paradox." In Prediction: Science, Decision Making, and the Future of Nature, eds. Daniel Sarewitz; Roger Pielke Jr.; and Radford Byerly Jr. Covelo, CA: Island Press. Uncertainty and earthquake prediction.

Pielke, Roger A., Jr. (1993). "A Reappraisal of the Space Shuttle Program." Space Policy 9: 133–157.

Pielke, Roger A., Jr. (1999). "Nine Fallacies of Floods." Climatic Change 42: 413–438. How misunderstanding uncertainty can lead to flawed public policies.

Pinch, Trevor J. (1981). "The Sun-Set: The Presentation of Certainty in Scientific Life." Social Studies of Science 11: 131–158. How scientists perceive uncertainty.

Schneider, Stephen H., and Kristin Kuntz-Duriseti. (2002). "Uncertainty and Climate Change Policy." In Climate Change Policy: A Survey, eds. Stephen H. Schneider; Armin Rosencranz; and John O. Niles. Washington, DC: Island Press. Natural science perspective on uncertainty in climate change science.

van der Sluijs, Jeroen; Josee van Eijndhoven; Simon Shackley; and Brian Wynne. (1998). "Anchoring Devices in Science for Policy: The Case of Consensus around Climate Sensitivity." Social Studies of Science 28(2): 291–323. Social science perspective on uncertainty in climate change science.

Uncertainty

In neoclassical theory, markets are portrayed as stable economic systems, with changes in variables having their desired effects: There is a strong tendency for the economic system to converge toward a position of equilibrium. Economic agents are assumed to have all the reliable information needed for decision making, and the future is known with certainty. Any uncertainty regarding future events or outcomes is reduced to a probability distribution.

In heterodox economics, and in post-Keynesian theory in particular, however, markets and economic systems are chaotic and unpredictable (Moore 2006). The economic system is set in what is called historical time; that is, the past is known and cannot be changed, but the future is unknown and cannot be predicted. This also suggests that we do not simply move from one position of equilibrium to another: The passage of time implies that during the interval when we are shifting, other variables may also be changing, such that a final position of equilibrium is difficult to predict and may never even exist. In other words, the economy is path-dependent: It is continuously moving such that there is no final position of rest.

The source of this instability is the uncertain future and how it affects the motives and decision making of all agents. Indeed, fundamental uncertainty is a central argument of post-Keynesian theory. It is defined as a situation in which agents do not know the future: It is the pure absence of knowledge. This was a central feature in Keynes's theory of effective demand (see also Knight 1921; Shackle 1967). As Keynes tells us in this memorable passage (1973, p. 113):

We have, as a rule, only the vaguest idea of any but the most direct consequences of our acts. By uncertain knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

As Keynes makes clear, uncertainty is not the same as risk. With risk, the possible outcomes are known, or at least the probabilities of the possible outcomes are known with certainty. But let us be clear: Uncertainty is not merely a situation in which agents lack the information needed to compute possible outcomes or probabilities. Gathering more information does not make the future less uncertain.

Uncertainty affects decision making in many ways. For instance, if firms do not know the future or cannot predict future levels of effective demand or growth rates, how can they make a rational decision regarding investment? If central banks cannot know with certainty future levels of inflation or output, how can they make correct decisions about interest rates? Similarly, how can banks lend to potential borrowers if they do not know whether the borrowers will be able to repay their loans, given the uncertain levels of effective demand in the future? The presence of uncertainty also leads to the emergence of power and hierarchical relationships: Faced with uncertainty, agents will try to capture the biggest share of wealth by exerting power over other individuals and social groups (Monvoisin and Rochon 2006).

Despite its pervasive nature, uncertainty does not lead to nihilism. Post-Keynesians have developed theories and policies that incorporate uncertainty (see Rochon 2006). Indeed, even when faced with uncertainty, agents of course still make decisions. They fall back on rules of thumb: They rely on past decisions, assume the near future will be relatively similar to the present, follow the decisions taken by others, or simply postpone taking a decision.

SEE ALSO Economics; Economics, Post Keynesian; Expectations; Risk; Subjectivity: Analysis

BIBLIOGRAPHY

Davidson, Paul. 1978. Money and the Real World, 2nd ed. New York: Wiley.

Keynes, John Maynard. 1973. The General Theory and After: Part II, Defence and Development. Vol. 14 of The Collected Writings of John Maynard Keynes, ed. Donald Moggridge. London: Macmillan and St. Martin's Press.

Knight, Frank. 1921. Risk, Uncertainty and Profit. London: The London School of Economics and Political Science.

Monvoisin, Virginie, and Louis-Philippe Rochon. 2006. "Economic Power and the Real World." International Journal of Political Economy 35 (4): 5–28.

Moore, Basil. 2006. Shaking the Invisible Hand: Complexity, Endogenous Money and Exogenous Interest Rates. London: Palgrave Macmillan.

Rochon, Louis-Philippe. 2006. "Endogenous Money, Central Banks and the Banking System: Basil Moore and the Supply of Money." In Complexity, Endogenous Money and Macroeconomic Theory: Essays in Honour of Basil J. Moore, ed. Mark Setterfield, 220–243. Cheltenham, U.K.: Edward Elgar.

Shackle, G. L. S. 1967. The Years of High Theory: Invention and Tradition in Economic Thought, 1926–1939. Cambridge, U.K.: Cambridge University Press.

Louis-Philippe Rochon

uncertainty
1. The uncertainty about a piece of knowledge in a knowledge base can be represented in a variety of ways. The most popular is to attach a number to the fact or rule, e.g. 1 for complete truth, 0 for complete falsity, ¾ for likely (see the sketch following this entry). Sometimes these numbers are intended to be the probability of the knowledge being true. Reasoning systems must assign an inferred uncertainty value to an inferred piece of knowledge. See also certainty factor.

2. See entropy.
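As a minimal sketch of sense 1 (an illustration added here; the entry itself gives no code), the classic MYCIN-style certainty-factor scheme attaches numbers in [-1, 1] to facts and combines independent evidence for the same conclusion as follows:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two MYCIN-style certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both supportive: reinforce
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)          # both against: reinforce negatively
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting evidence

# Two rules each lend partial support (0.75 = "likely") to the same fact.
print(combine_cf(0.75, 0.5))   # 0.875: accumulating evidence approaches 1

# An inferred fact inherits attenuated certainty: rule CF scaled by premise CF.
premise_cf, rule_cf = 0.8, 0.9
print(premise_cf * rule_cf)    # 0.72
```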

uncertainty

un·cer·tain·ty / ˌənˈsərtntē/ • n. (pl. -ties) the state of being uncertain: times of uncertainty and danger. ∎  (usu. uncertainties) something that is uncertain or that causes one to feel uncertain: financial uncertainties.