Risk Perception

Risk perception has been defined variously as: perceived or subjective probability estimates of death, or other judgments of probable harm or loss; psychological states such as fear or traumatic stress; beliefs about the causal processes that result in harm or loss (that is, mental models of hazardous processes); or attitudes toward the activity, event, product, or substance in question. Because risk perception assesses risk subjectively, often without formal decomposition into probability and harm, it is frequently treated as folk or lay risk assessment.

When elicited as subjective probabilities or frequencies of mortality, risk perceptions can agree or disagree with actuarial information, where such information exists, and can in some instances be validated or invalidated by science. Comparisons of lay and expert risk perceptions, together with research on the effects of risk communication, illustrate that expertise and information can have large effects on risk perceptions. Such comparisons have been used to make the ethical claim that non-experts are irrational when they fear risks that experts deem acceptable, such as risks from genetically modified organisms. Shrader-Frechette points out that those who frame risk questions control the answers, and suggests that, to deal with the great uncertainties surrounding, for example, ecological risks, the burden of proof should fall on those proposing that a risk is acceptable. Shrader-Frechette also proposes a three-category framework for risk as an alternative to applying science's dichotomized effect/no-effect (or acceptable/unacceptable) view to risks: in her view, serious risks for which the complexities and uncertainties are so great that there is insufficient information to make a decision fall into a third category (Shrader-Frechette 1994). However, as intuitive statisticians, both experts and non-experts are subject to predictable judgmental biases (Fischhoff, Bostrom, and Jacobs Quadrel 2002; Gilovich, Griffin, and Kahneman 2002; Kahneman, Slovic, and Tversky 1982). Personal experiences also affect risk perceptions, though if not repeated their effects may fade over time. That communities enact policies to reduce their seismic risks following large earthquakes, but resist or ignore such policies at other times, testifies to this, as do differences between life scientists and other scientists in their risk perceptions.
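
The comparison of subjective and actuarial frequencies can be made concrete with a simple scoring scheme. The following sketch uses entirely hypothetical numbers (no real survey or mortality data) and scores each judgment as the log ratio of perceived to actuarial frequency, one way of expressing over- or underestimation.

```python
# A toy comparison of lay frequency judgments with actuarial frequencies.
# Every number below is a hypothetical placeholder, not a real estimate
# or mortality statistic.
import math

# cause -> (hypothetical mean lay estimate of annual deaths,
#           hypothetical actuarial annual deaths)
judgments = {
    "tornado": (500, 90),        # rare, vivid cause: overestimated
    "asthma": (600, 2000),       # mundane cause: underestimated
    "stroke": (20000, 100000),   # common cause: underestimated
}

for cause, (perceived, actual) in judgments.items():
    # Log ratio > 0 indicates overestimation relative to the actuarial
    # figure; < 0 indicates underestimation.
    bias = math.log10(perceived / actual)
    print(f"{cause:8s} log10(perceived/actual) = {bias:+.2f}")
```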

Schools of Thought

Risk perception research since the 1970s has been characterized by several schools of thought, each of which is associated with particular disciplinary backgrounds and methodological predilections. Psychometric research and cultural theory are among the most widely acknowledged.

Psychometric research on risk perception proceeded by analogy with measurement of physical perceptions, such as light, weight, or heat, in attempting to establish reliable, validated psychological scales for perceived risk. By eliciting people's judgments on dimensions such as dread, familiarity, catastrophic potential, and control, researchers were able to predict, to some extent, risk acceptance judgments. This research produced a risk factor space whose two dimensions are how familiar, controllable, and understood risks are, and how much people dread them, including judgments of catastrophic potential. For example, the risks from nuclear power are typically perceived as highly unknown and dreaded, landing in the upper right quadrant of those two dimensions, whereas the risks from bicycles are perceived as known and not dreaded, putting them in the lower left quadrant. This vein of research is best characterized in works by Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein, and colleagues (Slovic 2000).
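
For illustration, the following sketch derives a two-dimensional factor space from invented ratings using a bare-bones principal components analysis; actual psychometric studies use survey data and formal factor-analytic methods, so this is only a structural analogue.

```python
# A minimal sketch of the psychometric approach: rate hazards on several
# scales, then reduce the ratings to two factors. All ratings are invented
# for illustration.
import numpy as np

hazards = ["nuclear power", "bicycles", "pesticides", "swimming"]
# Columns: dread, catastrophic potential, unfamiliarity, uncontrollability
# (higher = more), on a hypothetical 1-7 scale.
ratings = np.array([
    [6.5, 6.8, 6.2, 6.0],   # nuclear power
    [1.5, 1.2, 1.3, 1.8],   # bicycles
    [5.0, 4.5, 5.5, 5.2],   # pesticides
    [2.0, 1.5, 1.2, 2.5],   # swimming
], dtype=float)

# Center the ratings and project onto the first two principal components,
# a rough analogue of the dread / unknown factor space.
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T

for name, (f1, f2) in zip(hazards, scores):
    print(f"{name:14s} factor1={f1:+.2f} factor2={f2:+.2f}")
```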

Cultural theory stems from anthropologist Mary Douglas's writings on risk and culture. Among the best-known tests of cultural theory are those employing grid/group theory, which find that people's attitudes toward risks are a product of the degree to which they are individualist, egalitarian, or hierarchical (collectivist). Related research on worldviews posits that risk perceptions are a function of attitudes toward science and technology in particular, but also of other attitudes.

Another approach treats risk perception as an instance of information processing, which is cognitive, social, and affective (Damasio 1994). Cognitive processes such as categorization, similarity judgments, and inference from mental models are, from an information processing perspective, all components of risk perception. Recent research shows a strong relationship between affect and perceived risk. An inverse relationship between perceived risk and perceived benefit is commonly observed, and under time pressure, which limits analytic thought and increases reliance on affect, this inverse relationship strengthens (Finucane, Alhakami, Slovic, and Johnson 2000). Further, introducing information that changes one's affective evaluation of an item, for example, information that associates nuclear power with clean air and pastoral scenes, can systematically change both the related risk and benefit judgments.
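
The following sketch illustrates that correlational pattern with fabricated judgment scores: the correlation between risk and benefit ratings is negative in both conditions and more strongly negative in the hypothetical time-pressure condition. It is a toy illustration only, not the Finucane et al. data or method.

```python
# Toy illustration of the inverse risk-benefit relationship. All judgment
# scores are invented; the point is only that the negative correlation is
# stronger under the hypothetical time-pressure condition.
import math

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical perceived-risk and perceived-benefit ratings for the same
# six items, judged with and without time pressure.
risk_no_pressure    = [2, 5, 3, 6, 4, 1]
benefit_no_pressure = [5, 4, 2, 3, 5, 6]
risk_pressure       = [2, 6, 3, 7, 4, 1]
benefit_pressure    = [6, 2, 5, 1, 5, 7]

print("no pressure:  r =", round(pearson(risk_no_pressure, benefit_no_pressure), 2))
print("time pressure: r =", round(pearson(risk_pressure, benefit_pressure), 2))
```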

People seem prone to using an "affect heuristic" that improves judgmental efficiency by deriving both risk and benefit evaluations from a common source: affective reactions to the stimulus item. The mechanisms for these effects may be hardwired in the brain, particularly in the amygdala, which plays a central role in emotional processing. Animal studies suggest that the amygdala coordinates multiple fear systems, and that fear is a potent determinant of memory, learning, and salience.

Ethical Issues

People's behavior depends on their risk perceptions. Given this dependency, the question of whose risk perceptions should prevail in determining societal priorities is often contested. Further, technical risk assessments generally apply to a statistical person or to a population, and so are not directly applicable to an individual or to that individual's perceptions of his or her own risk. Therein lies the central ethical dilemma posed by risk perceptions, exacerbated by their variability and vulnerability to judgmental biases.

In addition, overarching ethical principles conflict with manipulations of risk perceptions that may, at face value, seem to be in the public interest. Principles such as those in the U.S. Bill of Rights are vulnerable to perceived needs precipitated by risk perceptions. As U.S. Public Law 107–56 (commonly known as the USA PATRIOT Act, 2001) and the U.K. Anti-Terrorism, Crime and Security Act (also 2001) illustrate, it is easy to curtail government transparency, judicial checks on the legislative and executive branches, and the civil liberties and equal treatment of citizens under the guise of reducing risks, even without evidence that the measures enacted will actually reduce those risks.

The literature on risk perception across different domains of science and technology is vast. Health, environmental, and technological risk perception, and to some extent hazard perception, remain largely separate bodies of research. Health risk perception research is rooted primarily in social psychology and has been dominated by the health belief model, the theory of reasoned action, and variants thereon. It is also influenced by the extended parallel process model, which predicts that people who believe something poses a serious risk to them personally will engage in fear control rather than risk control if they do not believe they can control the risk effectively (Witte 1992). Environmental and technological risk perception research has drawn more broadly on the social and cognitive sciences, including the theories and models cited above. Methods have varied from informal, and sometimes misleading, reliance on casual observations, such as of focus groups, to carefully designed and implemented surveys and experiments. Anthropological and ethnographic methods of studying risk perceptions have grown in importance as practitioners have recognized their value in improving the design of risk interventions, as well as in providing a fuller account of how people perceive risk.

Spatial and temporal dimensions of risk perceptions remain to be fully explored, and will likely provide further insights into risk behaviors.


ANN BOSTROM

SEE ALSO Risk; Risk and Emotion; Risk-Cost-Benefit Analysis; Risk Ethics; Risk Society.

BIBLIOGRAPHY

Damasio, Antonio R. (1994). Descartes' Error: Emotion, Reason and the Human Brain. New York: Putnam.

Douglas, Mary, and Aaron Wildavsky. (1983). Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. Berkeley: University of California Press.

Finucane, Melissa L.; Ali Alhakami; Paul Slovic; and Stephen M. Johnson. (2000). "The Affect Heuristic in Judgments of Risks and Benefits." Journal of Behavioral Decision Making 13: 1–17.

Fischhoff, Baruch; Ann Bostrom; and Marilyn Jacobs Quadrel. (2002). "Risk Perception and Communication." In Oxford Textbook of Public Health, 4th edition, ed. Roger Detels, et al. Oxford, UK: Oxford University Press.

Gilovich, Thomas; Dale Griffin; and Daniel Kahneman, eds. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, UK: Cambridge University Press.

Kahneman, Daniel; Paul Slovic; and Amos Tversky, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press.

Shrader-Frechette, Kristin. (1994). "Science, Environmental Risk Assessment, and the Frame Problem." BioScience 44(8): 548–552.

Slovic, Paul. (2000). The Perception of Risk. London: Earthscan.

Witte, Kim. (1992). "Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model." Communication Monographs 59: 329–349.