Risk Ethics


Risk ethics is an emerging branch of philosophy that investigates the moral aspects of risk and uncertainty. Although one original motivation for pursuing science and technology was to reduce the risks and uncertainties of the natural world, it has become increasingly appreciated that the scientific and technological world creates risks of its own. The recognition that one form of risk (natural) is overcome only at the cost of another (associated with science and technology) has stimulated critical reflection on risk in ways that did not occur before technological risks existed.


A Brief Introduction to Risk Concepts

Risk has both vernacular and technical meanings. In everyday language a risk is simply a danger. In relation to science and technology, however, risk is often defined as the probability of some harm, whereas the probability of a benefit is often called a chance. According to another common definition, risk is the value obtained by multiplying the probability of some harm or injury by its magnitude. Any attempt to spell out the details of such a calculation runs into problems, however, since it is not clear that there is a single measure for all harms or injuries. Attempts have been made to measure all health effects in terms of quality-adjusted life years (Nord 1999). Risk-benefit analysis goes one step further and measures all harms in monetary terms (Viscusi 1992). As several critics have pointed out, however, such unified approaches depend on controversial value assumptions and may be difficult to defend from an ethical point of view (Shrader-Frechette 1992).
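
To make the multiplicative definition concrete, the following minimal sketch (in Python, with invented hazards, probabilities, and magnitudes) computes risk as probability times magnitude; the comments flag the common-unit assumption on which any such comparison rests.

```python
# A minimal sketch of the probability-times-magnitude definition of risk.
# The hazards, probabilities, and magnitude figures are invented for
# illustration; a real assessment must first justify a common unit of harm.

def risk_value(probability: float, magnitude: float) -> float:
    """Risk as the probability of a harm multiplied by its magnitude."""
    return probability * magnitude

# Two hypothetical hazards, with harm expressed in lost quality-adjusted
# life years (QALYs), one proposed common unit (Nord 1999).
bridge_collapse = risk_value(probability=1e-4, magnitude=2_000.0)  # 0.2
chemical_leak = risk_value(probability=1e-2, magnitude=15.0)       # 0.15

print(bridge_collapse, chemical_leak)
# Comparing the two numbers presupposes that very different injuries can
# be mapped onto one scale, the controversial value assumption noted above.
```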

Independent of these methodological issues are the assumptions of traditional moral philosophy, which has focused on situations in which the morally relevant properties of human actions are both well determined and knowable. In contrast, moral problems in real life often involve risk and uncertainty. According to common moral intuitions, it is unacceptable to drive a vehicle in such a way that the probability of running over a pedestrian is 1 in 10, but acceptable if that probability is 1 in 1 billion. (Otherwise one could not drive at all.) It is far from clear how standard moral theories can account for this difference and explain where the line should be drawn.


Utilitarianism

In utilitarian ethics, all moral appraisals are reducible to assignments of utility, a (numerical) measure of moral value. Furthermore, the utility of human actions is assumed to depend exclusively on their consequences. According to utilitarianism one should always choose the alternative that has the highest utility, that is, the best consequences.

One utilitarian approach to risk is actualism, according to which the moral value of a risky situation is equal to the utility of the outcome that actually materializes. Suppose, for example, that an engineer decides not to reinforce a bridge before it is subjected to an exceptionally heavy load, although there is a 50 percent risk that the bridge will collapse under such use. If all goes well and the bridge carries the load, then from the actualist standpoint what the engineer did was right. Examples such as this show that actualism cannot provide meaningful action guidance. Even if actualism is accepted as a method for retrospective moral assessment, another theory is needed to guide decision-making about the future.

One such theory is expected utility maximization, which has become the standard utilitarian approach to risk. According to this theory, the utility of the prospect that an outcome may occur is obtained by multiplying the utility of the outcome itself by its probability, and the action with the highest probability-weighted value should then be chosen. By this rule, an action with a 1-in-10 probability of killing a person is five times worse than an action with a 1-in-50 probability of the same outcome. This method for weighing potential outcomes is routinely used in risk analysis.
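
As a concrete illustration, here is a minimal sketch of expected utility maximization (the outcome set and the disutility figure of -1,000 per death are invented); it reproduces the claim that a 1-in-10 probability of killing a person comes out five times worse than a 1-in-50 probability of the same outcome.

```python
# A minimal sketch of expected utility maximization. The utility numbers
# are invented for illustration.

def expected_utility(prospects):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in prospects)

# Disutility of one death set, arbitrarily, to -1000.
action_a = [(0.10, -1000.0), (0.90, 0.0)]  # 1-in-10 chance of a death
action_b = [(0.02, -1000.0), (0.98, 0.0)]  # 1-in-50 chance of a death

eu_a = expected_utility(action_a)  # -100.0
eu_b = expected_utility(action_b)  # -20.0
print(eu_a / eu_b)  # 5.0: action A is five times worse on this measure
```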

In intuitive arguments about risk, it is common to give the avoidance of very large disasters, such as a nuclear accident costing thousands of human lives, a higher priority than is warranted by probability-weighted utility calculations. For instance, people clearly worry more about the possibility of airplane crashes (low-probability but high-cost events) than automobile accident deaths (which are higher-probability but lower-cost events). Expected utility maximization disallows such cautious decision-making. Proponents of precautionary decision-making may see this as a disadvantage of utility maximization, whereas others may see it as a useful protection against costly over-cautiousness.
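
The conflict can be shown with invented numbers: on a probability-weighted calculation, a 1-in-1,000,000 chance of 1,000 deaths and a 1-in-1,000 chance of a single death carry the same expected loss, so expected utility maximization must rank the two prospects as equally bad, which is precisely the ranking that precautionary intuitions reject.

```python
import math

# Two invented prospects with equal expected loss, measured in deaths.
expected_loss_disaster = 1e-6 * 1000  # 1-in-1,000,000 chance of 1,000 deaths
expected_loss_single = 1e-3 * 1       # 1-in-1,000 chance of one death

print(math.isclose(expected_loss_disaster, expected_loss_single))  # True
# Expected utility maximization must therefore treat the two prospects as
# equally bad, which is the ranking that precautionary intuitions reject.
```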

Just like other forms of utilitarianism, expected utility maximization is strictly impersonal. Persons have no role in the ethical calculus other than as bearers of utilities whose values are independent of those who carry them. Therefore, a disadvantage affecting one person can always be justified by a sufficiently large advantage to some other person. No moral distinction is made between the act of exposing oneself to a serious danger in order to gain some advantage and the act of exposing someone else to the same danger for the same purpose. This is a problematic feature of utilitarian theory in general that is often aggravated in problems involving risk.


Duty- and Rights-Based Theories

A moral theory that is based on duties (rather than on the consequences of actions) is called deontological or duty-based. A moral theory in which rights have the corresponding role is called rights-based.

Robert Nozick formulated the problem that risks pose for rights-based theories in this way: "Imposing how slight a probability of a harm that violates someone's rights also violates his rights?" (Nozick 1974, p. 7). Similarly, one may ask the following question about deontological theories: "How large must the probability be that one's action will in fact violate a duty for that action to be prohibited?"

One possible answer to these questions is to prescribe that a (rights- or duty-based) prohibition against bringing about a certain outcome implies a prohibition against increasing the probability of that outcome (even if the increase is very small). But such a far-reaching extension of rights and duties is socially untenable. Human society would be impossible if people were not allowed to perform actions, such as driving a car, that involve a small risk of developing into a violation of some prohibition.

It seems clear that rights and prohibitions may lose their force when probabilities are sufficiently small. The most obvious way to account for this is to assign to each duty or right a probability limit below which it is not valid. However, no credible way to derive such a limit has been proposed. It is also implausible to draw the line between acceptable and unacceptable probabilities of harm with no regard to the benefits involved. (In contrast, such weighing against benefits is easily accounted for in utilitarian theories.)
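
Schematically, the threshold idea amounts to the rule sketched below; the limit value is a pure placeholder, since, as just noted, no credible way of deriving such a number has been proposed.

```python
# Schematic threshold rule for a rights-based prohibition. The probability
# limit is a hypothetical placeholder, not a derived or defended value.
PROBABILITY_LIMIT = 1e-9

def counts_as_violation(probability_of_harm: float) -> bool:
    """The prohibited outcome counts as a rights violation only above the limit."""
    return probability_of_harm >= PROBABILITY_LIMIT

print(counts_as_violation(1e-6))   # True
print(counts_as_violation(1e-12))  # False
# Note that the rule ignores benefits altogether, which the text above
# flags as implausible.
```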


Contract Theories

According to contract theories, the moral principles that govern humans' dealings with one another derive from a contract among all members of society. The social contract prohibits certain actions, such as actions that lead to the death of another person. Under what conditions should it also prohibit actions with a low but nonzero probability of leading to the death of another person? The most obvious response is to extend the criterion that contract theory offers for the determinate case, namely consent among all those involved, to cases involving risk and uncertainty. This can be done in two ways, because consent, as conceived in contract theories, can be either actual or hypothetical.

According to the criterion of actual consent, all members of society would have a veto over actions that expose them to risks. This would make it virtually impossible, for example, to site industries that are socially necessary but give rise to emissions that may disturb those living nearby. With a rule of actual consent, a small number of nonconsenting persons could block almost any undertaking, creating a society of stalemates, to the detriment of everyone else. Actual consent is therefore not a realistic criterion in a complex society in which everyone performs actions with marginal effects on the lives of many others.

Contract theory has a long tradition of operating with the hypothetical consent that is presumed to be given by every hypothetical participant in an ideal decision situation, such as the one described in John Rawls's "original position." Unfortunately, none of the ideal situations constructed by contract theorists seems to have made the moral appraisal of risk and uncertainty easier, or less dependent on controversial values, than the corresponding appraisals in the real world.


Widening the Issue

Many discussions of risk have been limited by an implicit assumption that excludes important ethical aspects. It is assumed that once we have moral appraisals of actions with determinate outcomes, we can more or less automatically derive moral appraisals of actions whose outcomes are "probabilistic mixtures" of such determinate outcomes. Suppose, for instance, that moral considerations have led us to attach well-determined values to two outcomes, X and Y. We are then supposed to have the means needed to derive the values of mixed options, such as a 70 percent chance of X and a 30 percent chance of Y. The crucial assumption is that the probabilities and values of the nonprobabilistic alternatives completely determine the values of the probabilistic alternatives.
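
One precise version of this assumption, the expected-utility version, can be written out as follows (the value function V and the mixture notation are introduced here for illustration only):

```latex
% Expected-utility version of the implicit assumption: the value of a
% probabilistic mixture is fully determined by the probabilities and the
% values of its determinate outcomes.
V\bigl(p\,X \oplus (1-p)\,Y\bigr) = p\,V(X) + (1-p)\,V(Y),
\qquad \text{for example} \qquad
V\bigl(0.7\,X \oplus 0.3\,Y\bigr) = 0.7\,V(X) + 0.3\,V(Y).
```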

In real life, however, there are always other factors in addition to probabilities and utilities that properly influence our moral appraisals of an uncertain or risky situation. We need to know not only the values and probabilities of potential outcomes, but also who exposes whom to risk and with what intentions, the extent to which the exposed person was informed, whether or not the person consented, and more.

Perhaps the most important foundational problem in risk ethics is the conflict between two principles that both have intuitive appeal. They can be called the collectivist and the individualist principles of risk ethics (Hansson 2004). According to the collectivist principle, exposure of a person to a risk is acceptable if and only if the exposure is outweighed by a greater benefit, either for that person or for others. According to the individualist principle, exposure of a person to a risk is acceptable if and only if the exposure is outweighed by a greater benefit for that same person.
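
The contrast between the two principles can be made explicit in a small sketch; this is a hypothetical simplification, since Hansson (2004) states the principles in prose, and the single numeric scale for risks and benefits is itself a contested assumption.

```python
# Hypothetical simplification of the two acceptability criteria. Risk and
# benefits are assumed to be measured on one common scale, which is itself
# a contested assumption.

def collectivist_acceptable(risk: float, benefit_to_exposed: float,
                            benefit_to_others: float) -> bool:
    # Acceptable iff outweighed by a greater benefit to anyone.
    return benefit_to_exposed + benefit_to_others > risk

def individualist_acceptable(risk: float, benefit_to_exposed: float,
                             benefit_to_others: float) -> bool:
    # Acceptable iff outweighed by a greater benefit to the exposed person.
    return benefit_to_exposed > risk

# A polluting plant: small benefit to a neighbor, large benefit to society.
print(collectivist_acceptable(5.0, benefit_to_exposed=1.0, benefit_to_others=100.0))   # True
print(individualist_acceptable(5.0, benefit_to_exposed=1.0, benefit_to_others=100.0))  # False
```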

The collectivist principle dominates traditional risk analysis, but if carried to extremes it will lead to neglect of individual rights. The individualist principle is equally problematic, because it allows minorities to prevent social progress. It is a major challenge for risk ethics to find a reasonable and principled compromise between these two extreme positions.


SVEN OVE HANSSON

SEE ALSO Risk Assessment; Risk Perception.

BIBLIOGRAPHY

Hansson, Sven Ove. (2003). "Ethical Criteria of Risk Acceptance." Erkenntnis 59(3): 291–309. Discusses how risks can be dealt with in different moral theories.

Hansson, Sven Ove. (2004). "Weighing Risks and Benefits." Topoi 23: 145–152. Discusses different ways to weigh risks against benefits.

Hansson, Sven Ove, and Martin Peterson. (2001). "Rights, Risks, and Residual Obligations." Risk, Decision, and Policy 6(3): 157–166. Discusses the obligations that follow from exposing the public to risk.

Nord, Erik. (1999). Cost-Value Analysis in Health Care: Making Sense Out of QALYs. Cambridge, UK: Cambridge University Press. Presents quality-adjusted life years as one unified measure of harms to human health.

Nozick, Robert. (1974). Anarchy, State, and Utopia. New York: Basic Books. Discusses rights-based approaches to risk.

Shrader-Frechette, Kristin. (1992). "Science, Democracy, and Public Policy." Critical Review 6(2–3): 255–264. A critical appraisal of cost-benefit analysis.

Thomson, Judith Jarvis. (1985). "Imposing Risk." In To Breathe Freely: Risk, Consent, and Air, ed. Mary Gibson. Totowa, NJ: Rowman and Allanheld. Person-related aspects of risk exposure.

Viscusi, W. Kip. (1992). Fatal Tradeoffs: Public and Private Responsibilities for Risk. New York: Oxford University Press. The author is a leading proponent of risk-benefit analysis.