Deontology refers to a general category of ethical or moral theories that define right action in terms of duties and moral rules. Deontologists focus on the rightness of an act and not on what results from the act. Right action may end up being pleasant or unpleasant for the agent, may meet with approval or condemnation from others, and may produce pleasure, riches, or pain, or may even go unnoticed. What is crucial on this view is that right action is obligatory, and that the goal of moral behavior is simply that it be performed. The slogan of much of deontology is that the right is independent of the good. Deontology is opposed, therefore, to consequentialist or teleological theories, in which the goal of moral behavior is the achievement of some good or beneficial state of affairs for oneself or others. For deontologists, the end of moral action is the very performance of it. For consequentialists, moral action is a means to some further end.
There are three interrelated questions that any deontological theory must answer. First, what is the content of duty? Which rules direct human beings to morally right action? Second, what is the logic of these duties or rules? Can their claims be delayed or defeated? Can they make conflicting claims? Third, why must human beings follow exactly those duties and rules, and not others? That is, what grounds or validates them as moral requirements?
The relevance of deontological ethics to issues in science and technology is not immediately obvious. Typical duties or rules in these theories are often quite abstract and sometimes address personal morality; hence they seem ill suited to broad and complicated questions in technical fields. As a matter of personal morality, deontologists might require one never to lie or steal, to give to charity, and to avoid unnecessary harm to people and animals. These rules are often internalized and are supported by religious, social, and civil institutions, and in some cases by enlightened self-interest. But is there a duty to support open source software, or to reject nanotechnology, or to avoid animal experimentation for human products? What list of rules is relevant to moral quandaries over cloning or information privacy?
Though the specific connection between ethical duties and scientific and technological practices may not be immediately obvious, it is clear that deontology can and should play an important role in evaluating these practices. Deontological theories give one a way to evaluate types of acts, so that one can judge a token of an act as obligatory, permissible, or forbidden even before the act is committed. Consequentialist evaluations, on the other hand, must await an accounting of the consequences of scientific and technological acts. Waiting on the consequentialist analysis may be perilous, because the long-term results of large-scale enterprises are often impossible to anticipate and very difficult to repair. As Edward Tenner (1997) has pointed out, modern technology often exacts a kind of revenge in the scope and severity of unintended consequences. Especially in fields such as bioethics, practitioners have often wanted bright lines between right and wrong acts in their ethical guidelines. That is, they want to have ethical rules or principles that are not wholly contingent on consequences. A form of deontological view in bioethics known as principlism focuses on the need for clear guidelines for action in order to avoid problems with unintended and far-reaching consequences of treatments and clinical practices. Even the basic and broadly applicable principle "Do no harm!" is deontological; it does not allow a tradeoff of benefit for some at the cost of harm to others.
Two deontological theories, from the works of Immanuel Kant (1724–1804) and W. D. Ross (1877–1971), serve as the foundations for much work in deontological ethics. Because they differ significantly in the content, logic, and ground of duties, it will be useful to examine them in modest detail before returning to questions of science and technology.
The Categorical Imperative
Kant developed the most important deontological ethical theory in Western philosophy. Scholars have come to agree that Kant provided not so much a list of duties as a procedure for determining duties. The procedure that specifies duty is the categorical imperative, or unconditional command of morality. Kant articulated the categorical imperative in several distinct formulations. Even though these formulations provide different ways of generating duties, Kant maintained that his systematic ethic of duties was rigorous—in the technical sense that a "conflict of duties is inconceivable" (Kant 1995b, p. 224). Indeed, a main feature of Kant's ethics is its reliance on consistency or harmony in action. This feature can be seen in the first formulation of Kant's categorical imperative, which goes as follows: "Act only on that maxim through which you can at the same time will that it become a universal law" (Kant 1995a, p. 421).
Because a maxim in Kant's theory is a plan of action, the categorical imperative above provides an ethical test for intended actions, presumably to be used before one commits them. The point of the test is that one ought to be able to endorse the universal acceptability of the plans or intentions behind actions. People should not be partial to plans simply because they conceived such plans; the plans must be acceptable from any point of view. Maxims that cannot be universalized will produce logical contradiction or disharmony when they are run through the test of the categorical imperative. The grounding or validation of this principle lies in the universality of practical reason. For Kant, ethical duties arise from what is common to humans as rational beings. Humans have a kind of freedom that is gained in creating universal moral laws through intentional behavior. This moral and rational activity is, for Kant, what produces self-legislation or autonomy, and autonomy allows humans to transcend their animal nature.
The ability of humans to act from freely chosen moral rules explains the special moral status they enjoy; humans are, according to Kant, ends-in-themselves. Consequently this conception of a special status gives rise to another formulation of the categorical imperative: "Act in such a way that you always treat humanity [yours or another person's] never merely as a means but always at the same time as an end-in-itself" (Kant 1995a, p. 429).
This special moral status or intrinsic value implies that humans ought never to be valued as less significant than things that have merely instrumental value. Things of instrumental value are mere tools, and though they can be traded off with one another, they can never be more important than intrinsically valuable things. All technology is in some sense a mere tool; no matter how many resources society pours into technologies, the moral status of humans is supposed to trump the value of mere tools. Kantian duties are designed to protect that status.
The application of Kant's theory to issues in the ethics of technology produces intriguing questions. Do some technologies help persons treat others as mere means? The moral inquiry would have to consider aspects of the technologies and see whether technologies have "maxims" themselves—what Günther Anders called a "mode of treatment incarnated in those instruments" (Anders 1961, p. 134). These aspects might include the anonymity of online communities, the distributed effects of computer viruses, the externalizing of costs by polluting corporations, or the inherent destructiveness of a nuclear weapon. Further, one might ask whether some technologies themselves treat persons as mere means. Such a worry is related to Martin Heidegger's view that, under modern technology, humanity becomes a standing reserve to be exploited, and to Herbert Marcuse's claim that such a technological society debases humans by providing a smooth, comfortable unfreedom. While these critics of technology do not always identify themselves as Kantians, the influence of Kant's humanistic account of duties has been so deep and broad that it is almost inescapable. Still, there are deontologists who have parted ways with the Kantian tradition.
Prima Facie Duties
According to the British philosopher W. D. (Sir David) Ross, moral duties are not universal and unconditional constraints of universal practical reason. Rather they are conditional or prima facie obligations to act that arise out of the various relations in which humans stand to one another: neighbor, friend, parent, debtor, fellow citizen, and the like. This view gives content to duties based on a kind of role morality. It is through moral reflection that one apprehends these duties as being grounded in the nature of situated relations. Duty is something that, for Ross, arises between people, and not merely within the rational being as such. What exactly these prima facie duties are is not infallibly known until the problematic situations present themselves.
Nonetheless, Ross thinks, situated moral agents can grasp some obvious basic forms of duties. Fidelity, reparation, gratitude, justice, beneficence, self-improvement, and non-maleficence are what he identifies as nonreducible categories of duty—he admits that there may be others. Ultimately these duties are known by moral intuition and are objectively part of the world of moral relations and circumstances that humans inhabit. Much as one knows, in the right moment, what word fits in a poem, so too can one know what to do when duty makes demands. Sometimes an agent will intuit that more than a single duty applies, and in these cases the agent must judge which duty carries more weight in order to resolve the conflict.
Ross's view is therefore both flexible and pluralistic, and is grounded in the actual roles of human lives. In these respects, it provides a foundation for a variety of professional codes of ethics, many of which are found in the scientific and technological community.
Hans Jonas and the Imperative of Responsibility
While Kant and Ross argued specifically against consequentialist theories in explaining their respective deontological views, other theorists are motivated by concerns over consequences in ways that influence the content of duties. Such is the case with the imperative of responsibility put forward by Hans Jonas (1984). Jonas calls for a new formula of duty because he thinks that traditional ethical theories are not up to the task of protecting the human species in light of the power of modern technology. His worry relates directly to the irreversible damage that modern technology could do to the biosphere, and hence to the human species. Because humans have acquired the ability to radically change nature through technology, they must adjust their ethics to constrain that power.
In language intentionally reminiscent of Kant's categorical imperative, Jonas gives his formula of duty as follows: "Act so that the effects of your action are compatible with the permanence of genuine human life" or so that they are "not destructive of the future possibility of such life" (Jonas 1984, p. 11). Referring to Kant's first version of the categorical imperative, Jonas criticizes its reliance on the test of logical consistency to establish duties. There is no logical contradiction, he notes, in preferring the future to the present, or in allowing the extinction of the human species by despoiling the biosphere. The imperative of responsibility, as a deontological obligation, differs from the ethics of Kant and Ross because it claims that humans owe something to others who are not now alive. For Jonas, neither the rational nature nor the particular, situated relations of human beings exhaustively define their duties. Indeed one will never be in situated relationships with people in far-off generations, but remoteness in time does not absolve the living of responsibilities to them.
Are All Duties Deontological?
Most professional codes of ethics in science and engineering consist of duties and rules. Does it follow that their authors tacitly accept the deontological orientation in ethics? It does not, and there is an important lesson here about the choice between deontology and other ethical orientations. The primary difference between professional codes and deontological ethical theories is that, in the former, the duties or rules are put forth as instrumental for competent or even excellent conduct within the particular profession. Some duties are directed toward the interests of clients or firms, but ultimately the performance of these duties supports the particular profession. The grounding of duties in professional codes resembles the function of rules under rule utilitarianism.
These rules would not be morally required for the general public, as would the rules of a deontological ethics. Professional codes are tools to improve the profession; the end of right action, in this case, is dependent upon the good of the profession, and the content of duties will depend on the particular views of the authors concerning that good.
Further Applications and Challenges
Duty ethics has been applied with some success in technological fields where consequentialist or utilitarian reasoning seems inappropriate. In biomedical ethics there is general acceptance of the view that do-not-resuscitate orders and living wills are to be respected, even when doing so means death for the patient and possibly great unhappiness for loved ones. In computer ethics, the argument for privacy of personal data does not generally depend on the use to which stolen data would be put. It is the principle, and not the damage, that is at the heart of the issue. There also seem to be lines of a deontological sort that cannot be crossed when it comes to some forms of experimentation on animals and treatment of human research subjects. For some emerging technologies, there are well-grounded deontological reasons for opposing research and development, even though the technologies eventually could yield great benefits. Opponents of such technologies need not deny the good of the end; what they deny is that the end justifies any and all means. Where the claims of duties are not well grounded, however, a deontological approach to ethics runs the risk of sounding reactionary and moralistic.
THOMAS M. POWERS
Anders, Günther. (1961). "Commandments in the Atomic Age." In Burning Conscience: The Case of the Hiroshima Pilot, Claude Eatherly, Told in His Letters to Günther Anders. New York: Monthly Review Press.
Darwall, Stephen L., ed. (2002). Deontology. Oxford: Basil Blackwell Publishers.
Jonas, Hans. (1984). The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press. Originally published in 1979 as Das Prinzip Verantwortung; this book made Jonas famous.
Kant, Immanuel. (1995a). "Grounding for the Metaphysics of Morals." In Ethical Philosophy, 2nd edition, trans. James W. Ellington. Indianapolis: Hackett Publishing. Originally published in 1785. Kant's most widely read and accessible work in ethics.
Kant, Immanuel. (1995b). "Metaphysical Principles of Virtue." In Ethical Philosophy, 2nd edition, trans. James W. Ellington. Indianapolis: Hackett Publishing. Originally published in 1797. Part of Kant's metaphysics of morals, the other half of which concerns the principles of political right.
Marcuse, Herbert. (1992). One-Dimensional Man: Studies in the Ideology of Advanced Industrial Civilization, 2nd edition. New York: Beacon Press. Popular treatise that inspired a progressive critique of technology.
Ross, W. D. (1965 [1930]). The Right and the Good. London: Oxford University Press. Classic text by an important British philosopher.
Tenner, Edward. (1997). Why Things Bite Back: Technology and the Revenge of Unintended Consequences. Cambridge, MA: Harvard University Press.