Moral Psychology

Moral psychology is the area of scholarship that investigates the nature of the psychological states associated with morality: states such as intentions, motives, the will, reason, moral emotions (such as guilt and shame), and moral beliefs and attitudes. The purview of moral psychology also includes the associated concepts of virtue, character trait, and autonomy. It has generally been thought of as a descriptive enterprise rather than a normative one, though this is not always the case.

Traditionally, two different approaches to moral psychology can be distinguished. The first is the a priori approach to understanding moral psychology and the significance and function of psychological states. The second is the empirical approach, which considers the evidence bearing on their significance, function, and development. Both strands take as their starting point commonsense intuitions about how people think about morality, how they make moral decisions, and the circumstances under which they feel moral emotions. These intuitions may be based on a long history of observation of human behavior, or they may simply be the result of natural selection producing similarities in thought that are themselves adaptive. Either way, common sense provides the baseline for research in moral psychology.

The a priori strand engages in conceptual analysis of the relevant psychological states and their connections. There is a debate, for example, about whether reason alone can motivate. What explains our actions? Is it the case that when I give money to charity I do so simply because I believe it will help people who need help, or do I also need to desire to help them? This engages us in a discussion of the distinction between belief and desire. One view, which can be traced back at least as far as David Hume, holds that beliefs are about matters of fact and can be true or false; desires, on the other hand, have no truth-value, and it is desires that are essentially motivating.

Thus, whenever one wants to fully explain an action one needs to be able to identify the belief/desire combination that gives rise to it. But this seems to present a puzzle for moral action: often, morality requires us to act against our desires. I am required to keep my promises, even if I don't want to. But how can I keep my promises if I don't want to, when desire is necessary for action? There is also a normative question that can be raised. Presumably I am giving money to charity because I think that it is a good thing to do; I accept the norms of giving. So, if I think that giving to charity is good, does that judgment necessarily provide me with a motivating reason for giving? Is there a necessary connection, or conceptual tie, between the normative reason (the recognition that giving is good) and my motivation to perform the action of giving? If I think there is, then I am an "internalist"; if I do not believe that there is a necessary connection, then I am an "externalist." For the internalist, acceptance of the norm, the recognition that giving to charity is a good thing, necessarily means that I have at least a weak desire to act on the reason. This desire could, of course, be defeasible.

But there are those who disagree. Externalists, such as David Brink, argue that amoralists can recognize moral reasons (for example, the amoralist can recognize that it is good to give to charity) yet utterly fail to be moved by this recognition. Indeed, that is what it is to be an amoralist. Amoralists are defective not because they fail to see moral reasons as moral, but precisely because they recognize them and yet fail to be moved by them at all. Internalists argue that amoralists, when they articulate a belief that "x is good" and then fail to be moved, do not really believe what they have articulated. They are trying to make moral judgments, but they are failing to actually do so. Michael Smith also allows that such agents may be practically irrational.

A related feature of Hume's view of moral psychology is its commitment to the claim that desire is a given. That is, one cannot reason oneself into a basic desire. One can reason about non-basic desires. For example, perhaps I would like to eat ice cream today. Then someone points out that ice cream really isn't very healthy. Since I would rather eat healthy food, I no longer desire to eat the ice cream. But the desire to eat the ice cream is not basic. Rather, I would like to feel good, and once someone points out to me that a habit of eating ice cream will make me less likely to keep feeling good in the long run, the desire to eat ice cream falls away. But I have not been reasoned out of the basic desire. Indeed, it is the conflict with this basic desire that makes me ready to jettison the desire for ice cream.

But other writers disagree with this Humean conception of desire and with the reason/desire dichotomy. They believe that we can rationally reflect on basic desires and come to change them through the force of this rational reflection alone. For example, one might argue that desires, even some fundamental ones, are based in part on beliefs that we have. If I desire, for example, to avoid treating persons merely as means, it may be that I have this desire because I think that being respectful toward others requires this, and I believe, with good reason, that respecting others is obligatory. This desire could be basic in that it cannot be reduced to another desire. If this case is plausible, then we have a basic desire supported by reason.

One way to view this case is as a commitment. The desire to avoid treating others disrespectfully is more than just a strong basic desire that happens to be stronger than the other desires I have that might conflict with it. It is a commitment, a normative commitment, and I have it for reasons that are motivating reasons. These reasons carry the desire to be respectful of others with them. Further, there are reasons for this desire having to do with my beliefs about, perhaps, what it is to be a flourishing human being. Presumably, then, I could be argued out of the desire. A Humean might respond, however, by pointing out that any "argument" one would give would in turn depend upon some stronger desire for its force. Desires are not themselves true or false, but they can loosely be considered irrational if based on false beliefs. Beliefs exposed as false would then presumably lead to an alteration of the desires based on them. In the example cited above, then, the Humean would probably say that my desire to be respectful of others is based on the belief that this is good and obligatory, so that simply shows that I have a more basic desire to live up to my obligations.

The field of moral psychology also has a more empirical side. Aristotle believed that the observation of human beings could reveal what eudaimonia, or flourishing, is for human beings. Thomas Hobbes believed that an astute observer of human nature would find support for psychological egoism. Charles Darwin believed that natural selection could account for the sorts of emotions that human beings feel, including the moral emotions. Data that psychologists have gathered about human behavior have influenced the way some think about morality. For example, the work of psychologist Carol Gilligan raised the issue of gender differences in approaches to thinking about moral problems, which in turn influenced writers in feminist ethics.

More recently, empirical psychological research has been brought into moral theory to shed light on a host of issues, ranging from what, exactly, goes on in a person's brain when she thinks about moral issues, to the innateness of our moral cognition, to the seemingly basic commitment human beings have to moral objectivity. There is also the extremely interesting and important issue of how natural selection has shaped our sense of morality and moral practices, as well as our moral intuitions. For example, Jesse Prinz has done work in comparative psychology that offers evidence against moral nativism. He believes that the evidence best supports the view that there is not even a minimal innate moral competence; instead, it is culture that guides the formation of our moral capacities.

The work of Shaun Nichols draws on literature in developmental psychology to investigate the claim, widely argued in meta-ethics, that people are generally moral objectivists. That is, people accept the view that there are some true moral judgments and that, when a moral judgment is true, it is non-relativistically true. Nichols points out that experiments in developmental psychology, though not yet conclusive, suggest that moral objectivism is generally the "default position" when it comes to commonsense, or lay, meta-ethics.

There is also a trend in moral philosophy of exploring the significance of emotion in moral judgment. This has a counterpart in the psychological research. Joshua Greene and Jonathan Haidt refer to this as the "affective revolution." The interest in this area of psychological research was sparked by Antonio Damasio's work showing that good reasoners needed affect. When portions of the brain that regulate affect are damaged, agents do not perform very well on follow-through in practical reasoning tasks. The classic case, discussed by Damasio, is Phineas Gage. Gage was a railway worker who suffered damage to his frontal lobe in an accident in 1848. This caused an apparently extreme personality change that involved inappropriate emotional responses and a disposition to impulsive behavior. He became unreliable and untrustworthy. He was able to reason in the abstract but was not able to carry through. Affect thus at least seems crucial to effective moral motivation. This conclusion was supported by studies involving more recent cases of frontal lobe damage.

Greene's own work explores brain activity when persons consider moral dilemmas. He and his colleagues discovered that when personal dilemmas were presented to subjects (that is, situations in which those being harmed are close to the subject), there is far more activity in the emotional areas of the brain, and in those areas underlying social cognition, than when the problem cases were impersonal. We do seem moved to help in personal cases to a greater extent than in impersonal cases. This research supports what charitable organizations have long realized: to promote giving there is a need to make the plight of the suffering personal to potential givers, through photographs and letters, for example. Of course, this leaves untouched the question of what people ought to do. While it is true that our emotions are engaged more in these personal situations, that has no implications for what our obligations are in these cases. This is where we need normative ethics.

Still, this line of research supports the descriptive view that when we behave morally, or at least think about moral issues, in a way that has more motivating force, there is considerable engagement of our affective capacities. Further, when those affective capacities are impaired, we are left with agents whom we would describe as morally defective. Phineas Gage was widely considered to be a deadbeat after his accident. That is a moral judgment of his character, and the appropriateness of that judgment has something to do with the fact that he lacked the correct emotional responses, those appropriate for the circumstances in which he found himself.

Empirical psychological research has also influenced literature on virtue ethics. Virtue ethics is a type of normative ethical theory that bases moral evaluation on virtue concepts. The approach has been attacked for its failure to reflect psychological reality. For example, Gilbert Harman's work on virtues makes use of situationist literature in social psychology. He argues, citing situationist experiments, that there are no character traits. Rather, the best explanation for a person's behavior is his situation, so if one would like a reliable way to predict behavior, one needs simply to look at the person's situation. Persons who are in a hurry will be less likely to help than persons who are not. Persons who smell fresh cookies baking are more likely to act benevolently than those who are not smelling the cookies, and so forth.

Thus, character traits need not be cited at all in reliable predictions or explanations. There is no reason to think they exist. Further, if there are no character traits, then there are no character traits that are virtues. It would follow that virtue ethics is a non-viable normative ethical theory, since it assumes what does not in fact exist. There are no stable character traits, or at least no stable and robust moral character traits. John Doris has softened Harman's claim somewhat, also by bringing in evidence from empirical psychology. On Doris's view, all that is warranted by the empirical data is the view that character traits are not "global": they are more narrowly circumscribed and local than intuition would have it. Thus, there may not be a general robust trait of benevolence, but there may be a trait of "benevolence when one smells cookies," "benevolence when one is not in a hurry," and so forth. Doris still views even this weaker position as a threat to virtue ethics, since it cuts against the assumption that there are robust, global character traits. A virtue ethicist is free to respond that even if Doris is correct, virtue ethics may still offer a regulative ideal. After all, it is a theory of how we ought to be, not of how we are.

Assuming, with common sense intuitions, that there are character traits that qualify as virtues, is there any particular psychology that characterizes moral virtue? Here we move away from the use of evidence from experimental psychology and back to philosophical analysis of normative concepts that is, nevertheless, sensitive to our views of psychological reality. In my own work I argue that there is no special psychology that characterizes moral virtue, and that what counts as a moral virtue is characterized by externalities such as the consequences that the traits systematically produce. Other writers, such as Rosalind Hursthouse, disagree. Taking Aristotle as her inspiration, she holds that virtue requires that the agent have certain psychological states, such as the kind of practical wisdom needed for deliberating well about what to do; presumably, one needs to deliberate well in order to be a good person. Another writer who has attacked this moral psychology of the virtues is Nomy Arpaly, who argues that all that is needed is that the agent be responsive to the right sorts of reasons.

It is true that one thing that we hold people responsible for is their failure to be responsive to the right sorts of reasons. If one observes an agent acting with a callous disregard for the well-being of others, this can give rise to feelings of outrage. Thus, these failures of appropriate responsiveness can generate moral emotions that are indicative of our moral commitments. For example, we have a commitment to a norm of honesty. This norm is important to regulating our social interactions. In a person of reasonably good character, a failure to be honest will lead to feelings of remorse. Also, in a person of reasonably good character, seeing another behave dishonestly will give rise to a reactive attitude of outrage or resentment. When such feelings are appropriately felt, this may serve as good evidence that there has been a moral failure.

Reactive attitudes, then, can figure into accounts of moral responsibility and moral accountability. R. Jay Wallace, for example, has developed an account of what it is to hold someone responsible, morally: it is an attitudinal stance toward someone, a third-person stance that crucially involves reactive attitudes. If one holds someone responsible for having done something bad, then it is appropriate to feel something like resentment toward that person. Note that this is not a descriptive claim. It is true that normal persons do feel resentment under these circumstances. It is also the case that this indignation or resentment is appropriate when one has been wronged. Thus, though there is some disagreement over this, the sphere of moral psychology does involve an investigation of some normative issues having to do with the normative status of some of the mental states and character traits central to moral evaluation.

See also Egoism and Altruism; Human Nature; Moral Motivation; Moral Sentiments; Sympathy and Empathy; Virtue and Vice; Virtue Ethics

Bibliography

Arpaly, Nomy. Unprincipled Virtue. Oxford and New York: Oxford University Press, 2003.

Damasio, Antonio. Descartes' Error. New York: Putnam, 1994.

Darwin, Charles. The Descent of Man (1888). Amherst, NY: Prometheus, 1998.

Doris, John. Lack of Character. Cambridge and New York: Cambridge University Press, 2002.

Driver, Julia. Uneasy Virtue. Cambridge and New York: Cambridge University Press, 2001.

Gilligan, Carol. In a Different Voice. Cambridge, MA: Harvard University Press, 1982.

Greene, J. D., et al. "An fMRI Investigation of Emotional Engagement in Moral Judgment." Science 293 (2001): 2105–2108.

Greene, Joshua, and Jonathan Haidt. "How (and Where) Does Moral Judgment Work?" Trends in Cognitive Sciences 6 (December 2002): 517–523.

Harman, Gilbert. "Moral Philosophy Meets Social Psychology: Virtue Ethics and the Fundamental Attribution Error." Proceedings of the Aristotelian Society 99 (1999): 315–331.

Hume, David. A Treatise of Human Nature, edited by L. A. Selby-Bigge (1896), revised by P. H. Nidditch. Oxford: Clarendon Press, 1978.

Hursthouse, Rosalind. On Virtue Ethics. Oxford and New York: Oxford University Press, 1999.

Nichols, Shaun. Sentimental Rules. Oxford and New York: Oxford University Press, 2004.

Prinz, Jesse. "Against Moral Nativism." Manuscript.

Smith, Michael. The Moral Problem. Oxford: Blackwell, 1994.

Wallace, R. Jay. Responsibility and the Moral Sentiments. Cambridge, MA: Harvard University Press, 1994.

Julia Driver (2005)
