Research Ethics: Overview


Research ethics is typically divided into two categories: those issues inherent in the practice of research, and those that arise in the application or use of research findings. In the United States, ethical practice has come to be known as the responsible conduct of research (RCR); outside the United States another common term is good scientific practice (GSP). Ethical issues associated with the application of research findings deal with their use in the support of legal, social, or economic policy as well as their technological applications (e.g., genetic engineering in therapy and agriculture, bioweapons development, and dam siting and construction).

Many entries in the Encyclopedia of Science, Technology, and Ethics cover different aspects of research ethics in more detail. Prime examples include the entries on "Responsible Conduct of Research" and "Scientific Integrity," the composite on "Misconduct in Science," and the series dealing with various aspects of genetics. The focus here is on a more synthetic overview that also highlights some points missing elsewhere.


Background

Both aspects of research ethics came to the forefront of public attention at the end of World War II and have developed more fully since the mid-twentieth century. Leading discussions have often, though not always, taken place in the United States.


RESEARCH PRACTICE. Initially, ethical concerns regarding research practice emphasized the use of humans as research subjects. The revelation of Nazi atrocities at the close of World War II focused international attention on research that subjected individuals to high-altitude experiments in low-pressure chambers; freezing due to exposure or submersion in ice water; starvation, or seawater as their primary source of fluids; and infection with malaria, typhoid, streptococcus, and tetanus. Judges presiding over the trial of Nazi physicians drafted the Nuremberg Code (1947), which has since been followed by additional ethical codes, most prominently the World Medical Association Declaration of Helsinki (1964; most recently revised in 2002). For further depth on these issues see the entries on "Nazi Medicine" and "Human Subjects Research."

In the early 1970s the U.S. Tuskegee syphilis studies came to light (see "Tuskegee Experiment") and focused national attention on the treatment of human subjects in the United States. This research, carried out from 1932 to 1972, recruited disadvantaged, rural black males who had contracted syphilis to participate in a study of the course of the untreated disease. Although no clearly effective treatment was available when the study began, participants were not given penicillin even after it became apparent that the drug was effective. When these studies were made known, the U.S. Congress mandated a commission to identify, develop, and articulate the ethical principles that underlie and must guide the acceptable use of human volunteers and subjects in biomedical research. The commission's work resulted in the Belmont Report (National Commission 1978), which serves as the foundational document for research involving humans in the United States.

In the 1980s other egregious examples of scientific misconduct were exposed, including the fabrication and falsification of data, and plagiarism (Broad and Wade 1982, LaFollette 1992). While these were not the first instances of misconduct in science—the Piltdown Man fraud was initiated in 1912—they raised serious concerns not only within but beyond the scientific community. Indeed, the U.S. Congress began to demand more consistent oversight of the process of research funding, which led to the establishment of the Office of Scientific Integrity within the National Institutes of Health, a body that ultimately became the Office of Research Integrity (ORI) in the Department of Health and Human Services.

Moreover, within the scientific community it became clear that concerns regarding serious scientific misconduct were only the tip of the iceberg: the professional standards, expectations of colleagues, and ethical values of the research community with regard to many aspects of research practice were neither clearly articulated nor widely understood. There was, and is, a wide range of accepted practices without much discussion of the underlying assumptions and wider implications that place those practices along the continuum of preferred, acceptable, discouraged, and prohibited. As a result, trainees and even more established researchers are not always clear about the acceptability of established or ongoing practices within the community.

For example, while plagiarism (the misrepresentation of the writings or ideas of another as one's own) is clearly deceptive and unacceptable, other publication practices can also be problematic. The practice of "honorary" authorship—that is, including in the list of authors individuals who have not made a clear and significant intellectual contribution to the published work—became increasingly widespread over the latter part of the twentieth century. Adding names to the list of authors (sometimes without the knowledge or consent of the individual "honored") in exchange for a reagent, a strain of mice, laboratory space, or past tutelage tends not only to "dilute" the apparent contribution of the other authors (depending on a reader's assumptions), but also to deny honorary authors any opportunity to make fully informed decisions about their association with the work.


APPLICATION OF RESEARCH FINDINGS. The end of World War II also brought greater awareness of the ethical implications of the uses of science and technology. The use of the atomic bomb by the United States on Japan raised a host of questions regarding the social responsibility of scientists and engineers for the consequences of their work. The Manhattan Project reflected a national priority to devote all resources, including scientific expertise, to winning the war. Yet those working on the project could only speculate on the immediate and long-term health and environmental effects of an atomic explosion. Moreover, as the scientist J. Robert Oppenheimer mused, the science was so "technically sweet" that its appeal overrode concerns about the creation of an enormously destructive bomb so unlike the conventional weapons with which people were already familiar.

In the 1960s Rachel Carson and others called attention to the dangers of chemical pollutants in the environment, prompting reactions against some of the kinds of chemicals used in agricultural, industrial, and military activities. In the 1970s developments in molecular biology (specifically techniques with recombinant DNA) led researchers to convene a conference in Asilomar, California, to discuss the implications and potential hazards of genetic engineering. This is often identified as the first widespread, proactive effort on the part of the scientific community to acknowledge and address its social responsibility.

The discussion has become more nuanced and complex as the impact of human activity on the environment, on other species, and on other human populations has become more apparent. Whether in the construction of large engineering projects such as dams that dramatically alter the landscape, inundate archaeological treasures, and displace local populations; in the oftentimes poorly executed use of genetically engineered crops in developing nations; or in many other technological applications, the larger ethical and social implications of such undertakings have become the focus of increasing examination, debate, and institutional reform.


The Responsible Conduct of Research and Good Scientific Practice

Progress in science depends on trust between scientists that results have been honestly presented. It also depends on members of society trusting the honesty and motives of scientists and the integrity of their results (European Science Foundation 2000). Fostering this trust requires clear and strong ethical principles to guide the conduct of scientific research. In the United States, ethical research practice is generally referred to as RCR or the responsible conduct of research. The ORI, the U.S. federal agency primarily concerned with education in RCR, has identified nine core instructional areas in RCR (Office of Research Integrity 2005, Steneck 2004). Areas (1) through (5) deal with the actual conduct of research while areas (6) through (9) are associated with interactions between members of the scientific community.

  1. Data Acquisition, Management, Sharing, and Ownership. This area focuses on the ways in which data are recorded, whether in notebooks or in other formats (such as electronic records, photographs, and slides), and how and for how long they should be stored. It explores as well the questions of who owns the data, who is responsible for storing them, and who has access to them. Issues of privacy and confidentiality of patient information as well as intellectual property issues and copyright laws are included.
  2. Conflict of Interest and Commitments. Discussion of conflicting interests and commitments acknowledges the potential for interference in objective evaluation of research findings as a result of financial interests, obligations to other constituencies, personal and professional relationships, and other potential sources of conflict. It also considers strategies for managing such conflicts in order to prevent or control inappropriate bias in research design, data collection, and interpretation.
  3. Human Subjects. Ethical treatment of human research subjects references the requirements of the Office for Human Research Protections (OHRP), which are based on the ethical principles outlined in the Belmont Report (National Commission 1978). These principles include especially (a) respect for persons, as expressed in the requirement for informed consent to participate and protection of vulnerable populations such as children and those with limited mental capacity; (b) emphasis on beneficence, which maximizes the potential benefits of the research and minimizes risks; and (c) attention to considerations of justice in the form of equitable distribution of the benefits and burdens of the research across populations. Adequate attention to patient privacy and to the variety of potential harms, including psychological, social, and economic, is essential.
  4. Animal Welfare. Research involving animals emphasizes animal welfare in accordance with the regulations of the Office of Laboratory Animal Welfare (OLAW). Principles here emphasize respect for animals used in research (Russell and Burch 1959) in the form of "the three Rs": reduction of the number of animals used, replacement of the use of animals with tissue or cell culture or computer models or with animals lower on the phylogenetic scale whenever appropriate and possible, and refinement of the research techniques to decrease or eliminate pain and stress.
  5. Research Misconduct. Dealing with allegations of research misconduct is essential given its potential for derailing a research career. Important components of this topic include definitions of scientific misconduct (fabrication, falsification, and plagiarism, as well as other serious deviations from accepted practice that may qualify as misconduct), the distinction between misconduct and honest error, and protections for whistleblowers.
  6. Publication Practices and Responsible Authorship. Publication practices and responsible authorship examine the purpose of publication and how that purpose is reflected in proper citation practice, criteria for authorship, multiple, duplicate, and fragmentary publication, and the pressure to publish. This area also considers allocation of credit, the implications and assumptions reflected in the order of authors, and the responsibility of authorship.
  7. Mentor/Trainee Responsibilities. The mentor/trainee relationship encompasses the responsibilities of both the mentor and the trainee, collaboration and competition, possible conflicts and potential challenges. It also covers the hierarchy of power and potential for the abuse of power in the relationship.
  8. Peer Review. The tension between collaboration and competition is embodied in the peer review process for both publication and funding. In this area of RCR, issues associated with competition, impartiality, and confidentiality are explored, along with the specifics of the structure and function of editorial and review boards and the ad hoc review process.
  9. Collaborative Science. Not only does research build on the work of others, but investigators from disparate fields increasingly work together. The collaborative nature of science requires that often implicit assumptions about common practices such as authorship and data sharing be made explicit in order to avoid disputes.

In Europe, the term of art for discussion of research ethics is GSP or good scientific practice (European Science Foundation 2000). However, whereas RCR emphasizes guidelines for positive research behaviors, there is a tendency in other countries to emphasize the avoidance of negative behaviors. This means that despite the name (good scientific practice), discussion focuses on scientific misconduct. For instance, in the pursuit of GSP, the U.K. Office of Science and Technology (OST), the oversight body of the U.K. Research Councils, categorizes scientific misconduct into two broad groups. The first pertains to the fabrication and falsification of research results. The second pertains to plagiarism, misquoting, or other misappropriation of the work of other researchers. The OST statement "Safeguarding Good Scientific Practice" (1998) stresses the need to avoid misconduct by means of self-regulation of and by the research community, arguing that "Integrity cannot be prescribed" (Office of Science and Technology).

With the creation of the Danish Committee on Scientific Dishonesty in 1992, Denmark became the first European country to form a national body to handle cases of scientific dishonesty—again with the aim of promoting GSP. This has prompted similar practices in other Scandinavian countries (Vuckovic-Dekic 2000).

A serious case of scientific misconduct in Germany in 1998 sparked the creation of the international Commission on Professional Self Regulation in Science. The commission was charged to explore causes of dishonesty in the science system, discuss preventive measures, examine the existing mechanisms of professional self-regulation in science, and make recommendations on how to safeguard them. It published a report titled "Proposals for Safeguarding Good Scientific Practice," which advised relevant institutions (universities, research institutes, and funding organizations) to establish guidelines of scientific conduct, policies for handling allegations, and rules and norms of good practice (Commission on Professional Self Regulation in Science 1998). Fearing over-regulation, the commission recommended that institutions retain authority for establishing misconduct policies (rather than establishing a centralized committee as in the United States and Denmark).


Ethical Issues in the Application of Research

The Enlightenment creed Sapere aude! (Dare to know!) symbolized the distinctively modern belief that scientific research is an ethical responsibility, indeed a moral obligation of the highest order. Ancient and premodern thinkers generally maintained that there were limits to the quest for knowledge, beyond which lay spiritual and physical dangers. Although critiques of this foundational modern commitment have a long tradition (e.g., Johann Wolfgang von Goethe's Faust and Mary Shelley's Frankenstein), they became more refined, extended, and institutionalized in the latter half of the twentieth century as science and technology began to profoundly alter both society and individual lives. The ramifications of various technological developments (e.g., atomic energy, genetic engineering) have demonstrated that unfettered research will not automatically bring unqualified goods to society.

Daniel Callahan (2003) has argued that there is a widespread assumption of the "research imperative," especially in the area of biomedicine and health care. Though a complex concept, it refers to the way in which research creates its own momentum and justification for gaining knowledge and developing technological responses to diverse medical conditions. It can pertain to the ethically dubious rationale of pursuing research goals that are hazardous or of doubtful human value, or the rationale that the ends of research justify the means (no matter how abhorrent). It can also pertain to the seemingly noble goal of relieving pain and suffering. Yet this commitment to medical progress has raised health care costs and distracted attention from the ultimate ends of individual happiness and the common good. Research, no matter how honorable the intent of those performing and supporting it, must be assessed within the context of other goods, rather than elevated as an overriding moral imperative (Jonas 1969, Rescher 1987).

As is considered in entries on "Science Policy" and "Governance of Science," the core assumption of the inherent value of research was operationalized in post-World War II U.S. governmental policies for the funding of scientific research. What came to be known as the "linear model" of science-society relations posited that investments in "basic" research would automatically lead to societal benefits (Price 1965). However, the framers of this policy never specified how this "central alchemy" would occur, and they did not adequately address the need to mitigate negative consequences of scientific research (Holton 1979). The economic decline of the late 1970s and 1980s, the end of the cold war in the early 1990s, and the growing federal budget deficits of the same period combined to stimulate doubts about the identity of purpose between the scientific community and society (Mitcham and Frodeman 2004).

The very fact that societal resources are limited for the funding of scientific research has stimulated questions about what kind of science should be pursued. For instance, physicist and science administrator Alvin Weinberg argued in the 1960s that internal assessments of the quality of scientific projects and scientific researchers should be complemented by evaluation of scientific merit as judged by scientists in other disciplines, of technological merit, and of social merit. For Weinberg, because of the limited perspective of those within the community, "the most valid criteria for assessing scientific fields come from without rather than from within the scientific discipline that is being rated" (1967, p. 82).

Put simply, while the internal ethics of research asks: "How should we do science?" the external ethics of research takes up a suite of questions involving participants beyond the immediate scientific community and addressing more fundamental ends. As Daniel Sarewitz (1996) noted, the pertinent questions are: "What types of scientific knowledge should society choose to pursue? How should such choices be made and by whom? How should society apply this knowledge, once gained? How can 'progress' in science and technology be defined and measured in the context of broader social and political goals?" (p. ix).

Myriad attempts have been made to reformulate the relationship between scientific research and political purposes, where the criteria for assessing science derive partially from without rather than from within a particular scientific discipline. Models include Philip Kitcher's ideal of "well-ordered science" (2001) and the concept of "use-inspired basic research" put forward by Donald Stokes (1997). Such revised social contracts for science shift the focus from maximizing investments in research to devising mechanisms for directing research toward societal benefits: a shift from "how much?" to "toward what ends and why?" Legislation such as the 1993 U.S. Government Performance and Results Act (GPRA) reflects this focus on the social accountability of publicly funded science, as do technology assessment institutions and the ethical, legal, and social implications research performed in conjunction with genome and nanotechnology research.

The prioritization of research projects is another important area in this regard, including the issue of how much money to allocate to the study of different diseases, which often raises ethical concerns about systematic discrimination. The effective use of scientific research and technologies in development policies intended to decrease poverty and improve the health of those in developing countries is a related topic. Diverse experiences with the Green Revolution, for example, show the importance of context in directing research toward common interests and away from negative outcomes such as ecological harms and the exacerbation of wealth disparities. Both of these topics raise the important issue of the role of various publics in guiding and informing scientific research and technological applications.

Although it is still largely true that "more money for more science is the commanding passion of the politics of science" (Greenberg 2001, p. 3), a number of critics and policy makers understand that more is not necessarily better. Scientific progress does not always equate to societal or personal progress in terms of goals such as safety, health, and happiness (Lightman, Sarewitz, and Desser 2003). The potential unintended physical harms that may result from scientific research have long been recognized and debated in terms of the roles of scientists and non-scientists in risk assessment. More recent developments, especially in bio- and nanotechnology research, and the growing specter of catastrophic terrorist attacks have lent a more urgent tone to questions about "subversive truths" and "forbidden knowledge" (e.g., Johnson 1996).

Limiting scientific research raises practical questions such as "Who should establish and administer controls?" and "At what level should the controls be imposed?" (Graham 1979). Some (e.g., McKibben 2003) have advocated the large scale relinquishment of whole sectors of research such as nanotechnology. Others, including the innovator Ray Kurzweil, argue for a more fine-grained relinquishment and the prioritizing of funding for research on defensive technologies to counteract potential misuses of science. This view holds that the optimal response to the potential for bioterrorism, for example, is to lessen restrictions on and increase funding for bioweapons research so that preventive measures and cures can be developed.

Discussion of the ethical implications of the use of scientific research is, at its core, about procedures for democratic decisions and the allocation of authority and voice among competing societal groups. This can be construed in broad terms, ranging from criticisms of Western science as a dominant, even hegemonic, way of knowing that drowns out other voices, to defenses of science as an inherently democratizing force in which truth speaks to power. These broad issues take on importance in concrete contexts that concern judgments about the appropriate degree of scientific freedom and autonomy within democratic societies. The most important area in which these issues arise is the use of scientific knowledge in formulating public policies.

Although bureaucratic political decision-making has come to rely heavily on scientific input, it is not obvious how the borders and interstices between science and policy should be managed. On the one hand, it seems appropriate that research undertaken by scientific advisory panels (as distinct from research in general) be somehow connected to the needs of decision makers. On the other hand, sound procedures for generating and assessing knowledge require a degree of independence from political (and corporate) pressures. Failure in the first instance leads to generation of irrelevant information and often delayed or uninformed action. Failure in the second case leads to conflicts of interest or the inappropriate distortion of scientific facts to support preexisting political agendas (Lysenkoism is an extreme example) or corporate policies.

The latter instance is often couched in terms of the "politicization of science," a perennial theme in science-society relationships (e.g., Union of Concerned Scientists 2004). Yet in order to attain the democratic ideal of being responsive to the desires and fears of all citizens, the politicization of science in the sense of explicitly integrating it into the larger matrix of goods (and evaluating it from that standpoint) is proper. Scientific research can be "misused" when it is inappropriately mischaracterized (e.g., to over-hype the promise of research in order to justify funding) or delegitimized (Pielke 2004), and it is important to enforce ethical guidelines against these practices. However, the more common misuse of science, which ranges from intentional to unconscious, is the practice of arguing moral or political stands through science (Longino 1990). This can inhibit the ethical bases of disputes from being fully articulated and adjudicated, which often prevents science from playing an effective role in policy making (Sarewitz 2004).

Teaching Research Ethics

Science educators and researchers have generally believed that their responsibility was to teach scientific concepts and laboratory techniques, with the expectation that professional values and ethical standards would be picked up by observing good examples. However, as a result of well-publicized and serious instances of scientific misconduct in the 1980s, the research community became aware of the need to address the responsible conduct of research explicitly. Thus in 1989 the U.S. National Institutes of Health (NIH) began calling for formal instruction for NIH-funded pre- and post-doctoral trainees in the responsible conduct and reporting of research (National Institutes of Health 1989). Moreover, in support of expanding the NIH requirement, both the report of the Commission on Research Integrity, "Integrity and Misconduct in Research" (1995), and the report of the international Commission on Professional Self Regulation in Science, "Proposals for Safeguarding Good Scientific Practice" (1998), highlighted the fact that education in RCR/GSP has been largely neglected worldwide and should be addressed for both trainees and senior scientists. In addition, recognition of the ethical implications of science and technology has led to the incorporation of these topics into many courses and programs aimed at teaching research and engineering ethics. It is widely appreciated that students need to understand that science and technology are not value free and that scientific information can be used for good or ill, misused or abused.

While it is widely believed that "by the time students enter graduate school, their values and ethical standards are so firmly established that they are difficult to change" (Swazey 1993, pp. 237–38), a solid body of evidence supports the view that adults can in fact be taught to behave ethically through specific educational programs introduced at the undergraduate and postgraduate levels (Rest et al. 1986; Bebeau et al. 1995). Such learning is closely linked to the individual's reconceptualization of his or her professional role and relationship to society. Educational programs can affect awareness of moral problems and moral reasoning and judgment. Moreover, studies show that moral perception and judgment influence behavior.

There is some controversy regarding the emphasis of research ethics education, that is, whether to focus on the rules, regulations, expectations, and standards of the research community or to emphasize moral development. In reality, however, teaching research ethics entails both communicating the standards and values of the community and promoting moral development through increased ethical sensitivity and ethical reasoning. Thus the goals of education in research ethics are to:

  1. Increase awareness and knowledge of professional standards. Toward this end, professional standards and ethical values of scientific research and conventions are identified and clarified, as is the range of acceptable practices along the continuum of preferred, acceptable, discouraged, and prohibited. In the process, the assumptions that underlie accepted practices are examined and the immediate and long-term implications of these practices are assessed.
  2. Increase awareness of ethical dimensions of science. This includes examination of the issues associated with both research practice and the application of research findings.
  3. Provide experience in making and defending decisions about ethical issues. Case studies designed to illustrate common research practices and situations are generally used. Discussion of these cases invariably entails in-depth analysis of affected parties, points of conflict, implications of various courses of action, and examination of the expectations, needs and responsibilities of the different characters in the scenario.
  4. Promote a sense of professional responsibility to be proactive in recognizing and addressing ethical issues associated with research.

A number of key characteristics of educational programs in research ethics have been identified (Bird 1999, Institute of Medicine 2002). These reflect principles of effective adult education as well as common sense. Programs that are required emphasize the view that ethical issues are inherent in research and that awareness of the ethical values and standards of the research community is an essential component of professional education. Interactive discussion of ethical issues and concerns raised by a realistic case provides participants with an opportunity to share their experience and solve problems in context. This approach employs principles of learning science that have been identified through research on how people learn (Bransford et al. 1999). Broad faculty involvement in educational programs in research ethics demonstrates that this work is valued by professionals across the discipline and incorporates a variety of experience and a range of perspectives with regard to accepted practices. Programs should begin early in research education (e.g., in undergraduate science laboratory courses) and continue throughout college and graduate or other professional education. In this way, individuals can reflect on their own experience, and their understanding and appreciation of ethical concerns and strategies for problem solving can evolve. When the various components of graduate education (i.e., courses, seminars, laboratory meetings, etc.) address ethical issues, they reinforce and complement each other.


A variety of formats and strategies have been developed to teach research ethics. The most effective are case-based and integrate discussion of research ethics into the various elements of research education: modules in core courses, stand-alone full-semester or short courses on research ethics, departmental seminars, workshops, laboratory and research team meetings, one-on-one interactions between trainees and research supervisors, and computer-based instruction (Swazey and Bird 1997, Institute of Medicine 2002). Each approach has strengths and weaknesses.

Through explicit discussion of ethical issues associated with the practice of research and the application of research findings the research community acknowledges the complexity of the issues and the need to address them. Specifically addressing RCR reaffirms the responsibility of the research community for research integrity, individually and collectively, and the necessity of providing this information to its members. Identifying and examining the ethical issues associated with the application (or misapplication) of research findings emphasizes the responsibility of researchers and of citizens in general to examine and assess the ramifications of science and technology for society.


STEPHANIE J. BIRD
ADAM BRIGGLE

SEE ALSO Accountability in Research; Animal Welfare; Chinese Perspectives: Research Ethics; Ethics: Overview; Misconduct in Science: Overview; Nazi Medicine; Science: Overview; Sociological Ethics.

BIBLIOGRAPHY

Bebeau, Muriel J.; Kenneth D. Pimple; Karen M. T. Muskavitch; Sandra L. Borden, and David L. Smith. (1995). Moral Reasoning in Scientific Research: Cases for Teaching and Assessment. Bloomington, IN: Indiana University.

Bird, Stephanie J. (1999). "Including Ethics in Graduate Education in Scientific Research." In Perspectives on Scholarly Misconduct in the Sciences, John M. Braxton, ed. Columbus: Ohio State University Press.

Bransford, John D., A. L. Brown, and R. R. Cocking. (1999). How People Learn: Brain, Mind, Experience, and School. Committee on Developments in Science of Learning and Commission on Behavioral and Social Sciences and Education. Washington DC: National Academies Press.

Broad, William J., and Nicholas Wade. (1982). Betrayers of the Truth: Fraud and Deceit in the Halls of Science. New York: Simon and Schuster.

Callahan, Daniel. (2003). What Price Better Health?: Hazards of the Research Imperative. Berkeley, CA: University of California Press.

Commission on Research Integrity (CORI). (1995). Integrity and Misconduct in Research. Washington, DC: U.S. Department of Health and Human Services, Public Health Services.

Graham, Loren R. (1979). "Concerns about Science and Attempts to Regulate Inquiry." In Limits of Scientific Inquiry, Gerald Holton and Robert S. Morison, eds. New York: W.W. Norton.

Greenberg, Daniel S. (2001). Science, Money, and Politics: Political Triumph and Ethical Erosion. Chicago: University of Chicago Press. Seeks to explain the success of autonomous science embedded in U.S. politics from World War II through the beginning of the twenty-first century and argues that lobbying for money by scientists has corroded their integrity.

Holton, Gerald. (1979). "From the Endless Frontier to the Ideology of Limits." In Limits of Scientific Inquiry, Gerald Holton and Robert S. Morison, eds. New York: W.W. Norton.

Institute of Medicine and National Research Council. (2002). Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct. Washington, DC: National Academies Press.

Johnson, Deborah. (1996). "Forbidden Knowledge and Science as Professional Activity," The Monist, 79(2): 197–217.

Jonas, Hans. (1969). "Philosophical Reflections on Experimenting with Human Subjects," Daedalus, 98: 219–247.

Kitcher, Philip. (2001). Science, Truth, and Democracy. Oxford: Oxford University Press. Argues that epistemic values do not stand above or apart from practical social concerns and offers a new model for controlling and directing scientific inquiry.

LaFollette, Marcel C. (1992). Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing. Berkeley: University of California Press.

Lightman, Alan; Daniel Sarewitz, and Christina Desser, eds. (2003). Living with the Genie: Essays on Technology and the Quest for Human Mastery. Washington, DC: Island Press. Collects sixteen essays on the central tension between the increasing pace of scientific and technological change and an immutable human core, stressing the importance of individual and public decisions in shaping the outcomes of this tension.

Longino, Helen E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.

McKibben, Bill. (2003). Enough: Staying Human in an Engineered Age. New York: Times Books. Advocates setting limits on the pursuit of knowledge and the quest for greater material wealth as they threaten to undermine the essence of being human.

Mitcham, Carl, and Robert Frodeman. (2004). "New Directions in the Philosophy of Science: Toward a Philosophy of Science Policy," Philosophy Today. 48(5 Supplement): 3–15.

National Commission for the Protection of Human Subjects (1978). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: Department of Health, Education, and Welfare, Government Printing Office, (OS) 78–0012.

National Institutes of Health. (1989). Reminder and Update: Requirement for Programs on the Responsible Conduct of Research in National Research Service Award Institutional Training Programs. NIH Guide for Grants and Contracts.

Price, Don K. (1965). The Scientific Estate. Cambridge, MA: Harvard University Press. Critiques the linear model of science policy that derived from Vannevar Bush's Science—The Endless Frontier (1945) and comments on fundamental issues in the relationship between science and politics.

Rescher, Nicholas. (1987). Forbidden Knowledge and Other Essays on the Philosophy of Cognition. Boston: D. Reidel. Argues that knowledge is only one good among others. Chapter one is titled "Forbidden Knowledge: Moral Limits of Scientific Research."

Rest, James R.; Muriel Bebeau, and J. Volker. (1986). "An Overview of the Psychology of Morality." In Moral Development: Advances in Research and Theory, James R. Rest, ed. New York: Praeger.

Russell, W. M. S., and R. L. Burch. (1959). The Principles of Humane Experimental Technique. London: Methuen.

Sarewitz, Daniel. (1996). Frontiers of Illusion: Science, Technology, and the Politics of Progress. Philadelphia, PA: Temple University Press. Traces modern myths about science and its relation to society, outlines the problems they raise, and concludes with recommendations to form a new mythology.

Sarewitz, Daniel. (2004). "How Science Makes Environmental Controversies Worse," Environmental Science and Policy 7(5): 385–403. Case studies and explanations pertaining to the way in which some environmental conflicts become "scientized."

Steneck, Nicholas H. (2004). Introduction to the Responsible Conduct of Research. Washington, DC: Government Printing Office.

Stokes, Donald E. (1997). Pasteur's Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press. Analyzes the goals of understanding and use and offers a model of use-inspired basic research to help both science and society.

Swazey, Judith P. (1993). "Teaching Ethics: Needs, Opportunities and Obstacles." In Ethics, Values, and the Promise of Science. Forum Proceedings, February 25-26, 1993. Research Triangle Park, NC: Sigma Xi, the Scientific Research Society.

Swazey, Judith P., and Stephanie J. Bird. (1997). "Teaching and Learning Research Ethics." In Research Ethics: A Reader, Deni Elliott and Judy E. Stern, eds. Hanover, NH: University Press of New England.

Vuckovic-Dekic, Lj. (2000). "Good Scientific Practice." Archive of Oncology 8(Suppl. 1): 3–4.

Weinberg, Alvin M. (1967). Reflections on Big Science. Cambridge, MA: MIT Press.


INTERNET RESOURCES

Commission on Professional Self Regulation in Science. (1998). "Recommendations of the Commission on Professional Self Regulation in Science." Available from http://www.dfg.de/aktuelles_presse/reden_stellungnahmen/download/self_regulation_98.pdf.

European Science Foundation. (2000). "Good Scientific Practice in Research and Scholarship." Available from http://www.hrb.ie/storage/researchfunding/fundingpolicies/goodscientificpractice.pdf.

Office of Research Integrity. (2005). "Education—Responsible Conduct of Research." Available from http://ori.dhhs.gov/education/ed_rcr.shtml.

Office of Science and Technology. (1998). "Safeguarding Good Scientific Practice." Available from http://www.ost.gov.uk/research/councils/safe.htm#2.1.

Pielke, Roger, ed. (2004). "Report on the Misuse of Science in the Administrations of George H.W. Bush and William J. Clinton." Available from http://sciencepolicy.colorado.edu/admin/publication_files/resourse-1429-ENVS%204800%20Report.pdf. Presents a taxonomy of the misuse of science and illustrates it through six case studies.

Union of Concerned Scientists. (2004). "Scientific Integrity in Policy Making: An Investigation into the Bush Administration's Misuse of Science." Available from http://www.ucsusa.org/documents/RSI_final_fullreport.pdf. Illustrates through several vignettes the argument that individuals in the administration of President George W. Bush have suppressed or distorted research findings and undermined the quality and integrity of the appointment process.
