Misconduct in Science: Overview
In the United States the official definition of research misconduct is:
... fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. ... Fabrication is making up data or results and recording or reporting them. Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. ... Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit. Research misconduct does not include honest error or differences of opinion. A finding of research misconduct requires that: There be a significant departure from accepted practices of the relevant research community; and the misconduct be committed intentionally, or knowingly, or recklessly; and the allegation be proven by a preponderance of the evidence. (Office of Science and Technology Policy 2000, p. 76262)
A somewhat broader definition of scientific misconduct has been put forward by the Wellcome Trust, the largest biomedical charity in the United Kingdom:
... [t]he fabrication, falsification, plagiarism or deception in proposing, carrying out or reporting results of research or deliberate, dangerous or negligent deviations from accepted practices in carrying out research. It includes failure to follow established protocols if this failure results in unreasonable risk or harm to humans, other vertebrates or the environment. (Koenig 2001, p. 1411)
Germany (Bostanci 2002) and China (Yimin 2002) have also developed definitions of scientific misconduct that are somewhat broader than the U.S. version.
In all cases, core elements of the definition of misconduct in science (also known as scientific or research misconduct) include fabrication and falsification of research data, and plagiarism (FFP). This reflects both philosophy and history. Researchers depend on the reliability of the published work of others in order to determine how best to design and conduct investigations of research questions. Rather than reproducing all related experiments, investigators expect to be able to build on previous research, not only their own but also that of others. Thus fabrication and falsification undermine the fundamental and central tenets of the scientific enterprise. In addition, researchers expect to be recognized and held accountable for their contribution to a scientific body of knowledge. Plagiarism violates this expectation.
Although in retrospect the work of some earlier scientists has been the subject of debate (Broad and Wade 1982), during the seventeenth, eighteenth, and nineteenth centuries the only significant discussion of misconduct among scientists was an isolated work by Charles Babbage (1830), which identified three types of misconduct: trimming data to fit expectations; cooking data by discarding what did not fit expectations; and the outright forgery or creation of fictitious data. The most famous instance of scientific forgery occurred in the early twentieth century with the discovery of Piltdown Man.
In the 1980s, blatant examples of research misconduct came to light (Broad and Wade 1982, Sprague 1993). As a result, congressional committees responsible for oversight of various aspects of science and technology pressured funding agencies to develop policies to address what seemed to be the increasing incidence of scientific misconduct. These agencies, in particular the National Institutes of Health (NIH) and the National Science Foundation (NSF), developed policies designed to explicitly identify and address allegations of scientific misconduct.
In its initial policy, the NIH described misconduct as "serious deviation, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research" (Public Health Service 1986, p. 2), a definition from which later definitions have derived (Buzzelli 1999). Fabrication, falsification, and plagiarism are clearly provided as examples, and the "other serious deviation from accepted practices" (OSD) clause emphasizes the primary role of the scientific community in identifying and setting the ethical standards for its members (Buzzelli 1999). Thus the OSD clause reflects the widespread view that the scientific community has a collective responsibility for establishing and upholding the professional standards of the community (Chubin 1985, Frankel 1993). The OSD clause is a common element of definitions of scientific misconduct found in many policies developed by U.S. funding agencies, universities, and professional societies. Nevertheless, in defining scientific and research misconduct in the United States, the scientific community has tended to focus on FFP and has opposed the OSD clause (National Academy of Sciences 1992, Buzzelli 1999).
In 1993 the Commission on Research Integrity (CORI) was formed to advise the U.S. Department of Health and Human Services (DHHS) on ways to improve the Public Health Service response to allegations of misconduct in biomedical and behavioral research. The Commission found that in spite of the community's seeming preference "for a narrow and precise definition centered upon 'fabrication, falsification and plagiarism (FFP),' 'FFP' is neither narrow nor precise" (CORI 1995, p. 8). CORI's report, Integrity and Misconduct in Research (1995), clarified the role of intent in research misconduct and reframed the definition in terms of misappropriation of words or ideas (specifically including information gained through confidential review of manuscripts or grant applications), interference in the research activities of others (i.e., intentionally taking, hiding, or damaging research-related equipment, materials such as reagents, software, writings, or research products), and misrepresentation of information so as to deceive, either intentionally or with reckless disregard for the truth (thereby covering both fabrication and falsification). The Commission also identified obstruction of investigations of research misconduct and noncompliance with research regulations as other relevant forms of professional misconduct, and highlighted the need to protect from retaliation those who bring forward good faith allegations of misconduct (commonly known as whistle-blowers). In addition, the Commission emphasized the need for a proactive rather than reactive approach to misconduct in science and recommended that research institutions be required to provide education in research integrity.
In the 1980s, when concerns about the frequency of scientific misconduct were initially raised, the common response by senior members of the scientific community was that scientific misconduct is rare and in any case science is self-correcting. Given that FFP not only undermines but is inconsistent with the bedrock principles on which scientific research is based, it is not surprising that members of the scientific community would assume that genuine members of the community would not engage in such practices and that their occurrence would be rare. Indeed the frequency of misconduct continues to be debated. At the same time, it has become clear that the peer review process is largely incapable of detecting fabrication or falsification. What is not in doubt is the serious negative impact of even a single occurrence of misconduct, not only for those involved and for those whose work is misdirected by fraudulent research, but also on trust both within the scientific community and beyond (Kennedy 2000).
An apparent tension continues with regard to internal (i.e., within the scientific community) versus external governmental control of both the definition of scientific misconduct and the oversight of scientific research. However, the tension may be more apparent than real, since the scientific community is not homogeneous in its views on research integrity and misconduct. As of 2002, U.S. government policy regarding scientific misconduct continues to emphasize FFP and reflects vocal opposition by some segments of the scientific community to the OSD clause, in spite of the clause's obvious and necessary reliance on the scientific community's own standards and assessment of accepted practices. It is nevertheless generally recognized that FFP does not encompass all of the serious deviations from accepted practice that are of concern to the wider scientific community. This is apparent from formal definitions of scientific misconduct like that advanced by the Wellcome Trust, educational programs at research institutions and professional scientific societies, and professional codes of ethics that identify and examine a wide array of other issues that arise in conducting and reporting scientific research, and in training science professionals. These issues include topics considered part of the responsible conduct of research (RCR) such as data management, humane treatment of research subjects whether laboratory animals or human volunteers, conflicts of interest, publication practices, peer review, and mentorship responsibilities. Moreover, while the Office of Research Integrity (ORI) is responsible for addressing allegations of scientific misconduct either directly or by overseeing investigations conducted by research institutions, the agency relies on research institutions to conduct inquiries and investigations of allegations of research misconduct brought against their employees and students.
More to the point, the focus of concern both within the scientific community and in governmental agencies (exemplified by the ORI) is evolving (Mitcham 2003). Increasingly the ORI promotes research integrity through education and training in RCR (Pascal 1999). The scientific community, too, places less emphasis on misconduct and is more focused on research integrity and education (Institute of Medicine/National Research Council 2002). While there is some consensus as to what constitutes the most egregious form of scientific misconduct (i.e., FFP), the concept continues to evolve both within the United States (as a result of the focus on the elements of RCR) and in other countries, for example, China and Germany.
STEPHANIE J. BIRD
Babbage, Charles. (1989 [1830]). "Reflections on the Decline of Science in England and on Some of its Causes." In The Works of Charles Babbage, Vol. 7, ed. Martin Campbell-Kelly. London: Pickering.
Bostanci, Adam. (2002). "Germany Gets in Step with Scientific Misconduct Rules." Science 296: 1778.
Broad, William, and Nicholas Wade. (1982). Betrayers of the Truth: Fraud and Deceit in the Halls of Science. New York: Simon and Schuster.
Buzzelli, Donald E. (1999). "Serious Deviation from Accepted Practices." Science and Engineering Ethics 5: 275–282. A commentary on "Developing a Federal Policy on Research Misconduct" by Sybil Francis.
Chubin, Daryl E. (1985). "Misconduct in Research: An Issue of Science Policy and Practice." Minerva 23(2): 175–202.
Commission on Research Integrity (CORI). (1995). Integrity and Misconduct in Research. Washington, DC: U.S. Department of Health and Human Services, Public Health Service.
Frankel, Mark S. (1993). "Professional Societies and Responsible Research Conduct." In Responsible Science: Ensuring the Integrity of the Research Process, Vol. 2. Washington, DC: National Academy Press.
Institute of Medicine / National Research Council. (2002). Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: National Academies Press.
Kennedy, Donald. (2000). "Reflections on a Retraction." Science 289: 1137.
Koenig, Robert. (2001). "Wellcome Rules Widen the Net." Science 293: 1411–1413.
Mitcham, Carl. (2003). "Co-Responsibility for Research Integrity." Science and Engineering Ethics 9: 273–290.
National Academy of Sciences (NAS). (1992). Responsible Science: Ensuring the Integrity of the Research Process, Vol. I. Washington, DC: National Academies Press.
Office of Science and Technology Policy. (2000). "Federal Policy on Research Misconduct." Federal Register 65: 76260–76264.
Pascal, Chris B. (1999). "The History and Future of the Office of Research Integrity: Scientific Misconduct and Beyond." Science and Engineering Ethics 5: 183–198.
Public Health Service. (1986). NIH Guide for Grants and Contracts 15(11). Special issue, July 18.
Sprague, Robert L. (1993). "Whistleblowing: A Very Unpleasant Avocation." Ethics and Behavior 3: 103–133.
Yimin, Ding. (2002). "Beijing U. Issues First-Ever Rules." Science 296: 448.