Research Misconduct



Research encompasses a broad range of activities that are bound together by the common goal of advancing knowledge and understanding. Its usefulness to society rests on the expectation that researchers undertake and report their work fairly, accurately, and honestly. Researchers who fail to fulfill this expectation lack integrity and can be accused of engaging in research misconduct.

Policies and Procedures

In 1981, when Congress convened its first hearing to investigate fraud in biomedical research, researchers expressed confidence in their ability to police their own affairs. The fact that researchers who engaged in misconduct were caught seemingly justified this confidence. A few professional societies subsequently issued reports discussing the importance of integrity in research, including the Association of American Medical Colleges, which published The Maintenance of High Ethical Standards in the Conduct of Research in 1982, and the Association of American Universities, which published its Report on the Integrity of Research in 1983. A small number of research universities also adopted research misconduct policies, primarily the ones directly affected by misconduct cases, such as Yale and Harvard universities. However, neither government nor the majority of research universities saw any pressing need to make major changes. Through the mid-1980s, research misconduct remained largely undefined on most university campuses and was policed only through the informal mechanisms of peer review and general policies governing academic conduct.

The Health Research Extension Act of 1985 changed this situation and required government and universities to take a more aggressive approach to investigating research misconduct. In response to this call for action, the Public Health Service (PHS) published an Interim Policy on Research Misconduct in 1986 and adopted a final policy in 1989. The latter established two offices to investigate and adjudicate research misconduct cases: the Office of Scientific Integrity (OSI) as part of the National Institutes of Health (NIH) and the Office of Scientific Integrity Review (OSIR), affiliated with the Office of the Assistant Secretary of Health (OASH). The National Science Foundation (NSF) also published Final Regulations for Misconduct in Science and Engineering Research (1987) and assigned administration of its regulations to the NSF Office of the Inspector General (OIG). These actions established policies and procedures for investigating research misconduct. They also required research universities to establish their own policies and procedures for handling research misconduct cases, which they slowly did over the course of the 1990s.

The responsibility for administering research misconduct policies on most university campuses is assigned to the chief research officer, although in a few cases universities have established research integrity or misconduct committees. On campuses with large research budgets, one staff person, sometimes called the "research integrity officer," is assigned primary responsibility for initiating inquiries, setting up investigation committees, making timely reports, and handling other matters relating to research misconduct. The process for determining whether misconduct has been committed usually follows the three-step model outlined by the federal government: inquiry, investigation, and adjudication. During inquiries, charges are informally assessed to determine whether there is enough evidence to proceed with a formal investigation. If there is, a formal investigation follows, after which decisions about innocence or guilt and appropriate penalties or exoneration are made (adjudication).

Proper handling of misconduct cases poses three challenges for universities. First, since state and federal governments provide no funds to comply with research misconduct regulations, other sources of support are needed. Second, misconduct cases often pit one university employee against another, making it difficult for the university to provide equal justice and protection to all concerned. Third, universities have conflicts of interest when they confront reports of research misconduct.

Several factors can make it tempting for universities to dismiss cases early in the process prior to fair and complete investigations. Investigations can be expensive and divisive. Findings of misconduct can require that funds be returned to a funding agency, even if some or all of the funds have already been spent. Reports of research misconduct can also erode public confidence in a university. However, in addition to the clear responsibility universities have to assure that public research funds are used properly, the costs of cover-ups can be high and may lead to further regulation. Therefore universities must take their responsibility for conducting fair and complete investigations seriously.

Definitions

The first formal government definition of research misconduct was published in the 1986 PHS Interim Policy. This initial definition framed all subsequent discussions of research misconduct in two important ways. First, in rejecting the use of the term "fraud" for describing inappropriate behavior in research, PHS officials helped assure that all subsequent discussions would be framed in terms of "research misconduct." Second, the three key terms used in the Interim Policy to describe research misconduct, "fabrication, falsification, and plagiarism" (FFP), have been used in all subsequent federal and many university definitions.

As important and long-lasting as the framework established in the Interim Policy turned out to be, it raised points of contention that to this day continue to polarize discussions of research misconduct. Most importantly, PHS proposed, and NSF officially adopted in 1987, one additional phrase in the government definition of research misconduct: "other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research." The "other practices" phrase turned out to be very controversial. Scientists worried that professional disagreements over methods or theories might be construed as other practices that seriously deviate from those that are commonly accepted. They wanted a tight, unambiguous definition that left no room for arbitrary interpretation. Government officials, particularly at NSF, felt they needed some flexibility to investigate behavior that did not constitute FFP but that nonetheless was clearly inappropriate and undermined the public's investment in research. NSF officials backed up their claim with several examples, including a widely publicized case of inappropriate sexual behavior by an anthropologist who had an NSF grant to train students in field research.

Two efforts to resolve disagreements over the definition of misconduct in the 1990s failed to produce a consensus. The first, led by a subcommittee of the National Academy of Sciences, dropped the "other practices" phrase from the formal definition of research misconduct, but agreed that there were "other questionable research practices" that needed to be investigated, not by government but by research institutions and professional societies. The second effort to produce a consensus definition by a specially appointed PHS Commission on Research Integrity failed to win serious support and was largely ignored. The failure of these efforts and the lack of a common government definition led eventually to a new government effort to produce a uniform federal definition for research misconduct, coordinated this time by the Office of Science and Technology Policy (OSTP) in the Executive Office of the President.

The new OSTP policy, which was published in the Federal Register in December 2000, defines "research misconduct" as "fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results." It also sets three criteria for proving misconduct, which further narrow the definition of research misconduct. When adopted, the new definition will require evidence that the behavior:

  • [represents a] significant departure from accepted practices of the relevant research community; and
  • [was] committed intentionally, or knowingly, or recklessly; and
  • [can be] proven by a preponderance of evidence.

With the publication of the OSTP Policy, nearly two decades of intense debate over the definition of research misconduct reached a tentative conclusion, assuming the federal agencies that fund research follow through and adopt the proposed OSTP definition.

Misconduct Cases

The evolution of research misconduct policy has unquestionably been driven by a small number of prominent cases. In the early 1970s, William Summerlin, working at the Sloan-Kettering Institute, tried to pass off black patches painted on white mice as genuine skin grafts that he had applied using a new technique. Elias Alsabti, while at Temple University and Jefferson Medical College in Philadelphia and the Anderson Hospital in Houston, took published articles, replaced the authors' names with his own name, made occasional minor modifications in the text, and then submitted them to other journals for publication. His misdeeds, which eventually included the submission of eighty fraudulent articles, became public in 1978. A junior researcher at Yale University, Vijay Soman, used information from an unpublished manuscript from another laboratory, which his mentor, Philip Felig, was reviewing, to publish his own supposedly original findings on the same topic. Yale initially ignored the charges of plagiarism and data falsification brought by the researcher whose work Soman used. The charges were confirmed in 1980 by an investigation conducted by the NIH. The 1981 Congressional hearings on fraud in biomedical research were convened specifically to investigate these cases.

Reports of new cases of research misconduct and the 1982 publication of Betrayers of the Truth: Fraud and Deceit in the Halls of Science, by New York Times writers William Broad and Nicholas Wade, guaranteed that the problem of research misconduct did not disappear after the 1981 hearings. One case, which involved data falsification by John Darsee, a promising young cardiovascular researcher at Harvard, dragged on for five years, due not to uncertainty about the actual misconduct but to a dispute over the responsibilities of others who oversaw Darsee's work. A paper by NIH researchers Walter Stewart and Ned Feder raised serious questions about the role of Darsee's chief mentor, Eugene Braunwald, in reviewing publications he coauthored with Darsee. Disagreement over the publication of Stewart and Feder's paper kept the Darsee case alive through most of the 1980s.

As the Darsee case was slowly coming to an end, two new cases assured continued public interest in research misconduct. The first involved disputed data published in an article in Cell in 1986, based on research conducted by Tufts University researcher Thereza Imanishi-Kari. A postdoctoral fellow working in Imanishi-Kari's laboratory, Margot O'Toole, raised questions about research misconduct when she was unable to replicate some of the results reported in the Cell article. Eventually, the article's most prominent co-author, Nobel laureate and Whitehead Institute Director David Baltimore, was drawn into the dispute. After numerous investigations by the Massachusetts Institute of Technology (MIT) and the Office of Scientific Integrity (renamed the Office of Research Integrity, or ORI, in 1992), the charges against Imanishi-Kari were dismissed by a Department of Health and Human Services (HHS) appeal board in 1996, ten years after the original article was published and five years after the article had been retracted by four of the five co-authors, including Baltimore. (Baltimore was never formally charged with misconduct.) The bitter dispute between Baltimore and his supporters on the one hand and Congressman John Dingell of Michigan and research critics on the other seriously polarized the debate over the importance of and ways to deal with research misconduct.

The second prominent case involved NIH AIDS researcher Robert Gallo, a researcher in his laboratory, Mikulas Popovic, and their 1984 article published in Science claiming discovery of the AIDS virus. At issue was whether Gallo's team had isolated the virus described in the article or whether they had improperly used samples supplied by the Institut Pasteur in France. A series of articles by Chicago Tribune reporter John Crewdson and reports issued by the ORI and a subcommittee headed by Representative John Dingell cast serious doubts on Gallo's claims. However, the charges against Popovic were dismissed in 1995 by the same HHS appeal board that had dismissed the charges against Imanishi-Kari. ORI therefore decided to drop its charges against Gallo, arguing that the appeal board had adopted a new definition of misconduct that ORI was not prepared to meet.

In the late 1990s the focus of interest in research misconduct shifted to clinical research, following the death of a young research subject, Jesse Gelsinger, during a gene therapy trial at the University of Pennsylvania in 1999. In this and other cases involving clinical research, the questionable research behavior does not constitute research misconduct, narrowly defined as FFP, but rather raises questions about conflicts of interest, misleading or incomplete reports on past research, the failure to inform research subjects of risks, and noncompliance with federal rules. These new concerns have raised questions about the way researchers are trained and steps that can be taken to foster the "responsible conduct of research."

Responsible Conduct of Research

Interest in instruction in the "responsible conduct of research" (RCR) emerged in the late 1980s as one solution to growing public concern about research misconduct. Although a number of earlier reports had stressed the importance of education in research training, few substantive changes in the way researchers are trained were made prior to the 1989 Institute of Medicine report The Responsible Conduct of Research in the Health Sciences. Within a year, NIH and the Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA) published rules that required researchers seeking a special type of award known as a "training grant" to include a description of "activities related to the instruction about the responsible conduct of research" in their applications.

Over the course of the 1990s the modest NIH/ADAMHA training grant requirement fostered the development of a growing number of RCR courses on university campuses and related instructional materials, such as textbooks, videos, and Internet resources. This development was given a considerable boost in 2000, when NIH implemented Required Education in the Protection of Human Research Participants and ORI published an RCR requirement that would have affected all PHS-funded research, had it not been suspended due to Congressional questions about the way it was developed. However, even without the broad ORI RCR requirement, efforts continue on university campuses to formalize instruction in the responsible conduct of research, relying more and more on web-based training.

Future Considerations

When research misconduct first emerged as a public concern in the late 1970s, it was seen primarily as an aberration that did not typify the conduct of most researchers. By implication, it was assumed that most researchers adopted high standards of integrity in their work. Since the early 1980s, research misconduct has continued to occur, but the number of cases remains small in comparison to the total number of active researchers. Research misconduct, defined as intentional FFP, still seems to be an aberration that does not typify the conduct of most researchers. However, based on a growing body of research on research integrity, it can no longer be assumed that most researchers do in fact adopt high standards for integrity in their work.

Studies of peer review, publication practices, conflicts of interest, bias, mentoring, and other elements of the research process consistently report that significant numbers of researchers (defined as 10% or higher) do not adhere to accepted norms for the responsible practice of research. Significant numbers inappropriately list their names on publications, are unwilling to share data with colleagues, use inappropriate statistical analyses, provide inaccurate references in publications, fail to disclose conflicts of interest, and engage in other practices that fall short of ideal standards for the responsible conduct of research. There has been widespread agreement that these other "questionable research practices," as they were termed in the 1992 NAS report, should not be considered research misconduct. However, whether classed as misconduct or not, these practices unquestionably waste public research dollars, undermine the integrity of the research record, and can even endanger public health. As a result, the focus of attention both in government and on university campuses is slowly shifting from confronting misconduct to fostering integrity through education and the serious appraisal of what it means to be a research university.

See also: Ethics, subentry on Higher Education; Federal Funding for Academic Research; Misconduct and Education.

bibliography

Association of American Medical Colleges. 1982. The Maintenance of High Ethical Standards in the Conduct of Research. Washington, DC: Association of American Medical Colleges.

Association of American Universities. 1983. Report of the Association of American Universities Committee on the Integrity of Research. Washington, DC: Association of American Universities.

Broad, William J., and Wade, Nicholas. 1982. Betrayers of the Truth: Fraud and Deceit in the Halls of Science. New York: Simon and Schuster.

Buzzelli, Donald E. 1993. "The Definition of Misconduct in Science: A View from NSF." Science 259:584-585, 647-648.

Crewdson, John. 2002. Science Fictions: A Scientific Mystery, a Massive Cover-up, and the Dark Legacy of Robert Gallo. Boston: Little, Brown.

Health Research Extension Act of 1985. U.S. Public Law 99-158.

Hixson, Joseph. 1976. The Patchwork Mouse. Garden City, NY: Anchor Press/Doubleday.

Institute of Medicine, and Committee on the Responsible Conduct of Research. 1989. The Responsible Conduct of Research in the Health Sciences. Washington, DC: National Academy of Sciences.

Kevles, Daniel J. 1998. The Baltimore Case: A Trial of Politics, Science, and Character, 1st edition. New York: W.W. Norton.

National Academy of Sciences, Committee on Science, Engineering, and Public Policy, Panel on Scientific Responsibility and the Conduct of Research. 1992. Responsible Science: Ensuring the Integrity of the Research Process. Washington, DC: National Academy Press.

National Institutes of Health, and the Alcohol, Drug Abuse, and Mental Health Administration. 1989. "Requirement for Programs on the Responsible Conduct of Research in National Research Service Award Institutional Training Programs." NIH Guide for Grants and Contracts 18:1.

National Institutes of Health. 2000. "Required Education in the Protection of Human Research Participants." Washington, DC: National Institutes of Health.

National Science Foundation. 1987. Misconduct in Science and Engineering Research: Final Regulations. Washington, DC: National Science Foundation.

Office of the President and Office of Science and Technology Policy. 2000. Federal Policy on Research Misconduct. Washington, DC: Office of Science and Technology Policy.

Sarasohn, Judy. 1993. Science on Trial: The Whistle-Blower, the Accused, and the Nobel Laureate. New York: St. Martin's Press.

Steneck, Nicholas H. 1984. "The University and Research Ethics: Commentary." Science, Technology, and Human Values 9 (4):6-15.

Steneck, Nicholas H. 1994. "Research Universities and Scientific Misconduct: History, Policies, and the Future." Journal of Higher Education 65 (3):54-69.

Steneck, Nicholas H. 1999. "Confronting Misconduct in Science in the 1980s and 1990s: What Has and Has Not Been Accomplished?" Science and Engineering Ethics 5 (2):116.

Steneck, Nicholas H., and Scheetz, Mary D., eds. 2002. Investigating Research Integrity: Proceedings of the First ORI Research Conference on Research Integrity. Washington, DC: Office of Research Integrity.

internet resource

U.S. Department of Health and Human Services Office of Research Integrity. 2002. Responsible Conduct of Research (RCR) Education. <http://ori.dhhs.gov/html/programs/congressionalconcerns.asp>.

Nicholas H. Steneck
