Was Rep. John Dingell's investigation of scientific fraud unjustified in the "Baltimore case"?
Viewpoint: Yes, the Dingell investigation was an abuse of congressional power that hindered the objectivity of a scientific investigation.
Viewpoint: No, the Dingell investigation of scientific fraud was not unjustified because peer review and self-regulation cannot work alone in maintaining scientific integrity.
Although scientists agree that they should be accountable to the federal government for the funds they receive, they object to the concept that members of Congress can evaluate the merits of scientific experiments, or settle debates about scientific concepts and the interpretation of research results. In an attempt to pass judgment on science, members of Congress could use their power to determine the course of scientific research or direct research funds to their own constituents. Moreover, press coverage of congressional hearings puts scientists at a distinct disadvantage. The dispute known as the "Baltimore Case" is the best known example of a recent confrontation between scientists and a congressman who believed that he had evidence of scientific fraud. The case revolved around a paper published in the journal Cell in 1986.
Many observers believe that the scientists who faced Congressman John Dingell (D-Michigan), chairman of the House Energy and Commerce Committee and the House Subcommittee on Oversight and Investigations, were systematically bullied, intimidated, threatened, and smeared in the press by committee members who leaked confidential and incomplete information to reporters. Critics of the Dingell investigation called it a witch-hunt and offered the Congressman's own words as evidence. When David Baltimore failed to demonstrate what the Congressman considered proper humility in the face of questioning by his committee, Dingell promised his colleagues that he would "get that son of a bitch … and string him up high." Given the fact that the "son of a bitch" in question was a Nobel Laureate and a highly respected molecular biologist, many scientists interpreted the long, drawn-out investigation as an attempt to intimidate all scientists who might become involved in scientific disputes.
David Baltimore was elected to the National Academy of Sciences in 1974 and shared the 1975 Nobel Prize in Physiology or Medicine with Renato Dulbecco and Howard Temin for "discoveries concerning the interaction between tumor viruses and the genetic material of the cell." Temin and Baltimore independently discovered an enzyme called reverse transcriptase, which allows genetic information to flow from RNA to DNA. The discovery of reverse transcriptase has had a tremendous impact on molecular biology and cancer research. In 1968 Baltimore joined the faculty of the Massachusetts Institute of Technology (MIT). He was appointed director of the new Whitehead Institute for Biomedical Research in 1984. Four years after the publication of the controversial Cell paper, Baltimore was appointed president of Rockefeller University. Because of the problems caused by the investigation of fraud charges brought against coauthor Dr. Thereza Imanishi-Kari, Baltimore resigned from the presidency in 1991. Three years later Baltimore returned to MIT as the Ivan R. Cottrell Professor of Molecular Biology and Immunology. Reflecting on the factors that had guided his career, Baltimore said: "My life is dedicated to increasing knowledge. We need no more justification for scientific research than that. I work because I want to understand."
In the wake of the Baltimore Case, many scientists worried that the definition of scientific misconduct adopted by the Federal government could inhibit innovative scientists and stifle cutting-edge research. Federal rules define scientific misconduct as "fabrication, falsification, plagiarism or other practices that seriously deviate from those that are commonly accepted within the scientific community…." Scientists fear that politicians could exploit the rather ill-defined and amorphous concept of "deviant" to force scientists to accept officially sanctioned research goals and punish those who pursue innovative ideas.
The history of medicine and science provides many examples of research claims that seriously deviated from the accepted norms of their time. For example, based on accepted norms, a peer-reviewed journal in England rejected Dr. Edward Jenner's 1796 report on the use of cowpox to provide immunity against smallpox. Similarly, Nobel Prize-worthy reports on innovative research on the hepatitis B virus and on a radioimmunoassay technique that can detect trace amounts of substances in the body were rejected by prominent peer-reviewed journals before they were published elsewhere.
Observers with legal training, a group that includes many government officials, call attention to the difference between fraud in civil tort law and fraud or misconduct in science. There are also differences between the rules that govern scientific misconduct and criminal statutes. The kinds of behavior regarded as scientific misconduct tend to change with time. Now that the biomedical sciences have become the driving engine of biotechnology, the theft or unauthorized removal of materials from the laboratory has become a matter of increasing concern. Yet under the rules established by the government for grant oversight, theft is not defined as misconduct. Federal regulations concerning misconduct are limited to fabrication, falsification, and plagiarism. Officials at the Office of Research Integrity (ORI), which serves as the watchdog of science at the U.S. Department of Health and Human Services, explain that there are already criminal statutes against theft, but the criminal code does not deal with plagiarism, fabrication, and falsification.
However, after reviewing a series of cases involving scientific fraud and misconduct, some observers conclude that science cannot be trusted to police itself. Therefore, outside investigators, such as the Office of Scientific Integrity (now the ORI), or Congressman Dingell's subcommittee, are needed to reveal misconduct and enforce proper standards of conduct among those conducting scientific research with public funds. Because science is done by fallible human beings, critics argue that Congress should play a role in investigating charges of fraud or misconduct. Moreover, they believe that the threat of such investigations might deter misconduct by other scientists. Enforcing high standards of scientific integrity is essential to prevent the publication and dissemination of false and misleading data and conclusions by unscrupulous researchers.
Those who investigate scientific misconduct admit that the cases may be very complex, but they argue that peer review and self-regulation are not sufficiently rigorous in detecting and prosecuting scientific fraud and misconduct. The Baltimore Case does support the argument that investigating charges of scientific fraud is a difficult task; pursuing the allegations of misconduct took 10 years, five investigations, and three congressional hearings. Finally, in 1996 a federal appeals panel overturned the findings of fraud, concluding that the evidence used to arrive at the 1994 verdict against Imanishi-Kari was unreliable and based on "unwarranted assumptions."
—LOIS N. MAGNER
Viewpoint: Yes, the Dingell investigation was an abuse of congressional power that hindered the objectivity of a scientific investigation.
"An ambitious congressman using his power to decide what is true or false in science." This was Anthony Lewis's summation of the "Baltimore Case," in a New York Times Op-Ed dated June 24, 1996. If ever there was a scary picture of science, Lewis has painted it perfectly with this one sentence.
Science isn't decided in a committee; it's only funded there. Should scientists be accountable for the money they receive? Absolutely. But the people who fund the science aren't necessarily the ones who can decide whether that science is "good." If that were the case, it would be Congress, not the National Institutes of Health (NIH), reviewing grant applications for scientific projects. Furthermore, conducting congressional hearings to settle scientific disputes sets us on a slippery slope toward government-controlled science (where only "politically correct" research is allowed) and, indeed, government-controlled thinking, reminiscent of George Orwell's 1984.
Politicians and bureaucrats view the world quite differently from scientists. Because they are not versed in the language of science, nor familiar with the culture of scientific research, they are ill-equipped to make determinations on scientific disputes. Unfortunately, what the politicians lack in understanding they make up for in power and privilege, and they have an attentive audience in the press that few scientists enjoy. This is a dangerous mix. The Baltimore Case was judged in the press, with committee members leaking confidential, incomplete information to chosen reporters. David Baltimore himself first heard about the imminent hearing from the press, who called for his reaction.
A Difference in Views of Science
The controversy concerning the federal definition of serious scientific misconduct is a perfect illustration of the differences between scientists and bureaucrats. The rules define scientific misconduct as "… fabrication, falsification, plagiarism or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting and reporting research." The "practices that seriously deviate" part makes sense to a federal agency trying to protect public funds from wasteful wild-goose chases. But to scientists, this phrase sets a dangerous precedent of a government coercing scientists to think and work under a universally accepted formula, where innovations could bring charges of misconduct. Dr. David Goodstein, vice provost of Caltech, wrote: "To me it seems poor public policy to create a government bureaucracy mandated to root out transgressions that are not specified in advance."
Goodstein also points out the difference between fraud in civil tort law, with which most politicians would be familiar, and fraud in science. In civil cases fraud must be proven to have caused actual damage. In science the intentional misrepresentation of procedures or results is considered fraud, regardless of whether the misrepresentation harmed anyone or whether the conclusions of the research were right or wrong. Science, therefore, takes a much stricter view of fraud than civil tort law does.
The Baltimore Case Revisited
Most people forget (or don't realize) that when Margot O'Toole first expressed doubts about the Cell paper, she felt there were errors in the paper but was adamant that she did not suspect fraud. The issues she raised were typical of scientific disputes: doubts about the experiments, the data interpretation, and the conclusions drawn from the data. Like many scientific disputes, this one could have been settled with further research. In fact, in the years since the publication of the paper, other researchers have been able to verify its central claim, the very one O'Toole called into question. It is entirely acceptable for researchers to disagree with experimental interpretations. Many scientific journals carry dialog regarding a research paper, in the form of letters to the editors and responses from the original investigators. In fact, David Baltimore suggested such an exchange to O'Toole regarding the paper in question. She claimed she feared his written response would deter the Cell editors from publishing her critique, and she refused to engage in this exchange. In all the hearings at MIT and Tufts, O'Toole denied the possibility of fraud. Her story started to change when the "fraud busters" Walter Stewart and Ned Feder of the NIH got involved. Stewart and Feder were not highly regarded at the NIH. They took up the Cell paper after receiving calls from Charles Maplethorpe, a disgruntled graduate student who had worked in Thereza Imanishi-Kari's lab. From their involvement, the case exploded into accusations of fraud.
The Danger of Congressional Power
After David Baltimore, Nobel laureate and coauthor of the infamous Cell paper, locked horns with Congressman Dingell while defending Imanishi-Kari, Dingell vowed, "I'm going to get that son of a bitch. I'm going to get him and string him up high." From this scientific vantage point, Dingell embarked on a witch-hunt that many later compared to the workings of McCarthy in his time. Like McCarthy, he had the power of the government at his side, and immunity as a member of Congress.
This meant that Dingell could circumvent the law and access information that was personal and confidential, and completely irrelevant to the case he was "investigating." In one instance, according to Dr. Bernadine Healy, director of NIH between 1991 and 1993, Dingell's committee accessed medical information in the personnel records of an accused scientist.
Dingell used his congressional power to call in the Secret Service to run forensic tests on Imanishi-Kari's lab notebooks. He hoped to prove that the parts of the notebooks called into question by O'Toole were fabricated. Not surprisingly, the Secret Service concluded that the parts in question had indeed been created later than claimed. Also not surprisingly, the appeals board that exonerated Imanishi-Kari noted that the forensic analysis "provided no independent or convincing evidence that the data or documents were not authentic or could not have been produced during the time in question." Ironically, the appeals panel found that the Secret Service's report contained the same flaws that had cast suspicion on Imanishi-Kari: results were altered and data omitted. The U.S. attorney in Maryland, to whom Dingell sent the information he had accumulated in his hearings, declined to prosecute Imanishi-Kari. One reason was that an independent forensic analysis of the notebooks found the Secret Service's conclusions to be "erroneous."
When Dingell set up the Office of Scientific Integrity (OSI), whose purpose was to investigate scientific misconduct, he did not let the office run an impartial inquiry. Dr. Brian Kimes, the OSI's first director, recalled, "The N.I.H. was trying to separate from Dingell. Our leadership wanted us to make our own independent decision. But that was almost an impossibility." The Dingell subcommittee, holding its hearings in parallel with the OSI's investigation, bullied OSI staff members into supplying them with documents and confidential information. The OSI investigators conducted their hearings in fear of being second-guessed by Dingell. When the appeals panel was allowed to conduct its own scientific investigation, without political pressure from Congressman Dingell, Imanishi-Kari was fully exonerated of all charges against her. During the appeals process, unlike the previous hearings, Imanishi-Kari was allowed to see the allegations against her, call witnesses, and examine the evidence. She was therefore able to respond to the accusations leveled at her. Prior to her appeal, she was denied due process. In The Baltimore Case, Daniel Kevles asserts that Dingell's committee treated her like a defense contractor, not a scientist.
It is interesting to contrast the treatment Baltimore and Imanishi-Kari received at the hands of Dingell and his associates with the treatment NASA received after the Challenger explosion. With the exception of legendary physicist Richard Feynman, who was strongly critical of NASA and its methods, the Rogers Commission treated the agency with kid gloves. The fact that NASA knew of the design flaw in the solid rocket booster and, to paraphrase Feynman, continued to play Russian roulette with the lives of the astronauts was swept under the rug. William Rogers, the commission's chairman, declined to pursue criminal negligence charges against the responsible parties at NASA because it would "not be in the national interest." Yet Dingell contacted the U.S. attorney in Maryland, suggesting Imanishi-Kari be prosecuted because, in his mind, she was guilty of fraud. The damage to the image of science caused by the Dingell committee hearings was also not in the national interest.
Can Science Police Itself?
Most of the time, yes, science can police itself. Most claims of fraud are investigated quietly and handled well. By contrast, many of the major decisions of the ORI (the former OSI) were overturned over the years by the same federal appeals board that exonerated Imanishi-Kari. Evidently, science can police itself better than the government can, most of the time. One of the problems we face in our society is the elevation of the scientist to a god-like status, where he or she can't be human, and in fact is expected to be more than human. Mistakes in science happen all the time. Mistakes are part of the scientific method, one might say, but the public reacts with fury when they come to light. Fraud is unacceptable to most scientists, but like all human beings some will be tempted to commit fraud. And because those who supervise scientists are also human, they want to believe there is nothing wrong in their realm. Do whistle-blowers get punished? Unfortunately yes, sometimes. Do inquiries gloss over misconduct? Unfortunately yes, sometimes. But instead of holding a kangaroo court and wasting taxpayers' money trying to settle scientific disputes, the government would be better off holding hearings that investigate the pressures on scientists today.
Science used to be about ideas as much as results. Published papers showed a work in progress. Diminished funding and, therefore, the need for concrete results drive science today. As Goodstein points out, monetary gain is rarely a motive in science fraud. But the need to produce results in order to receive your next grant is a powerful motive. Scientists want to stay in science, and the only way to do so is by receiving grants, which are in short supply. By understanding how science today differs from science 30 years ago, for example, the government, and the public, might be able to engage in a constructive dialog with the scientific community, and perhaps devise a way to ease the burden on scientists, so that all they have to concentrate on is the science, not the politics of funding.
The argument above isn't meant to excuse scientific fraud; outright fraud is inexcusable. And the government can request that academic institutions receiving federal funding put in place better procedures for handling investigations of fraud and protecting whistle-blowers. In fact, the government has done so. As Goodstein points out, at least at Caltech and the universities that copied Caltech, the procedures he instituted require a purely scientific investigation, not a judicial process. And their record, at the time he wrote, was better than the government's in terms of having decisions upheld on appeal.
In addition, not all allegations of misconduct or fraud are easy to judge. Dr. Bernadine Healy cited an example of an investigation at the Cleveland Clinic, where a three-month extensive inquiry into allegations of misconduct by a scientist found "no conclusive evidence" of misconduct. The scientist was penalized for sloppy techniques, but nothing else. The ORI spent two years on its own investigation, reaching the same conclusion in April 1992 and then reversing itself in July of that same year. A year later the appeals board overturned the ORI's decision, exonerating the scientist of the charges of misconduct, just as the Cleveland Clinic had at the outset.
Science can police itself. It isn't perfect, and never will be, just like every other human endeavor. But while scientists are trying to police themselves, and doing a better job at it, who polices the lawmakers? Congressional power should never be abused the way it was during Dingell's witch-hunt. The congressman would have been wise to remember the words of Senator Margaret Chase Smith, who in 1950 stated: "… I am not proud of the way we smear outsiders from the floor of the Senate, hide behind the cloak of congressional immunity and still place ourselves beyond criticism."
What happens when Congress interferes with science? The best summary may be the federal appeals panel's final decision in the Baltimore Case. Dingell's brainchild, the OSI/ORI, which charged Imanishi-Kari with fraud in its 1994 verdict, presented evidence that was "irrelevant, had limited probative value, was internally inconsistent, lacked reliability or foundation, was not credible or not corroborated, or was based on unwarranted assumptions."
—ADI R. FERRARA
Viewpoint: No, the Dingell investigation of scientific fraud was not unjustified, because peer review and self-regulation cannot work alone in maintaining scientific integrity.
In 1996, Dr. Thereza Imanishi-Kari was acquitted of 19 charges of scientific misconduct in what had become known simply as the "Baltimore Case." The fact that it had taken 10 years, five investigations, and three congressional hearings to get to this "truth" is often cited as proof that the federal inquiries headed by Representative John Dingell (a Michigan Democrat and chairman of the House Energy and Commerce Committee) were unjustified and that Imanishi-Kari should not have been investigated for fraud in the first place. However, many people—scientists and non-scientists alike—believe that there is a positive role for Congress today in investigating scientific fraud.
Science as Part of Society
Scientists and the science they engage in are a fundamental part of society. In 1995, the National Academy of Sciences (NAS) issued a booklet for practicing scientists entitled On Being a Scientist: Responsible Conduct in Research, which states that "scientific knowledge obviously emerges from a process that is intensely human, a process indelibly shaped by human virtues, values, and limitations and by societal contexts." Many scientists do not dispute that this is so. For example, neurologist Dr. Oliver Sacks describes science as "a human enterprise through and through, an organic, evolving, human growth."
By contrast, some scientists like to claim that their work is value-free and completely objective, and that science is somehow on a higher plane than the rest of society. However, the NAS booklet suggests that "science offers only one window on human experience. While upholding the honor of their profession, scientists must seek to avoid putting scientific knowledge on a pedestal above knowledge obtained through other means." So, if science is a human activity practiced by highly intelligent and knowledgeable—but nonetheless, ordinary—people, it follows that this endeavor is vulnerable to the same kinds of abuses as activities in other areas of society.
As human beings, scientists are not immune from making mistakes. In fact, the scientific process itself depends on researchers continually examining evidence, trying to improve on earlier work, coming up with alternative interpretations of data, and eliminating error from the established scientific record. However, that process can only work if scientists are able to examine and question the data presented by their peers and replicate the original experimental results in their own laboratories. Even in the absence of any intention of fraud, the possibility of a future federal investigation might make some researchers keep better records.
Imanishi-Kari was the subject of several investigations, including those by the Office of Scientific Integrity (OSI; later the Office of Research Integrity, or ORI), Rep. Dingell's Oversight and Investigations Subcommittee, and even the Secret Service. Eventually, the appeals board of the Department of Health and Human Services (HHS) judged her to have been guilty of nothing more than being "aberrant in her data recording and rounding patterns" when working on the experiments that formed the basis of the flawed 1986 Cell paper. But this was enough for postdoctoral fellow Margot O'Toole to have difficulty reproducing the results and to claim that the paper was "riddled with errors." In 1989, three of the coauthors (Imanishi-Kari, D. Weaver, and Baltimore) corrected some of these errors in another Cell paper, although they contended that the mistakes were not very important scientifically. In 1991, the original paper was retracted completely by its coauthors.
Unfortunately, misconduct, like errors, is found in science as much as in other social contexts, and it can be difficult to tell the difference between inadvertent errors and deliberate fraud without a full and proper investigation. In science, misconduct is traditionally defined as fabrication, falsification, and plagiarism (together known as FFP), and other departures from standard scientific practice, although the Commission on Research Integrity (CRI) replaced these with misappropriation, interference, and misrepresentation as part of its new definition of scientific misconduct in 1995 (see Key Terms).
The Importance of Scientific Integrity
Not only are scientific researchers human like the rest of us; in many cases the work they do also has an important impact on how we all live our lives. This is particularly true in the fields of medicine and genetics. False or misleading information can hold up scientific progress and, in the worst-case scenario, even delay and seriously hinder the development of medical cures or provoke health scares. In turn, this feeds public fear and skepticism about science and gives ammunition to those who would attack the scientific endeavor.
These days, a large amount of public money is used to fund scientific research, and in return scientific establishments need to be open and accountable. For this reason in particular, research misconduct is no different than fraud in other areas of society. Rep. Dingell states that "… Congress authorizes approximately $8 billion annually for [the National Institutes of Health (NIH)] alone, and it is the subcommittee's responsibility to make sure that this money is spent properly and that the research institutions, including the NIH, that receive these federal funds behave properly."
Scientists are also hired and promoted on the basis of how well they can attract federal funds to their departments, and the reputations (and hence financial power) of researchers, as well as their colleagues and associated institutions, rest firmly on their incorruptibility. Since 1990, federal regulations have stipulated that institutions receiving training grants from the NIH must include training in "principles of scientific integrity" and have written procedures to follow when complaints of scientific misconduct are reported.
It is highly unlikely that fraud can ever be stamped out completely. Misconduct will occur wherever large numbers of people are involved in a social system, so it is essential that institutions have adequate machinery in place for dealing quickly and efficiently with cases of suspected scientific fraud. Some people believe that if the scientific community cannot be trusted to do this alone, Congress should intervene and ensure that research integrity is maintained.
Peer Review and Self-Regulation
There have been many critics of increased congressional involvement in the sciences, not least David Baltimore himself. They argue that the processes of peer review and self-regulation are enough to keep scientific honor intact. This is true, in theory, most of the time, but practice has occasionally proved otherwise.
In recent years, the peer review system itself has even been shown to be vulnerable to corruption. The problem lies in the fact that while researchers in the same field are often in direct competition with each other, they are generally the best people to understand, review, and judge the validity of each other's work. Implicit in this system is the concept of trust. Scientists must be able to trust each other not to steal each other's ideas and words or to mislead each other when reviewing pre-publication papers, presenting posters or giving conference lectures, and even just talking about their work with other researchers.
Once a scientist suspects a colleague of committing errors, or even scientific misconduct, it is essential to the atmosphere of trust within research institutions that there be a way for that person to voice his or her concerns without fear of reprisals. If such measures were in place, the scientific community could certainly be left alone to regulate itself. However, many people question whether this is the case and claim that scientific establishments have done a very poor self-regulatory job in the past. Allegations of misconduct can have grave consequences for both the accused and the whistle-blower and should not be made, or dealt with, lightly. If the coauthors of the 1986 Cell paper had taken Margot O'Toole's claim of scientific error seriously and re-examined their findings and conclusions at the outset, the science would have been advanced and several careers spared much sooner. Rep. Dingell believes that research institutions are too slow to react to fraud allegations and are usually more concerned with keeping the situation away from the public eye than with examining the accusations and dealing with them fairly. He asserts that "… science is essentially a quest for truth. The refusal to investigate concerns raised by fellow scientists, and to correct known errors, is antithetical to science…."
Involvement of Congress
Some scientists are worried that congressional intervention in scientific affairs will stifle creativity and the kind of novel thinking that sometimes leads to huge advances in science. There has even been talk of the "science police"—a threat that has not really materialized. Rep. Dingell and his associates believe that federal rules and regulations merely serve to help universities and other research institutions act quickly and decisively when allegations of scientific misconduct surface. This gives researchers the freedom to concentrate on being experts in their own field—science, rather than law.
Another claim is that federal investigations favor the whistle-blower over the accused scientist and are unfair because they do not allow researchers under investigation to find out what charges and evidence are being used against them. However, this was rectified in 1992, when the HHS appeals board was set up for this purpose. The existence of such an appeals procedure might also have the advantage of deterring researchers from deliberately making false accusations against their colleagues and competitors, in itself a form of scientific misconduct.
In conclusion, science and scientists do not stand outside society but are an integral part of it. Modern scientific activity is so dependent on the public and the government for money and good will that it must be seen as accountable and beyond reproach. The peer review process and self-regulation are extremely important to scientific progress, but they are not sufficient for maintaining research integrity. Although Rep. Dingell has been criticized for his heavy-handed approach to the investigations of the "Baltimore Case," the fact remains that an outside body needs to ensure that allegations of scientific misconduct are investigated thoroughly and that researchers complaining about the behavior of other scientists are taken seriously and protected from retaliation. In an ideal situation, peer review, self-regulation, and federal investigation will work hand-in-hand to preserve scientific integrity.
—AMANDA J. HARMAN
Further Reading

Baltimore, David. Correspondence in "Shattuck Lecture—Misconduct in Medical Research." The New England Journal of Medicine 329 (September 2, 1993): 732-34.
Bulger, Ruth Ellen, Elizabeth Heitman, and Stanley Joel Reiser, eds. The Ethical Dimensions of the Biological Sciences. Cambridge, England: Cambridge University Press, 1993.
Goodman, Billy. "HHS panel issues proposals for implementing misconduct report." The Scientist [cited July 16, 2002]. <http://www.the-scientist.com/yr1996/july/miscond>.
———. "Multiple Investigations." The Scientist [cited July 16, 2002]. <http://www.the-scientist.com/yr1996/august/aftermath>.
Goodstein, David. "Conduct and Misconduct in Science" [cited July 16, 2002]. <http://www.its.caltech.edu/~dg/conduct.html>.
Guston, David H. "Integrity, Responsibility, and Democracy in Science." Scipolicy 1 (Spring 2001): 167-340.
Healy, Bernadine. "The Dingell Hearings on Scientific Misconduct—Blunt Instruments Indeed." The New England Journal of Medicine 329 (September 2, 1993): 725-27.
———. "The Dangers of Trial by Dingell." New York Times (July 3, 1996).
Imanishi-Kari, T., D. Weaver, and D. Baltimore. Cell 57 (1989): 515-16.
Kevles, Daniel J. The Baltimore Case: A Trial of Politics, Science and Character. New York: W.W. Norton & Company, 1998.
Lewis, Anthony. "Abroad at Home; Tale of a Bully." New York Times (June 24, 1996).
National Academy of Sciences. On Being a Scientist: Responsible Conduct in Research. Washington: National Academy Press, 1995.
"Noted Finding of Science Fraud Is Overturned by Federal Panel." New York Times (June 22, 1996).
Sacks, Oliver. Introduction to Hidden Histories of Science by R. B. Silvers. London: Granta Books, 1997.
Weaver, D., et al. "Altered Repertoire of Endogenous Immunoglobulin Gene Expression in Transgenic Mice Containing a Rearranged mu Heavy Chain Gene." Cell 45 (1986): 247-59.
"What Can We Learn from the Investigation of Misconduct?" The Scientist [cited July 16, 2002]. <http://www.the-scientist.com/yr1989/jun/opin>.
Key Terms

FABRICATION: Making up false data.

FALSIFICATION: Altering or distorting data.

INTERFERENCE: Purposefully damaging or taking material related to another scientist's research.

MISAPPROPRIATION: Deliberately plagiarizing or using information about other scientists' work when reviewing grant applications or manuscripts.

MISREPRESENTATION: Intentionally leaving out important facts or lying about research.

PEER REVIEW: The process by which a scientist's research is examined by other scientists before being accepted for publication in a scientific journal.

PLAGIARISM: Stealing other people's ideas or words without giving due credit.

WHISTLE-BLOWER: A person who informs the authorities or the public about the wrong actions of another person or institution.