Research, Human: Historical Aspects

In Western civilization, the idea of human experimentation, of evaluating the efficacy of a new drug or procedure by outcomes, is an ancient one. It is discussed in the writings of Greek and Roman physicians and in Arab medical treatises. Scholars like Avicenna (980–1037) insisted that "the experimentation must be done with the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man" (Bull, p. 221). But records of how often ancient physicians conducted experiments, with what agents, and on which subjects, are very thin. The most frequently cited cases involve testing the efficacy of poisons on condemned prisoners, but the extent to which other human research was carried on remains obscure.

Experimentation was frequent enough to inspire a discussion of the ethical maxims that should guide would-be investigators. Moses Maimonides (1135–1204), the noted Jewish physician and philosopher, instructed colleagues always to treat patients as ends in themselves, not as means for learning new truths. Roger Bacon (1214–1294) excused the inconsistencies in therapeutic practices on the following grounds:

It is exceedingly difficult and dangerous to perform operations on the human body. The operative and practical sciences which do their work on insensate bodies can multiply their experiments till they get rid of deficiency and errors, but a physician cannot do this because of the nobility of the material in which he works; for that body demands that no error be made in operating upon it, and so experience [the experimental method] is so difficult in medicine. (quoted in Bull, p. 222)

Human Experimentation in Early Modern Western History

Human experimentation made its first significant impact on medical practice through the work of the English country physician Edward Jenner (1749–1823). Observing that dairy farmers who had contracted the pox from swine or cows seemed to be immune to the more virulent smallpox, Jenner set out to retrieve material from their pustules, inject the material into another person, and see whether the recipient could then resist challenges from smallpox materials. The procedure promised to be less dangerous than the more standard one of inoculating people with small amounts of smallpox that had been introduced into Europe and America from the Ottoman Empire in the first half of the eighteenth century.

In November 1789, Jenner inoculated his son, then about a year old, with swinepox. When this intervention proved ineffective against a challenge of smallpox, Jenner tried cowpox some seven years later, in 1796, with another subject. As he recalled: "The more accurately to observe the progress of the infection, I selected a healthy boy, about eight years old, for the purpose of inoculation for the cow-pox. The matter … was inserted … into the arm of the boy by means of two incisions" (Jenner, pp. 164–165). A week later Jenner injected him with smallpox, and noted that he evinced no reaction. The cowpox had rendered him immune to smallpox. One cannot know whether the boy was a willing or unwilling subject or how much he understood of the experiment. But this was not an interaction between strangers. The boy was from the neighborhood, Mr. Jenner was a gentleman of standing, and the experiment did have potential therapeutic benefit for the subject.

For most of the nineteenth century, human experimentation throughout western Europe and the United States was a cottage industry, with individual physicians trying out one or another remedy on neighbors or relatives or on themselves. One German physician, Johann Jorg (1779–1856), swallowed varying doses of seventeen different drugs in order to analyze their effects. Another, Sir James Young Simpson (1811–1870), an Edinburgh obstetrician who was searching for an anesthesia superior to ether, in November 1847 inhaled chloroform and awoke to find himself lying flat on the floor (Howard-Jones). Perhaps the most extraordinary self-experiment was conducted by Werner Forssmann (1904–1979). In 1929 he passed a catheter, guided by radiography, into the right ventricle of his heart, thereby demonstrating the feasibility and the safety of the procedure.

The most unusual nineteenth-century human experiment was conducted by the American physician William Beaumont (1785–1853) on Alexis St. Martin. A stomach wound suffered by St. Martin healed in such a way as to leave Beaumont access to the stomach and the opportunity to study the action of gastric juices. To carry on this research, which was very important to the new field of physiology, Beaumont had St. Martin sign an agreement, not so much a consent form as an apprenticeship contract. Under its terms, St. Martin bound himself to "serve, abide, and continue with the said William Beaumont … [as] his covenant servant," and in return for board, lodging, and $150 a year, he agreed "to assist and promote by all means in his power such philosophical or medical experiments as the said William shall direct or cause to be made on or in the stomach of him" (Beaumont, pp. xii–xiii).

The most brilliant human experiments of the nineteenth century were conducted by Louis Pasteur (1822–1895), who demonstrated an acute sensitivity to the ethics of his investigations. Even as he conducted his animal research to identify an antidote to rabies, he worried about the time when it would be necessary to test the product on a human being. "I have already several cases of dogs immunized after rabic bites," he wrote in 1884. "I take two dogs: I have them bitten by a mad dog. I vaccinate the one and I leave the other without treatment. The latter dies of rabies: the former withstands it." Nevertheless, Pasteur continued, "I have not yet dared to attempt anything on man, in spite of my confidence in the result.… I must wait first till I have got a whole crowd of successful results on animals.… But, however I should multiply my cases of protection of dogs, I think that my hand will shake when I have to go on to man" (Vallery-Radot, pp. 404–405).

The fateful moment came some nine months later when his help was sought by a mother whose nine-year-old son, Joseph Meister, had just been severely bitten by what was probably a mad dog. Pasteur agonized as to whether to carry out what would be the first human trial of his rabies inoculation. He consulted with two medical colleagues, had them examine the boy, and at their urging and on the grounds that "the death of the child appeared inevitable, I resolved, though not without great anxiety, to try the method which had proved consistently successful on the dogs." He administered twelve inoculations to the boy over the following days, and only weeks later did he become confident of the efficacy of his approach and the "future health of Joseph Meister" (Vallery-Radot, pp. 414–417).

Claude Bernard (1813–1878), professor of medicine at the Collège de France, not only conducted ground-breaking research in physiology, but also composed an astute treatise on the methods and ethics of experimentation. "Morals do not forbid making experiments on one's neighbor or one's self," Bernard argued in 1865. Rather, "the principle of medical and surgical morality consists in never performing on man an experiment which might be harmful to him to any extent, even though the result might be highly advantageous to science, i.e., to the health of others." To be sure, Bernard did allow some exceptions; he sanctioned experimentation on dying patients and on criminals about to be executed, on the grounds that "they involve no suffering or harm to the subject of the experiment." But he made clear that scientific progress did not justify violating the well-being of any individual (Bernard, p. 101).

Anglo-American common law recognized both the vital role of human experimentation and the need for physicians to obtain the patient's consent. As one English commentator explained in 1830: "By experiments we are not to be understood as speaking of the wild and dangerous practices of rash and ignorant practitioners … but of deliberate acts of men of considerable knowledge and undoubted talent, differing from those prescribed by the ordinary rules of practice, for which they have good reason … to believe will be attended with benefit to the patient, although the novelty of the undertaking does not leave the result altogether free of doubt." The researcher who had the subject's consent was "answerable neither in damages to the individual, nor on a criminal proceeding. But if the practitioner performs his experiment without giving such information to, and obtaining the consent of his patient, he is liable to compensate in damages any injury which may arise from his adopting a new method of treatment" (Howard-Jones, p. 1430). In short, the law distinguished carefully between quackery and innovation, and—provided the investigator had the subject's agreement—research was a legitimate and protected activity.

With the new understanding of germ theory in the 1890s and the growing professionalization of medical training in the next several decades, the amount of human experimentation increased and the intimate link between investigator and subject weakened. Typically, physicians administered a new drug to a group of hospitalized patients and compared their rates of recovery with past rates or with those of other patients who did not have the drug. (Truly random and blinded clinical trials, wherein a variety of patient characteristics were carefully matched and where researchers were kept purposely ignorant of which patient received the new drug, did not come into practice until the 1950s.) Thus, German physicians tested antidiphtheria serum on thirty hospitalized patients and reported that only six died, compared to the previous year at the same hospital when twenty-one of thirty-two patients died (Bull). In Canada, Frederick G. Banting and Charles Best experimented with insulin therapy on diabetic patients who faced imminent death, and interpreted their recovery as clear proof of the treatment's efficacy (Bliss). So too, George R. Minot and William P. Murphy tested the value of liver preparations against pernicious anemia by administering them to forty-five patients in remission and found that they all remained healthy so long as they took the treatment; the normal relapse rate was one-third, and three patients who of their own accord stopped treatment relapsed (Bull). It is doubtful whether many of these subjects were fully informed about the nature of the trial or formally consented to participate. They were, however, likely to be willing subjects since they were in acute distress or danger and the research had therapeutic potential.

As medicine became more scientific, some researchers did skirt the boundaries of ethical behavior in experimentation, making medical progress—rather than the subject's welfare—the goal of the research. Probably the most famous experiment in this zone of ambiguity was the yellow-fever work of Walter Reed (1851–1902). When he began his experiments, mosquitoes had been identified as crucial to transmission but their precise role was unclear. To understand more about the process, Reed began a series of human experiments in which, in time-honored tradition, the members of the research team were the first subjects (Bean). It soon became apparent that larger numbers of volunteers were needed and no sooner was the decision reached than a soldier happened by. "You still fooling with mosquitoes?" he asked one of the doctors. "Yes," the doctor replied. "Will you take a bite?" "Sure, I ain't scared of 'em," responded the man. And in this way, "the first indubitable case of yellow fever … to be produced experimentally" occurred (Bean, pp. 131, 147).

After one fellow investigator, Jesse William Lazear, died of yellow fever from purposeful bites, the other members, including Reed himself, decided "not to tempt fate by trying any more [infections] upon ourselves." Instead, Reed asked American servicemen to volunteer, and some did. He also recruited Spanish workers, drawing up a contract with them: "The undersigned understands perfectly well that in the case of the development of yellow fever in him, that he endangers his life to a certain extent but it being entirely impossible for him to avoid the infection during his stay on this island he prefers to take the chance of contracting it intentionally in the belief that he will receive … the greatest care and most skillful medical service." Volunteers received $100 in gold, and those who actually contracted yellow fever received a bonus of an additional $100, which, in the event of their death, went to their heirs (Bean, pp. 134, 147). Although twenty-five volunteers became ill, none died.

Reed's contract was a step along the way to more formal arrangements with human subjects, complete with enticements to undertake a hazardous assignment. But the contract was also misleading, distorting in subtle ways the risks and benefits of the research. Yellow fever was said to endanger life only "to a certain extent"; the likelihood that the disease might prove fatal was unmentioned. And on the other hand, the prospect of otherwise contracting yellow fever was presented as an absolute certainty, an exaggeration that aimed to promote recruitment.

Some human experiments in the pre-World War II period in the United States and elsewhere used incompetent and institutionalized populations for their studies. The Russian physician V. V. Smidovich (publishing in 1901 under the pseudonym Vikentii Veresaev) cited more than a dozen experiments, most of them conducted in Germany, in which unknowing patients were inoculated with microorganisms of syphilis and gonorrhea (Veresaev). George Sternberg, the Surgeon General of the United States in 1895 (and a collaborator of Walter Reed), conducted experiments "upon unvaccinated children in some of the orphan asylums in … Brooklyn" (Sternberg and Reed, pp. 57–69). Alfred Hess and colleagues deliberately withheld orange juice from infants at the Hebrew Infant Asylum of New York City until they developed symptoms of scurvy (Lederer). In 1937, when Joseph Stokes of the Department of Pediatrics at the University of Pennsylvania School of Medicine sought to analyze the effects of "intramuscular vaccination of human beings … with active virus of human influenza," he used as his study population the residents of two large state institutions for the retarded (Stokes et al., pp. 237–243). There are also many examples of investigators using prisoners as research subjects. In 1914, for example, Joseph Goldberger and G. A. Wheeler of the U.S. Public Health Service (PHS) conducted experiments to understand the causes of pellagra on convicts in Mississippi prisons.

One of the few instances of an individual investigator being taken to task for the ethics of his research involved Hideyo Noguchi (1876–1928) of the Rockefeller Institute for Medical Research. He was investigating whether a substance he called luetin, an extract from the causative agent of syphilis, could be used to diagnose syphilis; through the cooperation of fifteen New York physicians, he used 400 subjects, most of them inmates in mental hospitals and orphan asylums and patients in public hospitals. Before administering luetin to them, Noguchi and some of the physicians did first test the material on themselves, with no ill effects. But no one, including Noguchi, informed the subjects about the experiment or obtained their permission to do the tests.

Noguchi's work was actively criticized by the most vocal opponents of human experimentation during those years, the antivivisectionists. They were convinced that a disregard for the welfare of animals would inevitably promote a disregard for the welfare of humans. As one of them phrased it: "Are the helpless people in our hospitals and asylums to be treated as so much material for scientific experimentation, irrespective of age or consent?" (Lederer, p. 336). Despite their opposition, such experiments as Noguchi's did not lead to prosecutions, corrective legislation, or formal professional codes. The profession and the wider public were not especially concerned with the issue, perhaps because the practice was still relatively uncommon and mostly affected disadvantaged populations.

Research at War

The transforming event in the conduct of human experimentation in the United States was World War II. Between 1941 and 1945, practically every aspect of American research with human subjects changed. What were once occasional and ad hoc efforts by individual practitioners now became well-coordinated, extensive, federally funded team ventures. At the same time, medical experiments that once had the aim of benefiting their subjects were now frequently superseded by experiments whose aim was to benefit others, specifically soldiers vulnerable to the diseases under study. Further, researchers and subjects were far more likely to be strangers to each other, with no sense of shared purpose or objective. Finally, and perhaps most importantly, the common understanding that experimentation required the agreement of the subjects, however casual the request or general the approval, was superseded by a sense of urgency so strong that it paid scant attention to the issue of consent.

In the summer of 1941, President Franklin Roosevelt created the Office of Scientific Research and Development (OSRD) to oversee the work of two parallel committees, one devoted to weapons research, the other—the Committee on Medical Research (CMR)—to address the health problems that threatened the combat efficiency of American soldiers. Thus began what one participant called "a novel experiment in American medicine, for planned and coordinated medical research had never been essayed on such a scale" (Keefer, p. 62). Over the course of World War II, the CMR recommended some 600 research proposals, many of them involving human subjects, to the OSRD for funding. The OSRD, in turn, contracted with investigators at some 135 universities, hospitals, research institutes, and industrial firms. The accomplishments of the CMR effort required two volumes to summarize (the title, Advances in Military Medicine, did not do justice to the scope of the investigations); and the list of publications that resulted from its grants took up seventy-five pages (Andrus). All told, the CMR expended some $25 million. In fact, the work of the CMR was so important that it supplied not only the organizational model but also the intellectual justification for creating, in the postwar period, the National Institutes of Health.

The CMR's major concerns were dysentery, influenza, malaria, wounds, venereal diseases, and physical hardships (including sleep deprivation and exposure to frigid temperatures). To create effective antidotes required skill, luck, and numerous trials with human subjects, and the CMR oversaw the effort with extraordinary diligence. Dysentery, for example, proliferated under the filth and deprivation endemic to battlefield conditions, and no effective inoculations or antidotes existed. With CMR support, investigators undertook laboratory research and then, requiring sites for testing their therapies, turned to custodial institutions where dysentery was often rampant (OSRD, 1944b). Among the most important subjects for the dysentery research were the residents of the Ohio Soldiers and Sailors Orphanage in Xenia, Ohio; the Dixon, Illinois, institution for the retarded; and the New Jersey State Colony for the Feeble-Minded. The residents were injected with experimental vaccines or potentially therapeutic agents, some of which produced a degree of protection against the bacteria but, as evidenced by fever and soreness, were too toxic for common use.

Probably the most pressing medical problem the CMR faced immediately after Pearl Harbor was malaria, "an enemy even more to be feared than the Japanese" (Andrus, vol. 1, p. xlix). Not only was the disease debilitating and deadly, but the Japanese controlled the supply of quinine, one of the few known effective antidotes. Since malaria was not readily found in the United States, researchers chose to infect residents of state mental hospitals and prisons. A sixty-bed clinical unit was established at the Manteno, Illinois, State Hospital; the subjects were psychotic, back-ward patients who were purposefully infected with malaria through blood transfusions and then given antimalarial therapies (OSRD, 1944a). With the cooperation of the commissioner of corrections of Illinois and the warden at Stateville Prison (better known as Joliet), one floor of the prison hospital was turned over to the University of Chicago to carry out malaria research and some 500 inmates volunteered to act as subjects. Whether these prisoners were truly capable of consenting to research was not addressed by the researchers, the CMR, or prison officials. Almost all the press commentary was congratulatory, praising the wonderful contributions the inmates were making to the war effort.

In similar fashion, the CMR supported teams that tested anti-influenza preparations on residents of state facilities for the retarded (Pennhurst, Pennsylvania) and the mentally ill (Michigan's Ypsilanti State Hospital). The investigators administered the vaccine to the residents and then, three or six months later, purposefully infected them with influenza (Henle). When a few of the preparations appeared to provide protection, the Office of the Surgeon General of the U.S. Army arranged for the vaccine to be tested by enrollees in the Army Specialized Training Program at eight universities and a ninth unit made up of students from five New York medical and dental colleges.

Because the first widespread use of human subjects in medical research for nontherapeutic purposes occurred under wartime conditions, attention to the consent of the subject appeared less relevant. At a time when the social value attached to consent gave way before the necessity of a military draft and obedience to commanders' orders, medical researchers did not hesitate to use the incompetent as subjects of human experimentation. One part of the war machine conscripted a soldier, another part conscripted a human subject, and the same principles held for both. In effect, wartime promoted teleological as opposed to deontological ethics; "the greatest good for the greatest number" was the most compelling precept to justify sending some men to be killed so that others might live. This same ethic seemed to justify using institutionalized retarded or mentally ill persons in human research.

Human Research and the War Against Disease

The two decades following the close of World War II witnessed an extraordinary expansion of human experimentation in medical research. Long after peace returned, many of the investigators continued to follow wartime rules, this time thinking in terms of the Cold War and the war against disease. The utilitarian justifications that had flourished under conditions of combat and conscription persisted, in disregard of principles of consent and voluntary participation.

The driving force in post-World War II research in the United States was the National Institutes of Health (NIH). Created in 1930 as an outgrowth of the research laboratory of the U.S. Public Health Service, the NIH assumed its extraordinary prominence as the successor agency to the Committee on Medical Research (Swain). In 1945, its appropriations totaled $700,000. By 1955, the figure had climbed to $36 million, and by 1970, $1.5 billion, a sum that allowed it to award some 11,000 grants, about one-third requiring experiments on humans. In expending these funds, the NIH administered an intramural research program at its own Clinical Center, along with an extramural program that funded outside investigators.

The Clinical Center assured its subjects that it put their well-being first. "The welfare of the patient takes precedence over every other consideration" (NIH, 1953a). In 1954, a Clinical Research Committee was established to develop principles and to deal with problems that might arise in research with normal, healthy volunteers. Still, the relationship between investigator and subject was casual to a fault, leaving it up to the investigator to decide what information, if any, was to be shared with the subject. Generally, the researchers did not divulge very much information, fearful that they would discourage patients from participating. No formal policies or procedures applied to researchers working in other institutions on studies supported by NIH funds.

The laxity of procedural protections pointed to the enormous intellectual and emotional investment in research and to the conviction that the laboratory would yield answers to the mysteries of disease. Indeed, this faith was so great that the NIH would not establish guidelines to govern the extramural research it supported. By 1965, the extramural program was the single most important source of research grants for universities and medical schools, by the NIH's own estimate, supporting between 1,500 and 2,000 research projects involving human research. Nevertheless, grant provisions included no stipulations about the ethical conduct of human experimentation and the universities did not fill the gap. In the early 1960s, only nine of fifty-two American departments of medicine had a formal procedure for approving research involving human subjects and only five more indicated that they favored this approach or planned to institute such procedures (Frankel).

One might have expected much greater attention to the ethics of human experimentation in the immediate postwar period in light of the shadow cast by the trial of the German doctors at Nuremberg. The atrocities that the Nazis committed—putting subjects to death by long immersion in subfreezing water, deprivation of oxygen to learn the limits of bodily endurance, or deliberate infection by lethal organisms in order to study the effects of drugs and vaccines—might have sparked a commitment in the United States to a more rigorous regulation of research. (Japanese physicians also conducted experiments on prisoners of war and captive populations, but their research was never subjected to the same judicial scrutiny.) So too, the American research efforts during the war might have raised questions of their own and stimulated closer oversight.

The Nuremberg Code of 1947 itself might have served as a model for American guidelines on research with human subjects. Its provisions certainly were relevant to the medical research conducted in the United States. "The voluntary consent of the human subject is absolutely essential," the code declared. "This means that the person involved should have legal capacity to give consent." By this principle, the mentally disabled and children were not suitable subjects for research—a principle that American researchers did not respect. Moreover, according to the Nuremberg Code, the research subject "should be so situated as to be able to exercise free power of choice" (Germany [Territory Under …], p. 181), which rendered at least questionable the American practice of using prisoners as research subjects. The Nuremberg Code also stated that human subjects "should have sufficient knowledge and comprehension of the elements of the subject matter involved as to make an understanding and enlightened decision" (Germany [Territory Under …], p. 181), thus ruling out the American practice of using the mentally disabled as subjects.

Nevertheless, with a few exceptions, neither the Code nor these specific practices received sustained analysis before the early 1970s. Only a handful of articles in medical or popular journals addressed the relevance of Nuremberg for the ethics of human experimentation in the United States. Perhaps this silence reflected an eagerness to repress the memory of the atrocities. More likely, the events described at Nuremberg were not perceived by most Americans as relevant to their own practices. From their perspective, the Code had nothing to do with science and everything to do with Nazis. The guilty parties were seen less as doctors than as Hitler's henchmen (Proctor).

In the period 1945–1965, several American as well as world medical organizations did produce guidelines for human experimentation that expanded upon the Nuremberg Code. Most of these efforts, however, commanded little attention and had minimal impact on institutional practices whether in Europe or in the United States (Ladimer and Newman). The American Medical Association, for example, framed a research code that called for the voluntary consent of the human subject, but it said nothing about what information the researchers were obliged to share, whether it was ethical to conduct research on incompetent patients, or how the research process should be monitored (Requirements for Experiments on Human Beings). In general, investigators could do as they wished in the laboratory, limited only by what their consciences defined as proper conduct and by broad, generally unsanctioned statements of ethical principle.

The World Medical Association in 1964 issued the Helsinki Declaration, stating general principles for human experimentation, and has revised that document four times. The declaration is modeled on the Nuremberg Code, requiring qualified investigators and the consent of subjects. The 1975 revision recommended review of research by an independent committee (Annas and Grodin).

How researchers exercised discretion was the subject of a groundbreaking article by Henry Beecher, professor of anesthesia at Harvard Medical School, published in June 1966 in the New England Journal of Medicine. His analysis, "Ethics and Clinical Research," contained brief descriptions of twenty-two examples of investigators who risked "the health or the life of their subjects," without informing them of the dangers or obtaining their permission. In one case, investigators purposefully withheld penicillin from servicemen with streptococcal infections in order to study alternative means for preventing complications. The men were totally unaware that they were part of an experiment, let alone at risk of contracting rheumatic fever, which twenty-five of them did. Beecher's conclusion was that "unethical or questionably ethical procedures are not uncommon" among researchers. Although he did not provide footnotes for the examples or name the investigators, he did note that "the troubling practices" came from "leading medical schools, university hospitals, private hospitals, governmental military departments … government institutes (the National Institutes of Health), Veterans Administration Hospitals, and industry" (Beecher).

Two of the cases that Beecher cited were especially important in provoking public indignation over the conduct of human research. One case involved investigators who fed live hepatitis virus to the residents of Willowbrook, a New York State institution for the retarded, in order to study the etiology of the disease and attempt to create a protective vaccine against it. The other case involved physicians injecting live cancer cells into twenty-two elderly and senile hospitalized patients at the Jewish Chronic Disease Hospital in Brooklyn without telling them that the cells were cancerous, in order to study the body's immunological responses.

Another case that sparked fierce public and political reactions in the early 1970s was the Tuskegee research of the U.S. Public Health Service. Its investigators had been visiting Macon County, Alabama, since the mid-1930s to examine, but not to treat, a group of blacks who were suffering from latent syphilis. Whatever rationalizations the PHS could muster for not treating blacks in the 1930s, when treatment was of questionable efficacy and very complicated to administer, it could hardly defend instructing draft boards not to conscript the subjects for fear that they might receive treatment in the army. Worse yet, it could not justify its unwillingness to give the subjects a trial of penicillin after 1945 (Jones).

During the 1950s and 1960s, not only individual investigators but government agencies conducted research that often ignored the consent of the subjects and placed some of them at risk. Many of these projects involved the testing of radiation on humans. Part of the motivation was to better understand human physiology; even more important, however, was the aim of bolstering the national defense by learning about the possible impact of radiation on fighting forces. Accordingly, inmates at the Oregon State Prison were subjects in experiments to examine the effects on sperm production of exposing their testicles to X-rays. Although the prisoners were told some of the risks, they were not informed that the radiation might cause cancer. So too, terminally ill patients at the Cincinnati General Hospital underwent whole-body radiation, in research supported by the U.S. Department of Defense, not so much to measure its effects against cancer but to learn about the dangers radiation posed to military personnel. During this period, the Central Intelligence Agency also conducted research on unknowing subjects with drugs and with psychiatric techniques in an effort to improve interrogation and brainwashing methods. It was not until the 1980s that parts of this record became public, and not until 1994 that the full dimensions of these research projects were known.

Regulating Human Experimentation

The cases cited by Beecher and publicized in the press over the period 1966 to 1973 produced critical changes in policy by the leadership of the National Institutes of Health (NIH) and the U.S. Food and Drug Administration (FDA). Both agencies were especially sensitive to congressional pressures and feared that criticisms of researchers' conduct could lead to severe budget cuts. They also recognized that the traditional bedrock of research ethics, the belief that investigators were like physicians and should therefore be trusted to protect the well-being of their subjects, no longer held. On the contrary, there was a conflict of interest between investigator and subject: one wanted knowledge, the other wanted cure or well-being.

Under the press of politics and this new recognition, the NIH and the FDA altered their procedures. The fact that authority was centralized in these two agencies, which were at once subordinate to Congress and superordinate to the research community, guaranteed their ability to impose new regulations. Indeed, this fact helps explain why the regulation of human experimentation came earlier and more extensively to the United States than to other developed countries (Rothman, 1991).

Accordingly, in February 1966, and then in revised form in July 1966, the NIH promulgated through its parent body, the PHS, guidelines covering all federally funded research involving human experimentation. The order of July 1, 1966, decentralized the regulatory apparatus, assigning "responsibility to the institution receiving the grant for obtaining and keeping documentary evidence of informed patient consent." It then mandated "review of the judgment of the investigator by a committee of institutional associates not directly associated with the project." Finally it defined, albeit very broadly, the standards that were to guide the committee: "This review must address itself to the rights and welfare of the individual, the methods used to obtain informed consent, and the risks and potential benefits of the investigation" (Commission on Health Science and Society, pp. 211–212). In this way and for the first time, decisions traditionally left to the conscience of individual physicians came under collective surveillance.

The new set of rules was not as intrusive as some investigators feared, or as protective as some advocates preferred. At its core was the superintendence of the peer review committee, known as the Institutional Review Board (IRB), through which fellow researchers approved the investigator's procedures. With the creation of the IRB, the clinical investigator could no longer decide unilaterally whether the planned intervention was ethical, but had to answer formally to colleagues operating under federal guidelines. The events in and around 1966 accomplished what the Nuremberg trials had not: They moved medical experimentation into the public domain and revealed the consequences of leaving decisions about clinical research exclusively to the individual investigator.

The NIH response focused attention more on the review process than on the process of securing informed consent. Although it recognized the importance of the principle of consent, it remained skeptical about the ultimate feasibility of the procedure. Truly informed consent by the subject seemed impossible to achieve, ostensibly because laypeople would not be able to understand the risks and benefits inherent in a complex research protocol. In effect, the NIH leadership was unwilling to abandon altogether the notion that doctors should protect patients and to substitute a thoroughgoing commitment to the idea that patients could and should protect themselves. Its goal was to ensure that harm was not done to subjects, not that subjects were given every opportunity and incentive to express their own wishes (Frankel).

The FDA was also forced to grapple with the problems raised by human experimentation in clinical research. With a self-definition that included a commitment not only to sound scientific research (like the NIH) but to consumer protection as well, the FDA did attempt to expand the prerogatives of the consumer—in this context, the human subject. Rather than emulate the NIH precedent and invigorate peer review, it sought to give new meaning and import to the process of consent.

In the wake of the reactions set off by Beecher's article, the FDA, on August 30, 1966, issued a "Statement on Policy Concerning Consent for the Use of Investigational New Drugs on Humans." Distinguishing between therapeutic and nontherapeutic research, in accord with various international codes like the Helsinki Declaration, it now prohibited all nontherapeutic research unless the subjects gave consent. When the research involved "patients under treatment," and had therapeutic potential, consent was to be obtained except in what the FDA labeled the "exceptional cases," where consent was not feasible or not in the patient's best interest. "Not feasible" meant that the doctor could not communicate with the patient (its example was when the patient was in a coma); and "not in the best interest" meant that consent would "seriously affect the patient's disease status" (its example here was the physician who did not want to divulge a diagnosis of cancer) (Curran, pp. 558–569).

In addition, the FDA, unlike the NIH, spelled out the meaning of consent. To give consent, the person had to have the ability to exercise choice and to have a "fair explanation" of the procedure, including an understanding of the experiment's purpose and duration, "all inconveniences and hazards reasonably to be expected," what a controlled trial was (and the possibility of the use of placebos), and any existing alternative forms of therapy available (Curran, pp. 558–569).

The FDA regulations represented a new stage in the balance of authority between researcher and subject. The blanket insistence on consent for all nontherapeutic research would have prohibited many of the World War II experiments and eliminated most of the cases on Beecher's roll. The FDA's definitions of consent went well beyond the vague NIH stipulations, imparting real significance to the process. To be sure, ambiguities remained. The FDA still confused research and treatment, and its clauses governing therapeutic investigations afforded substantial discretion to the doctor-researcher. But authority tilted away from the individual investigator and leaned, instead, toward colleagues and the human subjects themselves.

The publicity given to the abuses in human experimentation, and the idea that a fundamental conflict of interest characterized the relationship between the researcher and the subject, had an extraordinary impact on those outside of medicine, drawing philosophers, lawyers, and social scientists into a deeper concern about ethical issues in medicine. Human experimentation, for example, sparked the interest in medicine of Princeton University's professor of Christian ethics, Paul Ramsey. Ethical problems in medicine "are by no means technical problems on which only the expert (in this case, the physician) can have an opinion," Ramsey declared, and his first case in point was human experimentation. He worried that the thirst for more information was so great that it could lead investigators to violate the sanctity of the person. To counter the threat, Ramsey had two general strategies. The first was to make medical ethics the subject of public discussion. We can no longer "go on assuming that what can be done has to be done or should be.… These questions are now completely in the public forum, no longer the province of scientific experts alone" (Ramsey, p. 1). Second, and more specifically, Ramsey embraced the idea of consent; consent, in his formulation, was to human experimentation what a system of checks and balances was to executive authority, that is, the necessary limitation on the exercise of power. "Man's capacity to become joint adventurers in a common cause makes the consensual relationship possible; man's propensity to overreach his joint adventurer even in a good cause makes consent necessary.… No man is good enough to experiment upon another without his consent" (Ramsey, pp. 5–7).

Commissioning Ethics

The U.S. Congress soon joined the growing ranks of those concerned with human experimentation and medical ethics. In 1973, it created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, whose charge was to recommend to federal agencies regulations to protect the rights and welfare of subjects of research. The idea for such a commission was first fueled by an awareness of the awesome power of new medical technologies, but it gained congressional passage in the wake of newly uncovered abuses in human experimentation, most notably the Tuskegee syphilis studies.

The U.S. National Commission for the Protection of Human Subjects was composed of eleven members drawn from "the general public and from individuals in the fields of medicine, law, ethics, theology, biological science, physical science, social science, philosophy, humanities, health administration, government, and public affairs." The length of the roster and the stipulation that no more than five of the members could be researchers indicated how determined Congress was to have human experimentation brought under the scrutiny of outsiders. Senator Edward Kennedy, who chaired the hearings that led to the creation of the commission, repeatedly emphasized this point: Policy had to emanate "not just from the medical profession, but from ethicists, the theologians, philosophers, and many other disciplines." A prominent social scientist, Bernard Barber, predicted, altogether accurately, that the commission "would transform a fundamental moral problem from a condition of relative professional neglect and occasional journalistic scandal to a condition of continuing public and professional visibility and legitimacy.… For the proper regulation of the powerful professions of modern society, we need a combination of insiders and outsiders, of professionals and citizens" (Commission on Health Science and Society, part IV, pp. 1264–1265).

Although the National Commission was temporary rather than permanent, and advisory (to the Secretary of Health, Education, and Welfare), without any enforcement powers of its own, most of its recommendations became regulatory law, tightening still further the governance of human experimentation. It endorsed the supervisory role of the IRBs and successfully recommended special protection for research on such vulnerable populations as prisoners, mentally disabled persons, and children. It recommended that an Ethical Advisory Board be established within the Department of Health, Education, and Welfare to deal with difficult cases as they arose. This board was inaugurated in 1977 but expired in 1980, leaving a gap in the commission's plan for oversight of research ethics. However, the Office for Protection from Research Risks at NIH exercised vigilance over institutional compliance with research regulations. Finally, the commission issued the Belmont Report, a statement of the ethical principles that should govern research, namely, respect for persons, beneficence, and justice. This document influenced not only research ethics but also the emerging discipline of bioethics (U.S. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research).


In the United States, and to a growing degree in other developed countries, many of the ethical problems raised by earlier practices have been resolved. Oversight of research has been accomplished without stifling it, and without violating the prerogatives of research subjects. Almost everyone who has served on IRBs, or who has analyzed the transformation that their presence has brought to medical experimentation, will testify to their salutary impact. To be sure, the formal composition and decentralized character of these bodies seem to invite a back-scratching, mechanistic review of colleagues' protocols, without the kind of adversarial procedures that would reveal every risk in every procedure. Similarly, IRB review of consent forms and procedures rarely takes the concern from the committee room onto the hospital floor to inquire about the full extent of the understanding of subjects who consent to participate. Nevertheless, IRBs do require investigators to be accountable for the character and severity of the risks they are prepared to let others run, knowing that their institutional reputation may be harmed if they minimize or distort them. This accountability has unquestionably changed investigators' behavior, and social expectations of them. To be sure, abuses may still occur; IRBs may be too ready to minimize the amount of risk involved in certain protocols so as to enable researcher-colleagues to pursue their investigations. But abuses happen considerably less often now that IRB regulation is a fact of life. Scientific progress and ethical behavior turn out to be compatible goals.

david j. rothman (1995)

bibliography revised

SEE ALSO: Aging and the Aged: Healthcare and Research Issues; AIDS: Healthcare and Research Issues; Autoexperimentation; Autonomy; Children: Healthcare and Research Issues; Coercion; Commercialism in Scientific Research; Embryo and Fetus: Embryo Research; Empirical Methods in Bioethics; Freedom and Free Will; Genetics and Human Behavior: Scientific and Research Issues; Holocaust; Infants: Public Policy and Legal Issues; Informed Consent: Consent Issues in Human Research; Mentally Ill and Mentally Disabled Persons: Research Issues; Military Personnel as Research Subjects; Minorities as Research Subjects; Paternalism; Pediatrics, Overview of Ethical Issues in; Public Policy and Bioethics; Prisoners as Research Subjects; Race and Racism; Research, Human: Historical Aspects; Research Methodology; Research, Multinational; Research, Unethical; Responsibility; Scientific Publishing; Sexism; Students as Research Subjects; Virtue and Character; and other Research Policy subentries


Andrus, Edwin C.; Bronk, D. W.; Carden, G. A., Jr.; et al., eds. 1948. Advances in Military Medicine Made by American Investigators under CMR Sponsorship. Boston: Little, Brown.

Annas, George J., and Grodin, Michael A. 1992. The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. New York: Oxford University Press.

Baxby, Derrick. 1981. Jenner's Smallpox Vaccine: The Riddle of Vaccinia Virus and Its Origin. London: Heinemann Educational Books.

Bean, William B. 1982. Walter Reed: A Biography. Charlottesville: University Press of Virginia.

Beaumont, William. 1980 (1833). Experiments and Observations on the Gastric Juice and Physiology of Digestion. Birmingham, AL: Classics of Medicine Library.

Beecher, Henry K. 1966. "Ethics and Clinical Research." New England Journal of Medicine 274(24): 1354–1360.

Berg, Kare, and Tranoy, Knut E., eds. 1983. Proceedings of Symposium on Research Ethics. New York: Alan Liss.

Bernard, Claude. 1927. An Introduction to the Study of Experimental Medicine, tr. Henry Copley Greene. New York: Macmillan.

Bliss, Michael. 1982. The Discovery of Insulin. Chicago: University of Chicago Press.

Bull, J. P. 1959. "The Historical Development of Clinical Therapeutic Trials." Journal of Chronic Diseases 10(3): 218–248.

Commission on Health Science and Society. 1968. Hearings of 90th Congress, 2nd session.

Curran, William J. 1969. "Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies." Daedalus 98(2): 542–594.

Cushing, Harvey. 1925. The Life of Sir William Osler. Oxford: Clarendon Press.

Fox, Renée. 1959. Experiment Perilous: Physicians and Patients Facing the Unknown. Glencoe, IL: Free Press.

Frankel, Mark S. 1972. The Public Health Service Guidelines Governing Research Involving Human Subjects. Monograph no.10. Washington, D.C.: Program of Policy Studies in Science and Technology, George Washington University.

Germany (Territory Under Allied Occupation, 1945–1955: U.S. Zone) Military Tribunals. 1947. "Permissible Medical Experiments." In vol. 2 of Trials of War Criminals Before the Nuremberg Military Tribunals Under Control Council Law No. 10, pp. 181–184. Washington, D.C.: U.S. Government Printing Office.

Henle, Werner; Henle, Gertrude; Hampil, Bettylee; et al. 1946. "Experiments on Vaccination of Human Beings Against Epidemic Influenza." Journal of Immunology 53(1): 75–93.

Howard-Jones, Norman. 1982. "Human Experimentation in Historical and Ethical Perspectives." Social Science Medicine 16(15): 1429–1448.

Jenner, Edward. 1910 (1798). "Vaccination Against Smallpox." In Scientific Papers, pp. 145–220, ed. Charles W. Eliot. New York: P. F. Collier.

Jones, James. 1981. Bad Blood. New York: Free Press.

Katz, Jay; Capron, Alexander M.; and Glass, Eleanor Swift. 1972. Experimentation with Human Beings. New York: Russell Sage Foundation.

Kaufman, Sharon R. 1997. "The World War II Plutonium Experiments: Contested Stories and Their Lessons for Medical Research and Informed Consent." Culture, Medicine and Psychiatry 21: 161–197.

Keefer, Chester S. 1969. "Dr. Richards as Chairman of the Committee on Medical Research." Annals of Internal Medicine 71(8): 61–70.

Ladimer, Irving, and Newman, Roger. 1963. Clinical Investigation in Medicine: Legal, Ethical, and Moral Aspects. Boston: Law-Medicine Institute of Boston University.

Lederer, Susan. 1985. "Hideyo Noguchi's Luetin Experiment and the Antivivisectionists." Isis 76(1): 31–48.

Lederer, Susan. 1992. "Orphans as Guinea Pigs." In In the Name of the Child: Health and Welfare, 1880–1940, ed. Roger Cooter. London: Routledge.

McNeil, Paul M. 1993. The Ethics and Politics of Human Experimentation. Cambridge, Eng.: Cambridge University Press.

National Institutes of Health. 1953a. Handbook for Patients at the Clinical Center. Publication no. 315. Bethesda, MD: Author.

National Institutes of Health. 1953b. The National Institutes of Health Clinical Center. Publication no. 316. Washington, D.C.: U.S. Government Printing Office.

Numbers, Ronald L. 1979. "William Beaumont and the Ethics of Experimentation." Journal of the History of Biology 12(1): 113–136.

Office of Scientific Research and Development (OSRD). Committee on Medical Research (CMR). 1943. National Archives of the United States, Record Group 227: Contractor Records, S. Mudd, University of Pennsylvania (Contract 120, Final Report), March 3.

Office of Scientific Research and Development (OSRD). Committee on Medical Research (CMR). 1944a. Contractor Records, University of Chicago, Contract 450, Report L2, Responsible Investigator Dr. Alf S. Alving, Bimonthly Progress Report, August 1.

Office of Scientific Research and Development (OSRD). Committee on Medical Research (CMR). 1944b. Contractor Records, University of Pennsylvania, Contract 120, Responsible Investigator Dr. Stuart Mudd, Monthly Progress Report 18, October 3.

Proctor, Robert N. 1988. Racial Hygiene: Medicine under the Nazis. Cambridge, MA: Harvard University Press.

Ramsey, Paul. 1970. The Patient as Person: Explorations in Medical Ethics. New Haven, CT: Yale University Press.

"Requirements for Experiments on Human Beings." 1946. Journal of the American Medical Association 132: 1090.

Rothman, David J. 1991. Strangers at the Bedside. New York: Basic Books.

Rothman, David J. 2003. "Serving Clio and Client: The Historian as Expert Witness." Bulletin of the History of Medicine 77: 25–44.

Sternberg, George M., and Reed, Walter. 1895. "Report on Immunity Against Vaccination Conferred upon the Monkey by the Use of the Serum of the Vaccinated Calf and Monkey." Transactions of the Association of American Physicians 10: 57–69.

Stokes, Joseph, Jr.; Chenoweth, Alice D.; Waltz, Arthur D.; et al. 1937. "Results of Immunization by Means of Active Virus of Human Influenza." Journal of Clinical Investigation 16(2): 237–243.

Swain, Donald C. 1962. "The Rise of a Research Empire: NIH, 1930 to 1950." Science 138(3546): 1233–1237.

United States. Advisory Committee on Human Radiation Experiments. 1996. Final Report of the Advisory Committee on Human Radiation Experiments. New York: Oxford University Press.

U.S. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: Author.

Vallery-Radot, René. 1926. The Life of Pasteur, tr. Henriette C. Devonshire. New York: Doubleday.

Veresaev, Vikentii V. 1916 (1901). The Memoirs of a Physician, tr. Simeon Linden. New York: Knopf.
