Science Philosophy and Practice: Ethical Principles for Medical Research Involving Human Subjects
Medicine is both an art and a science. As an art, it creates mutually beneficial relationships between healers and patients, cures patients when possible, and comforts patients when death, loss of function, or deterioration of health is imminent. As a science, it rigorously studies biology, physiology, anatomy, mechanics, chemistry, and many other fields of inquiry in order to gain knowledge as precise as methods and conditions allow, then determines how to use this knowledge most effectively in the healing art. Teaching medicine properly involves imparting to students both this art and this science in depth. Medicine is thus frequently and aptly described as a “three-legged stool,” consisting of patient care (the art), research (the science), and education. If any of these three legs should break, the stool would collapse. Each of these three aspects is equally important to maintaining the integrity of the whole medical profession.
Unfortunately, medical research cannot be conducted only in test tubes or petri dishes, under microscopes, or with laboratory animals. It sometimes also requires human experimentation. When it does, the absolute responsibility of the scientist is to ensure that the safety, autonomy, privacy, dignity, and values of each human subject are respected and protected to the greatest extent possible. The key to protecting human subjects of medical research encompasses the paired concepts of informed consent and informed refusal, usually called just “informed consent.”
Informed consent recognizes the autonomy and essential human dignity of each subject or potential subject. By this principle, all potential human subjects or their guardians or legal representatives are completely free to decide, without coercion or fear of retribution, whether or not to participate in any scientist's experiment. No penalties shall arise if the decision is to refuse, even if the participant has second thoughts and withdraws partway through the experiment or at any other time. The researcher's corresponding duty is to provide enough clear, honest, and easily understood information for each potential subject, guardian, or surrogate to make a well-reasoned decision. Before this decision is reached, the scientist must explain the entire protocol of the envisioned experiment, including disclosure of all anticipated risks and benefits, in plain language. The potential subject must explicitly acknowledge understanding. Each prospective research subject has the right to all of this information.
Historical Background and Scientific Foundations
Modern medical research began in 1747 when Scottish naval surgeon James Lind (1716–1794) conducted the world's first controlled clinical trial. His aim was to discover the cause of and cure for scurvy, which was devastating the British Navy and merchant fleet. Aboard HMS Salisbury, Lind selected 12 sailors with similar early symptoms of scurvy after about a month at sea. He divided them into six pairs to test six different possible remedies. To their daily rations he added, respectively, one quart of apple cider, a gargle of 25 drops of dilute sulfuric acid three times a day, two teaspoonfuls of vinegar three times a day, two oranges and a lemon, a half pint of sea water, and a strong herbal purgative three times a day. Only the pair that ate the citrus fruit showed significant improvement. Lind's success led the Royal Navy to order in 1795 that citrus juice be given to all sailors.
Lind informed the sailors of his plans, but did not ask their consent. He ordered them to participate, in accordance with contemporaneous British naval protocol. In his experiment, no further harm could come to any of the participants beyond what they were already likely to suffer from scurvy. Such lack of added risk in clinical trials is not always the case. Many clinical trials are downright dangerous. The general recognition that human research subjects have the right to be protected from these dangers was long in coming.
Acceptance of Lind's new empirical method of medical research was also slow. At the end of the eighteenth century most of the medical world was still accepting the wisdom of the ancients and other authorities, experimenting by trial and error, and taking advantage of unusual patients, especially those who owed their lives to the investigating physicians. Only rarely in this era was medical research on human subjects done as rigorous science, and even when it was, doctors enlisted participants by pressure, coercion, or guilt more often than by request.
English country doctor Edward Jenner (1749–1823) was an exception to this trend. In 1796 one of his patients, Gloucestershire milkmaid Sarah Nelmes, had cowpox. Local folklore claimed that people who got cowpox never got smallpox. Jenner wanted to test this hypothesis. With Nelmes's permission, he obtained fluid from her lesions. He then asked a local farmer, the father of eight-year-old James Phipps, for permission to use the boy in a smallpox immunization experiment. Jenner explained his entire theory and obtained the father's free consent. He then inoculated the healthy boy with the dried cowpox fluid. Phipps got cowpox. After he recovered, Jenner inoculated him with dried fluid from a smallpox victim. Phipps did not get smallpox.
Jenner repeated this experiment on several other Gloucestershire children, including his own son, always with informed consent, and always with the same outcome. He published his results in 1798 as An Inquiry into the Causes and Effects of Variolæ Vaccinæ. Even though his experiments were entirely successful and even though he had the support of his patients, both the established medical community and the church condemned his work as preposterous and unnatural. Until about 1802 he was generally ridiculed, but gradually his findings were accepted. By 1810 most English-speaking doctors vaccinated their patients. The University of Oxford awarded Jenner an honorary degree in 1813.
On the other side of the informed consent spectrum from Jenner was American army surgeon William Beaumont (1785–1853). At Fort Mackinac, Michigan, in 1822 he saved the life of French-Canadian fur trader Alexis St. Martin (1794–1880), who had suffered a shotgun wound in his lower thorax and upper abdomen. The wound did not close properly. The inside of the stomach remained open, so that whatever St. Martin ate or drank would leak out unless he wore a plug, compress, and bandage.
Beaumont could have repaired the hole surgically, but he perceived a unique opportunity to study the physiology of digestion, which until then was very poorly understood. Seduced by the prospect of fame as a scientist, Beaumont decided not to close the hole. In 1823 he hired St. Martin as his personal servant, then in 1824, when St. Martin was fully recovered, he began performing physiological experiments through the hole that he had allowed to become permanent. As their relationship was both doctor/patient and employer/employee, one easily infers how Beaumont used his power to compel St. Martin to participate. Three times St. Martin ran away. Twice Beaumont tracked him down and convinced him to return for more experiments. The third time, in 1833, Beaumont could not persuade him. The experiments ended.
Beaumont's science was good. He published his first article about St. Martin's stomach in 1825 and his monumental book, Experiments and Observations on the Gastric Juice, and the Physiology of Digestion, in 1833. St. Martin was bitter for the rest of his life about Beaumont gaining prestige at St. Martin's expense and without his consent. Disabled by his injury, he had no dignified way to earn a living, but occasionally made money by exhibiting himself as a freak. He and his family subsisted in poverty. When he died, his wife allowed his body to decompose before she told anyone that he was dead. He had instructed her to do this so that the medical community could not preserve his stomach and continue to exploit him after death.
Early medical empiricism grew most significantly in France. French researchers such as Jean-Nicolas Corvisart (1755–1821), Xavier Bichat (1771–1802), Guillaume Dupuytren (1777–1835), René-Théophile-Hyacinthe Laënnec (1781–1826), François Magendie (1783–1855), Pierre-Charles-Alexandre Louis (1787–1872), and Jean Cruveilhier (1791–1874) obtained valuable results and exerted tremendous influence on medical research methodology throughout the Western world. By the middle of the nineteenth century, the typical medical publication title, “Observations on …,” had given way to “Researches on…” This shift indicated that meticulous observation, which physicians had always encouraged, was still honored, but that the emphasis was now on methodically designed empirical study. Especially in the work of Louis, the move toward rigorous empiricism was a major event in the history of medicine. Yet none of these scientists were noted for their consideration of their human subjects.
American military physician Walter Reed (1851–1902) was, for his era, unusually careful to obtain informed consent from his research subjects. In 1900 Surgeon General of the Army George Miller Sternberg (1838–1915) appointed Reed to lead the Yellow Fever Commission, charged with controlling that disease in Cuba. Reed decided to test the hypothesis of Cuban physician Carlos Juan Finlay (1833–1915) that yellow fever was transmitted by mosquitoes. He asked for volunteers. The first two were Private William Hanaford Dean (1877–1928) and a member of Reed's team, surgeon James Carroll (1854–1907), who allowed mosquitoes to bite them under controlled conditions. Both caught yellow fever but survived. Then another member of Reed's team, surgeon Jesse William Lazear (1866–1900), allowed himself to be bitten. He caught yellow fever and died.
Using Lazear's notebooks, Reed deduced that yellow fever was noncontagious and vector borne. Even though he did not know the identity of the pathogen, he correctly deduced its life cycle from human to mosquito and back to human. He then asked for more volunteers to test his conclusions. His commanding officer, Leonard Wood (1860–1927), authorized military funds to pay these volunteers, who included both local civilians and American soldiers.
The immediate benefits of Reed's yellow fever research are incalculable. In 1901 William Crawford Gorgas (1854–1920), the U.S. Army's chief sanitation officer for Cuba, instituted strict policies to destroy mosquitoes around Havana. Within three months that area was free of yellow fever. In 1904 Gorgas established similar but more extensive policies in Panama, where yellow fever and malaria were decimating workers and hindering construction of the canal. Yellow fever was eradicated from the Canal Zone by 1906 and the frequency of malaria there was greatly reduced by 1914.
Sometimes occasions for medical research occur fortuitously or unsystematically, rather than by design with specific ends in view. This was true in St. Martin's case and is particularly true in war, where plentiful presentations of unusual injuries provide military and naval surgeons with unique opportunities to advance medical, surgical, biomechanical, and even physiological knowledge, even while they are trying their utmost to save lives. Soldiers and sailors near death from battle wounds are in no condition to give or withhold consent. Their surgeons, with wide leeway to invent, experiment, and explore, naturally learn much from what would scarcely be encountered in civilian practice. This is different from St. Martin's case insofar as the military or naval patient's relationship with the particular military or naval doctor typically ends with discharge from the hospital.
The long-term consequences of military and naval surgery are often beneficial for generations of future patients. French surgeon Ambroise Paré (1510–1590) served in the war between King François I and the Duke of Savoy. At the battle of Turin in 1536, the Savoyards shot so many French soldiers that Paré ran out of oil to cauterize wounds. Until that time most European surgeons had followed the ancient Arabic practice of cauterizing puncture wounds with hot oil. In desperation, Paré just wrapped the newer wounds with bandages soaked in egg yolk, rose oil, and turpentine. He was soon surprised to learn that the bandaged patients healed better than the cauterized ones. He immediately abandoned cauterization. After he published his treatise on wounds in 1545, other surgeons throughout Europe abandoned it too. In 1552 King Henri II made Paré a royal surgeon.
Whatever new knowledge military and naval surgeons gain from their war service is theirs to use and publish as they see fit, regardless of how their patients feel about it. The United States Surgeon General's massive six-volume Medical and Surgical History of the War of the Rebellion, 1861–65 (1870–1888) was the first official report of such new knowledge. The two world wars produced similar sets: The Medical Department of the United States Army in the World War (15 volumes, 1921–1929) and The History of the Medical Department of the United States Army in World War II (35 volumes, 1952–1976). In none of this published research was the consent of patients taken into account. All have substantially advanced medical science.
Scientific and Cultural Preconceptions
With regard to human research subjects, the standard presumption of doctors throughout the nineteenth century, and of military and naval doctors well into the twentieth century, was that consent, informed or not, was unnecessary. Patients should be willing to make sacrifices for the sake of science, and that was that. Society seldom questioned the paternalism or judgment of the medical establishment and generally believed that “the doctor knows best.” Patient-initiated lawsuits for fraud, negligence, or malpractice were rare and typically unsuccessful.
Ironically, Germany, later infamous as the site of Nazi crimes against research subjects, was the first country to enact informed consent laws. In the 1890s in Breslau, Prussia, German dermatologist and bacteriologist Albert Neisser (1855–1916) studied syphilis and intentionally infected several prostitutes, but then claimed that they had contracted the disease while following their profession. He had neither obtained his subjects' permission nor told them the nature of his experiments. Reacting to Neisser's fiasco, the imperial Prussian government decreed in 1900 that medical research could not be conducted without each subject's consent. The decree had little effect, but was significantly strengthened by the Weimar government in 1931. This legal standard remained in effect in Germany even while the Nazis violated it in the 1930s and 1940s.
Twenty years after the Nazi biomedical research crimes were revealed, the West still remained falsely and complacently confident that such heartless travesties of science could only happen under military tyrannies, not in progressive democracies. Yet, while not nearly as bad as the Nazis, Western researchers continued routinely to violate the principle of informed consent. They harmed their human subjects and saw nothing wrong with doing so. The typical victims of such improper research were defenseless or marginalized populations. Three American instances of such abuse were the syphilis study of African-Americans in Tuskegee, Alabama, conducted by the federal government from 1932 to 1972; the hepatitis study of children at the Willowbrook State School for the Retarded, Staten Island, funded by New York State from 1955 to 1972; and the use of inmates at Holmesburg Prison, Philadelphia, as guinea pigs for chemicals and drugs from 1951 to 1974.
In Tuskegee, the United States Public Health Service (USPHS) ran a study to chart the full natural course of syphilis. Federal government physicians in the first few years of the study identified and enrolled 399 poor and undereducated African-American men who suffered from the disease. The USPHS lied to these victims about the nature of their ailment and the purpose of the study, purposefully withheld treatment from them, and compensated them only with trinkets, cigars, cigarettes, bus rides, funeral expenses, and tiny amounts of cash. Even after penicillin was determined to be an effective antisyphilis therapy in the 1940s, the USPHS refused to allow participants in the study to receive it or even learn about it. On May 16, 1997, President Bill Clinton offered a formal, written apology to the African-American community for the Tuskegee syphilis study.
Saul Krugman (1911–1995), later the discoverer of the hepatitis B vaccine and president of the American Pediatric Society, used mentally retarded children at Willowbrook as research subjects for experiments on hepatitis. He deliberately infected hundreds of newly admitted children, either without the informed consent of their parents or legal guardians or with consent obtained by deceit. Willowbrook was severely overcrowded, sanitation was minimal, and most of the children there already had hepatitis when Krugman began his study. That was why he chose only new admittees as subjects. His team fed extracts of human feces to subjects in order to test the transmission of the disease.
Biomedical and other experiments on criminals were common in the United States until 1979, when the Belmont Report of the U.S. Department of Health, Education, and Welfare (DHEW) provided the spark to end the practice. Before then, any individual, corporation, organization, or government entity with a legitimate interest in testing drugs or chemicals on human subjects could enjoy nearly unimpeded access to prisoners. Nationwide, the most notorious of these facilities was the Philadelphia county prison at Holmesburg, Pennsylvania.
Instigated by dermatologist Albert M. Kligman (1916–), Holmesburg's research program was remarkable for its sheer size and range. Hundreds of researchers conducted experiments on thousands of poorly informed prisoners, who were deliberately kept ignorant about the nature of the substances they were testing and the possibly dangerous implications of their participation. Prisoners were paid for their time, but many felt compelled to cooperate with the researchers so that they could buy protection from homosexual rape, which was rampant in the cellblocks. Kligman exposed them to poison ivy, dioxin, radioactive materials, bizarre potions, and many other painful or hazardous substances. The army performed chemical warfare tests on inmates. Ortho Pharmaceutical Corporation developed its popular acne medication, Retin-A, at Holmesburg in the 1960s without providing the prisoners with either treatment for side effects or relief from pain. The Holmesburg experiments were not limited to pharmaceuticals and chemicals. In one protocol prisoners allowed their fingernails to be extracted for $150 each so that researchers could study the untreated wounds.
The first inkling of the principle of informed consent in America arose from a 1914 court case, Schloendorff v. the Society of the New York Hospital. The plaintiff, as a patient under general anesthesia in 1908, had awakened to discover that she had undergone a surgical procedure which she had neither anticipated nor approved. The surgeon had removed a fibroid tumor, but she had agreed only to an exploratory examination. Gangrene in her left arm and other complications had set in, and two fingers had to be amputated. New York Court of Appeals Judge Benjamin Cardozo wrote: “… the wrong complained of is not merely negligence. It is trespass. Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient's consent commits an assault, for which he is liable in damages.” Some lawyers complained that Cardozo was technically incorrect, as the offense in question was in fact battery, not assault. Yet the principle that Cardozo established in this decision remains a key element in American jurisprudential thinking about informed consent.
In 1935 a Michigan case, Fortner v. Koch, established biomedical research on humans as a social and scientific necessity conditional on strict legal regulation, such as informed consent. The term “informed consent” was known earlier in other contexts, but its use in American medical jurisprudence began when attorney Paul G. Gebhard (1928–1998) employed it in his amicus curiæ brief from the American College of Surgeons for the 1957 malpractice case of Salgo v. Leland Stanford Jr. University Board of Trustees.
The only strictly scientific question relevant to the ethical conduct of medical research is methodological: Does obeying these principles make better science and, conversely, does flouting them make worse science? The statistical significance of the results of clinical trials improves proportionately to the number of subjects. Therefore one responsibility of physicians and scientists is to recruit as many subjects as possible for each study. Potential subjects, if they are to consent freely to the proposed experiment, must have faith in the process. If they do not trust the physician or the scientist, then they will not consent, the subject pool will be smaller, and the science will be poorer.
Cultivating this faith and trust is easier when the public has a high regard for the whole medical and bioscientific enterprise. Consistently encouraging informed consent and allowing informed refusal should be an integral part of each physician's or scientist's best effort to develop this good public image. Each encounter between physician and patient or between scientist and subject has the potential to affect public relations in general. Investigators should therefore be honest, respectful, plainspoken, and polite with their subjects, not only out of simple human decency, but also because to do so is good science.
In the landmark 1966 article “Ethics and Clinical Research,” in the New England Journal of Medicine, Henry K. Beecher listed 22 examples, including Willowbrook, of physicians experimenting on patients for the sake of scientific knowledge and imaginary future patients instead of providing therapies for these present patients. Insofar as all physicians' first duty is to their present patients, such preference for scientific research over patient care violates the Hippocratic Oath, especially when patients are deceived or underinformed.
Beecher argued against the then-prevalent notion among scientists that bowing to the principle of informing potential subjects of likely risks would compromise science and retard progress. They believed that they had to deceive the public to gain enough subjects for research, that they were not acting unethically because scientific progress was a greater good than being frank with their subjects, and that science was so ultimately important that a few deaths or maimings along the way in its service would not matter. Within a few years after Beecher's article appeared, regulatory agencies, grant-funding organizations, and the general public, who had previously taken little notice of the ethical dimension of biomedical research, began to ferret out and stop unethical research protocols. His article did not trigger much outrage among the readers of the New England Journal, who were mostly physicians and bioscientists. But when the regular press discovered the Willowbrook study in the late 1960s, the public outcry succeeded in getting that research shut down.
IN CONTEXT: MENGELE AT AUSCHWITZ
Josef Mengele (1911–1979) was a wealthy Bavarian who joined the Sturmabteilung (SA, or storm troopers) early in the Nazi era. In 1935 the University of Munich awarded him a Ph.D. in physical anthropology for a dissertation on racial differences in the jaw. The University of Frankfurt gave him an M.D. in 1938 for similar research on the lip, jaw, and palate. That same year he joined the Schutzstaffel (“protective echelon,” or SS), Heinrich Himmler's elite military unit. As a medical officer in the Waffen SS, the most dreaded wing of the SS, Mengele was ordered to Auschwitz in May 1943 as camp physician.
Mengele's natural cruelty had free rein at Auschwitz. He indulged his bizarre fascination with deformities, eye colors, dwarfs, giants, and other genetic differences, especially in twins. He was the absolute judge of life and death throughout the camp. The experiments that he and his staff conducted on those he selected not to die in the gas chambers were arbitrary, heartless, sexually perverted, and of no scientific merit. He did not use anesthesia for any kind of surgery. He did not care about the pain, infections, maimings, or deaths he caused among his captive human research subjects. Of about 3,000 twin children on whom he experimented, only about 200 survived.
Mengele escaped the Russian advance in January 1945. American soldiers captured him, but Mengele tricked his guards into releasing him after he obtained a false set of papers. He went into hiding, and four years later sneaked into South America. West Germany issued a warrant for his arrest in 1959, both universities revoked his degrees in 1964, and Israeli agents actively sought him, but he eluded capture and punishment for the rest of his life.
Beecher claimed that most reasonable patients would not knowingly risk their lives or their good health for the sake of research. He further suggested that biomedical experiments that involved significant and obvious risks to their subjects were ipso facto unethical because he could confidently assume in these cases that informed consent had either not been obtained or had been obtained dishonestly. He questioned whether the results of such experiments should be allowed to be published, even if these results were solid or useful.
Influences on Science and Society
In reaction to documented abuses of human research subjects, the principle of informed consent began to be recognized in the late nineteenth century and was gradually codified in the twentieth as the key concept in biomedical research ethics, one that was enforced by governments and the biomedical enterprise itself. Thus, informed consent has played an increasingly important role in both research and clinical medicine, especially since the verdict in the Nazi “Doctors' Trial” was handed down in Nuremberg, Germany, in 1947. Developments beyond this include the 1964 Helsinki Declaration of the World Medical Association (WMA), 1974 U.S. National Research Act, 1979 Belmont Report, and the 1991 Common Rule of the U.S. Department of Health and Human Services (DHHS).
The international military tribunals in which the Allies tried Nazi war criminals after World War II included the Doctors' Trial from December 9, 1946 to August 20, 1947. The four judges, all Americans, acquitted seven and convicted 16 German physicians of torture, murder, or performing inhumane experiments of questionable scientific value on unwilling subjects. The court further found that the Nazi regime had sanctioned cruel biomedical research on concentration camp prisoners as a matter of policy.
The most significant result of this trial was not the convictions but the “Nuremberg Code,” the 10 ethical principles contained in the part of the verdict subtitled “Permissible Medical Experiments”:
- The voluntary consent of the human subject is absolutely essential.
- The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.
- The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.
- The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
- No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
- The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.
- Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.
- The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.
- During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.
- During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.
The Nuremberg Code reaffirmed and reinterpreted the ancient principle of medical beneficence that is a central element in all codes of professional medical ethics, including the Hippocratic Oath. Just as physicians must always put the interests of their patients first, so biomedical researchers must put the interests of their human subjects first. Accordingly, the Nuremberg Code has been the basis of all subsequent bioscientific jurisprudence and regulation.
The first major document to serve as a corollary to the Nuremberg Code was the Helsinki Declaration. Its creator, the WMA, was founded in September 1947 specifically in response to the Doctors' Trial verdict. Its mission was to work with national medical associations across the world to develop and uphold high standards in medical ethics. At its Eighteenth General Assembly in 1964 it drafted a comprehensive policy statement designed to help prevent not only deliberate crimes such as those the Nazis committed, but also accidental biomedical innovation tragedies such as the thalidomide experiments of the 1950s, which caused thousands of unforeseen birth defects. Recognizing the need to keep the Helsinki Declaration current with situational changes in bioethics and emergent subtleties of legal interpretation, the WMA approved updates in 1975, 1983, 1989, 1996, 2000, 2002, and 2004.
Among the strengths of the Helsinki Declaration is its insistence that researchers evaluate and respect each subject's competence, or ability to think in general, and capacity, or ability to understand and decide a particular question. In this way the autonomy of the subject should be more strongly protected, not only during the informed consent process but also throughout the experiment itself. Along these same lines of legal and ethical reasoning, the National Bioethics Advisory Commission issued an important report, Research Involving Persons with Mental Disorders That May Affect Decisionmaking Capacity, in 1999.
Reacting to the negative publicity surrounding both Tuskegee and Willowbrook, in 1974 DHEW appointed the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Their comprehensive policy statement was named after Belmont House in Elkridge, Maryland, where they met from 1976 to 1978. The Belmont Report established strict ethical guidelines for federally funded research, strengthened the 1974 DHEW mandate that each entity receiving federal money for research must have an institutional review board (IRB) to approve each protocol, and inspired further regulation and legislation.
Until 1991 no uniform American policy existed to cover all federal agencies that award scientific research grants. The Department of Health and Human Services (DHHS), which succeeded DHEW in 1980, rectified this situation by writing the Federal Policy for the Protection of Human Subjects, known informally as the Common Rule because of its wide application. It emphasized the ascendancy of the individual's rights over the scientist's need to find humans on whom to experiment. Under the Common Rule, additional powers and responsibilities accrued both to IRBs and to the Office for Protection from Research Risks (OPRR), which DHEW had founded in 1972 and which DHHS renamed the Office for Human Research Protections (OHRP) in 2000.
Impact on Science
Science in general has lost much popular credibility since its heyday in the 1950s and 1960s, partially because the public has come to see it as unconcerned with human values. The Western debut of acupuncture in the early 1970s began a massive public shift toward preferring alternatives to the established methods and products of rigorous Western biomedical science, largely because of a pervasive opinion that this science is cold and heartless while other medical systems are warm and responsive. At the dawn of the twenty-first century, science is not always seen as the ultimate deliverer of humankind from its earthly problems, but as part of these problems. As public sentiment turned against the social optimism of science, slogans like DuPont's “Better Things for Better Living … Through Chemistry” and General Electric's “Progress Is Our Most Important Product” disappeared from advertising rhetoric, and “No Nukes!” replaced “Our Friend the Atom.” Scientists began to realize that a more human face was needed to restore good public relations.
Institutional Review Boards (IRBs) are part of a strategy to restore science's good name. One of their purposes is to ensure that decisions about nonscientists participating as subjects in scientific experiments are made not in coldly logical, fanatically motivated, or financially conflicted ways by the scientists themselves, but by a presumably impartial cross-section of the community that includes nonscientists. An IRB typically consists of a wide variety of well educated individuals. Each board must have a minimum of five members, including at least one scientist, at least one nonscientist, and at least one person who is not in any way affiliated with the institution proposing the research protocol. Each board is also expected to reflect gender, ethnic, and racial balance.
IRB procedures, even when well followed, cannot prevent occasional tragedies in research. Subjects, even healthy ones, sometimes die unexpectedly as a result of experimentation. In 1999 at the University of Pennsylvania, 18-year-old Jesse Gelsinger, who suffered from a rare metabolic disease, died during experimental gene therapy. In 2001 Ellen Roche, a healthy 24-year-old medical technician, volunteered for an asthma study at the Johns Hopkins University and within a month was dead from a reaction to an experimental medication. Such cases routinely receive immediate federal review, both to prevent their recurrence and to remedy inadequacies in the protection of human research subjects. In both cases, federal authorities immediately stopped the experiments. In the Gelsinger case, they fined the university $517,000 for misstatements to the IRB, placed restrictions on the professional activities of the three researchers, and cited one of them, James M. Wilson, for conflict of interest because of his financial ties to Genovo, Inc., and Targeted Genetics Corporation. In the Roche case, OHRP investigators determined that the IRB had failed to do its job properly, either because the research team did not give it enough accurate information or because of its own negligence.
American scientists often complain about IRBs for two basic reasons. First, the review process demands so much red tape that research schedules often suffer. Second, nonscientific IRB members often do not understand specific scientific protocols well enough to make reasoned judgments about them. Yet, given the general recognition since the Common Rule was enacted that biomedical science conducted without the approval of its human subjects and their home communities is bad science, IRBs seem to have become a permanent fixture on the biomedical research landscape. Without having to write protocols for general inspection or undergo the scrutiny of reviewers before experiments are allowed to begin, scientists might be tempted to revert to pre-Nuremberg standards, thus further alienating the public and making subsequent recruitment of human subjects more difficult.
Impact on Society
There are two kinds of medical research—clinical and nonclinical. Correspondingly, there are two kinds of human medical research subjects—patients in experimental therapies supervised by a physician and otherwise healthy people supervised by a nonmedical provider, such as a PhD biochemist. Physicians may try to enroll their appropriate patients in clinical trials, but if they do so, they must be especially mindful of their supreme duty to serve each patient's best interest. The boundary between physician as clinical practitioner and physician as medical researcher or promoter of medical research is often blurry. Not all patients can benefit from experimental procedures, unusual protocols, or new drugs.
Physicians must simply treat such patients and not try to enroll them in studies, even if approved, safe, and germane research programs are readily accessible. Trying to persuade a patient to enter a clinical trial is ethical only if the prognosis is discouraging and if all available treatments are either unproven or insufficient. If the prognosis with an approved and proven treatment is good, the physician would act unethically even to suggest enrolling that patient in a trial.
Patients are naturally under stress. Physicians have a duty either to try to reduce this stress or at least not add to it. They act unethically when they try to persuade a patient to enter a research study knowing, or even suspecting—but not revealing—that so doing would likely increase that patient's stress.
Clinical researchers must minimize the possibility of iatrogenic (doctor-caused) injuries, illnesses, or adverse effects. Experimentation on a particular patient increases the danger for that patient, but may serve the greater good for many future patients. Because whether to allow such individual sacrifice must be each patient's free decision, the physician must tell the patient the whole truth. Part of proper informed consent procedure is that the physician must tell patients who may become research subjects whether or not they might receive placebos in lieu of actual therapy. Placebos, which are often used as controls in clinical trials, may put patients at undue risk. The Helsinki Declaration approves placebos only when no reliable therapy exists for the patient's condition.
Some physicians receive financial rewards, favored status, or other incentives from medical instrument or pharmaceutical companies for referring patients for clinical trials of experimental equipment or drugs. Conflicts of interest may arise for physicians who stand to gain materially from enrolling patients in trials of new therapies that may not be best for these patients.
When scientists invest in corporations that conduct research in which they are even peripherally involved, the integrity of the science has been shown to suffer. Entities paying for trials expect favorable, not objective or impartial, reports. Even payments to subjects might skew scientific results. Subjects should be compensated for their time and trouble, but payments should be neither so low as to belittle each subject's contribution nor so high as to affect the potential subject's judgment whether to participate or the actual subject's judgment whether to continue or withdraw.
Modern Cultural Connections
Ever since theologian Paul Ramsey (1913–1988) founded the consumer health movement in 1970 with his book, The Patient as Person, a major focus in the ethics of both clinical medicine and biomedical research has been the tension between the traditional paternalism or condescension of the physician or scientist and the rightful autonomy of the patient or subject. Ramsey's legacy has been a dramatic upsurge in patient activism, which has also developed into research subject activism or at least assertiveness.
IN CONTEXT: JOHN MOORE'S SPLEEN
On October 5, 1976, Alaska pipeline engineer John Moore entered the University of California at Los Angeles (UCLA) Medical Center with an enlarged spleen. Three days later his physician, David W. Golde (1940–2004), diagnosed hairy cell leukemia and told Moore that his life would be in danger unless his spleen were removed. Moore consented to the surgery. Under Golde's instructions, surgeon Kenneth Fleming performed a successful splenectomy on October 20 and gave Moore's spleen to Golde. The spleen weighed 14 pounds, about 22 times its normal size. Golde and his research assistant, Shirley G. Quan, immediately recognized the enormous commercial potential of an immortal cell line cultured from this organ.
Moore was not told either that Golde had a financial interest in his body tissue or that Golde had kept his spleen. He naturally assumed that the spleen had been discarded like any other medical waste. Between November 1976 and September 1983, at Golde's insistence, Moore made increasingly expensive and inconvenient flights to Los Angeles, so that, as Golde told him, follow-up checks could be performed to ensure that Moore's cancer remained in remission. Eventually Moore grew suspicious of Golde's insistence that he come to California instead of having these follow-ups done at a local medical facility. He and his lawyer did some sleuthing. They discovered that Golde and Quan had created the “Mo” cell line, published their research, and obtained a lucrative patent. Moore sued.
In the case of Moore v. Regents of the University of California, the Supreme Court of California ruled in 1990 that Golde was at fault by deliberately concealing from Moore the financial implications of the splenectomy, but that, even if Golde had fully disclosed this information, Moore would still not be entitled to any share of the profits.
Religion is the basis of many of the claims of patients and subjects against physicians and scientists. Among such religious concerns are beliefs about transfusions, transplants, stem cells, embryos, DNA, autopsies, the preparation and burial or cremation of the dead, and the integrity of the body. Consistent with its key principle of protecting the autonomy of the individual, American law in each of these instances supports the demands of religion over those of science and personal beliefs over scientific evidence.
IN CONTEXT: THE MILGRAM EXPERIMENT
In 1961 social psychologist Stanley Milgram (1933–1984) began an experiment at Yale University to test the border between cruel obedience and compassionate disobedience. He solicited male subjects between 20 and 50 years old and paid them each $4.50. The experimenter tested the subjects in pairs. An apparently random, but actually rigged, drawing selected one as teacher and the other as learner. The teacher stayed in the room with the experimenter, while the learner went into another room. The teacher and the learner could communicate but could not see each other. The teacher then asked the learner a series of questions supplied by the experimenter. For each wrong answer the experimenter ordered the teacher to give the learner increasingly powerful electric shocks to a maximum of 450 volts.
No shocks actually occurred. The learner was in fact not a subject, but an accomplice of the experimenter. The fact that the experimenter lied to the subjects, putting them under emotional stress by making them believe that they were inflicting pain by following orders, created an ethical controversy. Yet such lies were necessary to perform the experiment at all, because this stress was precisely what Milgram was investigating. Except for the stress, there was no risk to any of the subjects, and they were all told the truth immediately after their sessions ended.
Milgram's results were important. He determined, and subsequent replications of his experiment have confirmed, that about 65% of ordinary people will obey commands even when doing so means violating their conscience or being sadistic. He thus advanced knowledge of the darker side of human nature and provided strong empirical evidence for German-born philosopher Hannah Arendt's (1906–1975) theory of the banality of evil. Both Milgram and Arendt were motivated by the question of whether Nazi war criminal Adolf Eichmann (1906–1962) was truly guilty or just a tool of his commanders.
In 2001 the Institute of Medicine, one of four independent components of the National Academies, published a landmark report, Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. This report investigated the frequency and efficacy of government intervention in biomedical research programs that either IRBs or federal overseers determined were dangerous to human subjects. It offered new and stronger recommendations for accreditation, accountability, and initial protocol approval.
Society, government, and the biomedical community are obligated to protect vulnerable populations, such as children, prisoners, immigrants, researchers' employees, and physicians' patients, from exploitation. When patients or potential research subjects are incompetent, incapacitated, or underage, the informed consent of the family or other legal surrogate is necessary to fulfill the ethical requirements. While American researchers no longer have access to prisoners in the United States, or relatively easy access to the American poor, they continue to have access to Third World populations, particularly in Africa, whom they can use as subjects. Just as American manufacturers open factories overseas in order to avoid expensive, unionized American labor, so American researchers conduct tests overseas in order to circumvent strict American regulations and to avoid having to use savvy Americans as subjects.
Gina Kolata's story in the November 22, 2006 New York Times, “Study Questions Need to Operate on Disk Injuries,” probed research subject recruitment tactics and expressed misgivings about the ethics of a successful surgical trial. Because of possible disservice to patients, some of the researchers themselves had doubted its ethics from the beginning, but participated anyway. A complicated but reliable and relatively risk-free operation to relieve pain from a herniated or ruptured vertebral disk had long been known. At the time of the experiment, about 300,000 Americans a year were electing to undergo this surgery. Some surgeons wondered whether the operation was necessary. They hypothesized that patients who either waited or did not have the surgery at all would not suffer beyond what they were already suffering from the condition. Over a period of two years, 2,000 patients for whom herniated disk surgery was indicated simply waited. Their outcomes were not appreciably different from those of patients who had chosen to have the surgery.
The informed consent movement since Nuremberg, and especially since Belmont, has reduced physician paternalism in clinical medicine and self-righteousness among medical research scientists. Decisions whether to treat and how to treat are no longer the doctor's, but the patient's, or ideally a consensus of both. Informed consent is what allows human research subjects to be active and voluntary participants rather than victims or unwilling tools of the scientific process. Since Nuremberg, the principle that neither physicians nor scientists may do anything to anyone's body without that person's freely given consent has been universally recognized as a basic human right.
Primary Source Connection
Congress passed the National Research Act of 1974 in the wake of negative publicity for both the government and the scientific community regarding the cruel treatment of some medical research subjects. Among the consequences of this act was the creation, by the Department of Health, Education, and Welfare (DHEW), of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which met at the Belmont House in Elkridge, Maryland, beginning in February 1976.
The Belmont Report prompted much new legislation and regulation in the decades following. In 1991 the Department of Health and Human Services (DHHS) codified the Federal Policy for the Protection of Human Subjects. This policy is called the “Common Rule” because, even though created by DHHS, it applies equally to all federal departments and agencies that grant money for scientific research that involves human subjects. Among other provisions, it strengthened the 1974 mandate that every institution receiving federal funds for research must have its own Institutional Review Board (IRB) to approve all research protocols. In 2000, the OPRR, now within DHHS, became the Office for Human Research Protections (OHRP).
Legal and ethical issues surrounding informed consent and informed refusal remain in a state of flux and continue to be a major concern of the federal government. Abuses, carelessness, and deaths still occur, even at the most prestigious American academic research institutions.
THE BELMONT REPORT: ETHICAL PRINCIPLES AND GUIDELINES FOR THE PROTECTION OF HUMAN SUBJECTS OF RESEARCH
Part B: Basic Ethical Principles
The expression “basic ethical principles” refers to those general judgments that serve as a basic justification for the many particular ethical prescriptions and evaluations of human actions. Three basic principles, among those generally accepted in our cultural tradition, are particularly relevant to the ethics of research involving human subjects: the principles of respect of persons, beneficence and justice.
Respect for Persons. —Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection. The principle of respect for persons thus divides into two separate moral requirements: the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy.
An autonomous person is an individual capable of deliberation about personal goals and of acting under the direction of such deliberation. To respect autonomy is to give weight to autonomous persons' considered opinions and choices while refraining from obstructing their actions unless they are clearly detrimental to others. To show lack of respect for an autonomous agent is to repudiate that person's considered judgments, to deny an individual the freedom to act on those considered judgments, or to withhold information necessary to make a considered judgment, when there are no compelling reasons to do so.
However, not every human being is capable of self-determination. The capacity for self-determination matures during an individual's life, and some individuals lose this capacity wholly or in part because of illness, mental disability, or circumstances that severely restrict liberty. Respect for the immature and the incapacitated may require protecting them as they mature or while they are incapacitated.
Some persons are in need of extensive protection, even to the point of excluding them from activities which may harm them; other persons require little protection beyond making sure they undertake activities freely and with awareness of possible adverse consequence. The extent of protection afforded should depend upon the risk of harm and the likelihood of benefit. The judgment that any individual lacks autonomy should be periodically reevaluated and will vary in different situations.
In most cases of research involving human subjects, respect for persons demands that subjects enter into the research voluntarily and with adequate information. In some situations, however, application of the principle is not obvious. The involvement of prisoners as subjects of research provides an instructive example. On the one hand, it would seem that the principle of respect for persons requires that prisoners not be deprived of the opportunity to volunteer for research. On the other hand, under prison conditions they may be subtly coerced or unduly influenced to engage in research activities for which they would not otherwise volunteer. Respect for persons would then dictate that prisoners be protected. Whether to allow prisoners to “volunteer” or to “protect” them presents a dilemma. Respecting persons, in most hard cases, is often a matter of balancing competing claims urged by the principle of respect itself.
Beneficence. —Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. Such treatment falls under the principle of beneficence. The term “beneficence” is often understood to cover acts of kindness or charity that go beyond strict obligation. In this document, beneficence is understood in a stronger sense, as an obligation. Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms.
The Hippocratic maxim “do no harm” has long been a fundamental principle of medical ethics. Claude Bernard extended it to the realm of research, saying that one should not injure one person regardless of the benefits that might come to others. However, even avoiding harm requires learning what is harmful; and, in the process of obtaining this information, persons may be exposed to risk of harm. Further, the Hippocratic Oath requires physicians to benefit their patients “according to their best judgment.” Learning what will in fact benefit may require exposing persons to risk. The problem posed by these imperatives is to decide when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks.
The obligations of beneficence affect both individual investigators and society at large, because they extend both to particular research projects and to the entire enterprise of research. In the case of particular projects, investigators and members of their institutions are obliged to give forethought to the maximization of benefits and the reduction of risk that might occur from the research investigation. In the case of scientific research in general, members of the larger society are obliged to recognize the longer term benefits and risks that may result from the improvement of knowledge and from the development of novel medical, psychotherapeutic, and social procedures.
The principle of beneficence often occupies a well-defined justifying role in many areas of research involving human subjects. An example is found in research involving children. Effective ways of treating childhood diseases and fostering healthy development are benefits that serve to justify research involving children—even when individual research subjects are not direct beneficiaries. Research also makes it possible to avoid the harm that may result from the application of previously accepted routine practices that on closer investigation turn out to be dangerous. But the role of the principle of beneficence is not always so unambiguous. A difficult ethical problem remains, for example, about research that presents more than minimal risk without immediate prospect of direct benefit to the children involved. Some have argued that such research is inadmissible, while others have pointed out that this limit would rule out much research promising great benefit to children in the future. Here again, as with all hard cases, the different claims covered by the principle of beneficence may come into conflict and force difficult choices.
Justice. —Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of “fairness in distribution” or “what is deserved.” An injustice occurs when some benefit to which a person is entitled is denied without good reason or when some burden is imposed unduly. Another way of conceiving the principle of justice is that equals ought to be treated equally. However, this statement requires explication. Who is equal and who is unequal? What considerations justify departure from equal distribution? Almost all commentators allow that distinctions based on experience, age, deprivation, competence, merit and position do sometimes constitute criteria justifying differential treatment for certain purposes. It is necessary, then, to explain in what respects people should be treated equally. There are several widely accepted formulations of just ways to distribute burdens and benefits. Each formulation mentions some relevant property on the basis of which burdens and benefits should be distributed. These formulations are (1) to each person an equal share, (2) to each person according to individual need, (3) to each person according to individual effort, (4) to each person according to societal contribution, and (5) to each person according to merit.
Questions of justice have long been associated with social practices such as punishment, taxation and political representation. Until recently these questions have not generally been associated with scientific research. However, they are foreshadowed even in the earliest reflections on the ethics of research involving human subjects. For example, during the nineteenth and early twentieth centuries the burdens of serving as research subjects fell largely upon poor ward patients, while the benefits of improved medical care flowed primarily to private patients. Subsequently, the exploitation of unwilling prisoners as research subjects in Nazi concentration camps was condemned as a particularly flagrant injustice. In this country, in the 1940's, the Tuskegee syphilis study used disadvantaged, rural black men to study the untreated course of a disease that is by no means confined to that population. These subjects were deprived of demonstrably effective treatment in order not to interrupt the project, long after such treatment became generally available.
Against this historical background, it can be seen how conceptions of justice are relevant to research involving human subjects. For example, the selection of research subjects needs to be scrutinized in order to determine whether some classes (e.g., welfare patients, particular racial and ethnic minorities, or persons confined to institutions) are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied. Finally, whenever research supported by public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only to those who can afford them and that such research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research.
Part C: Applications
Application of the general principles to the conduct of research leads to consideration of the following requirements: informed consent, risk/benefit assessment, and the selection of subjects of research.
1. Informed Consent. —Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.
While the importance of informed consent is unquestioned, controversy prevails over the nature and possibility of an informed consent. Nonetheless, there is widespread agreement that the consent process can be analyzed as containing three elements: information, comprehension and voluntariness.
Information. Most codes of research establish specific items for disclosure intended to assure that subjects are given sufficient information. These items generally include: the research procedure, their purposes, risks and anticipated benefits, alternative procedures (where therapy is involved), and a statement offering the subject the opportunity to ask questions and to withdraw at any time from the research. Additional items have been proposed, including how subjects are selected, the person responsible for the research, etc.
However, a simple listing of items does not answer the question of what the standard should be for judging how much and what sort of information should be provided. One standard frequently invoked in medical practice, namely the information commonly provided by practitioners in the field or in the locale, is inadequate since research takes place precisely when a common understanding does not exist. Another standard, currently popular in malpractice law, requires the practitioner to reveal the information that reasonable persons would wish to know in order to make a decision regarding their care. This, too, seems insufficient since the research subject, being in essence a volunteer, may wish to know considerably more about risks gratuitously undertaken than do patients who deliver themselves into the hand of a clinician for needed care. It may be that a standard of “the reasonable volunteer” should be proposed: the extent and nature of information should be such that persons, knowing that the procedure is neither necessary for their care nor perhaps fully understood, can decide whether they wish to participate in the furthering of knowledge. Even when some direct benefit to them is anticipated, the subjects should understand clearly the range of risk and the voluntary nature of participation.
A special problem of consent arises where informing subjects of some pertinent aspect of the research is likely to impair the validity of the research. In many cases, it is sufficient to indicate to subjects that they are being invited to participate in research of which some features will not be revealed until the research is concluded. In all cases of research involving incomplete disclosure, such research is justified only if it is clear that (1) incomplete disclosure is truly necessary to accomplish the goals of the research, (2) there are no undisclosed risks to subjects that are more than minimal, and (3) there is an adequate plan for debriefing subjects, when appropriate, and for dissemination of research results to them. Information about risks should never be withheld for the purpose of eliciting the cooperation of subjects, and truthful answers should always be given to direct questions about the research. Care should be taken to distinguish cases in which disclosure would destroy or invalidate the research from cases in which disclosure would simply inconvenience the investigator.
Comprehension. The manner and context in which information is conveyed is as important as the information itself. For example, presenting information in a disorganized and rapid fashion, allowing too little time for consideration or curtailing opportunities for questioning, all may adversely affect a subject's ability to make an informed choice.
Because the subject's ability to understand is a function of intelligence, rationality, maturity and language, it is necessary to adapt the presentation of the information to the subject's capacities. Investigators are responsible for ascertaining that the subject has comprehended the information. While there is always an obligation to ascertain that the information about risk to subjects is complete and adequately comprehended, when the risks are more serious, that obligation increases. On occasion, it may be suitable to give some oral or written tests of comprehension.
Special provision may need to be made when comprehension is severely limited—for example, by conditions of immaturity or mental disability. Each class of subjects that one might consider as incompetent (e.g., infants and young children, mentally disabled patients, the terminally ill and the comatose) should be considered on its own terms. Even for these persons, however, respect requires giving them the opportunity to choose to the extent they are able, whether or not to participate in research. The objections of these subjects to involvement should be honored, unless the research entails providing them a therapy unavailable elsewhere. Respect for persons also requires seeking the permission of other parties in order to protect the subjects from harm. Such persons are thus respected both by acknowledging their own wishes and by the use of third parties to protect them from harm.
The third parties chosen should be those who are most likely to understand the incompetent subject's situation and to act in that person's best interest. The person authorized to act on behalf of the subject should be given an opportunity to observe the research as it proceeds in order to be able to withdraw the subject from the research, if such action appears in the subject's best interest.
Voluntariness. An agreement to participate in research constitutes a valid consent only if voluntarily given. This element of informed consent requires conditions free of coercion and undue influence. Coercion occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance. Undue influence, by contrast, occurs through an offer of an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance. Also, inducements that would ordinarily be acceptable may become undue influences if the subject is especially vulnerable.
Unjustifiable pressures usually occur when persons in positions of authority or commanding influence—especially where possible sanctions are involved—urge a course of action for a subject. A continuum of such influencing factors exists, however, and it is impossible to state precisely where justifiable persuasion ends and undue influence begins. But undue influence would include actions such as manipulating a person's choice through the controlling influence of a close relative and threatening to withdraw health services to which an individual would otherwise be entitled.
United States Department of Health, Education, and Welfare. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: USGPO, 1979.
Annas, George J. Law, Medicine, and the Market. New York: Oxford University Press, 1998.
Berg, Jessica W., Paul S. Appelbaum, Charles W. Lidz, and Lisa S. Parker. Informed Consent: Legal Theory and Clinical Practice. New York: Oxford University Press, 2001.
Blass, Thomas. The Man Who Shocked the World: The Life and Legacy of Stanley Milgram. New York: Basic Books, 2004.
Brody, Baruch A. The Ethics of Biomedical Research: An International Perspective. New York: Oxford University Press, 1998.
Coleman, Carl H. The Ethics and Regulation of Research with Human Subjects. Newark, NJ: Lexis-Nexis, 2005.
DeRenzo, Evan G., and Joel Moss. Writing Clinical Research Protocols: Ethical Considerations. Burlington, MA: Elsevier Academic, 2006.
Doyal, Len, and Jeffrey S. Tobias, eds. Informed Consent in Medical Research. London: BMJ Books, 2001.
Faden, Ruth R., Tom L. Beauchamp, and Nancy M.P. King. A History and Theory of Informed Consent. New York: Oxford University Press, 1986.
Freund, Paul A., ed. Experimentation with Human Subjects. New York: George Braziller, 1970.
Getz, Kenneth, and Deborah Borfitz. Informed Consent: The Consumer's Guide to the Risks and Benefits of Volunteering for Clinical Trials. Boston: CenterWatch, 2003.
Hornblum, Allen M. Acres of Skin: Human Experiments at Holmesburg Prison: A True Story of Abuse and Exploitation in the Name of Medical Science. New York: Routledge, 1999.
Iltis, Ana Smith, ed. Research Ethics. New York: Routledge, 2006.
Institute of Medicine. Committee on Assessing the System for Protecting Human Research Subjects. Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. Washington, DC: National Academies Press, 2001.
Institute of Medicine. Committee on Ethical Considerations for Revisions to DHHS Regulations for Protection of Prisoners Involved in Research. Ethical Considerations for Research Involving Prisoners. Washington, DC: National Academies Press, 2007.
King, Nancy M.P., Gail E. Henderson, and Jane Stein, eds. Beyond Regulations: Ethics in Human Subjects Research. Chapel Hill: University of North Carolina Press, 1999.
Kodish, Eric, ed. Ethics and Research with Children: A Case-Based Approach. New York: Oxford University Press, 2005.
LaFleur, William R., Gernot Böhme, and Susumu Shimazono, eds. Dark Medicine: Rationalizing Unethical Medical Research. Bloomington: Indiana University Press, 2007.
Lavery, James V., Christine Grady, Elizabeth R. Wahl, and Ezekiel J. Emanuel, eds. Ethical Issues in International Biomedical Research: A Casebook. Oxford: Oxford University Press, 2007.
Macklin, Ruth. Double Standards in Medical Research in Developing Countries. Cambridge: Cambridge University Press, 2004.
Manson, Neil C., and Onora O'Neill. Rethinking Informed Consent in Bioethics. New York: Cambridge University Press, 2007.
Menikoff, Jerry, and Edward P. Richards. What the Doctor Didn't Say: The Hidden Truth about Medical Research. New York: Oxford University Press, 2006.
Milgram, Stanley. Obedience to Authority: An Experimental View. New York: HarperCollins, 2004.
Miller, Arthur G. The Obedience Experiments: A Case Study of Controversy in Social Science. New York: Praeger, 1986.
Moreno, Jonathan D. Undue Risk: Secret State Experiments on Humans. New York: Routledge, 2000.
National Bioethics Advisory Commission. Research Involving Persons with Mental Disorders that May Affect Decisionmaking Capacity. Springfield, VA: National Technical Information Service, 1999.
Quinn, Susan. Human Trials: Scientists, Investors, and Patients in the Quest for a Cure. Cambridge, MA: Perseus, 2001.
Rollin, Bernard E. Science and Ethics. New York: Cambridge University Press, 2006.
Schmidt, Ulf, ed. Justice at Nuremberg: Leo Alexander and the Nazi Doctors' Trial. New York: Palgrave Macmillan, 2004.
Shah, Sonia. The Body Hunters: Testing New Drugs on the World's Poorest Patients. New York: New Press, 2006.
Shamoo, Adil E., and David B. Resnik. Responsible Conduct of Research. Oxford: Oxford University Press, 2003.
Smith, Trevor. Ethics in Medical Research: A Handbook of Good Practice. Cambridge: Cambridge University Press, 1999.
Weindling, Paul Julian. Nazi Medicine and the Nuremberg Trials: From Medical War Crimes to Informed Consent. New York: Palgrave Macmillan, 2004.
Beecher, Henry K. “Ethics and Clinical Research.” New England Journal of Medicine 274 (1966): 1354–1360.
Vollmann, Jochen, and Rolf Winau. “The Prussian Regulation of 1900: Early Ethical Standards for Human Experimentation in Germany.” IRB: Ethics and Human Research 18, no. 4 (July–August 1996): 9–11.
Eric v.d. Luft