Human Subjects Research
In the field of ethical issues in scientific research, the two most controversial topics are the use of humans as research subjects and the use of non-human animals as research subjects. Each of those debates goes back more than a hundred years, to the final decades of the nineteenth century, and thus has a substantial literature that has developed a sophisticated level of discussion. This article briefly summarizes the history of the field, then explains some of the regulations that have resulted, and closes by identifying some of the most important future issues.
By 1900 the medical and scientific communities clearly appreciated the ethical issues that would have to be resolved before a person could be used as a subject in experiments. In Prussia a ministerial directive issued in 1900 restricted research to the use of persons who could benefit from the research, who were told in advance of the risks of participation, and who gave their consent. This was a response to well-known experiments with the leprosy bacillus on unwitting subjects in Prussia around that time.
At around the same time in Cuba, the U.S. Army physician Walter Reed (1851–1902) conducted yellow fever studies but required that both soldiers and civilians volunteer first, be informed of the risks (including the risk of death), and sign a consent form. The form was written in both English and Spanish. This is said to have been the first use of a signed consent form, and it could also be considered the first example of ethical international research informed by cultural competence. Reed's caution was a response to an experiment in Italy in which five persons were infected with yellow fever without being told, and to an initial experiment in Cuba in which two of Reed's colleagues intentionally infected themselves, leading to the death of one of them.
In light of the degree of awareness shown at the beginning of the century, it is surprising that by mid-century some of the most barbaric things ever done in the name of science would come to pass. A combination of factors contributed to that decline in standards, including racism and anti-Semitism, exacerbated by nationalism and xenophobia; those problematic social elements were long established but were pushed to extremes by World War II.
Three examples of well-known and frequently cited unethical research involving human subjects occurred in the middle third of the twentieth century. The Tuskegee experiments, observing the consequences of untreated syphilis in American blacks, began in 1932, when there was no effective treatment, but continued until 1972, long after the discovery of penicillin. The research done by Nazi doctors was by far the most brutal and murderous. Those experiments included testing the limits of human endurance up to and including death from causes such as bullet and knife wounds; decompression at high altitudes, tested by putting people in decompression chambers and measuring when their lungs burst; and hypothermia, tested by keeping subjects immersed in ice water. Japanese experiments in the notorious Unit 731 were just as grievous as the Nazi experiments, though less well known. The thalidomide tragedy revealed the importance of oversight of drug trials and the problems of self-policing by pharmaceutical companies with a financial investment at stake. That experience helped propel the reforms that emerged from the U.S. congressional hearings known as the Kefauver hearings.
Ethically disturbing human experiments were done well after that period. Two examples in the United States were performed on institutionalized populations: at the Willowbrook State School on Staten Island, New York, children were deliberately infected with hepatitis to test gamma globulin treatment; and at the Jewish Chronic Disease Hospital in Brooklyn, New York, live cancer cells were injected into patients, without any explanation of what was in the injections, in order to trace differences in how the cells were rejected. These were among twenty-two experiments described by Henry K. Beecher in an influential 1966 paper in the New England Journal of Medicine, "Ethics and Clinical Research."
There are many ironies in this history. For example, the most brutal and murderous research was done in Germany, the country that had promulgated the first modern code for ethical research. And the United States, the country that provided all the judges and all the lawyers at the Nuremberg Doctors' Trial (1946–1947) that led to the Nuremberg Code (1947), acted as if the code did not apply to its own citizens in the years after World War II. This history seems to show that some lessons must be learned and relearned periodically, and that only revelations of scandals and abuses have the power to restrain research.
The last third of the twentieth century saw the codification of many of the lessons that had been learned and left a number of areas of great import that are still very much disputed. Several of those lessons have been accepted widely and codified into U.S. and international law.
In 1964 the original Declaration of Helsinki was adopted by the World Medical Association. It reiterated the famous first line of the Nuremberg Code, stating that the voluntary consent of the human subject is absolutely essential, though it still left it to the researcher to decide what to say, how much to disclose, and how to document the informed consent process. The declaration has been revised and strengthened a number of times, most recently in 2000. Its most important difference from U.S. regulations involves placebo controls, which generally are encouraged in the United States (especially by the Food and Drug Administration) but discouraged (though not forbidden) in the Declaration of Helsinki.
In 1974, as a result of the public reaction to the Tuskegee experiments, the U.S. Congress authorized the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission's work led to the publication of the Belmont Report (1979) and the issuance in 1981 of federal regulations known as 45 CFR 46. Those regulations required prior review of research protocols by independent committees known as Institutional Review Boards (IRBs). This was modeled on prior peer review, which had been required at the National Institutes of Health (NIH) since 1965 and for all NIH-sponsored research since 1966. The basic protections of the regulations (outlined in subpart A) were consolidated into "the Common Rule" in 1991 and adopted by sixteen federal agencies.
IRB oversight, in contrast to peer review, required that each board include at least one nonscientist and one community member and that the board not be composed entirely of men or entirely of women. It was an important innovation, although many people still have concerns about how much real independence can be expected when most members of the committee are employees of the same institution where the research is being done.
Before approving a proposed research protocol, the IRB must ascertain that the research is scientifically valid (the goals are worthwhile and achievable by the methods proposed) and that the risks to the subjects are kept to a minimum and justified by the potential benefits to be gained. It also must determine that the selection of subjects is equitable (no groups are excluded without good reason); that the subjects have been recruited without deception or coercion; that their confidentiality is adequately protected; that they have been fully informed about the risks and have given voluntary, documented consent; that proper steps have been taken to ensure that they understand all the information they have been given; and that they understand that they can withdraw from the research at any time. The IRB is also responsible for monitoring the research and has the power to stop any study that is dangerous to the participants, a task often assigned to a separate Data and Safety Monitoring Board (DSMB).
An IRB has the responsibility to ensure the voluntary participation of the research subjects as well as their safety. Thus, IRBs often focus on the informed consent form that will be given to potential subjects, to ensure that the risks are portrayed realistically and not underplayed and that the subjects are not misled about the likelihood of benefit. Terms such as "doctor," "medicine," and "therapy" can be used by researchers without any intent to deceive, yet can be read by subjects as meaning that they are enrolled in an experiment whose purpose is to help them rather than to improve the understanding of a drug or disease process. This is referred to as the therapeutic misconception. The same concern for language has led some IRBs to suggest using the term "participants" instead of "subjects" as a reminder to the researcher that she is seeking the cooperation of well-informed volunteers, not passive recruits who do not ask questions. The regulations also require that extra attention be paid before any members of certain groups known as vulnerable populations are enrolled. These groups include children, the mentally handicapped or mentally ill, prisoners, pregnant women, and fetuses.
Ironically, since the 1990s the Food and Drug Administration has recognized that drugs have been tested disproportionately on white men and that it would be scientifically helpful to have more studies involving women, minorities, and children, to test for variations in effectiveness and safety. However, the history of abuse has probably made researchers hesitant to enroll persons in these categories, not to mention the distrust that members of these groups may feel given the historical record at Tuskegee, Willowbrook, and the Jewish Chronic Disease Hospital.
All government-funded research with human subjects is required to be reviewed by an IRB. This includes the behavioral and social sciences as well as the biomedical sciences. Many of the same ethical issues arise, though the potential harms may be of a psychological nature, such as risks to privacy or self-image, rather than a physical one. A concern that may occur with greater frequency in psychology is that fully informing a subject of the nature of the research could bias the answers the subject gives. Thus researchers may seek to reveal less of the purpose of a study than would be the case in medical research. This type of purposeful deception must be justified to the IRB, along with assurances that any risks to the subjects are minimal. Assessing this kind of risk is difficult, as shown by the fact that the highly innovative and influential Milgram experiments conducted in the 1960s are deemed controversial by some commentators to this day. The primary harm to the subjects was a loss of self-esteem as they reflected on their own willingness to submit to the orders of an authority figure and inflict pain on strangers. But it would not have been possible to do the experiment had the consent process told them in advance that the strangers in apparent pain were only actors. An honest debriefing, with counseling if necessary, may help alleviate possible harms in cases where some initial deception cannot be avoided.
This also brings up the question of non-government-funded research. Much research on pharmaceuticals and medical devices is regulated by the FDA and so falls under rules comparable to the Common Rule. But beyond government funding sources and FDA oversight, there is currently no review required in the United States for privately funded research. Should private enterprise, from marketing research to genetics and biotechnology, be unencumbered by regulations whose intent is to ensure the safety of citizens? Should civil rights and human rights be allowed to set restrictions on private companies in cases where, as yet, little risk has been identified? When one pictures marketing questionnaires, it is easy to be swayed toward a libertarian distrust of unnecessary and intrusive government regulation. But when one considers the potential profits from genetics and biotechnology research, there may be more reason to consider preemptive regulation, such as already exists in the state commissions of many European Union countries concerning IVF.
Soon after the Belmont Report, the Council for International Organizations of Medical Sciences (CIOMS) produced a report on the special issues that arise in international research. The beginning years of the twenty-first century have seen growth in funding for international research. Although some of this increase could be due to economic globalization and the lessening of national identity for multinational corporations, there may be more ominous motivations. For example, funding sources for pharmaceutical research are often in first world countries such as the United States, the United Kingdom, France, Germany, Belgium, and Switzerland. However, when an even larger proportion of that research is done in developing nations, it could be because of lax regulations (including ethical regulations) in the developing world.
A second topic that inevitably will grow in importance is the range of new research resulting from the Human Genome Project. That project was completed in less time than originally planned and has provided an enormous amount of raw data from which biologists hope to build a deeper understanding of normal development and pathogenesis. However, all genetic information has ethically complex properties; for example, it provides information about the relatives of research subjects as well as about the persons who volunteered to be involved in the research.
Another challenging ethical issue unique to genetics is the possibility of curing a disease by means of germline gene therapy, removing the disease from human history but at the risk of altering the human genome. Similarly, genetic interventions have the potential to blur the intuitive distinction between medical treatment for an illness or dysfunction and enhancement of traits that a person may find unsatisfactory yet that fall within the normal human range. Either way, we are on the cusp of gaining knowledge of the human genome that would allow genetic engineering with the purpose of improving the race (in Nazi terminology, creating a new master race). Might we soon enter a phase of deliberate evolution, or worse, develop into two sub-species, the feral and the enhanced?
The third topic of concern is stem cell research and the related issue of human cloning. Advances in in vitro fertilization (IVF) and other assisted reproduction technologies (ARTs) have made the possibility of human cloning real. Many species of mammals already have been cloned, and it may be only a matter of time before a human is cloned. Although some people have argued that this should be considered an alternative technique for infertile couples to have a child, it has been outlawed in many countries as threatening the dignity inherent in the uniqueness of each life.
Stem cell research, which would find its best source of human embryonic stem cells in the excess embryos created by IVF programs, also has been opposed by critics who believe it violates the respect owed to human embryos or treats them as means rather than ends. However, attempts at broad bans have been less successful than with cloning for a number of reasons: The therapeutic potential could benefit many more people, and the majority of scholars and researchers in both ethics and developmental biology believe that there is a fundamental moral difference between a preimplantation embryo and an embryo or fetus that has been implanted successfully in a human womb.
Beyond issues related to transnational experimentation, genetics, and stem cells research, one might suggest that as the scientific and technological enterprise advances, all people become the subjects of scientific research. Mike Martin and Roland Schinzinger (1996) have argued for understanding engineering as a form of social experimentation. But even more broadly, the increasing use of medicines that often create therapeutic dependencies, unregulated uses of IVF and frozen embryos, and the popularization of plastic surgeries and advanced prosthetics all point toward people treating themselves (not just scientists treating people) as human subjects in scientifically based actions the full outcomes of which remain uncertain.
JEFFREY P. SPIKE
Altman, Lawrence K. (1998). Who Goes First? Berkeley: University of California Press. A book that is fun to read, on an approach to choosing research subjects that was once quite honorable but is now strongly discouraged: try it on yourself first.
Dunn, Cynthia McGuire, and Gary L. Chadwick. (2004). Protecting Study Volunteers in Research: A Manual for Investigative Sites, 3rd edition. Boston: Thomson CenterWatch. A very complete user's manual for IRB coordinators and members.
Emanuel, Ezekiel E.; Robert A. Crouch; John D. Arras; Jonathan D. Moreno; and Christine Grady. (2003). Ethical and Regulatory Aspects of Clinical Research: Readings and Commentary. Baltimore: Johns Hopkins University Press. An excellent collection of primary sources; includes the Belmont Report, the Declaration of Helsinki, the CIOMS guidelines, and the Henry K. Beecher article mentioned in the text.
Martin, Mike W., and Roland Schinzinger. (1996). Ethics in Engineering, 3rd edition. New York: McGraw-Hill.
Milgram, Stanley. (1974). Obedience to Authority: An Experimental View. New York: Harper and Row.
Shamoo, Adil E., and David B. Resnik. (2003). Responsible Conduct of Research. New York: Oxford University Press. A useful textbook for a graduate-level course in research ethics; includes many topics besides human subjects.
Zoloth, Laurie, Jane Maienschein, and Ronald M. Green. "Ethics of Stem Cell Research: A Target Article and Open Peer Commentaries." American Journal of Bioethics 2(1): 1–59. Three introductory target articles are followed by nineteen diverse short commentaries.