Research Policy: I. General Background

Since the 1960s the challenges of human research have received increasing attention and have caused a great deal of concern. In 1966 Professor Henry Beecher captured the attention and aroused the ire of the academic research community in the United States with the disclosure of what he considered unethical research practices at some premier research facilities. Beecher initiated a cycle of disclosure and reaction that has characterized the country's approach to ensuring the well-being of participants in research for more than four decades (Papworth).

Early Criticisms of Research Procedures

Beecher's article came at a time when public investment in research and development, particularly in biomedicine and technology, was growing at an unprecedented rate and the prospects for medicine and the future of biotechnology appeared limitless. The boom in private, corporate-sponsored clinical trials had not yet materialized but was not beyond people's imagination. The disturbing events at the Jewish Chronic Diseases Hospital in New York (Katz), in which a physician scientist injected live cancer cells into unwitting recipients, had been noted by Dr. James Shannon, at that time the director of the National Institutes of Health.

Prompted by that disclosure, in 1966 Shannon moved to require for the first time a mechanism for peer review of proposed scientific research by individuals concerned primarily with the well-being and safety of research subjects. However, much of the scientific community remained oblivious or insensitive to the apparent disregard for the safety and the rights of subjects in the research practices of that period. Only gradually did the scientific community begin to realize that scientists could not be allowed on their own to determine how they would conduct experimental studies on other human beings.

The First Cycle of Regulations

Beecher's article, together with the monumental work subsequently published by Jay Katz just as the U.S. Public Health Service syphilis study in rural Alabama came to light (Tuskegee Syphilis Study Ad Hoc Advisory Panel), evoked strong emotional reactions among scientists, the public, and government regulators. That scientists working for the government could intentionally, for research purposes, allow poor African-American men to live with untreated syphilis for thirty years after the discovery of safe, effective treatment was appalling. Studies of the transmission of hepatitis in institutionalized children at the Willowbrook School (see Katz) underscored the need for special societal and legal protections of those incapable of protecting their own interests, including children. Many people called for new government regulations to protect the safety of research subjects, and the government responded. Within two years Congress passed the National Research Act of 1974, establishing the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and laying a course for regulatory action. The act required the U.S. Department of Health, Education and Welfare, the predecessor of the Department of Health and Human Services (DHHS), to codify its policy for the protection of human subjects in the form of regulations.

Almost immediately the perception of scientists and physicians who worked in human research was altered. Activities that once were held in the highest esteem, conducted by individuals who were trusted and respected as much as anyone in society, suddenly were cast in an unflattering light as potential sources of injury and harm from which individuals needed protection despite the potential benefit to humankind of those activities.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which conducted its deliberations over a period of several years before it was disbanded in the late 1970s, attempted to define a set of fundamental ethical principles underlying the responsible conduct of human research, first for the general population and subsequently for special populations deemed to need special protections, notably children, prisoners, pregnant women, and fetuses. The Commission also recognized the special challenges posed by research involving individuals with mental illnesses and impaired decision-making capability, many of whom were institutionalized at the time of its discussions.

The Commission did not state a preference for any particular philosophy or ideology, although traditional Western values of individual autonomy and justice were reflected prominently in its Belmont Report. The justification of human experimentation, with its attendant exposure of individuals to uncertain risks for little or no direct benefit to themselves but for the benefit of science and society, is fundamentally utilitarian. At the time of the Commission's work, feminism, consumerism, and communitarian ethics were not yet part of mainstream thinking and thus were not reflected prominently in the debate. The lack of universality of ethical principles across cultures may limit the generalizability of the Commission's recommendations.

Today most parties to the human research process in the United States are at least aware of the Commission's Belmont Report and are able to name the principles of respect for persons, beneficence, and justice discussed therein (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1979), but this is a relatively recent development resulting primarily from the requirement imposed by the National Institutes of Health (NIH) that all individuals who participate in human research receive training in research ethics and regulatory requirements (National Institutes of Health). The fact that members of the research community would seek training in the responsible conduct of human research only as a condition of receiving funding from a federal agency is an unfortunate commentary on the way in which the research community establishes priorities. This pattern of behavior is what many critics and scholars of the human research process have come to expect, and it has not been lost on legislators.

The bioethicist Carol Levine once said that human research ethics were "born in scandal and reared in protectionism." That quip often is repeated because it resonates with current perceptions of reality. That statement captures the continuing cycle of disclosure and reaction that has characterized regulatory activities at the federal level, beginning with the amendment of the Public Health Service Act in 1974 and the subsequent promulgation of revised regulations by the DHHS for the protection of human subjects in 1981 (Code of Federal Regulations, Title 45, Part 46).

Although frequently cited as a framework for the ethical conduct of human research, those regulations do not constitute a set of ethical principles. The regulations are a set of rules established under the Public Health Service Act that attempt to operationalize the ethical principles set forth in the Belmont Report. They establish the minimum necessary requirements for implementing and maintaining a system for the protection of human subjects in research, including formal requirements for the establishment and operations of institutional review boards and the processes for obtaining and documenting informed consent, as recommended by the National Commission in 1978.

The DHHS expended considerable effort in crafting those regulations so that they would allow enough flexibility to encompass the wide variety of biomedical, behavioral, and social research it supported. The regulations reflected a well-intended effort to ensure that the ethical principles delineated in the Belmont Report would be applied in a uniform and appropriate manner by all recipients of federal research funds. Unfortunately, the DHHS was unable to establish a uniform set of regulations governing the oversight of all human research under its jurisdiction, most notably excluding privately sponsored clinical trials of new drugs, devices, and biologics performed under the regulatory authority of the U.S. Food and Drug Administration (FDA), which operates under a separate statutory authority, the Federal Food, Drug, and Cosmetic Act. Those studies are covered by separate regulations (Code of Federal Regulations, Title 21, Parts 50 and 56) that are substantially similar to, but more narrowly focused on clinical investigation than, the Public Health Service regulations. The lack of a uniform oversight process and standards has probably contributed to inconsistent and ineffective implementation of, and noncompliance with, the regulations. This situation has been and is likely to continue to be a source of confusion and frustration to individual investigators, sponsors, institutions, and review boards that attempt in good faith to comply with the requirements of the often overlapping regulations and oversight processes that apply to their activities.

The Common Rule

The situation within the DHHS is compounded across the other federal agencies. In 1991 sixteen agencies adopted 45 CFR 46 Subpart A, the main body of the DHHS's regulations, as signatories to the Federal Policy for the Protection of Human Subjects, informally known as the Common Rule. Many observers, including the National Bioethics Advisory Commission (2001c), have noted that it took a full decade for some of the federal agencies to sign on to those important regulations, yet not all federal agencies have done so, including some that engage in or support human research. Those that have adopted the Common Rule do not always agree fully on the interpretation and application of the regulations, and some continue to impose specific additional regulatory and administrative requirements of their own. Thus, research entities and individuals have been left to reconcile the differences as best they can, often with little specific guidance, support, and cooperation from the various federal agencies involved in the support and oversight of human research activities.

Both investigators and institutions, including their review boards, have complained that the complexity and inflexibility of the regulations have made it difficult for them to comply. Although these are contributing factors, there are more likely explanations for the widespread noncompliance discovered when the former Office for Protection from Research Risks (OPRR) began a series of not-for-cause site visits to major research institutions across the country in the late 1990s. The 1998 reports from the Office of Inspector General offered insight into the nature of the problems in the system, noting that institutional review boards "review too much too quickly, with too little expertise" (p. 5). The reports also note the inadequacy of the resources provided to support their work.

Problems in the Implementation of the Regulations

Apparently, while implementing the requirements of the regulations, institutions that received research support failed to invest adequately in robust programs for the protection of human subjects despite dramatic growth in their research budgets and their assurances to the government that they would do so.

At most of those institutions funds to support programs for human research protection were allocated to so-called indirect costs as an administrative activity. Within the indirect cost pool the allocation for administration and facilities costs had been capped by the federal Office of Management and Budget (OMB) at 26 percent of the direct costs of research after some institutions had been discovered using those funds for unallowable expenses. As healthcare reform began to affect the flow of clinical revenues that could be used to subsidize research activities, funding for programs for human research protection was marginalized further and in many cases minimized. The overriding goal seemed to be to achieve regulatory compliance at the lowest possible cost. Accordingly, many institutions relied heavily on volunteers (or "conscripts") and part-time personnel, many of whom had little or no formal training in research ethics or regulatory affairs, to fulfill those important responsibilities.

Although it is easy to lay the blame for this situation on the research institutions, that would be unfair. From the outset research institutions, which did not ask for those regulations, considered the required implementation of programs for human research protection an unfunded or at least underfunded federal mandate. The dramatic growth in corporate-sponsored clinical trials that rely heavily on those programs, which was only beginning when the regulations first were adopted, may warrant the consideration of a mechanism through which industry can offset the associated costs at arm's length from the review and approval process as part of a comprehensive funding scheme for human research oversight.

However, without knowledge of the actual costs associated with implementing and maintaining effective programs for the protection of human research subjects, the allocation of appropriate funding for those programs is unlikely if not impossible. Few credible attempts have been made to measure those costs since the 1970s. The little information that is available reflects at best an estimate of what was being expended to support programs of questionable efficacy. Because there is no well-established approach to measuring efficacy, it is unlikely that a rational formula for supporting those programs will emerge in the near future despite the pressing need to develop one.

Public and Private Reports

The current state of dissatisfaction and anxiety that affects almost everyone in the human research enterprise is not a new phenomenon. Almost immediately after the adoption of the DHHS's regulations for the protection of human subjects in 1981, the first of what was to become a long series of reports on the challenges of human studies was issued in 1982 by the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. In the same year a report was issued by the Council for International Organizations of Medical Sciences (CIOMS). That report was followed in 1995 by the final report of the Advisory Committee on Human Radiation Experiments, several reports from the National Bioethics Advisory Commission (1998, 1999, 2001a, b, and c), the General Accounting Office (1996, 2000, 2001), the Office of Inspector General of the DHHS (1998–2001), the recently disbanded National Human Research Protections Advisory Committee (2001), and the Institute of Medicine (1994, 2002). Many private organizations have issued reports or guidelines, including the Association of American Universities (2001), the Association of American Medical Colleges (2001), the American Association of University Professors (2001), the American Academy of Pharmaceutical Physicians (2001), the American Medical Association (2000), the American Society for Gene Therapy (2000), the American Society of Clinical Oncology (2002), and the Association of Clinical Research Professionals (2001).

This array of reports covers ethics, regulatory affairs, financial relationships, conflicts of interest, and the responsible conduct of research. Generally, all the reports recognize and emphasize the dependence of human research on the willingness of individuals to participate voluntarily as subjects, acknowledging the key role that trust plays in the relationship between investigators and subjects. They acknowledge the fact that past and present events have undermined that sense of trust and that steps must be taken to rebuild and maintain it. They all offer recommendations, most of which are consistent or at least compatible, yet most observers agree that little progress has been made since the 1970s in implementing those recommendations apart from the adoption of the regulations and the implementation of institutional review boards and informed consent as the "twin pillars" of protection for human subjects. Some people think that those recommendations afford more of an impediment to research than effective protections for human subjects. Many are perplexed that it seems so hard for the scientific community and the government to do what is morally and legally appropriate when doing so is clearly in the interest of science and society.

The Ethical Issues in Human Research

There is no simple solution to this problem, which involves a complex interplay of ethics, economics, and expediency in a system affected by people, politics, and profits. The most fundamental issue is the moral dilemma inherent in human research: In all cases of human experimentation individuals are subject to risks for the benefit of science and society. Human research is an endeavor that exploits some individuals for a greater good, but that exploitation is considered acceptable and even justifiable as long as participation is voluntary and informed and the research is conducted within the well-established ethical framework of respect for persons, beneficence, and justice.

Human research entails a dynamic tension between the interests of those who do research and the interests of those on whom research is done. Can science and society justifiably place their own interests above the interests, rights, and well-being of research subjects? More correctly, should the interests of science and society prevail over those of individual subjects? Even if one identified compelling circumstances in which it would be ethically permissible to do that, those cases probably would be rare. However, it is tempting and easy to allow the pursuit of knowledge, the lure of fame and fortune, advantage in the marketplace, and the chance of academic promotion to color one's judgment and influence one's conduct.

The events of the past three decades in which subjects have been harmed and misconduct has been revealed have shown that not all scientists, institutions, and sponsors are immune to temptation. Breaches of responsible conduct may go unnoticed and unreported, but when they are serious and are discovered and criticized, they evoke a host of reactions, including sorrow, anger, indignation, and defensiveness. The consequences of those breaches are far-reaching and long-lasting, leaving no party untouched. The corrective actions that follow may provide long-term benefits but are painful and costly in both human and financial terms.

The Deaths of Jesse Gelsinger and Ellen Roche

No two cases more aptly illustrate these points than the deaths of Jesse Gelsinger at the University of Pennsylvania in 1999 and Ellen Roche at Johns Hopkins University Medical Center in 2001. Gelsinger, suffering from a genetic metabolic disorder, died in a gene-transfer study just days after receiving an infusion of a corrected gene attached to a virus intended to introduce the new gene into his liver cells. Roche was a normal, healthy young woman participating in a study of the mechanisms of airway responsiveness, a study that required inhalation of a chemical that blocked certain pathways of nerve transmission. The second case has been described (Steinbrook) and analyzed (Kreiger and De Pasquale) extensively. The death of Jesse Gelsinger was a critical event because it catalyzed a coalescence of will in the government and the public to face the problems of human research directly, particularly the potential impact of financial relationships and conflicts of interest on the well-being of research participants (Shalala).

The Roche case eventually may have an even more far-reaching impact. It is particularly relevant because it involves a failure to protect research participants adequately not only at the level of an individual study but also at the level of an institutional system, as judged not only by government regulators but also by an external evaluating committee of peers selected by the institution. In this case attention was focused not just on an individual's untimely death, the failings of a single investigator, the shortcomings of an institutional review board, and a deficient institutional system for the protection of human subjects: The focus ultimately became the culture of the institution and, more generally, the culture of science as it relates to the responsible conduct of human research. The message here is the need to move beyond a culture of compliance to a culture of conscience in science (Koski, 2003a).

Resistance to Change

Since the Renaissance the pursuit of knowledge through science has been regarded as a noble profession. Recognizing the importance of the pursuit of truth in science, one might expect scientists to be intolerant of those among them who fail to respect truth or undermine the integrity of science. However, in this regard perception and reality sometimes diverge. Statements of ethical principles and codes of conduct have done much to guide the scientific community, along with the medical profession, in the pursuit of truth, but some members of those professions betray the truth. When a profession is willing to tolerate rather than hold accountable those whose behavior violates the principles and traditions of the profession, the credibility of the principles on which the profession is established is undermined. Pseudoaccountability, a term coined by Jerome Kassirer, describes a profession that traverses the road of good intentions but never arrives at its destination.

Accounts of Beecher's efforts to publish in the medical literature his concerns about the ethics of research studies conducted in the early 1960s suggest that it was not an easy task. Initial rejections finally gave way to an agreement with the editor of the New England Journal of Medicine to publish the paper only after Beecher agreed to limit the number of cases to a small fraction of those about which he was concerned and to withhold identification of the investigators and their institutions. As a respected physician, scientist, and professor at Harvard Medical School, Beecher demonstrated courage and integrity in attempting to bring those issues before his peers, but many in the scientific community did not receive his paper enthusiastically.

One can only wonder how human research might be different today if the scientific community at that time had responded with a concerted effort to achieve a higher standard of conduct, promoted integrity with an expectation that all who engage in research involving other human beings would act in accordance with the highest ethical standards, and shown a willingness to hold accountable those who did not live up to those standards. If the scientific community rather than the government had taken action to ensure the well-being of research participants not because it was required to do so by regulations but out of concern for the integrity of science, the continuing pursuit of knowledge, and an earnest desire and commitment to prevent harm to fellow human beings while honoring the rights of others, there might not be regulations on the books requiring them to do so.

Laws and regulations are one way in which a society attempts to influence the behaviors of its citizens. Regulations may be used to prescribe certain actions and prohibit others. However, regulations can be a double-edged sword.

In a 2003 article published in the Emory Law Journal, Robert Gatter discusses the normative and expressive functions of the law in the context of regulations that address continuing concerns about financial conflicts of interest in human research. Many laws are directed not toward criminal activity but toward establishing a recognized norm of conduct, expressing that normative message through regulations and guidance. The regulations for the protection of human subjects in research are analogous to those involving financial conflicts in that they are intended to establish a norm of conduct for investigators and institutions through the expression and application of the ethical principles delineated in the Belmont Report. Laws, however, do not always achieve their desired goals, particularly if the regulated community resists accepting the normative standard and the implementation or enforcement provisions make it unlikely that noncompliance will be discovered or punished. As Gatter points out, regulations can evoke "juridification," in which those who are subject to regulations try to find ways to avoid or circumvent them rather than embrace them. Although scientists and physicians may be no less hostile to regulation of their activities than are others, one might expect them to accept such regulation more readily in light of their codes of professional conduct, which already express values compatible with those embodied in the regulations.

Since the 1970s researchers have worked within a regulatory framework whose requirements the regulated parties too frequently have viewed as unnecessarily complicated, costly, and onerous administrative impediments to their research activities. That viewpoint, which contrasts markedly with the values that society traditionally associates with scientists and the pursuit of knowledge, may reflect changes in the culture of science that occurred in the second half of the twentieth century, or it may indicate significant juridification, to use Gatter's terminology, of the human research community in response to the government's imposition of regulations after a limited number of high-profile breaches of responsible conduct.

There is no question that the American system for the responsible conduct of human research and the protection of human subjects is undergoing dramatic change. It may be far more difficult to effect cultural change that requires behavioral changes consistent with acceptance of fundamental values than it is to overcome and reverse the juridification that has occurred in response to failures in the normative and expressive functions of the applicable law and regulations.

New Initiatives

The death of Jesse Gelsinger launched a new cycle of reform in human research and the protection of human subjects. Although the initial calls were for more stringent regulations and penalties, the DHHS, with strong leadership from the former secretary, Donna Shalala, took a different course. In June 2000 the department established a new Office for Human Research Protections (OHRP), replacing the Office for Protection from Research Risks. The new office was placed within the office of the secretary to give it the visibility and autonomy necessary to lead a major remodeling effort to improve the performance and effectiveness of the national system for the protection of human subjects in research. The strategy and approach taken by the DHHS were outlined in September 2000 in testimony delivered before the House Oversight Committee on Veterans Affairs (Koski, 2000).

Those initiatives mark a shift from a reactive, compliance-focused approach to the oversight of human research toward a proactive model focused on the prevention of harm. Recognizing the widely varying and sometimes idiosyncratic behavior of local institutional review boards, the new approach emphasizes education and support as the umbrella under which activities aimed at improving performance are conducted (Figure 1). The goal of current efforts is to move from an approach focused on achieving regulatory compliance to one that attempts to achieve excellence and trust. In this model activities to ensure the well-being of research participants are conducted in two primary domains: the compliance domain and the performance domain.

The compliance domain includes both for-cause investigations and not-for-cause evaluations. Both types of compliance oversight are intended to ensure accountability and fall generally into the class of quality control and quality assurance processes. In this model the identification of deficiencies should focus on system failures in an attempt to strengthen processes, reserving punishment or sanctions for cases of gross negligence or willful disregard for regulatory requirements and thus avoiding the counterproductive impact of a reactive, juridifying approach to regulatory enforcement. Traditionally, these activities have been conducted primarily by government oversight agencies or parties acting on their behalf.

Activities within the performance domain generally are classified as quality improvement activities, including continuous quality improvement, largely in the form of consultation and feedback on actual performance. Objective validation processes such as accreditation of institutions or programs and professional certification of individuals provide empirical evidence of proficiency and recognition of excellence. Education and support are overarching activities that work to improve the effectiveness and efficiency of the system. Realization of positive results and appropriate validation of excellence provide incentives to shift resources toward the performance domain. Ultimately, prevention of harm to human participants through responsible conduct builds trust and promotes public confidence in the research process, enhancing voluntary participation in research. Those activities are focused on improving, measuring, and validating the performance of the system in its entirety, utilizing proven continuous quality improvement methods to achieve those goals (Institute of Medicine, 2002).

In the past the government generally waited until it received a complaint from an outside source or a report from one of the institutions under its regulatory authority before initiating an investigation into the circumstances of an event. Those for-cause investigations, many of which were conducted through correspondence alone, were the mainstay of the OPRR's oversight activities. The bulk of its resources was dedicated to the review, negotiation, and approval of assurances, documents that entities receiving federal research support are required to file as a condition of that support. Too often those were empty assurances, paper commitments insufficiently backed by substantive actions and resources.

The creation of the OHRP added significant new resources to the office and a reorganization plan that redirected those resources toward enhanced educational programs and the development and implementation of a new quality improvement program through which the office provides consultation and support for institutions that seek to improve their programs for human research protection.


To a large extent that redistribution of resources was made possible by a dramatic simplification of the assurance process. Rather than continue the long-standing practice of negotiating and processing multiple types of assurances and interagency agreements, the office adopted a single standardized federal assurance that could be utilized by all participating federal agencies and was consistent with the original intent of the Common Rule. Significant progress is being made toward establishing a more effective system for the protection of human subjects in research (Koski, 2003b) despite the fact that the regulations adopted since the 1970s remain essentially unchanged. In large measure this progress is a direct result of a renewed willingness in the research community to adopt a more proactive, responsible approach toward the conduct of its activities. Whether this progress continues will be a principal determinant of the nature and scope of future regulatory actions in the area of human research.

Greg Koski

SEE ALSO: Aging and the Aged: Healthcare and Research Issues; AIDS: Healthcare and Research Issues; Autoexperimentation; Autonomy; Children: Healthcare and Research Issues; Commercialism in Scientific Research; Embryo and Fetus: Embryo Research; Empirical Methods in Bioethics; Genetics and Human Behavior: Scientific and Research Issues; Holocaust; Infants: Public Policy and Legal Issues; Informed Consent: Consent Issues in Human Research; Law and Bioethics; Mentally Ill and Mentally Disabled Persons: Research Issues; Military Personnel as Research Subjects; Minorities as Research Subjects; Paternalism; Pediatrics, Overview of Ethical Issues in; Public Policy and Bioethics; Prisoners as Research Subjects; Race and Racism; Research, Human: Historical Aspects; Research Methodology; Research, Multinational; Research, Unethical; Responsibility; Scientific Publishing; Sexism; Students as Research Subjects; Virtue and Character; and other Research Policy subentries

BIBLIOGRAPHY
Advisory Committee on Human Radiation Experiments. 1995. Final Report: Advisory Committee on Human Radiation Experiments. Washington, D.C.: U.S. Government Printing Office.

American Association of University Professors. 2001. "Protecting Human Beings: Institutional Review Boards and Social Science Research." Academe 87(3): 55–67.

American Medical Association, Council on Ethical and Judicial Affairs. 2000. Code of Medical Ethics: Current Opinions. Chicago: Author.

Association of American Medical Colleges. 2001. Protecting Human Subjects, Preserving Trust, Promoting Progress—Policy Guidelines for the Oversight of Individual Financial Interests in Human Subjects Research. Washington, D.C.: Author.

Association of American Universities, Task Force on Research Accountability. 2001. Report on Individual and Institutional Financial Conflicts of Interest. Washington, D.C.: Author.

Beecher, Henry K. 1966. "Ethics and Clinical Research." New England Journal of Medicine 274(24): 1354–1360.

Council for International Organizations of Medical Sciences. 1993. International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva: Author.

Gatter, R. 2003. "Walking the Talk of Trust in Human Subjects Research: The Challenge of Regulating Financial Conflicts of Interest." Emory Law Journal 52(1): 327–402.

General Accounting Office. 1996. Scientific Research: Continued Vigilance Critical to Protecting Human Subjects. Report No. GAO/HEHS–96–72. Washington, D.C.: Author.

General Accounting Office. 2000. VA Research: Protections for Human Subjects Need to Be Strengthened. Report No. GAO/HEHS–00–15. Washington, D.C.: Author.

General Accounting Office. 2001. Biomedical Research: HHS Direction Needed to Address Financial Conflicts of Interest. Report No. GAO/HEHS–02–89. Washington, D.C.: Author.

Institute of Medicine. 2001. Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. Washington, D.C.: National Academy Press.

Institute of Medicine. 2002. Responsible Research: A Systems Approach to Protecting Research Participants. Washington, D.C.: National Academy Press.

Kassirer, Jerome. 2001. "Pseudoaccountability." Annals of Internal Medicine 134: 587–590.

Katz, Jay. 1972. Experimentation on Human Beings. New York: Russell Sage Foundation.

Koski, G. 1999. "Resolving Beecher's Paradox: Getting Beyond IRB Reform." Accountability in Research 7: 213–225.

Koski, G. 2003a. "Research, Regulations and Responsibility: Confronting the Compliance Myth." Emory Law Journal 52(1): 403–416.

Kreiger, D., and DePasquale, S. 2002. "Trials and Tribulations." Johns Hopkins Magazine 54(1): 28–41.

National Bioethics Advisory Commission. 1998. Research Involving Persons with Mental Disorders That May Affect Decisionmaking Capacity. Rockville, MD: Author.

National Bioethics Advisory Commission. 1999. Research Involving Human Biological Materials: Ethical Issues and Policy Guidance, vol. 1. Rockville, MD: Author.

National Bioethics Advisory Commission. 2001a. Ethical and Policy Issues in International Research: Clinical Trials in Developing Countries. Rockville, MD: Author.

National Bioethics Advisory Commission. 2001b. Ethical and Policy Issues in Research Involving Human Participants. vol. 1. Bethesda, MD: Author.

National Bioethics Advisory Commission. 2001c. "Federal Agency Survey on Policies and Procedures for the Protection of Human Subjects in Research." In Ethical and Policy Issues in Research Involving Human Participants. vol. 2: Commissioned Papers and Staff Analysis. Bethesda, MD: Author.

National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research. 1978. Report and Recommendations: Institutional Review Boards. Washington, D.C.: U.S. Government Printing Office.

National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: U.S. Government Printing Office.

Office of Inspector General, U.S. Department of Health and Human Services. 1998. Institutional Review Boards: A Time for Reform. Report No. OEI–01–97–0193. Washington, D.C.: Author and U.S. Government Printing Office.

Papworth, M. 1990. "Human Guinea Pigs—A History." British Medical Journal 301: 1456–1460.

President's Commission for the Study of Ethical Problems in Medicine and Biomedical Research. 1983. Implementing Human Research Regulations. Washington, D.C.: U.S. Government Printing Office.

Shalala, Donna. 2000. "Protecting Research Subjects—What Must Be Done." New England Journal of Medicine 343(11): 808–810.

Steinbrook, R. 2002. "Protecting Research Subjects—The Crisis at Johns Hopkins." New England Journal of Medicine 346: 716–720.

World Medical Association. 2000. Declaration of Helsinki: Principles for Medical Research Involving Human Subjects, rev. edition. Ferney-Voltaire, France: World Medical Association.

INTERNET RESOURCES
Koski, G. 2000. Statement of Greg Koski, PhD, MD, Director of the Office for Human Research Protections, Office of the Secretary, Department of Health and Human Services, for the Hearing on Human Subjects Protections before the Subcommittee on Oversight and Investigations, Committee on Veterans Affairs, United States House of Representatives, September 28, 2000, Washington, D.C.

Koski, G. 2003b. Statement of Greg Koski, PhD, MD, Former Director of the Office for Human Research Protections, Office of the Secretary, Department of Health and Human Services, for the Hearing on Human Subjects Protections before the Subcommittee on Oversight and Investigations, Committee on Veterans Affairs, United States House of Representatives, June 18, 2003, Washington, D.C. Available from <–18-03/gkoski.html>.

National Human Research Protections Advisory Committee. 2001. NHRPAC Recommendations on HHS's Draft Interim Guidance on Financial Relationships in Clinical Research.

National Institutes of Health. 2000. Required Education in the Protection of Human Research Participants. OD–00–39.

Office for Human Research Protections. 2000. OHRP Compliance Activities: Common Findings and Guidance.
