Research Methodology: II. Clinical Trials

II. CLINICAL TRIALS

In the last half of the twentieth century, clinical trial methodology fundamentally transformed the nature of biomedical research. During this period, investigators developed ways to avoid certain biases in research design and to adapt methods of statistical analysis to empirical research. The story of biomedical research's progressive sophistication, however, does not begin in clinics or hospitals, but in a cornfield. Ronald A. Fisher (1890–1962), the famous British statistician, biologist, and geneticist, devised methods for testing hypotheses on how to improve crops (Gigerenzer et al.). By dividing fields into two or more groups, making them as similar as possible in composition and treatment, Fisher hoped to isolate the effects of one feature on the individuals studied. For example, would a fertilizer given to some of the corn improve yield? The resulting differences between groups could then be expressed as probabilities that the outcomes were due to chance rather than to the different treatments. The more individuals studied, and the longer they are studied, the greater the confidence that variations between group outcomes reflect the different treatments rather than chance.
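Fisher's logic can be made concrete with a short simulation. The sketch below is a minimal permutation test in Python using made-up yield figures (the numbers and plot counts are assumptions for illustration, not Fisher's data): if the fertilizer had no effect, the group labels would be arbitrary, so reshuffling them shows how often chance alone produces a difference as large as the one observed.

```python
import random

# Hypothetical corn yields (bushels per plot), for illustration only.
fertilized = [54.2, 57.1, 52.8, 58.4, 55.9, 56.3]
unfertilized = [51.0, 53.4, 49.8, 52.2, 50.7, 51.9]

observed_diff = (sum(fertilized) / len(fertilized)
                 - sum(unfertilized) / len(unfertilized))

# Permutation test: repeatedly reshuffle the pooled yields into two arbitrary
# groups and count how often chance alone yields a difference at least as large.
pooled = fertilized + unfertilized
n = len(fertilized)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed_diff:
        extreme += 1

print(f"Observed difference: {observed_diff:.2f} bushels per plot")
print(f"Probability of a difference this large by chance alone: {extreme / trials:.4f}")
```

The smaller that probability, the more confident one can be that the fertilizer, not chance, explains the difference; with more plots, the same true effect yields a smaller probability.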

In the late 1940s, Fisher and others began to adapt and refine these pioneering principles for use with human research, and in 1948 clinical trial methodology was systematically launched into medicine with the testing of streptomycin to treat tuberculosis (Concato, Shah, and Horwitz). Since that time, investigators have used clinical trial methods to evaluate virtually everything affecting patients, including therapies, diagnostic techniques, prevention of illnesses, vaccines, counseling, health delivery systems, and even the benefits of classical music, pets, and humor on health. In one study, for example, people were divided into large groups; some got a daily aspirin and others a placebo (an inert substance). This helped ensure that groups were treated alike, even down to the number of pills that they were given. The group receiving aspirin suffered fewer heart attacks (Steering Committee). As in agricultural research, the goal of clinical trial methodology is to compose and treat groups as similarly as possible except for the one feature under study. Investigators attempt to identify other features that are likely to affect outcomes and stratify, or distribute, individuals with those features equally between groups. For example, the healthiest individuals (whether people, pigs, or parsnips) should be stratified equally among the groups because health often affects outcomes.

To help further ensure that groups are similar, investigators generally use another method, randomization (assignment by chance rather than by human choice), for example using charts of random numbers, to assign individuals to groups. For example, suppose that investigators want to study the influence of caffeine upon alertness. They know other things affect alertness, such as people's interest in the subject or their intelligence, and the investigators try to stratify people with these variables equally between groups. But the investigators also know that many additional features affect alertness, such as people's sleeping, eating, or television-watching habits. Unable to identify all such variables or distribute people with similar features equally between groups, the investigators try to minimize the impact of these "nuisance" variables and achieve uniform groups through randomization. Even simple random methods, such as flipping a coin to determine group assignments, help ensure that people with distinctive features that could affect results do not cluster in one group. The larger the groups, the more likely that randomization will produce similar groups. The goal of randomization is to combat bias in group assignments by distributing equally among the study arms those individual characteristics whose effects are unknown, thereby minimizing their influence. In human studies, randomized clinical trials (RCTs) use random assignment to eliminate, through equal distribution, the effects of variables such as nutritional habits, beliefs, attitudes, behavior, ancestry, and education in correlating the variable under investigation with its observed effects. Nonrandomized trials generally seem second best because of the risk of bias in the formation of the groups.
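A minimal sketch of how stratification and randomization might be implemented together is given below, as stratified block randomization in Python. The stratum variable ("health"), the block size, the arm labels, and the subject records are illustrative assumptions, not features of any particular trial.

```python
import random

def stratified_block_randomization(subjects, stratum_of, block_size=4, arms=("A", "B")):
    """Assign subjects to arms in randomly permuted blocks within each stratum,
    so that prognostic features (e.g., baseline health) stay balanced across arms."""
    assignments = {}
    blocks = {}  # the current, partially used block for each stratum
    for subject in subjects:
        stratum = stratum_of(subject)
        if not blocks.get(stratum):
            # Start a fresh block containing equal numbers of each arm, in random order.
            block = list(arms) * (block_size // len(arms))
            random.shuffle(block)
            blocks[stratum] = block
        assignments[subject["id"]] = blocks[stratum].pop()
    return assignments

# Illustrative subjects; "health" stands in for any prognostic variable.
subjects = [
    {"id": 1, "health": "good"}, {"id": 2, "health": "poor"},
    {"id": 3, "health": "good"}, {"id": 4, "health": "good"},
    {"id": 5, "health": "poor"}, {"id": 6, "health": "poor"},
]
print(stratified_block_randomization(subjects, stratum_of=lambda s: s["health"]))
```

Permuted blocks within each stratum keep the arms close to equal in size within every subgroup, even if enrollment stops early.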

Investigators use other methods in addition to randomization and stratification to make groups similar and to eliminate bias. In single-blind studies, subjects do not know their group assignment, thereby minimizing the effects of their beliefs and expectations about the different modes of treatment. For unbiased results, the subjects should be treated so similarly that they cannot know which treatment they receive. Investigators' subconscious beliefs, preferences, or attitudes may also affect how they take care of individuals or evaluate outcomes. Believing one medicine works best, for example, may affect their estimates of how individuals respond. To combat such biases, investigators may use double-blind designs, in which the group assignments are kept from subjects, their clinicians, and investigators until after the trial, so that clinicians' or investigators' own views will not contaminate the study's results.
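One common way to maintain such blinding, sketched below purely as an assumed procedure rather than as a description of any specific trial, is to label identical-looking treatments with coded kit numbers generated by an independent party; subjects and clinicians see only the codes, and the key linking codes to treatments is held back until the trial ends or is consulted only by a monitoring board.

```python
import random

def make_blinded_kits(n_per_arm, arms=("active", "placebo")):
    """Return (kit_ids, code_key). Clinicians and subjects see only the kit IDs;
    the code key linking IDs to treatments is held by an independent party."""
    treatments = [arm for arm in arms for _ in range(n_per_arm)]
    random.shuffle(treatments)
    code_key = {f"KIT-{i:04d}": t for i, t in enumerate(treatments, start=1)}
    kit_ids = list(code_key)      # what the clinic receives: codes only
    return kit_ids, code_key      # code_key stays sealed until unblinding

kits, sealed_key = make_blinded_kits(n_per_arm=3)
print(kits)   # e.g., ['KIT-0001', 'KIT-0002', ...] -- no treatment visible
# sealed_key is consulted only after the trial, or by a data safety monitoring board.
```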

Impartial studies can expose bias, prejudice, the flaws of common wisdom, the errors of standard practice, and the harms or benefits of established treatments. For example, in the 1940s and early 1950s doctors believed that giving copious amounts of oxygen to premature infants prevented death and brain damage. By 1953 this common wisdom was being challenged by clinical trials, and by 1954 the link between the lavish use of oxygen and blindness from retrolental fibroplasia was clearly established (Silverman). Other studies uncovered previously unforeseen adverse drug reactions. For example, systematic testing of commonly used antibiotics showed that premature infants receiving sulfisoxazole (Gantrisin) had a much higher incidence of death and retardation than other groups. Further investigation revealed that premature infants could not metabolize and detoxify bilirubin, leading to kernicterus, a form of bilirubin-induced damage to the brain (Behrman and Vaughan).

Clinical trials also account for many treatment advances. In three decades of continual evaluation of alternative therapies through clinical trials, childhood leukemia went from a uniformly fatal disease to an often-curable illness. RCTs also demonstrated that coronary artery bypass surgery was ineffective for many of the diseases for which it had been widely used.

In a controlled clinical trial (CCT), investigators compare the outcomes for patients getting one treatment with the outcomes for patients who do not get it. This allows investigators to separate the treatment's effects from other influences. The U.S. Department of Health and Human Services (HHS) cites five kinds of control groups, distinguished in part by whether the comparison involves a historical control group (in which patients' outcomes are compared with records from past patients) or a concurrent control group (in which patients' outcomes are compared with those of patients currently being treated):

  1. placebo concurrent control;
  2. dose-comparison concurrent control;
  3. no-treatment concurrent control;
  4. active-treatment concurrent control; and
  5. historical control.

Investigators often regard the double-blind RCT with a concurrent control group getting a placebo as the "gold standard" because it offers the greatest assurance that differences between groups have not been distorted by differing diagnostic criteria, treatments, observations, measurements, or expectations (Ellenberg and Temple; Temple and Ellenberg).

Gaining General Acceptance: An Example Involving Breast Cancer

Enrolling patients in clinical trials involved fundamental shifts in how to think about patient–doctor relationships. Consequently, it was one thing to work out a good methodology and another to find clinicians and patients willing to participate in CCTs. For example, by 1968, 70 percent of women with breast cancer were treated with radical mastectomy, which entails removing the breast, lymph nodes, and chest wall muscles on the affected side. Many clinicians believed this gave women their best chance of a "cure" (defined as surviving five years or longer) at no real loss, because in their view the breast of an older woman was entirely expendable (Lerner). Beginning in the 1970s, these views changed gradually, but many clinicians clung to them into the 1990s, long after a series of RCTs had shown radical mastectomy to be unnecessarily mutilating and disabling. Ultimately, these trials established that removal of only the tumor or the breast, with or without radiation therapy, resulted in survival comparable to that achieved with radical mastectomy (Fisher). Follow-up studies done twenty-five years later confirmed that there is no advantage to the more mutilating surgery (Fisher et al.).

Getting clinicians to agree to participate and women to enroll in CCTs or RCTs in the 1970s and 1980s was a crucial step toward discrediting radical mastectomy. Investigators had to persuade skeptical physicians who believed that radical mastectomy was necessary to give their patients the best chance of survival. Many clinicians asserted that they had a "therapeutic obligation," or duty to pick what they viewed as the best therapy for their patients. Some were so convinced radical mastectomy was best that they did not inform women of other options, let alone enroll them in RCTs; others did not want to communicate the uncertainties about which therapies were best or feared that informed consent would destroy trust in the doctor–patient relationship (Taylor, Margolese, and Soskolne).

Such paternalistic attitudes increasingly troubled both investigators (how did clinicians know what was best?) and women (do they not have a say about what is best for them?). Women were learning about the controversies over treatment options swirling in the medical literature at the same time that informed-consent policy took root. Consequently, investigators and clinicians had to make room for good informed consent and choice. In response, therapeutic research became an increasingly cooperative venture among doctors, patients, and investigators (Kopelman, 1994; Fisher).

Increasingly, patients and clinicians saw the advantages of participation in multi-institutional research using the same protocols. These large trials proved to have many research advantages: they can involve many patients and produce results quickly, and they can help neutralize biases that arise when particular institutions serve distinctive patient populations. Large trials can also result in improved care for all groups and better fulfillment of consent requirements, because these cooperative studies are often designed by experts, include quality-control provisions, and are reviewed for approval by many agencies. Moreover, expert panelists review the data and stop the trials if early results show clear advantages to some assignments.

By the 1990s, great progress in treating cancer resulted, in part, from doctors' willingness to enroll patients in clinical trials and patients' willingness to participate. Patients often acted from altruism to help the next generation of patients, just as the last generation had helped them. Clinical trials, by this time, were also seen as a way to get good care, leading many people to be eager to enroll and disappointed if they were excluded. Largely gone were the sweeping general denunciations of the 1970s and 1980s, when critics claimed an inherent incompatibility existed between these research methods on the one hand and doctors' duties to protect patients, patients' rights and welfare, and good patient–doctor relationships on the other (Fried; Gifford; Marquis; Wikler).

An Imperfect Consensus with Enduring Issues

For clinical trials to be morally acceptable, a consensus exists that they must meet the following conditions:

  1. The study is important.
  2. Patients or their representatives give informed consent including knowledge of all alternatives, of their right to withdraw at any time, and of clinicians' and investigators' conflicts of interest.
  3. Physicians and investigators place the well-being of the patients ahead of research interests.
  4. The study has gained appropriate approval from institutional review boards or research ethics committees.
  5. A data safety monitoring panel will end studies if it is demonstrated that one or more of the study arms prove better than others and will report significant new findings to doctors or patients.
  6. The uncertainty principle or null hypothesis is justified, meaning that the arms of the study are "equally good."

Before a trial begins, then, investigators must do a comprehensive review of the literature to show that all treatments being given and compared have a therapeutic success rate that is acceptably high for all arms, and that it is uncertain whether any one of the treatments being tested is better than any of the others. In addition, it must be shown that no study arm provides what is known to be inferior care (HHS; Beauchamp and Childress; Concato, Shah, and Horwitz; Emanuel, Wendler, and Grady; WMA).

Serious questions exist about implementing these assumptions. Patients have legitimate preferences about how they want to be treated, and doctors have responsibilities to try to give patients the best care to meet their individual needs, goals, and desires. Controlled trials restrict people's choices and limit the ways therapies can be adapted to them through stratification, randomization, inflexible interventions, eligibility requirements, and single-blind or double-blind study designs. Some of these concerns are discussed below.

PHYSICIANS' ROLES AS CLINICIANS AND AS SCIENTISTS. When physicians enroll patients in clinical trials, they help patients collectively by gaining knowledge but may lose flexibility in tailoring treatments for individual patients. This can create a conflict between doctors' roles as scientists dedicated to conducting the best studies to gain knowledge, and as healers dedicated to adapting treatments to each patient's needs, goals, and values. To address this potential conflict, most agree that physicians should not enroll a patient in a clinical trial if they have reason to believe the patient might thereby obtain inferior care (Byar et al.; Chalmers, Block, and Lee; Kopelman, 1986; WMA; Ellenberg; Levine, Dubler, and Levine; Shaw and Chalmers; Zelen, 1990; Emanuel, Wendler, and Grady).

Although agreement exists that doctors should not enroll patients in studies in which they get inferior care, substantive disagreements remain about when arms of studies are considered equally good. One controversy concerns what values to employ in deciding if treatments are "equally good." Investigators tend to measure equality among treatments in terms of easily quantified outcomes such as survival after cancer treatments or reduction of blood pressure. Patients and some clinicians, however, also consider how treatments affect the quality of patients' lives and whether patients think the treatment makes them feel better (Levine, Dubler, and Levine). Views, therefore, about what treatments are equally good differ when people regard different things as relevant benefits and burdens. Hence nausea, hair loss, sexual impotence, weakness, extra costs, inconvenience, or more hospital visits may be more important outcomes from a patient's perspective than from an investigator's perspective in determining when treatments are equally good.

Another controversy, which involves how to apply the uncertainty principle, may be called "the problem of clinician preference": should conscientious clinicians with any preference at all for one treatment arm enroll their patients in a clinical trial? Some argue that clinicians have a duty to provide what they believe to be the best available care for patients; consequently, as long as physicians have any preference about which treatment is best for their patients, they should not enroll their patients in clinical trials (Fried; Gifford; Waldenstrom). It is rare that clinicians have no preference whatsoever about what is best for their patients, especially for the treatment of serious illnesses where the outcomes, conveniences, risks, and possible benefits differ. Moreover, if asked, patients will often have preferences even if the clinicians do not, and this could break the tie for doctors. Consequently, these critics find trials, especially RCTs, generally unethical.

In his 1987 article, "Equipoise and the Ethics of Clinical Research," philosopher Benjamin Freedman tried to solve the problem of clinician preference by distinguishing between "theoretical equipoise" and "clinical equipoise." Theoretical equipoise is an epistemic (cognitive) state in which the evidence is exactly balanced, meaning that treatments are of equal value. Clinical equipoise, in contrast, is the state in which the community of expert clinicians is undecided as to the preferred treatment for the given population as determined by the study's eligibility criteria; the study should be designed to disturb clinical equipoise and should end once that is accomplished. Freedman argued that clinical equipoise is a better way to understand the claim that treatments are equally useful for a particular group and, thus, that the uncertainty principle has been satisfied. To decide equipoise, then, the focus should not be on the treatment that the particular clinician prefers, but on what the community of clinicians believes to be equally good treatments for some condition, given their respective benefits and burdens. A clinician may have a preference for one treatment but respect colleagues with different views. Thus, as the trial begins, treatments (including any placebo arm) must be in clinical equipoise, or be regarded as having equal merit by the community of experts in treating some condition for a certain group. Disagreements should be expected in a rapidly advancing field such as medicine, and it is these disagreements that help explain why trials are important. Exceptions are sometimes made to this policy of requiring equipoise if there is no more than minimal risk of harm to the subjects, such as testing the efficacy of nose drops for the common cold.

This solution presupposes agreement or justification about who should be in the community of expert clinicians deciding which treatments are equally good and whether their views adequately represent those of the potential patients. Disputes arise over this, however (Kopelman, 1994). Some people discount the views of any but the most acclaimed clinical investigators. Others contend that many perspectives, including those of investigators, clinicians, and patient advocates, represent patients' sometimes differing values. Increasingly, clinical trials are moving out of academic centers and into private doctors' offices. Clinicians often find such arrangements professionally fulfilling, but they can also be financially lucrative when drug companies, which typically sponsor these studies, offer monetary incentives to enroll patients. In contrast to academic medical centers, little oversight or accountability exists in private offices, argued Jason E. Klein and Alan R. Fleischman in 2002, and there is more opportunity for patients to fail to understand that they are being enrolled in research programs not necessarily designed for their benefit. Klein and Fleischman argued that financial incentives to clinicians should be limited, patients should have an independent resource to answer their questions, and doctors should be required to disclose potential conflicts of interest. Arguably, in both academic and private-practice settings where there are genuine risks, the treating physician should not be the investigator.

STARTING TRIALS. Disagreements can erupt about the overall benefits of new treatments or investigational new drugs when compared with standard care or with a placebo. To justify the time, energy, risks, and expense of testing a new therapy for some condition by means of a CCT or RCT, investigators must produce preliminary evidence of its safety, efficacy, and proper dose. Some knowledgeable people are likely to be more impressed with these findings than others, especially for serious diseases with no established treatments (Levine, Dubler, and Levine). Consequently, they disagree about whether or when trials should begin. In addition, not all good studies can be funded. Because funding is limited and often comes from tax revenues, these choices depend not only upon the merits of the study but also on political and social interests.

PLACEBO-CONTROLLED RCTS. One of the most persistent controversies concerns the use of a placebo arm in a controlled trial. A placebo is used because people's beliefs and expectations can influence how they react; people sometimes respond simply to getting a pill. Suppose there are two groups, and persons in one group get a red pill containing an active preparation. If the two groups are to be treated exactly alike, then arguably the other group should also get a red pill, but one without the active preparation; it might be a sugar pill. As noted, placebo-controlled RCTs are widely regarded as the gold standard for assessing the safety and efficacy of therapies.

A knotty problem exists over whether placebos should be used when there is a proven and effective treatment. Defenders of the use of a placebo arm in such cases cite its enormous methodological advantages in evaluating treatments and justify its use as long as subjects are not made worse off (Varmus and Satcher; Temple and Ellenberg). In one case, for example, investigators wanted to study the safety and efficacy of mood-disorder medications adopted long ago without rigorous testing. Some of these drugs have a good track record of abating serious symptoms, including suicidal ideation. Disputes arose over whether these drugs should be tested against a placebo because beliefs and expectations affect mood disorders. A distinguished panel of experts could not reach consensus and concluded: "Research is needed on the ethical conduct of studies to limit risks of medication-free intervals and facilitate poststudy treatment. Patients must fully understand the risks and lack of individualized treatment involved in research" (Charney et al., p. 262). Yet obtaining consent from such patients for what can be risky studies may also be problematic because their illnesses often disturb their thought processes.

Perhaps the most contentious debate so far concerned using placebo-controlled trials to study perinatal transmission of HIV/AIDS when a proven and effective therapy existed (Angell; Temple and Ellenberg; Ellenberg and Temple; Lurie and Wolfe). The funding was from rich countries where, because proven and effective therapies were the standard of care, the studies could not be done. Some argued these studies were immoral because the stakes were life and death (Angell; Lurie and Wolfe); others said that the studies were needed and that these poor people were made no worse off by being given local standards of care (Temple and Ellenberg; Ellenberg and Temple; Varmus and Satcher). They maintained this was the most efficient way to obtain urgently needed information to fight the HIV/AIDS epidemic.

In 2000 the influential World Medical Association (WMA) took a stand. It issued a new draft of the Declaration of Helsinki stating that placebos should not be used if there is a proven and accepted treatment. This put the declaration on a collision course with the U.S. Food and Drug Administration (FDA), which often requires the use of a placebo despite the existence of a proven and accepted treatment. Defenders of placebo controls also point out that if placebos are not permitted, trials may have to be a great deal larger and therefore more costly.

One possible middle ground is to consider the harm of not having the treatment. If being denied the proven and effective treatment poses only a minor risk of harm, such as minor discomfort or inconvenience, then studies might be permitted. As potential harms to those on the placebo arm increase, it should become more difficult to approve the study, even with consent from subjects or their representatives.

An entirely different set of concerns exists, challenging the placebo as the gold standard. In their 2001 article, "Is the Placebo Powerless?" Asbjorn Hrobjartsson and Peter Gotzsche questioned whether the placebo is really as powerful as claimed. The placebo itself, they pointed out, was adopted without testing. They conducted a meta-analysis comparing placebo arms with no-treatment arms, finding that in many cases there was no difference between them at all. They wrote, "We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain. Outside the setting of clinical trials, there is no justification for the use of placebos" (Hrobjartsson and Gotzsche, p. 1594). In a 2000 article, John Concato and colleagues also raised doubts about the ascendancy of the placebo-controlled RCT over all other methods. They argued that even observational studies can, when carefully done, control bias as well as an RCT.

Kenneth J. Rothman and Karin B. Michels, in a 1994 article titled "The Continuing Unethical Use of Placebo Controls," concluded that the FDA's insistence upon viewing the placebo as the gold standard not only has moral problems but is essentially a political decision. The FDA scientists argued that placebo-controlled studies make it easier to show statistical significance with smaller numbers of subjects; larger studies would reduce statistical variability, but, unfortunately, they are expensive. Concato and colleagues also objected, stating that it is the drug companies that benefit from the FDA policy of fostering small CCTs and RCTs, given that such studies are less costly, and it is the patients who bear the burdens of this policy because they are denied proven and effective treatments.
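The statistical point at issue can be illustrated with the standard two-proportion sample-size approximation. The sketch below uses made-up response rates chosen only to show the arithmetic, not data from any trial: the smaller the expected difference between arms (as when a new drug is compared with an effective active control rather than with a placebo), the more subjects are required.

```python
from math import ceil

def n_per_arm(p1, p2):
    """Approximate subjects needed per arm to distinguish response rates p1 and p2,
    using a two-sided alpha of 0.05 and 80 percent power (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.84   # critical values for alpha = 0.05, power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative response rates (assumptions only):
print(n_per_arm(0.50, 0.30))   # drug vs. placebo: large expected difference, roughly 90 per arm
print(n_per_arm(0.50, 0.45))   # drug vs. active control: small difference, roughly 1,560 per arm
```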

Yet another challenge to the use of placebos as the gold standard comes from those who study complementary and alternative medicines (CAMs). RCTs and CCTs try to eliminate nuisance variables, and they include in this category people's different hopes and beliefs. There is little doubt, however, that these are powerful forces in people's lives. Some argue that research that eliminates hope and belief has limited utility, precisely because mental attitude is so powerful. In 2002 Kenneth F. Schaffner argued that the study of CAMs "…might lead us to question"

a standard research design methodology that prioritizes randomized clinical trials and objective measures of health … and think about the arguments of [the American philosopher Thomas] Kuhn and the disunity of science proponents, and about varying local methodologies … [with their] different evidential standards … CAM can help make us realize both that the influence of the belief systems may have powerful effects on health and that discerning these effects may require a relaxation of these Procrustean standards. (Schaffner 2002, p. 12)

ENDING TRIALS. The goal of a study is to learn whether different treatments are equally good for certain conditions. But justification for claiming to know something is a matter of degree, and there can be substantial disagreements about where to draw the line for the purpose of saying that it is known that treatments are or are not equally good. At the outset of a study, investigators should adopt rules about when to stop. Although investigators generally do not release preliminary data, there are some exceptions. A data safety monitoring panel is often charged with monitoring the data and deciding whether trials should be ended early because people in one arm of the study are doing far worse than others. For example, azidothymidine (AZT) was first tested against a placebo in a double-blind RCT to see if it helped patients with AIDS. Doctors and nurses believed they knew, from the abatement of symptoms, which patients were getting AZT and which were getting a placebo. After several months, 16 of the 137 patients in the placebo arm had died, whereas only 1 of the 145 patients receiving AZT had died. The trial was ended and all received AZT (Beauchamp and Childress).

Deciding when to stop a trial is not an entirely scientific choice but is also a moral decision. Investigators, panels, and journal editors typically require a probability of at most 0.05 (five chances in a hundred) that the observed results between groups occurred by chance, as a ground for holding that sufficient evidence exists to say they know that the groups are different. Although the 0.05 standard is a reasonable and well-established convention, it should not be misunderstood. As Daniel Wikler (1981) and Loretta M. Kopelman (1986, 1994) have argued, it is at best a moral trade-off between continuing the study so long that some people receive obviously suboptimal care and stopping so early that some people are harmed because insufficiently verified treatments are adopted or discredited. Some will draw that line differently, especially when treatments are tested for serious illnesses with few other means of treatment, as in AIDS research (Kopelman, 1994).
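To show what such a threshold amounts to in practice, the sketch below applies Fisher's exact test to the AZT figures cited above (16 of 137 deaths in the placebo arm versus 1 of 145 in the AZT arm). The use of scipy and of a simple two-sided test are assumptions for illustration; real monitoring boards use pre-specified, and usually stricter, stopping boundaries for interim looks at the data.

```python
from scipy.stats import fisher_exact

# Deaths and survivors in each arm, as reported in the AZT example above.
#                 died  survived
placebo_arm = [16, 137 - 16]
azt_arm = [1, 145 - 1]

odds_ratio, p_value = fisher_exact([placebo_arm, azt_arm])
print(f"p-value: {p_value:.2g}")
if p_value < 0.05:
    print("The difference is very unlikely to be due to chance at the conventional 0.05 level;")
    print("a monitoring panel weighing these data might well stop the trial early.")
```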

INFORMED CONSENT AND RESEARCH INTEGRITY. For people to enroll in studies, they or their guardians must give informed consent, meaning authorization that is competent, adequately informed, and voluntary. Assuming that people are competent to give consent and do so voluntarily, what do they need to know to give informed consent for clinical studies?

Generally they must be told about the study's nature, purpose, duration, procedures, and foreseeable risks and benefits. Moreover, they need to know about any alternative treatments, inconveniences, additional costs, and extra procedures or hospitalizations resulting from enrollment. They must also be told of their right to withdraw from the study at any time should they agree to participate (U.S. 45 CFR 46.116). If the study design includes different groups, randomization, or placebos, for example, prospective subjects need to be informed. Consent for therapy or research requires giving people all information that a reasonable person would want to know in order to make a choice.

These widely recognized consent requirements create tensions in relation to the research goals of clinical trials. For example, suppose in testing treatments, one study arm uses surgery with medical management resulting in a faster recovery if there are no complications, and the other study arm uses medical management alone, with fewer risks but a slower recovery. If distinctive groups have special preferences, such as the elderly preferring medical management and the young surgery, then the study of the different treatment results could be biased through self-selection.

Thus, there is a difficulty that may be called "the problem of subject preference": How can people's preferences be accommodated while preserving the scientific integrity of the CCT or RCT? Some critics regard the informed-consent requirements embodied in regulations as unrealistic, too individualistic, and shortsighted because they give too much weight to individual choice and make it hard to conduct good studies (Tobias; Zelen, 1979, 1990). Physicians and healthcare professionals, they argue, have a duty to take proper care of patients but are not typically required to educate them about these technical and complex matters; patients should get good treatment given by conscientious professionals, but patients do not need to know how, when, or why investigators evaluate their treatments. Most patients cannot understand the investigation's complexities, they argue, and would be harmed by learning of the uncertainties about what care is best or that they are being studied. Investigators should be free to design the best possible trials consistent with good care, they argue, and the current understanding of patients' rights disrupts clinical trials, thereby slowing medical progress. If people had only the right to good care and not the right to refuse to be enrolled in a study, it would be easier for investigators to conduct research and to minimize problems of bias introduced by people's preferences. For example, Marvin Zelen devised schemas in which patients give their consent for a treatment without knowing that the treatment was selected by a random method and/or that they are in a study; other designs prerandomize people to group assignments before consent is sought (Zelen, 1979, 1990).

Such paternalism in general, and Zelen's designs in particular, have garnered legal and moral criticism (Ellenberg, 1984, 1992; Kopelman, 1986, 1994). This approach not only denies people self-determination; without pertinent information, people also lack the means to protect their own well-being. The doctrine of informed consent developed because many patients and activists wanted impartial information and participation in choices about their care, especially when they will be serving as research subjects. For example, statistician Susan S. Ellenberg criticized Zelen's prerandomization schemas, in which patients are assigned to groups before consent is sought. She argued that this threatens impartiality in gaining consent, risking that the informational sessions will be shaped to enhance the benefits and minimize the risks of each individual assignment (Ellenberg, 1984, 1992).

On the other hand, others are skeptical that most subjects give genuine informed consent to research (Tobias; Wikler; Zelen, 1979). Most patients, they claim, do not understand the benefits or burdens of their treatment options, let alone the scientifically rigorous methodology used in testing. A related criticism is that investigators do not tell patients, and most patients do not understand, that at some point in the trial it may become increasingly apparent that some groups are getting suboptimal care (Wikler). Investigators, they argue, put medical advances ahead of subject-patients' rights and welfare, since such trials typically violate physicians' duties to their patients (Fried; Gifford; Marquis; Wikler). Some support for this view comes from a study that George Annas reports was conducted by the FDA, which carried out spot checks on 1,000 investigations; the FDA found that investigators did not seek informed consent in 213 studies, did not follow their approved research protocol in 364 investigations, and failed to report adverse reactions for 140 test subjects. Annas reports that, unfortunately, these FDA results square with other findings (Annas).

In contrast to these two positions implying that one must choose between good trials and good informed consent, other commentators argue that clinical trials, including RCTs, can be cooperative ventures between patients and investigators (Freedman; Kopelman 1986, 1994; Levine, Dubler, and Levine; Levine, 1986). They believe that investigators and patients should work together with candor, respect, and trust about the goals and means of the research, and view consent as an on-going process. They maintain that with proper consent some studies (but not all) are morally justifiable. Subjects may have to be regarded as partners in a cooperative venture, however, if investigators expect people to enroll and cooperate. People can defeat trials if they do not identify with the investigators' goals. In one case, investigators were testing whether patients infected with HIV who were not yet showing symptoms of AIDS would benefit from AZT. At the end of the trial, researchers estimated that 9 percent of the patients in the placebo arm had been taking AZT. If more patients in the placebo group had secretly taken AZT, investigators might have judged a beneficial drug ineffective and refused to release it for this use (Merigan). These patients, facing a life-threatening disease, found a way to get the drug they believed useful and inadvertently jeopardized a clinical trial and the welfare of future patients. Poor cooperation results when the subjects fail to identify with the goals of the study, do not understand its importance, or are asked to risk too much in terms of health and convenience (Spilker).

PROTECTION OR ACCESS. From the 1970s to the early twenty-first century, patients and physicians went from being wary of participating in CCTs and RCTs to seeking access to them. Studies were increasingly seen as opportunities for good care rather than as dangerous projects from which vulnerable people should be protected (Dresser; Kopelman, 1994). For example, AZT, the first effective drug to treat AIDS, was initially tested for safety and efficacy against a placebo in a double-blind RCT, as has been mentioned. Until the early 1990s, many biomedical research study populations excluded people of color, women, and children in order to "protect" what were considered to be these more vulnerable populations. Advocates argued that this was unfair because enrollment in trials often provides people the only or best available access to adequate or promising care. For example, children with AIDS initially could not get AZT because only adults could be enrolled in studies. Even after some studies showed that AZT was beneficial for treatment of adults, regulations initially forbade its prescription for children because it had not been tested with them (Pizzo). Moreover, a study excluding people of color, women, and children focuses upon a narrow range of the patient population (adult white males), making it uncertain whether the results of the study apply to other groups. There may be differences among groups; if there are, variations might be due to nature, nurture, or a combination of both. A study on depression, for example, conducted exclusively with white men, leaves uncertainty as to whether the results would be the same for other groups who have different social standing, burdens, genes, or physiologies.

More flexible eligibility requirements, advocates argue, would give all groups access to new treatments and would also yield results that more accurately reflect the entire patient population. Opponents respond that this would tend to make it harder to ensure that groups are comparable unless more subjects were enrolled, which would, of course, make the studies more costly. Despite these objections, policies were adopted to address unequal access and to revise eligibility criteria that had excluded groups simply to save money and hold down the cost of trials, especially when studies were supported by tax dollars.

Patient-advocacy groups also demanded more access to preliminary information about the safety and efficacy of different modes of care. They wanted less secrecy regarding early trends, especially in cases in which patients have few treatment options for serious diseases. Many patients with severe or chronic diseases, or their families, have learned to follow closely relevant research, and they want greater access to promising new treatments.

These proposals generated a variety of responses (Byar et al.; Levine, Dubler, and Levine; Merigan; Schaffner, 1986; "Expanded Availability," 1990). For example, programs now make some investigational new treatments more available by means of expanded access or a "parallel track" ("Expanded Availability"). In the past, there was a single way, or track, for patients to get certain investigational new treatments, namely, participating in the study as a subject. Some people were excluded because they lived too far from the study site(s) or because of age, gender, or prognosis (Dresser; Kopelman, 1994). New programs expanded access or offered a parallel track to make it possible for some patients who are not subjects to receive investigational new treatments. Patients with HIV-related diseases, for example, can sometimes obtain investigational new treatments even though they are not enrolled as trial subjects. Some investigators recommend this approach when there are no therapeutic alternatives, when the investigational new treatments are being tested, when there is some evidence of their efficacy, when there are no unreasonable risks for the patient, and when the patient cannot participate in the clinical trial (Byar et al.). This solution presupposes that there is agreement about who should make these determinations. Community representation on panels that make these decisions may be reassuring to groups advocating more openness.

These and other proposals allow greater flexibility but also may make it harder to conduct trials and to interpret their results (Ellenberg; Merigan). For example, if patients can get the investigational new treatment without enrolling in a clinical trial, some may refuse to participate in the study. Thus, even if these proposed changes are adopted, tensions still exist between individual and collective interests in conducting trials.

Conclusion

The CCT and RCT methodologies are powerful ways to combat the effects of bias. By using these methods, bias can be minimized, but it can never be entirely eradicated. People's beliefs, hopes, duties, prejudices, values, or interests can create biases in their choices about what studies to fund, when to begin and end studies, what measures will be used, how groups are established, and how results are interpreted. When people consider the adoption of procedures such as copious amounts of oxygen for premature infants (later found to cause blindness), a high premium is placed on protection of the public from someone's idea of promising new treatments; when they think of drugs that have proved to help sustain or improve people's lives, however, a high premium is placed on early access. Who should decide the optimal degree of testing or protection needed in order to establish the safety and efficacy of drugs before they are available? This question of access versus protection is a social and moral decision, not just a scientific matter. It is not unlike the decision about how much inspection of foods or buildings is necessary in order to protect the public. When the stakes are high, as in fatal or chronically degenerative diseases with no promising treatments, the disputes about when to begin or end trials are sometimes a tangle of scientific, moral, social, political, statistical, and medical problems.

loretta m. kopelman (1995)

revised by author

SEE ALSO: Aging and the Aged: Healthcare and Research Issues; AIDS: Healthcare and Research Issues; Autoexperimentation; Children: Healthcare and Research Issues; Commercialism in Scientific Research; Embryo and Fetus: Embryo Research; Empirical Methods in Bioethics; Genetics and Human Behavior: Scientific and Research Issues; Holocaust; Infants: Public Policy and Legal Issues; Informed Consent: Consent Issues in Human Research; Mentally Ill and Mentally Disabled Persons: Research Issues; Military Personnel as Research Subjects; Minorities as Research Subjects; Pediatrics, Overview of Ethical Issues in; Prisoners as Research Subjects; Race and Racism; Research, Human: Historical Aspects; Research, Multinational; Research Policy; Research, Unethical; Responsibility; Scientific Publishing; Sexism; Students as Research Subjects; Virtue and Character; and other Research Methodology subentries

BIBLIOGRAPHY

Angell, Marcia. 1997. "The Ethics of Clinical Research in the Third World." New England Journal of Medicine 337(12): 847–849.

Annas, George J. 1999. "Regs Ignored in Research." National Law Journal, 15 November, p. A20.

Beauchamp, Tom L., and Childress, James F. 1989, 2001. Principles of Biomedical Ethics, 3rd and 5th editions. New York: Oxford University Press.

Behrman, Richard E., and Vaughan, Victor C., III. 1987. Nelson Textbook of Pediatrics, 13th edition. Philadelphia: Saunders.

Byar, David P.; Schoenfeld, David A.; Green, Sylvan B.; et al. 1990. "Design Considerations for AIDS Trials." New England Journal of Medicine 323(19): 1343–1348.

Chalmers, Thomas C.; Block, Jerome B.; and Lee, Stephanie. 1972. "Controlled Studies in Clinical Cancer Research." New England Journal of Medicine 287(2): 75–78.

Charney, Dennis S.; Nemeroff, Charles B.; Lewis, Lydia; et al. 2002. "National Depressive and Manic-Depressive Association Consensus Statement on the Use of Placebo in Clinical Trials of Mood Disorders." Archives of General Psychiatry 59(3): 262–270.

Concato, John; Shah, Nirav; and Horwitz, Ralph I. 2000. "Randomized, Controlled Trials, Observation Studies, and the Hierarchy of Research Designs." New England Journal of Medicine 342(25): 1887–1892.

Dresser, Rebecca. 1992. "Wanted: Single, White Male for Medical Research." Hastings Center Report 22(1): 24–29.

Ellenberg, Susan S. 1984. "Randomization Designs in Comparative Clinical Trials." New England Journal of Medicine 310(21): 1404–1408.

Ellenberg, Susan S. 1992. "Randomized Consent Designs for Clinical Trials: An Update." Statistics in Medicine 11(1): 131–132.

Ellenberg, Susan S., and Temple, Robert. 2000. "Placebo-Controlled Trials and Active-Control Trials in the Evaluation of New Treatments," Part 2: "Practical Issues and Specific Cases." Annals of Internal Medicine 133(6): 464–470.

Emanuel, Ezekiel J.; Wendler, David; and Grady, Christine. 2000. "What Makes Clinical Research Ethical?" Journal of the American Medical Association 283(20): 2701–2710.

"Expanded Availability of Investigational New Drugs through a Parallel Track Mechanism for People with AIDS and HIVRelated Disease." 1990. Federal Register 55, no. 98 (May 21): 20,856–20,860.

Fisher, Bernard. 1992. "Justification for Lumpectomy in the Treatment of Breast Cancer: A Commentary on the Underutilization of That Procedure." Journal of the American Medical Women's Association 47(5): 169–173.

Fisher, Bernard; Jeong, Jong-Hyeon; Anderson, Stewart; et al. 2002. "Twenty-five-Year Follow-up of a Randomized Trial Comparing Radical Mastectomy, Total Mastectomy, and Total Mastectomy Followed by Irradiation." New England Journal of Medicine 347(8): 567–575.

Freedman, Benjamin. 1987. "Equipoise and the Ethics of Clinical Research." New England Journal of Medicine 317(3): 141–145.

Fried, Charles. 1974. Medical Experimentation: Personal Integrity and Social Policy. New York: American Elsevier.

Gifford, Fred. 1986. "The Conflict between Randomized Clinical Trials and the Therapeutic Obligation." Journal of Medicine and Philosophy 11(4): 347–366.

Gigerenzer, Gerd; Swijtink, Zeno; Porter, Theodore; et al., eds. 1989. The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge, Eng.: Cambridge University Press.

Hrobjartsson, Asbjorn, and Gotzsche, Peter. 2001. "Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment." New England Journal of Medicine 344(21): 1594–1602.

Klein, Jason E., and Fleischman, Alan R. 2002. "The Private Practicing Physician-Investigator: Ethical Implications of Clinical Research in the Office Setting." Hastings Center Report 32(4): 22–26.

Kopelman, Loretta M. 1986. "Consent and Randomized Clinical Trials: Are There Moral or Design Problems?" Journal of Medicine and Philosophy 11(4): 317–345.

Kopelman, Loretta M. 1994. "How AIDS Activists Are Changing Research." In Health Care Ethics: Critical Issues, eds. John F. Monagle and David C. Thomasma. Gaithersburg, MD: Aspen.

Lerner, Barron H. 2001. The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America. New York: Oxford University Press.

Levine, Carol; Dubler, Nancy N.; and Levine, Robert J. 1991. "Building a New Consensus: Ethical Principles and Policies for Clinical Research on HIV/AIDS." IRB: A Review of Human Subjects Research 13(1–2): 1–17.

Levine, Robert J. 1986. Ethics and Regulation of Clinical Research, 2nd edition. Baltimore, MD: Urban and Schwarzenberg.

Lurie, Peter, and Wolfe, Sidney M. 1997. "Unethical Trials of Interventions to Reduce Perinatal Transmission of the Human Immunodeficiency Virus in Developing Countries." New England Journal of Medicine 337(12): 853–856.

Marquis, Don. 1986. "An Argument That All Prerandomized Clinical Trials Are Unethical." Journal of Medicine and Philosophy 11(4): 367–383.

Merigan, Thomas C. 1990. "You Can Teach an Old Dog New Tricks: How AIDS Trials Are Pioneering New Strategies." New England Journal of Medicine 323(19): 1341–1343.

Pizzo, Philip A. 1990. "Pediatric AIDS: Problems within Problems." Journal of Infectious Diseases 161(2): 316–325.

Rothman, Kenneth J., and Michels, Karin B. 1994. "The Continuing Unethical Use of Placebo Controls." New England Journal of Medicine 331(6): 394–398.

Schaffner, Kenneth F. 1986. "Ethical Problems in Clinical Trials." Journal of Medicine and Philosophy 11(4): 297–315.

Schaffner, Kenneth F. 2002. "Assessment of Efficacy in Biomedicine: The Turn toward Methodological Pluralism." In The Role of Complementary and Alternative Medicine: Accommodating Pluralism, edited by Daniel Callahan. Washington, D.C.: Georgetown University Press.

Shaw, Lawrence W., and Chalmers, Thomas C. 1970. "Ethics in Cooperative Trials." Annals of the New York Academy of Sciences 169(2): 487–495.

Silverman, William A. 1980. Retrolental Fibroplasia: A Modern Parable. New York: Grune and Stratton.

Spilker, Bert. 1992. "Methods of Assessing and Improving Patient Compliance and Clinical Trials." IRB: A Review of Human Subjects Research 14(3): 1–6.

Steering Committee of the Physicians' Health Study Research Group. 1989. "Final Report on the Aspirin Component of the Ongoing Physicians' Health Study." New England Journal of Medicine 321(3): 129–135.

Taylor, Kathryn M.; Margolese, Richard G.; and Soskolne, Colin L. 1984. "Physicians' Reasons for Not Entering Eligible Patients in a Randomized Clinical Trial of Surgery for Breast Cancer." New England Journal of Medicine 310(21): 1363–1367.

Temple, Robert, and Ellenberg, Susan S. 2000. "Placebo-Controlled Trials and Active-Control Trials in the Evaluation of New Treatments," Part 1: "Ethical and Scientific Issues." Annals of Internal Medicine 133(6): 455–463.

Tobias, Jeffrey Stuart. 1988. "Informed Consent and Controlled Trials." Lancet 2(8621): 1194.

Varmus, Harold, and Satcher, David. 1997. "Ethical Complexities of Conducting Research in Developing Countries." New England Journal of Medicine 337(14): 1003–1005.

Waldenstrom, Jan. 1983. "The Ethics of Randomization." In Research Ethics, edited by Kare Berg and Knut Erik Tranoy. New York: Alan R. Liss.

Wikler, Daniel. 1981. "Ethical Considerations in Randomized Clinical Trials." Seminars in Oncology 8(4): 437–441.

World Medical Association (WMA). 1996, revised 2000. "Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects." Ferney-Voltaire, France: Author.

Zelen, Marvin. 1979. "A New Design for Randomized Clinical Trials." New England Journal of Medicine 300(22): 1242–1245.

Zelen, Marvin. 1990. "Randomized Consent Designs for Clinical Trials: An Update." Statistics in Medicine 9(6): 645–656.

INTERNET RESOURCE

U. S. "Protection of Human Subjects." 1993. Code of Federal Regulations. Title 45, pt. 46. Available from <http://ohrp.osophs.dhhs.gov/humansubjects/guidence/45cfr46.htm>.
