
Research Methodology: I. Conceptual Issues


Research in medicine, in the biomedical sciences, and in science in general is defined as "studious inquiry or examination; esp: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws" (Merriam-Webster, p. 992). The U.S. federal government's Common Rule for human-subject investigation (CR) echoes Webster's definition; according to the CR, "Research means a systematic investigation, including research development, testing, and evaluation, designed to contribute to generalizable knowledge" (Code of Federal Regulations, sec. 102). Research can refer to investigations that involve intentional manipulation of the objects studied, frequently termed experimental studies, as well as those inquiries that collect data generated by naturally occurring events, or observational studies. This entry focuses on the burdens and benefits that scientific research brings to human subjects (or perhaps better, to trial participants) and to society, as well as to laboratory animals. Research methodology comprises those general principles and designs used to describe valid and effective inquiries into nature, which includes humans. Research methodology has philosophical, scientific, and social dimensions.

General Aspects of Research Methodology

Beginning with Plato and Aristotle, philosophers have proposed a number of different though quite general approaches to scientific method. Philosophers René Descartes (1596–1650) and Francis Bacon (1561–1626) wrote on the subject in the seventeenth century, but the study of scientific method received its most systematic treatments in the work of the nineteenth-century philosophers and scientists William Whewell (1794–1866), Stanley Jevons (1835–1882), and John Stuart Mill (1806–1873), who forcefully re-presented the methods of agreement, difference, concomitant variation, and others that continue to influence contemporary philosophers; frequently these are referred to as Mill's Methods. Philosophers of science have continued to stimulate the imagination of practicing scientists. Since the early 1960s, Sir Karl Popper's falsificationist approach, T. S. Kuhn's account of revolutionary scientific changes as paradigm shifts, and the latter's criticisms of traditional rational and gradualist methodology have been cited in a number of scientific research articles.

Research methodology also involves more specific scientific components, including the analysis of different laboratory methodologies (e.g., molecular approaches and pure culture techniques); the utility of various animal models of diseases; and the characterization and assessment of the strengths of distinct study designs, ranging from the report of an individual case to the randomized controlled clinical trial (RCT). These scientific components may involve a considerable amount of sophisticated mathematical and statistical analysis. In this entry, both the philosophical and the scientific dimensions of research methodology will be pursued in the context of questions that they raise for bioethics.

A final major aspect of research methodology is the important social dimension of systematic empirical investigations. For the purposes of this entry, the term signifies the ethical, legal, political, and religious aspects of research methodology. More specifically, this rubric treats various moral implications of scientific investigation, including vulnerable or hitherto ignored subject populations (e.g., the disabled and women), from both descriptive and normative perspectives, as well as significant interactions among the philosophical, scientific, and social themes.

The Scope of Research

BIOMEDICAL AND BEHAVIORAL INVESTIGATIONS. Biomedical research (generally understood as also including behavioral research in the psychological and social sciences) covers a broad array of disciplines. The term biomedical is itself intended to bridge the gap between the more fundamental, pure, or basic sciences, such as physiology and biochemistry, and the more applied sciences, such as pathology and pharmacology. This interpretation, however, leaves the more clinical sciences, such as anesthesiology and medicine, less connected with the meaning of science than is appropriate. Better, perhaps, to follow a more expansive definition as found in Merriam-Webster's Tenth Collegiate Dictionary, which gives as one definition of biomedical: "Of, relating to, or involving biological, medical, and physical science" (Merriam-Webster, p. 115). Dorland's Medical Dictionary (28th edition) offers as its preferred meaning "biological and medical" (Dorland, p. 199). In accordance with this expanded characterization of the term, virtually all of the natural, behavioral, and social sciences, as well as engineering, can be conceived of as biomedical sciences if the intent is to place them in the service of advancing generalizable knowledge in the domains of medicine and healthcare.

BASIC SCIENCE AND CLINICAL SCIENCE. A common division is found in the departmental organization in medical schools distinguishing between basic sciences, such as microbiology (but also including more applied sciences such as pharmacology), and the clinical sciences such as medicine and oncology, whose practitioners spend much of their time and effort working with patients. It must not be forgotten that studies employing systems ranging from in vitro (test tube) inquiries through research on bacterial viruses to animal-model investigations comprise the bulk of research in the biomedical sciences. Preliminary research on new drug therapies, as well as investigations into human immunodeficiency virus (HIV) pathophysiology, falls into this category. In addition, in recent years there has been heightened awareness of the ethical problems generated by the use of animals in biomedical research, and thus it is appropriate to comment briefly on this basic science dimension of research methodology.

In 1976 an important study investigated the type of research that led to the ten most important advances in the treatment of cardiovascular and pulmonary diseases (Comroe and Dripps). The investigators used a broad definition of clinically oriented research; studies involving animals, tissues, or cells (including cell fragments) were included in the definition if the author mentioned a possible clinical application even briefly. In spite of this expansive definition, some 41 percent of key articles involved in the development of these ten clinically relevant advances were not clinically oriented; that is, they reported on basic science research. This finding suggests that supporting only targeted or mission-oriented research is likely to have adverse effects on clinical research advances.

Another intensive investigation, conducted in 1985 by the National Research Council's Committee on Models for Biomedical Research, examined the nature of research methodology in the biomedical sciences and underscored the intimate and reciprocal relationship between research generally characterized as clinical and research generally characterized as basic. This report introduced the general notion of a biomatrix, which was defined as a "complex body, or matrix, of interrelated biological knowledge built from studies of many kinds of organisms, biological preparations, and biological processes at various levels" (National Research Council, p. 2). Within such a multidimensional matrix, biomedical research involves many-many modeling in which analogous features at various levels of aggregation (e.g., molecules, cells, and organs) are related to each other across various species. The committee suggested that an "investigator considers some problem of interest—a disease process, some normal physiological function, or any other aspect of biology or medicine. The problem is analyzed into its component parts, and for each part and at each level, the matrix of biological knowledge is searched for analogous phenomena.… Although it is possible to view the processes involved in interpreting data in the language of [simple] one-to-one modeling, the investigator is actually modelling back and forth onto the matrix of biological knowledge" (National Research Council, p. 67). The study conducted by Julius Comroe and Robert Dripps, as well as the council's report, thus indicate that clinically relevant advances emerge from research sources beyond those involving human subjects.

Before innovations can be tested on humans, ethical codes and governmental regulation require research involving chemical, cell-fragment, cell, tissue, and intact-animal-model systems. The Nuremberg Code (1947–1948), for example, recommends that human experimentation should be based on the results of animal experimentation. The Declaration of Helsinki (1964, most recently revised in 2000) requires that "medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and on adequate laboratory and, where appropriate, animal experimentation" (World Medical Association). These requirements are based on the belief that such inquiries will assist in identifying interventions that are both safer and more effective by the time they are finally applied to human subjects. In the biomedical sciences, including studies involving human subjects, biological diversity and the number of systems that strongly interact in living organisms create considerable complexity. Researchers must often pay special attention to ensuring the (near) identity of the organisms under investigation, except for those differences that are the focus of the scientist's inquiry.

Biomedical investigations involving virtually identical laboratory organisms can yield precise and often nonstatistical results that can then be utilized in more variable human populations. As is discussed in the section below on various study designs, human variability of both genetic and environmental sources will typically require the extensive use of statistical methodologies to uncover generalizable knowledge that is clinically applicable. In more rigidly controllable laboratory experiments—for example, in the rapidly advancing area of molecular genetics—biomedical scientists can often employ the classical methods of experimental inquiry, referred to earlier as Mill's Methods. These methods can be thought of as attempting to discover the causal structure of the world, and in their application scientists endeavor to identify and compensate for possible confounding factors that, if ignored, can lead to mistaken inferences about causes and effects. Thus all natural scientists attempt to compensate for interfering and extraneous factors, frequently by setting up a control comparison or a control group. Such controls are a direct implementation of what Mill termed the method of difference and Claude Bernard (1813–1878), the notable nineteenth-century French scientist and methodologist, the method of comparative experimentation.

The method of difference may be stated in a form similar to that in which Mill presented it. Suppose that in Case 1 some phenomenon occurs, and in Case 2, which is identical with Case 1 except for one factor, the phenomenon does not occur. Then the single factor by which the two cases differ is the effect, or the cause, or an indispensable part of the cause, of that phenomenon. (See Mill, p. 256, for his original language.)

Claude Bernard judged that this focus on only one difference was far too stringent and reformulated the experimental idea as his method of comparative experimentation:

Physiological phenomena are so complex that we could never experiment at all rigorously on living animals if we necessarily had to define all the other changes we might cause in the organism on which we were operating. But fortunately it is enough for us completely to isolate the one phenomenon on which our studies are brought to bear, separating it by means of comparative experimentation from all surrounding complications. Comparative experimentation reaches this goal by adding to a similar organism, used for comparison, all our experimental changes save one, the very one which we intend to disengage. (pp. 127–128)

Bernard referred to comparative experimentation as "the true foundation of experimental medicine."

General Ethical Issues Associated with Research on Human Subjects

The principal ethical controversies in biomedical (including behavioral and social) research have emerged from studies involving human subjects. Before discussing the general ethical requirements of studies involving human subjects, however, it is important to describe briefly the often contentious debate about the terms used to distinguish between different kinds of standard medical practice and research, among them therapeutic research, nontherapeutic research, innovative treatments, and experimentation.

TERMINOLOGICAL CONSIDERATIONS. It is a fundamental tenet of medical ethics that the well-being of human subjects should be protected. This tenet, together with another general ethical principle frequently associated with the name of philosopher Immanuel Kant (1724–1804), to treat oneself or another human being always as an end and never merely or only as a means, requires that a human research subject be expected to obtain some direct benefit from the investigation, or, if not, to waive such benefit on the basis of a free and informed consent. (This Kantian injunction is sometimes characterized as a principle of respect for persons.) The need to clarify the therapeutic/nontherapeutic distinction in the light of such principles should be evident.

Thoughtful scholars have generally agreed about the difficulty of drawing a clear distinction between research and accepted practice, but have differed about the usefulness of various terms proposed to assist with this task. Some find the distinction between therapeutic and nontherapeutic experimentation crucial, whereas others find it is better phrased as one between beneficial and nonbeneficial experimentation. Tom Beauchamp and James Childress urge caution with the use of the closely related term therapeutic research since "attaching the favorable term therapeutic to research can be dangerous, because it suggests justified intervention in the care of particular patients and may create a misconception" (p. 320). Robert Levine, an authority on research involving human subjects, contends that the expressions therapeutic research, nontherapeutic research, and experimentation (in human subject contexts) are "unacceptable" and "illogical" (p. 8). The problem arises in part because it is fairly common for a diagnostic and therapeutic plan to involve some variation from the textbook norm, and because only in rare cases does biomedical research confer absolutely no benefit on its subjects.

Levine suggests that we employ the term nonvalidated practices as a more encompassing term for innovative therapies, acknowledging that it is the uncertainty associated with variation in the outcomes of diagnostic and therapeutic maneuvers that is the principal issue. This suggestion seems to have been accepted in much of the recent literature, though frequently the narrower term nonvalidated therapy is also employed. Though no definitive algorithm can be provided that will unambiguously differentiate the various inquiries and activities discussed in the preceding paragraph, the general proposal that appears to emerge from the discussion involves three elements. First, the intent of the investigator is critical in determining whether the intervention (or the withholding of an intervention) is to be characterized as primarily beneficial to the subjects or as contributing to generalizable knowledge. A surgeon employing a novel suturing technique in an attempt to save a patient from bleeding to death does not evidence any intent of beginning a research project to evaluate a new operative technique. Second, the degree of variation from standard practice figures in this determination, and this may depend as well on the degree of possible harm that the intervention entails. Even small variations associated with significant harm are more likely to be seen as nonvalidated, in contrast to small variations with minor adverse consequences. For example, a physician may believe that he or she must try a powerful immunosuppressive drug, usually used only in the case of potential organ-transplant rejection, to help a patient suffering with severe rheumatoid arthritis. The dangers associated with such drugs and the departure from their normal use argue that this would be a nonvalidated practice. Finally, there is the element of uncertainty, the degree of likelihood of a particular outcome or set of outcomes. These include both anticipated and unintended effects (side effects). Again, the example just cited of the immunosuppressive drug would be relevant here because of the difficulty of anticipating the effects of powerful drugs on systems as complex as the immune system.

For interventions from which the researcher intends to produce new general knowledge, that represent significant departures from accepted practice, and about which there is reasonable uncertainty regarding consequences, including intended outcomes, it would seem mandatory that the researchers develop a formal research protocol to be assessed by an appropriate institutional review board (IRB). Such a multidimensional sliding scale, possibly with thresholds that could be specified in particular areas of clinical investigation, may be the best possible mechanism for determining whether to require IRB review in this complex area.

ETHICAL REQUIREMENTS FOR RESEARCH ON HUMAN SUBJECTS. As noted in the preceding section, general principles requiring free and informed consent and a net balance of benefits over harms for the individual subject (unless this is waived by the subject in the interests of greater social benefits) will be assumed in all research contexts, and the present section will examine additional details regarding these requirements. Furthermore, in order both to safeguard research subjects and to ensure that the resources used will generate valuable knowledge, a research study must conform to scientifically validated principles of design. To begin with, a prospective research project must be evaluated in terms of the risks of harm—physical, psychological, and social—to the subject(s), as well as in terms of the benefits that are likely to accrue to participants. Only studies in which the expected benefits outweigh the expected harms are morally permissible. Further, there must be no alternative and less risky means for the subject to obtain the anticipated benefits. Subjects must be selected equitably, with special sensitivity to the problems faced by vulnerable populations, such as children, prisoners, pregnant women, mentally disabled persons, or educationally disadvantaged persons. In recent years the practice of community consultation has developed, which involves meetings with representatives of the at-large subject community (e.g., HIV-infected individuals) to "assure a suitable balancing of the relevant values [such as respect for persons, individual beneficence and justice] in the design and conduct of a clinical trial" (Levine et al., p. 10).

An investigator must also obtain the legally effective informed consent of the subject or of the subject's legally authorized representative. Such consent must be voluntary and not obtained by coercive measures. The consent must be informed; this means that the investigator must specify the purposes of the research and how long the subject is expected to participate and provide a nontechnical description (in terms readily understandable to the subject) of any procedures to be followed, as well as a designation of procedures considered untested or experimental. The subject must also be provided with a description of any reasonably foreseeable risks or discomforts as well as reasonably anticipated benefits. Alternative procedures or courses of treatment that may be advantageous to the participant must be disclosed. Subjects are also to be provided with a statement about the extent of confidentiality of their records and, for research involving more than minimal risk, an explanation of what, if any, compensation or treatments will be available in the event of injury. According to the CR, subjects must be informed about whom to contact for answers about any questions or injuries that may arise in the course of or as a consequence of the research. They are to be told that their participation is voluntary and that they may refuse to participate or may withdraw from participation without any penalty or loss of benefits to which they would normally be entitled. Should the investigator come to believe in the course of the research that harm to the patient has become likely, the patient should be so informed and withdrawn from the project. The above requirements underscore the point that informed consent should not be conceived of only as a one-time event, but is best construed as an ongoing process involving clinical investigators and trial participants.

In certain types of behavioral and social-science research, investigators have maintained that scientifically valid conclusions can be obtained only if the subjects are kept uninformed or even deliberately deceived about the nature of the research. In a well-known example of this type of research, Stanley Milgram's studies on obedience to authority, subjects were falsely told they were causing pain to another human as part of a learning experiment. A majority of subjects proceeded to escalate the level of fictitiously inflicted pain to agonizing levels on the instructions of the investigator. Subsequently, when the subjects were informed about this feature of themselves as part of the debriefing, they experienced severe and, in some cases, prolonged anxiety reactions (Milgram, 1963). Milgram defended his study against criticism and reported that most of the subjects had a positive view of their participation (Milgram, 1964).

The ethics of such studies continue to be controversial. Levine notes that he himself chairs an IRB that occasionally approves deceptive studies but generally disapproves of deception (Levine). Various guidelines regarding deceptive research methods have been published, such as those by the American Psychological Association, which can be viewed on their website. In response to many unethical research practices, ranging from Nazi atrocities before and during World War II to well-documented cases in the United States, the U.S. government has mandated a set of formal procedures to ensure compliance with ethical requisites. Institutions involved in research on human subjects are required to have their investigations reviewed and approved by IRBs whose composition, procedures, and record-keeping requirements are well-defined in law and governmental regulations. It should be noted, however, that the determination by a duly constituted IRB of the satisfaction of these ethical requirements does not in all cases resolve all ethical and practical stresses generated by research on human subjects. A number of authors have discerned a deeply rooted dilemma that the physician as healer and the physician as researcher confront in a search for generalizable knowledge employing human subjects. This dilemma has its source partly in the respect-for-persons principle cited above and partly in the ethical principle that the physician should do what is best for his or her patient. The dilemma is also most clearly evident in the context of the RCT but can also arise in less stringent research designs, which it will be necessary to discuss before turning to an account of this troublesome research predicament.

Study Designs

THE SPECTRUM OF STUDY DESIGNS IN BIOMEDICAL AND BEHAVIORAL RESEARCH. Diverse research designs guide research in the biomedical, behavioral, and clinical sciences. Since this topic can easily become quite technical and mathematically abstruse, this entry presents only a general introduction to this subject. (For specialized information, including indications of when, and why, one design is preferable to another, see works on clinical epidemiology and monographs devoted to specific research designs, e.g., Feinstein; Fletcher et al.; Hulley et al.; Lilienfeld and Lilienfeld; and Sackett et al.)

The chart depicted in Figure 1 can be used as a guide to the various research designs found in clinical research. (This figure is based in part on Lilienfeld and Lilienfeld, p. 192, and in part on Fletcher et al., p. 193.) To these designs should also be added the case report and the case series, in which the situation of a biomedically interesting individual (or of a small group of similar individuals) is described. Some writers characterize the case report or case series as another design; others view such a small series as one that can be conducted using any of the designs described in the chart below. (The use of small numbers of subjects in any trial design, however, raises concerns that errors of interpretation are likely because of chance events. Problems generated by chance events in biomedical research are analyzed using the tools of mathematical statistics.)

The interval of data collection refers to the period of time during which data are collected. If one or more populations are studied over a period of time, the study is described as a longitudinal one. Alternatively, we may wish to collect information within one time slice, yielding a cross-sectional study. Moving to the next line, the investigator may collect data by looking back in time—for example, inquiring (or reviewing chart records) to learn whether the population was exposed to a specific agent. At least one control group is assembled to provide a comparison, again retrospectively. This case-control design is the type of approach that Arthur Herbst and his colleagues employed in their pioneering inquiry into the causes of vaginal cancer in daughters of mothers who had been given diethylstilbestrol (DES), a synthetic estrogen believed to help prevent miscarriages, during their pregnancies. The case-control type of study is generally thought to be open to a number of potential errors, termed biases. Potentially confounding elements therefore need to be monitored carefully.

If the putative active difference between the comparison groups, such as the administration of a new drug, is intentionally introduced by the investigators, a study is characterized as experimental. If the suspected active difference occurs by accident or is chosen by the subjects—for example, a subject's decision to begin cigarette smoking or to
reduce blood cholesterol by diet—the investigation is termed a cohort study. A longitudinal prospective experimental study is a clinical trial, but such trials may or may not involve a comparison control group. Good examples of uncontrolled types of clinical trials are Phase I and Phase II investigations of new drugs, though occasionally a Phase II investigation may involve randomized controls (see Byar et al.). Phase I studies look at the metabolism and toxicity of new drugs, often in normal subjects, and Phase II inquiries test for preliminary efficacy of a drug or a procedure. The terms Phase I and Phase II were introduced in 1977 by the U.S. Food and Drug Administration (FDA). (For details of the procedures by which toxicity and efficacy of interventions are evaluated, see Gilman et al., chapter 68.)

A Phase III investigation is almost always an RCT. Randomization refers to the process of assigning a patient to one rather than another treatment (or to the control group) by the flip of a coin or a more mathematically sophisticated but analogous procedure of using a table of random numbers. The RCT refers to that form of investigation that involves (1) one or more treatment groups and a control group that will typically receive a placebo (an inert substance) or the standard therapy (i.e., the traditionally accepted therapy); (2) randomized assignment of patients to the two or more groups (possibly after stratification or subgrouping based on known factors that will make a difference), sometimes referred to as arms of the trial; and (3) often a single- or double-blind design in which the assignments of the agents or procedures being tested are not known to the patients (single-blind) or possibly also to the treating health professionals (double-blind). (In place of the word blind, some accounts use the word masked.) In one unusual exception to that rule, the trial of the anti-HIV drug didanosine (ddI), the whole experimental cohort received the drug; these subjects were compared with historical, or retrospectively identified, control subjects (Waldholz; FDA).
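The mechanics of randomized assignment just described are simple enough to sketch in code. The following Python fragment is illustrative only (the function names are hypothetical, not drawn from any trial software); it shows simple randomization and a stratified variant in which subjects are first subgrouped on a known prognostic factor before being allocated to the arms of the trial.

```python
import random

def randomize(subjects, arms=("treatment", "control"), seed=None):
    """Assign each subject to a trial arm by simple randomization."""
    rng = random.Random(seed)
    return {s: rng.choice(arms) for s in subjects}

def stratified_randomize(subjects, stratum_of, arms=("treatment", "control"), seed=None):
    """Randomize separately within each stratum (e.g., disease stage) so that
    known prognostic factors are balanced across the arms."""
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for s in subjects:
        strata.setdefault(stratum_of(s), []).append(s)
    for members in strata.values():
        rng.shuffle(members)
        # Alternate arms down the shuffled list: a simple block allocation
        # that guarantees near-equal arm sizes within each stratum.
        for i, s in enumerate(members):
            assignment[s] = arms[i % len(arms)]
    return assignment
```

In practice, trial statisticians use validated allocation software and concealment procedures; the sketch only conveys why stratification guarantees balance on the factors used to form the strata, while leaving unknown factors to chance.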

Considerable debate has occurred about the methodological value and the ethical significance of randomization in controlled clinical trials. Various types of studies described above differ in their strength, that is, their ability to detect what is actually causing the changes that are being observed. The case series is traditionally the weakest of the research designs; other designs, in order of increasing strength, are the case-control study, the cohort study, and the RCT. The principal reason for the increase in design strength is the decrease in the likelihood of bias, or lack of comparability of the matched populations, as one moves from case series through to the RCT.

There are many types of bias, and some of them are quite subtle (Sackett). A major source of bias is selection or susceptibility bias, in which the groups compared have distinctly different outcome probabilities (more specifically, different prognostic likelihoods for the study's endpoint). This type of bias can occur within the study, or it can arise as part of the selection process and affect the generalizability of a study's results. In this type of situation, unrepresentative individuals are selected, and subgroups drawn from the unrepresentative class are then assigned to the arms of the study. An example of this type of bias would occur if only the sickest patients in a study were given the new drug and the better-off patients were assigned standard therapy (or a placebo). Another source of noncomparability is performance bias, in which the interventions in the trial are not reasonably equal. An example would be if the patients receiving the new drug were monitored much more closely and treated for concurrent health problems with no such monitoring and treatment being provided to the control group. A third type of bias is confounding bias, in which another, unsuspected causal variable travels along with the putative causal variable and actually accounts for the outcome. This could occur in a study to determine the effects of alcohol consumption on lung function, if alcohol drinkers were also much more likely to be smokers and the effect of smoking was not considered by the investigators. Other significant types of bias are detection or measurement bias, where the outcome event is detected differently in the comparison groups—for example, if the test group received MRIs and the control group standard X rays—and transfer bias, in which subject dropouts or reassignments may yield differences in outcome.
The arguments for randomization in clinical investigations typically cite the ability of randomized assignment to decrease the likelihood of bias because, many maintain, randomizing will average together, and thus cancel out, factors that are not suspected by the investigators to affect the outcome.
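The claim that randomization averages out unsuspected factors can be illustrated with a small simulation (a toy model under stated assumptions, not data from any actual trial). Here a drug with no true effect appears strongly harmful when sicker patients preferentially receive it, an instance of the selection bias described above, but not when assignment is randomized.

```python
import random

def simulate(n, assign_by_randomization, seed=0):
    """Return the treated-minus-control difference in mean outcomes for a
    drug with zero true effect, where an unmeasured prognostic factor
    (baseline severity) drives both the outcome and, absent randomization,
    the choice of treatment."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        severity = rng.random()          # unmeasured confounder
        if assign_by_randomization:
            gets_drug = rng.random() < 0.5
        else:
            gets_drug = severity > 0.5   # sicker patients get the new drug
        outcome = 1.0 - severity         # the drug itself changes nothing
        (treated if gets_drug else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)
```

Without randomization the simulated difference is large and negative, wrongly suggesting harm; with randomization it hovers near zero, since severity is, on average, equally distributed across the arms.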

RCTs can generate potential conflicts of interest between the physician's roles as healer and as investigator, including questions about the suitability of placebo controls, a conflict that some have sought to resolve through the concept of clinical equipoise.

META-ANALYSIS. Human variability, based on both genetics and environment, requires the extensive use of statistical methodologies to uncover generalizable, clinically applicable knowledge. This is in contrast to laboratory investigations, in which virtually identical organisms yield cleaner and often deterministic results. Besides the variability of the subjects studied, the many sources of bias described above can also lead to incorrect research conclusions.

Under these circumstances, researchers have turned increasingly to a method of clinical trial pooling and interpretation that seems to provide a better means of inferring correct conclusions from repeated clinical investigations. This methodology, known as meta-analysis, uses a set of formal statistical techniques to aggregate a group of separate but similar studies. In contrast to the widely employed scientific practice of summing up such studies qualitatively in a review article, meta-analysis purports to fulfill this summarizing function quantitatively and thus more precisely and objectively. Meta-analysis has been practiced for many years in a variety of scientific disciplines, from physics to the biomedical and the behavioral sciences, but only since the early 1980s has it had a major impact in the clinical arena, particularly in the areas of cardiovascular disease and obstetrics and gynecology (Chalmers et al., 1989; Mann).
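The core arithmetic behind many meta-analyses can be sketched briefly. The following is a minimal illustration of fixed-effect (inverse-variance) pooling, one common aggregation technique; the three study results are invented for the example, and real meta-analyses add heterogeneity tests, random-effects models, and quality weighting on top of this.

```python
import math

# Hypothetical studies: each reports a treatment-effect estimate
# (e.g. a log odds ratio) and its standard error.
studies = [
    {"effect": -0.40, "se": 0.25},
    {"effect": -0.10, "se": 0.15},
    {"effect": -0.30, "se": 0.30},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted
# by 1 / SE^2, so more precise studies count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled standard error is smaller than any single study's, which is precisely the appeal of the method: aggregation can detect effects that individual underpowered trials miss.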

Simple introductions as well as accessible authoritative accounts of the methodology are available. (See Mann for an introduction, and Friedman et al., pp. 310–316, for a more comprehensive overview.) The technique remains controversial even as its use in biomedicine grows rapidly.

EVIDENCE-BASED MEDICINE. Many of the issues reviewed above coalesce in what is termed evidence-based medicine (EBM), which is both a critical methodological approach and a kind of social movement. EBM had its origins in the discipline of clinical epidemiology in the 1980s and developed rapidly in Canada, the United Kingdom, and then the United States and other countries in the 1990s. Initially EBM saw itself as representing a kind of Kuhnian paradigm shift, urging the replacement of the received view of medical evidence—seen as a combination of clinical expertise and basic science—with evidence based mainly on rigorously evaluated empirical clinical trials (Haynes). More recently, EBM advocates have taken a more nuanced position on this replacement view, though the distinction is still evident in EBM's databases (Haynes). EBM provides evaluations and clinician guidance through its literature, various websites, and electronically available systematic reviews, including those of the Cochrane Collaboration. EBM assigns grades of recommendation from A (excellent) to D (poor) based on studies' empirical strengths, following a detailed assessment protocol built on five levels of study types, several of which have sublevels. The levels range from the best (a systematic review with homogeneous RCTs as the main element) to the worst (expert opinion without explicit critical appraisal, or based essentially on physiology, whether bench research or general principles). The specifics of these grades, the levels on which they are based, and the definitions of the concepts involved (such as homogeneity) can be obtained at <>. EBM has not gone uncriticized, from both within and outside the movement.
One of its founders, Brian Haynes, laments that EBM itself has not been, and probably ethically cannot be, subjected to its own highest standard of evaluation: a series of homogeneous RCTs in which EBM is employed as an intervention in the test groups but withheld from the control groups of patients (Haynes).


This entry has reviewed a number of conceptual issues associated with current research methodology in the biomedical sciences. It has touched on research in the basic sciences, such as biochemistry and microbiology, but has concentrated on the clinical sciences, such as medicine, oncology, and virology, since it is in the latter that ethical issues affecting human subjects arise. Scientific research on humans takes place within a complex web of ethical and legal requirements, and the interplay between the methodological and the ethical and legal components of research has been examined. Ethical and regulatory principles (primarily as they affect U.S. research) have been presented, and several conceptual issues regarding scientific inquiry have been outlined, including different types of research designs. This entry is only an introduction to these issues, which become very technical in their details; references for further reading have been provided.

Although scientific methodology has a venerable history, many current issues are of much more recent vintage. The RCT is essentially a post-World War II invention, and the widespread clinical use of meta-analysis is a development of the 1980s and 1990s. New issues will continue to arise as better methodologies and improved safeguards for human subjects are sought, and the reader is urged to consult online bibliographic services, such as the bioethics database at the U.S. National Library of Medicine, in addition to the references provided in this entry, to keep up to date with a continuously evolving subject.

kenneth f. schaffner (1995)

revised by author

SEE ALSO: Aging and the Aged: Healthcare and Research Issues; AIDS: Healthcare and Research Issues; Autoexperimentation; Autonomy; Children: Healthcare and Research Issues; Commercialism in Scientific Research; Embryo and Fetus: Embryo Research; Empirical Methods in Bioethics; Genetics and Human Behavior: Scientific and Research Issues; Holocaust; Infants: Public Policy and Legal Issues; Informed Consent: Consent Issues in Human Research; Mentally Ill and Mentally Disabled Persons: Research Issues; Military Personnel as Research Subjects; Minorities as Research Subjects; Pediatrics, Overview of Ethical Issues in; Prisoners as Research Subjects; Race and Racism; Research, Human, Historical Aspects; Research, Multinational; Research Policy; Research, Unethical; Responsibility; Scientific Publishing; Sexism; Students as Research Subjects; Virtue and Character; and other Research Methodology subentries


Beauchamp, Tom L., and Childress, James F. 2001. Principles of Biomedical Ethics, 5th edition. New York: Oxford University Press.

Bernard, Claude. 1865 (reprint 1957). An Introduction to the Study of Experimental Medicine, tr. Henry C. Green. New York: Dover.

Byar, David P.; Schoenfeld, David A.; Green, Sylvan B.; et al. 1990. "Design Considerations for AIDS Trials." New England Journal of Medicine 323(19): 1343–1348.

Chalmers, Iain; Enkin, Murray; Keirse, Marc; and Enkin, Eleanor, eds. 1989. Effective Care in Pregnancy and Childbirth. New York: Oxford University Press.

Chalmers, Thomas C.; Block, Jerome; and Lee, Stephanie. 1972. "Controlled Studies in Clinical Cancer Research." New England Journal of Medicine 287(2): 75–78.

Code of Federal Regulations. 1993. 45 CFR 46 (Protection of Human Subjects).

Comroe, Julius H., Jr., and Dripps, Robert D. 1976. "Scientific Basis for the Support of Biomedical Science." Science 192(4235): 105–111.

Dorland, W. A. Newman. 1994. Dorland's Illustrated Medical Dictionary, 28th edition. Philadelphia: W. B. Saunders.

Federal Register. 1991. Federal Policy for the Protection of Human Subjects. 56(117): 28013–28028.

Feinstein, Alvin R. 1986. Clinical Epidemiology: The Architecture of Clinical Research. Philadelphia: W. B. Saunders.

Fletcher, Robert H.; Fletcher, Suzanne W.; and Wagner, Edward H. 1982. Clinical Epidemiology: The Essentials. Philadelphia: Williams & Wilkins.

Freireich, Emil, and Gehan, Edmund. 1979. "The Limitations of the Randomized Clinical Trial." In Methods of Cancer Research: vol. 17 Cancer Drug Development—Part B, eds. Vincent T. De Vita and Harris Busch. New York: Academic Press.

Friedman, Lawrence M.; Furberg, Curt D.; and DeMets, David L. 1998. Fundamentals of Clinical Trials, 3rd edition. New York: Springer.

Goodman, Louis S., and Gilman, Alfred G., eds. 1980. Goodman and Gilman's The Pharmacological Basis of Therapeutics, 6th edition. New York: Macmillan.

Haynes, R. Brian. 2002. "What Kind of Evidence Is It That Evidence-Based Medicine Advocates Want Health Care Providers and Consumers to Pay Attention To?" BMC Health Services Research 2(1): 3.

Herbst, Arthur L.; Uhlfelder, Howard; and Poskanzer, David C. 1971. "Adenocarcinoma of the Vagina: Association of Maternal Stilbestrol Therapy with Tumor Appearance in Young Women." New England Journal of Medicine 284(16): 878–881.

Hulley, Stephen B.; Cummings, Steven R.; and Browner, Warren S., eds. 1988. Designing Clinical Research: An Epidemiologic Approach. Baltimore: Williams & Wilkins.

Levine, Carol; Dubler, Nancy N.; and Levine, Robert J. 1991. "Building a New Consensus: Ethical Principles and Policies for Clinical Research on HIV/AIDS." IRB 13(1–2): 1–17.

Levine, Robert J. 1986. Ethics and Regulation of Clinical Research, 2nd edition. New Haven, CT: Yale University Press.

Lilienfeld, Abraham M., and Lilienfeld, David E. 1980. Foundations of Epidemiology. New York: Oxford University Press.

Mann, Charles. 1990. "Meta-Analysis in the Breech." Science 249: 476–480.

Milgram, Stanley. 1963. "Behavioral Study of Obedience." Journal of Abnormal Psychology 67(4): 371–378.

Milgram, Stanley. 1964. "Issues in the Study of Obedience: A Reply to Baumrind." American Psychologist 19(11): 848–852.

Merriam-Webster's Collegiate Dictionary, 10th edition. 2002. Springfield, MA: Merriam-Webster.

Mill, John Stuart. 1843 (reprint 1959). A System of Logic. London: Longmans, Green and Co.

National Research Council's Committee on Models for Biomedical Research. 1985. A New Perspective. Washington, D.C.: National Academy Press.

Sackett, David L. 1979. "Bias in Analytic Research." Journal of Chronic Diseases 32(1–2): 51–63.

Sackett, David L.; Haynes, R. Brian; Straus, Sharon E.; et al. 2000. Evidence Based Medicine. Orlando, FL: Harcourt Health Sciences.

U.S. Food and Drug Administration. 1977. General Considerations for the Clinical Evaluation of Drugs, DHEW Publication No. (FDA) 77–3040. Washington, D.C.: U.S. Government Printing Office.

U.S. Food and Drug Administration. 1991. "Summary Minutes of Antiviral Drugs Advisory Committee," July 18/19, Meeting #6, Bethesda Holiday Inn, Bethesda, MD. Available from FDA on request.

U.S. President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. 1983. IRB Guidebook. Washington, D.C.: U. S. Government Printing Office.

Waldholz, Michael. 1992. "Bristol-Myers Guides AIDS Drug Through a Marketing Minefield." Wall Street Journal, October 10, p. A1.


American Psychological Association. "Ethical Principles of Psychologists and Code of Conduct 2002." Available from <>.

U.S. National Library of Medicine. 2003. Available from <>.

World Medical Association. 2003. "Declaration of Helsinki." Available from <>.
