In the social sciences, a controlled experiment is any study in which at least one variable—a presumed cause—is manipulated, the units of observation (individuals, groups, or organizations) are randomly assigned to levels of the manipulated variable, and there is at least one control condition. Experiments can be contrasted with quasi-experiments, in which the putative cause is manipulated but there is either no randomization or no control condition, and with nonexperiments, in which there is no manipulation, randomization, or control condition. Although quasi- and nonexperimental studies are useful, and sometimes necessary, in social science research, the information they provide is of limited value when the goal of the research is to detect causal relations. Controlled experiments, by virtue of manipulation, randomization, and control conditions, are the social scientist’s best option for detecting causal relations.
The power of controlled experiments lies in their capacity for achieving control and isolation of the causal variable in a presumed cause–effect relation. When variability in the presumed cause is controlled by the experimenter through manipulation, the only plausible direction of influence between the variables is from cause to effect. The context within which the putative cause varies also must be controlled. Control in this sense involves attending to other variables that covary with the presumed cause or are present in the research context that might contribute to variability in the outcome. Isolation concerns the potential co-occurrence of a causal variable and other variables. To the extent that the presumed cause is confounded with other variables, inferences about its influence will be ambiguous or even incorrect. Control over variability in the presumed cause and isolation of it from other potential causes are necessary for unequivocal inferences of causal influence.
Although random assignment to levels of the manipulated variables also figures into control over the putative cause, its primary role in causal inference is isolation. If the units of observation are randomly assigned to conditions then, apart from the manipulation, the conditions are presumed to be probabilistically equivalent. Their equivalence is only probabilistic because there is some likelihood that, apart from the manipulation, the conditions differ. This probability is accounted for in the construction of statistical tests, which allow an apparent difference between levels of the putative cause to be attributed either to the manipulation or to nonequivalence. By convention, these tests ensure that an apparent difference attributable to nonequivalence is attributed to the manipulation no more than 5 percent of the time. Social scientists often report the precise probability, or p-value, of the statistical test comparing conditions in an experiment, with values less than .05 indicating statistical significance.
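The logic of attributing a difference either to the manipulation or to chance nonequivalence can be made concrete with a permutation test: if the manipulation had no effect, then randomly reshuffling units across the two conditions should produce differences as large as the observed one reasonably often. The sketch below (not part of the original entry; the outcome scores and function name are hypothetical) estimates a p-value this way in plain Python.

```python
import random
import statistics

def permutation_p_value(treated, control, n_permutations=10_000, seed=0):
    """Estimate a two-sided p-value by random reassignment.

    Counts how often a random split of the pooled scores into two
    groups of the original sizes yields a mean difference at least as
    large as the one actually observed. A small proportion suggests
    the observed difference is unlikely to reflect mere nonequivalence.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(treated) - statistics.mean(control))
    pooled = list(treated) + list(control)
    n = len(treated)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Hypothetical outcome scores for a treated and a control condition.
treated_scores = [5.1, 5.3, 5.2, 5.4, 5.0, 5.2]
control_scores = [3.0, 3.1, 2.9, 3.2, 3.0, 3.1]
p = permutation_p_value(treated_scores, control_scores)
```

With clearly separated scores like these, the estimated p-value falls well below the conventional .05 threshold; with identical groups it approaches 1.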
Although random assignment is essential for isolating a presumed cause from other causal influences, it does not rule out alternative explanations for differences on the presumed effect attributable to the manipulation itself. Strategically designed control conditions address this type of alternative explanation. The prototypical control condition is the placebo condition in experiments on the causal effects of medications. The goal of these experiments is to isolate the effects of active ingredients in the medication from other effects, such as those that accrue from simply receiving medication or attention from a physician. By comparing outcomes for individuals who take a medication with those for individuals who believe they are taking the medication but in fact receive none of the active ingredients, researchers can distinguish effects attributable to the active ingredients from the effects of receiving any medication or seeing a physician. Because a control condition of this sort keeps research participants “blind” to condition, it also rules out the possibility that participants’ assumptions about the effects of the manipulation explain differences in the outcome.
Controlled experiments often include more than one causal variable, with each having two or more levels. In the typical experiment with multiple causal variables, the variables are “fully crossed” to create a factorial design. In factorial designs, units of observation are randomly assigned to all combinations of levels of the causal variables, with the total number of conditions equal to the product of the number of levels of the causal variables. Such designs often are referred to in terms of the number of levels of each causal variable. The simplest factorial design is the 2 × 2 (“two-by-two”), which comprises two causal variables, each with two levels. Such designs allow for the detection of causal effects of the individual variables as well as interaction effects, in which the causal effect of one of the variables varies across levels of the other.
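The mechanics of a fully crossed design can be sketched in a few lines: the conditions are the Cartesian product of the factor levels, and random assignment distributes units across them. The example below is illustrative only (the factor names and function are hypothetical, not from the original entry) and uses a shuffled round-robin to keep cell sizes approximately equal.

```python
import random
from itertools import product

def assign_factorial(participants, *factors, seed=0):
    """Fully cross the factor levels and randomly assign participants.

    The number of conditions equals the product of the numbers of
    levels of the factors. Participants are shuffled, then dealt
    round-robin across conditions for near-equal cell sizes.
    """
    conditions = list(product(*factors))  # e.g., 2 x 2 -> 4 conditions
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)
    return {p: conditions[i % len(conditions)] for i, p in enumerate(order)}

# A hypothetical 2 x 2 design: medication (drug vs. placebo)
# crossed with message framing (gain vs. loss).
assignment = assign_factorial(
    list(range(8)), ["drug", "placebo"], ["gain", "loss"]
)
```

With eight participants and a 2 × 2 design, each of the four conditions receives two participants, and an analysis can then compare cell means to detect individual and interaction effects.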
Despite their appeal as a means of detecting causal relations, controlled experiments are not always appropriate in social science research. In some instances, particularly in the early stages of a program of research, social scientists are interested in simply capturing the natural co-occurrence of variables without concern for causality. Controlled experiments are unsuitable for such purposes. In other instances, social scientists are interested in causal relations but cannot, for practical or ethical reasons, design controlled experiments to study them. These limitations might preclude manipulation, random assignment, or the assignment of some individuals or groups to a control condition. For example, some characteristics of individuals, groups, and organizations either cannot be manipulated at all, or cannot be manipulated ethically in a manner that creates variability on the causal variable of interest. Furthermore, it is not always possible to randomly assign units of observation to condition, as, for instance, in the case of an intervention administered by teachers to students who are in a condition because they were assigned to that teacher’s class according to school or district policy. Finally, in the case of studies of treatments that have the potential to save lives that otherwise could not be saved, it is unethical to withhold the treatment from research participants, thereby precluding inclusion of a no-treatment or placebo control condition. Thus, although controlled experiments are ideal for detecting causal relations, they are not always appropriate or feasible. Because of this, social scientists typically view controlled experiments as only one of numerous approaches to studying associations between variables.
SEE ALSO Causality; Ethics in Experimentation; Experiments; Experiments, Human; Inference, Statistical; Random Samples; Social Science
Campbell, Donald T., and Julian C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin.
Haslam, S. Alexander, and Craig McGarty. 2000. Experimental Design and Causality in Social Psychological Research. In The Sage Handbook of Methods in Social Psychology, eds. Carol Sansone, Carolyn C. Morf, and A. T. Panter, 237–264. Thousand Oaks, CA: Sage Publications.
West, Stephen G., Jeremy C. Biesanz, and Steven C. Pitts. 2000. Causal Inference and Generalization in Field Settings: Experimental and Quasi-Experimental Designs. In Handbook of Research Methods in Social and Personality Psychology, eds. Harry T. Reis and Charles M. Judd, 40–84. Cambridge, U.K.: Cambridge University Press.
Rick H. Hoyle