This section is devoted primarily to detailed explanations of the ways in which behavioral psychologists and psychopharmacologists explore the interactions between drug actions and behavior in laboratory settings. The section begins with an overview article, Aims, Description, and Goals. The article Developing Medications to Treat Substance Abuse and Dependence ties basic research directly to clinical applications. The articles on Drugs as Discriminative Stimuli; Measuring Effects of Drugs on Behavior; Measuring Effects of Drugs on Mood; and Motivation describe these general research techniques and concepts and their applicability to understanding drug abuse.
Research in the field of drug dependence, however, is much broader and more diverse than the topics included in this section. In fact, research is conducted on most of the topics contained in this encyclopedia—from epidemiological studies to new methods for detecting drug smuggling; from herbicides that can target specific plant sources of illicit drugs to how to target prevention messages to subgroups within the population; from how certain drugs produce their toxic effects to developing new drugs to reduce drug craving or prevent relapse; from how the interactions of environment and genetics make certain individuals more vulnerable to drug use to the relative effectiveness of different treatment programs. Many of these research issues are touched upon in such diverse articles as those on controlling the illicit drug supply, on Treatment, on Prevention, and on Vulnerability as a Cause of Substance Abuse.
Clinical, behavioral, epidemiological, and basic research is carried out primarily by researchers at universities, government research centers, and research institutes. It is funded both publicly and privately. The work of a representative few of these centers is described elsewhere in the encyclopedia (see Addiction Research Foundation (Canada); Addiction Research Unit (U.K.); Center on Addiction and Substance Abuse (CASA); Rutgers Center of Alcohol Studies; U.S. Government/U.S. Government Agencies (SAMHSA, NIAAA, NIDA, CSAP, CSAT)). In 1992, worldwide, there were more than eighty research centers devoted to problems of drugs and alcohol. Fifty-eight of the centers were in the United States; thirteen were in Europe and the U.K.; the others were in Central and South America, Asia, Australia, and New Zealand.
For more information on research, see also Imaging Techniques: Visualizing the Living Brain; Pain: Behavioral Methods for Measuring the Analgesic Effects of Drugs; Research, Animal Models.
Aims, Description, and Goals
In a Chinese book on pharmacy, which dates to 2732 B.C., references are found to the properties of Marijuana (a product of Old World hemp, Cannabis sativa, of the hemp family). In an Egyptian papyrus from about 1550 B.C., there is a description of the effects of Opium (a product of the opium poppy, Papaver somniferum). In almost every culture, the uses of Alcohol are documented in both oral and written tradition, often going back into antiquity—the Bible, for example, mentions both the use and abuse of wine. Although people have made observations on Psychoactive substances for thousands of years, much remains to be learned about both alcohol and drugs of abuse; much research remains to be done before these substances and their effects can be fully understood.
WHAT WE NEED TO KNOW
Most substance-abuse research carried out today is a consequence of public health and social concerns. With millions of people using and abusing many different substances, and because of the close association between AIDS and drug abuse, it is imperative to know just how dangerous—or not dangerous—any given drug is to public health and safety. For economic as well as medical reasons, it is essential to find the most effective ways to use our health resources for preventing and treating substance abuse. So many questions still exist that no one scientific discipline can answer them all. The answers must be found through studies in basic chemistry, molecular biology, genetics, pharmacology, neuroscience, biomedicine, physiology, behavior, epidemiology, psychology, economics, social policy, and even international relations.
From a social standpoint, the first question for research must be: How extensive is the problem? Surveys and other indicators of drug and alcohol usage are the tools used by epidemiologists to determine the extent and nature of the problem, or to find out how many people are abusing exactly which drugs, how often, and where. As the dimensions of the problem are defined, basic scientists begin their work, trying to discover the causes and effects of substance abuse at every level, from the movement of molecules to the behavior of entire human cultures. Chemists determine the physical structure of abused substances, and then molecular biologists try to determine exactly how they interact with the subcellular structures of the human body. Geneticists try to determine what components, if any, of substance abuse are genetically linked. Pharmacologists determine how the body breaks down abused substances and sends them to different sites for storage or elimination. Neuroscientists examine the effects of drugs and alcohol on the cells and larger anatomical structures of the brain and other parts of the nervous system. Since these structures control our thoughts, emotions, learning, and perception, psychologists and behavioral pharmacologists study the drugs' effects on their functions. Cardiologists and liver and pulmonary specialists study the responses of heart, liver, and lungs to drugs and alcohol. Immunologists examine the consequences of substance abuse for the immune system, a study made critical by the AIDS epidemic. The conclusions reached through these basic scientific inquiries guide clinicians in developing effective treatment programs.
In considering drug abuse, people have long wondered why so many plants contain substances that have such profound effects on the human brain and mind. Surely people were not equipped by nature with special places on their nerve cells (called Receptors) for substances of abuse—on the off chance that they would eventually smoke marijuana or take Cocaine or Heroin. The discovery in the late 1960s that animals would work to obtain injections or drinks of the same drugs that people abuse was an important scientific observation; it contributed to the hypothesis that there must be a biological basis for substance abuse. These observations and this reasoning led scientists to look for substances produced by people's own bodies (endogenous substances) that behave chemically and physiologically like those people put into themselves from the outside (exogenous substances)—like alcohol, Nicotine, marijuana, cocaine, and other drugs of abuse. When receptors for endogenous substances were discovered—first for the Opiates in the 1970s and only recently for PCP, cocaine, marijuana, and LSD—their existence helped establish the biological basis for drug abuse. So did the discovery of a genetic component for certain types of Alcoholism. These discoveries by no means negate the extensive behavioral and social components of substance abuse, but they do suggest a new weapon in dealing with the problem—that is, the possibility of using medication, or a biological therapy, as an adjunct to psychosocial therapies. Asserting a biological basis for substance abuse also removes some of the social stigma attached to drug and alcohol addiction. Since drug dependence is a disorder with strong biological components, society begins to understand that it is not merely the result of weak moral fiber.
Armed with information that was derived from basic research, clinical researchers in hospitals and clinics test and compare treatment modalities, looking for the best balance of pharmacological and psychosocial methods for reclaiming shattered lives. Finding the right approach for each type of patient is an important goal of treatment research, since patients frequently have a number of physical and mental problems besides substance abuse. The development of new medications to assist in the treatment process is an exciting and complex new frontier in substance-abuse research.
The best way to prevent the health and social problems that are associated with substance abuse has always been a significant research question. Insights gained from psychological and social research enable us to design effective prevention programs targeted toward specific populations that are particularly vulnerable to substance abuse for both biomedical and social reasons. Knowing the consequences of substance abuse often helps researchers to formulate prevention messages. For example, the identification of the Fetal Alcohol Syndrome (FAS), a pattern of birth defects among children of mothers who drank heavily during pregnancy, was a major research contribution to the prevention of alcohol abuse. Drug-abuse-prevention research has assumed a new urgency with the realization, brought about by epidemiologists and others, that the AIDS virus is blood-borne—spread by sexual contact and by drug abusers who share contaminated syringes and needles. HIV-positive drug users then spread the disease through unprotected sexual intercourse. Public education about drug abuse and AIDS must use the most powerful and carefully targeted means of reaching the populations at greatest risk for either disease, and these means can be determined only by the most careful social research and evaluation methodologies.
Substance-abuse research is no different from any other sort of scientific endeavor: The process is not always orderly. Critical observations by clinicians frequently provide basic researchers with important insights, which guide the research into new channels. Observations in one science often lead to breakthroughs in other areas.
The range of methods employed by scientists studying substance abuse is as wide as the range of methods in all the biological and social sciences. One important method is the use of animal models of behavior to answer many of the questions raised by drug and alcohol use. Animal models are used in biomedical research in virtually every field, but the discovery that animals will, for the most part, self-administer alcohol and the same drugs of abuse that humans do meant that there was great potential for behavioral research uncontaminated by many of the difficult-to-control social components of human research. The results of animal studies have been verified repeatedly in human research and in clinical observation, thus validating this animal model of human drug-seeking behavior.
Drug- and alcohol-abuse research is conducted by many different types of qualified professionals, but mostly by medical researchers (MDs) and people with advanced degrees (PhDs) in the previously mentioned sciences. They work with animals and with patients in university and federally funded laboratories, as well as in privately funded research facilities, in offices, and in clinical treatment centers. Other sites include hospitals, clinics, and sometimes schools, the streets, and even advertising agencies when prevention research is under way.
Who pays for substance-abuse research has always been an important issue. In the late 1980s and early 1990s, most of the drug- and alcohol-abuse research in the world was supported by the U.S. government. One of the federally funded National Institutes of Health—the National Institute on Drug Abuse (NIDA)—funds over 88 percent of drug-abuse research conducted in the United States and abroad. In 1992, this amounted to over 362 million dollars, which supported NIDA's own intramural research at the Addiction Research Center and the research done in universities under grants awarded by the institute. NIDA's sister institute, the National Institute on Alcohol Abuse and Alcoholism (NIAAA), plays a parallel role in funding alcohol-abuse research. In 1992, it funded 175 million dollars in alcohol-research grants. Many other U.S. government agencies also have important roles in sponsoring and conducting substance-abuse research. For the most part, state and local governments do not sponsor substance-abuse research, although they do much of the distribution of funds for treatment and prevention programs.
Other countries, most notably Canada, sponsor basic clinical and epidemiological substance-abuse research within their own universities and laboratories, but none does so on a scale that is comparable to that of the United States. Private foundations and research institutions like the Salk Institute for Biological Studies, Rockefeller University, and the Scripps Clinic and Research Foundation use their own funds, as well as federal grant support, to pay for their research endeavors. Pharmaceutical companies also support some substance-abuse research—mostly clinical work related to medications that might be used as part of treatment programs for drug and alcohol abuse. Again, much of this work is sponsored, in part, by the U.S. government.
(See also: National Household Survey; Substance Abuse and HIV/AIDS; Research, Animal Models; U.S. Government/U.S. Government Agencies)
Christine R. Hartel
In the process of developing new drugs, pharmaceutical companies must perform rigorous studies in the laboratory, in animals, and then, if the drug looks promising, in humans. Carefully designed research into the safety and effectiveness of a drug in humans is called Clinical research (or Clinical trials). Research on new surgical techniques, medical devices, and other medical treatments also falls under this heading.
To conduct research in humans, approval must be obtained from the Food and Drug Administration (FDA). The research sponsors (usually the pharmaceutical company) submit a detailed application termed an Investigational New Drug Application that summarizes the drug characteristics, manufacturing process, and results of any laboratory and animal studies. In addition, this application provides detailed information regarding proposed studies in humans, including the research protocol, data collection documents, and informed consent form. If the drug is proven to be safe and effective, the sponsors can submit a voluminous application called a New Drug Application to the Food and Drug Administration. This application contains the material in the Investigational New Drug Application as well as the data, analyses, and conclusions of all of the clinical trials conducted.
Clinical trials of drugs or medical devices progress through four phases. Phase I studies are conducted on healthy volunteers to assess the safety of the drug or device. Phase II studies are conducted on a relatively small group of patients with the target disease to assess effectiveness as well as safety. Phase III studies are conducted on a large group of patients with the target disease to confirm effectiveness, observe side effects, and to compare the test treatment to the standard treatment. Phase IV studies are performed for a variety of reasons after the drug or device has been on the market. Reasons for conducting phase IV studies include: to test the treatment in different populations (e.g., in children or the elderly), to assess the effects of long-term use of the treatment, or to use the treatment on a different target disease.
Study design, the methodology used to conduct a clinical research study, is a crucial determinant of the strength, validity, and subsequent usefulness of the results. Many different types of clinical research studies exist, and the strength of the data depends upon the conditions under which the trial is conducted. These conditions also help to eliminate bias on the part of the investigator, the patient, or others involved in collecting and analyzing the data. The most important of these conditions are blinding, randomization, and the use of controls. The randomized, controlled, double-blind study is considered the clinical research ideal.
Blinding refers to the process in which the patient does not know whether he or she is receiving the test treatment or a placebo treatment. In a single-blind design, the patient does not know which treatment he or she is receiving; the investigator knows, however, and this may lead to bias. Ideally, studies should be double-blind, a condition in which neither the patient nor any of the other people actively involved in the study knows which treatment the patient is receiving.
Randomization refers to the process in which patients are randomly assigned to the various treatments. This ensures that the test treatment and controls are allocated to patients by chance, not by the choice of the investigator, and it eliminates the possibility that an investigator could sway study results through selective assignment.
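The assignment procedure can be made concrete with a short sketch. The following Python function (an illustration only, not part of any cited study protocol; the group labels are placeholders) implements permuted-block randomization, a common scheme that keeps the treatment and control groups the same size while leaving each individual assignment unpredictable:

```python
import random

def block_randomize(n_patients, block_size=4, seed=42):
    """Permuted-block randomization: within every block of `block_size`
    consecutive enrollments, half the slots are treatment and half are
    control, shuffled so that no one can predict the next assignment."""
    rng = random.Random(seed)  # seeded here only so the sketch is reproducible
    assignments = []
    while len(assignments) < n_patients:
        block = (["treatment"] * (block_size // 2) +
                 ["control"] * (block_size // 2))
        rng.shuffle(block)  # random order within the block
        assignments.extend(block)
    return assignments[:n_patients]
```

Because each block of four contains exactly two treatment and two control slots, the sizes of the two groups can never drift far apart, which is why blocked schemes are often preferred to a simple coin flip per patient in small trials.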
Clinical research studies can be either controlled or uncontrolled. Controls can be either the standard treatment for the target disease (active-controlled) or a placebo (vehicle-controlled). Many diseases have a natural tendency to wax and wane, so study results can be misleading without a control group to serve as a comparator for the treatment group. Because controlled studies are a more reliable indicator of a treatment's effectiveness, uncontrolled studies are considered preliminary or suggestive, or they may be disregarded altogether.
Another important component of study design is the determination of sample size, the number of patients to include in the study. A sample size that is too small yields an underpowered study: even if the test treatment is effective, the results may not reach statistical significance. The sample size is based upon, among other things, the number of treatment and control groups in the study and an estimate of the expected differences between those groups.
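The relationship between expected group differences and required sample size can be illustrated with the standard normal-approximation formula for comparing two means (a textbook sketch, not a calculation taken from this article): the number of patients per group grows with the outcome's variability and shrinks as the difference one hopes to detect grows.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Patients needed in each of two groups to detect a true mean
    difference `delta`, given outcome standard deviation `sigma`,
    two-sided significance level `alpha`, and desired statistical
    `power`, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a half-standard-deviation difference at the conventional
# 5 percent significance level with 80 percent power:
print(sample_size_per_group(delta=0.5, sigma=1.0))  # → 63 patients per group
```

Note that halving the detectable difference quadruples the required sample size, which is why trials aimed at small treatment effects must enroll large numbers of patients.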
The study design is contained within the study protocol, a detailed document that outlines every aspect of the study. The protocol is essentially a set of rules that the investigator(s) must follow. It covers such things as who may be entered into the study, how to collect and record data, and how to record and report adverse reactions. Violation of any of the rules set forth in the protocol can disqualify an investigator, a patient, or even the entire study.
Although randomized, controlled, single- and double-blind studies are very common designs, other study designs may be used. The sponsor may initially conduct dose-finding studies in order to find the optimal dose of a test drug for the target disease. In the crossover design, patients receive both treatments being compared (or a treatment and a placebo), which factors out inter-individual variability: each patient receives one treatment for a designated time period, the disease state is evaluated, and the patient then switches to the other treatment for a designated period. Other, more complex study designs are also employed; with increasing complexity, however, come increasing difficulties in data analysis, interpretation, and validity.
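The period structure of a two-period crossover can be sketched in a few lines (again an illustrative scheme, not drawn from this article; the treatment labels are placeholders). Each patient is randomly assigned to receive the two treatments in one of the two possible orders, with the orders kept balanced across the group:

```python
import random

def crossover_schedule(patient_ids, seed=7):
    """Assign each patient to an AB or BA sequence for a two-period
    crossover: half receive treatment then placebo, half the reverse."""
    rng = random.Random(seed)
    n = len(patient_ids)
    sequences = ([("treatment", "placebo")] * (n // 2) +
                 [("placebo", "treatment")] * (n - n // 2))
    rng.shuffle(sequences)  # randomize which patient gets which order
    return {pid: {"period_1": first, "period_2": second}
            for pid, (first, second) in zip(patient_ids, sequences)}
```

Balancing the two orders guards against a period effect, such as a disease that waxes or wanes on its own, being mistaken for a treatment effect.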
Federal regulations ensure that the rights of subjects in a clinical trial are protected. Each clinical trial must be approved and monitored by a committee known as an Institutional Review Board, which has medical, scientific, and nonscientific members. Institutional Review Boards review and approve trial documents such as the protocol and informed consent form, as well as any advertising materials used to recruit subjects. The purpose of the Institutional Review Board is to protect the rights, safety, and well-being of the study subjects.
The Food and Drug Administration requires that all participants in a clinical trial be informed of the details of the study. This process is called informed consent. Informed consent usually involves a lengthy document (the informed consent form) that describes key facts about the study: the purpose and goals of the research, the procedures that will be performed, the possible risks, the possible benefits, and the other treatments available for the target disease. In addition, the informed consent form stresses that the subject can leave the study at any time. An important component of the informed consent process is that the subject has the opportunity to ask questions about the study and the consent form.
Clinical research plays an invaluable role in the ongoing process of finding effective and safe treatments for diseases. The information obtained by clinical trials provides physicians with what they need to make informed choices in the treatment of their patients. Clinical studies are key in identifying the optimal doses of a new drug and in providing information about the occurrence and incidence of adverse reactions. However, clinical research is limited by sample size. Even studies comprising thousands of subjects will fail to pick up extremely rare, possibly serious adverse reactions that become apparent only during widespread clinical use.
Developing Medications to Treat Substance Abuse and Dependence
Dependence on drugs, Alcohol, or Tobacco is difficult to treat, and practitioners have tried many approaches in their attempts to arrive at successful treatments. One approach is to develop medications, or pharmacological treatments. This approach is most effective when the medication is given along with behavioral treatments. These behavioral treatments help the individual cope with the underlying etiology of his or her drug use and the problems associated with drug use; they may also help ensure compliance in taking the medication that is prescribed.
PERPETUATION OF DRUG ABUSE: EUPHORIA AND WITHDRAWAL
Many people who are drug- or alcohol-dependent want to stop their habit, but often they have a difficult time doing so. There are at least two reasons for this difficulty. First, the drugs produce pleasant or euphoric feelings that the user wants to experience again and again. Second, unpleasant effects can occur when drug use is stopped. The latter effect, commonly known as Withdrawal, has been shown after prolonged use of many drugs, including alcohol, Opiates (such as Heroin), Sedative Hypnotics, and anxiety-reducing drugs. Other drugs, such as Cocaine and even Caffeine (Coffee and Cola drinks) and Nicotine (cigarettes), are also believed to be associated with withdrawal effects after prolonged use. These unpleasant withdrawal effects are alleviated by further drug use. Thus drugs are used and abused both because they produce immediate pleasant effects and because continued use relieves the discomfort of withdrawal.
The symptoms of withdrawal are fairly specific for each drug and include physiological effects and psychological effects. For example, alcohol withdrawal can be associated with shaking or headaches, and opiate withdrawal with anxiety, sweating, and increases in blood pressure, among other effects. Withdrawal from cocaine may cause depression or sadness, withdrawal from caffeine is associated with headaches, and withdrawal from nicotine often produces irritability. All drug withdrawals are also associated with a strong craving to use more drugs. Much work has been done to document the withdrawal effects of alcohol, opiates, Benzodiazepines, and tobacco; withdrawal from cocaine and other stimulant drugs, however, has begun to be examined only recently.
NEURAL CHANGES WITH CHRONIC DRUG USE
Both withdrawal and the pleasant or euphoric effects of drug use occur, in part, as a result of the drug's action on the brain. The immediate or acute effects of most drugs of abuse involve areas of the brain that have been associated with "reward" or pleasure. These drugs stimulate areas normally aroused by natural pleasures such as eating or sexual activity. Long-term, or chronic, drug use alters these and other brain areas. Some brain areas will develop Tolerance to the drug effects, so that greater and greater amounts are needed to achieve the original effects of the drug. Among the drug effects that develop tolerance are the Analgesic, or painkilling, effect of opiates and the euphoric or pleasure-producing effect of most drugs of abuse, an effect probably related to their abuse potential.
Some brain areas may instead become sensitized, so that with chronic use a smaller amount of the drug elicits the original effect, or the same amount produces a greater effect. This phenomenon has been studied most extensively with cocaine, which is associated with behavioral sensitization of motor activity in animals and with paranoia (extreme delusional fear) in humans. Some physiological effects develop tolerance or sensitization as well. For example, chronic use of cocaine sensitizes some brain areas so that seizures are more easily induced. Other health risks of drug use will be addressed below.
In addition to these more direct acute and chronic drug effects, another phenomenon occurs with long-term drug use. This phenomenon is the conditioned drug effect, in which the environmental or internal (mood states) cues commonly presented with drug use become conditioned or psychologically associated with drug use. For example, when angry, a drug addict may buy or use drugs in a certain place with certain people. After frequently taking drugs under similar conditions, the individual can experience a strong craving or even withdrawal when in the environment in which he or she has taken drugs or feels angry. When the individual tries to stop using drugs, exposure to these conditioned cues can often lead to relapse because the craving and withdrawal effects are so powerful. Very little research has been done on the neural bases of these conditioned effects; thus it is not known whether these effects are mediated by similar or different neural mechanisms.
RESEARCH ON DRUG EFFECTS
Many of these acute and chronic effects of drugs on the brain have been investigated in animal research, which allows greater control over the research, including manipulations of drug exposure. A number of animal models are used to assess drug preferences, and, since most drugs that humans abuse are also preferred by animals, these models are useful for understanding human drug abuse. Moreover, animal research allows scientists to study directly the various areas of the brain that are involved in drug use. In addition, recent technological advances in noninvasive Imaging have allowed scientists to look at pictures of the brains of humans while they are being administered drugs or while they are withdrawing from drugs. This human work has enhanced our knowledge of drug effects on the brain and has validated the information gained from animal research.
Another useful line of research in assessing the effects of drugs involves human laboratory studies. In one type of study, research volunteers who have had experience with abused drugs are given a specific drug (e.g., morphine), and various psychological and physiological measurements are obtained. The psychological measurements can include the subject's own reports of the drug's effects as well as more sophisticated behavioral measures that tell the experimenter how much the drug is preferred. Another type of human laboratory study examines the effects of drug withdrawal. For opiates, withdrawal can be precipitated by an opiate Antagonist drug (Naltrexone), and withdrawal signs and symptoms are then measured. For other drugs (such as cocaine), withdrawal is more difficult to measure because little is known about their withdrawal syndromes.
From such studies, scientists have identified specific brain areas as well as the Neurotransmitters (the chemicals released by brain cells) involved in drug use and withdrawal. When a specific neurotransmitter is identified as playing an important role in drug use or withdrawal, scientists can administer experimental drugs that act on that neurotransmitter to see whether animals will alter their drug preference or show less severe withdrawal signs. Researchers can also give these experimental drugs to human research volunteers to see whether the medication alters the subject's perception of, or behavior toward, the abused drug or alleviates withdrawal symptoms. If the results from these animal and human laboratory studies are promising, the agents can then be tested on treatment-seeking, drug-dependent individuals in clinical trials. This latter type of research is more time-consuming and expensive than the laboratory studies, but it helps provide an answer to the ultimate question: Does this medication help an individual stop abusing drugs?
APPROACHES TO DEVELOPING MEDICATIONS FOR DRUG ABUSE
Researchers can use the knowledge gained from animal and human studies of the effects of drugs on the brain as they develop medications for alcohol and drug dependence. Most likely, one medication will be needed to help detoxify the drug-dependent individual and a second medication to help sustain abstinence from drug use. This two-phase medication regimen is used for opiate and alcohol treatment, and it may ultimately be the approach used for countering dependency on other drugs, such as cocaine, sedatives, and nicotine. In theory, a pharmacological treatment agent or medication would block or reduce either the acute, rewarding effect of the drug or the discomfort of withdrawal. In practice, few treatment drugs have been found to be very effective in sustaining abstinence from drugs or alcohol.
Any pharmacological agent should be able to be given orally, as this is much easier than other routes of administration, such as injections. The agent itself must be medically safe and not enhance any of the health risks associated with illicit drug use, since the individual may illicitly use drugs while being maintained on the treatment agent. Finally, the pharmacological treatment agent must be acceptable to the patient. That is, if the agent causes undesirable side effects, individuals will likely not take it.
Current research on the effects of alcohol and drugs on the brain, and on treatment outcomes, holds great promise for yielding effective pharmacological agents. The search process necessarily includes the animal and human laboratory studies mentioned above as well as medicinal chemistry research, which is used to develop new compounds whose chemical structures are similar to, but slightly altered from, those of the abused drugs or of the neurotransmitters that mediate drug or alcohol effects. These new compounds are then tested in animals to see whether they produce therapeutic effects, which include having a low potential for abuse themselves and attenuating the effects of the abused drug under study, preferably in a way that would lead to decreased drug abuse.
EXAMPLES OF MEDICATIONS USED TO TREAT DRUG ABUSE
Several types of medications have been developed for countering various kinds of dependencies.
Some of the best examples of pharmacotherapies for drug abuse were developed for opiate addicts. One of the first pharmacological agents used to treat opiate addicts was Methadone. Methadone is itself an opiate drug, and it effectively reduces or blocks the withdrawal discomfort brought on by discontinuing use of heroin or other illegal opiates. Although methadone is itself addictive, it is delivered to opiate-dependent patients in a facility that provides psychological, medical, and other support treatments and services. Methadone is also safer than opiates obtained illegally, in part because it is given orally. Illegal opiates are often injected, and shared needles can transmit many diseases, including AIDS and hepatitis. Illegal drug use is expensive, and many addicts steal to support their habit. Moreover, since drugs obtained illegally vary in quality and purity, there is a greater chance of an overdose that produces severe medical problems and, perhaps, death. Thus methadone decreases the need to use illegal opiates, both because it relieves withdrawal and because it blocks the effects of other opiates through cross-tolerance. It thereby reduces the health risks and social problems associated with illegal opiate use.
Another treatment drug that was developed for opiate dependence and abuse is naltrexone. This agent blocks the ability of opiate drugs to act on the brain. Thus, if a heroin addict maintained on naltrexone injects heroin, he or she will not feel the pleasant or other effects of the heroin. The principle behind this approach is based on research suggesting that drug use continues, despite its dire consequences, because of the euphoria associated with it. Once maintained on naltrexone, the addict may forget this association, because the drug can no longer produce these effects. Unfortunately, although naltrexone works well for some, others simply discontinue the naltrexone in order to get high from drugs again.
Before opiate abusers can be maintained on the medication naltrexone, they must be detoxified from the opiate drugs in their systems. Although abstaining ("cold turkey") from heroin use for several days will accomplish detoxification, the withdrawal process is difficult because of the physical distress it causes. Thus, another Detoxification method was developed in which withdrawal is precipitated, or triggered, with naltrexone, while the symptoms are treated with another medication, Clonidine. When withdrawal is precipitated, the symptoms are worse than those seen with natural withdrawal, but their course is much briefer. Moreover, clonidine helps alleviate the symptoms, making this shorter-term withdrawal process less severe.
An example of another type of medication is one used to treat alcoholism: Disulfiram. The basis for this agent's therapeutic effect is different from that of methadone or naltrexone. When someone is maintained on disulfiram, future alcohol ingestion will cause stomach distress and, possibly, vomiting, because the disulfiram prevents the breakdown of a noxious alcohol metabolite by the liver. Patients maintained on disulfiram should come to forget the pleasant effects of alcohol use, which is similar to the psychological basis of naltrexone maintenance. Moreover, they should begin to develop an aversion to alcohol use. Another similarity to the use of naltrexone is that disulfiram treatment of alcoholism has not been very successful, because the patient who wants to use alcohol again can simply stop using the disulfiram.
Some pharmacological agents have been tested for their ability to reduce craving for alcohol and thus help the alcoholic abstain from drinking. These drugs include naltrexone, which was developed for opiate addicts, and fluoxetine. The former is a potential treatment drug because the effects of most drugs of abuse are believed to be mediated, in part, through the brain's natural opiate system (Endorphins, etc.). The latter medication, and others of this type, may be useful based on research that implicates a specific neurotransmitter system (Serotonin) in alcohol craving. However, as in the treatment of opiate abuse, alcoholics must be detoxified before any of these medications are used as maintenance agents.
One commonly used pharmacological treatment for tobacco dependence is Nicotine Gum (Nicorette). The main reason to quit smoking is that smoking is linked to lung cancer, emphysema, and other serious illnesses. Yet the active ingredient in cigarettes, Nicotine, is associated with pleasant effects and with withdrawal discomfort, making it an extremely addictive drug. Providing smokers with nicotine replacement in the form of a gum can help them avoid the health risks associated with cigarettes. One problem with nicotine gum is that it is difficult to chew correctly; people need to be shown how to chew it in order to get the therapeutic effect. A patch is also available that is placed on the skin and releases nicotine automatically; this method shows good treatment potential. Detoxification from nicotine may also be facilitated with the medication clonidine, the same agent used to help alleviate opiate withdrawal symptoms.
Developing pharmacological treatment agents for stimulant (e.g., cocaine) dependence is a difficult task, but it has been the focus of a great deal of research. One of the difficulties in treating cocaine abuse is that cocaine affects many different neurotransmitter systems in various ways. Thus one approach may be to develop a treatment drug, or regimen of drugs, that affects a variety of neurotransmitter systems. However, the exact nature of the neural effects of cocaine is still not entirely understood.
Another difficulty is that it is not clear what approach to take in developing a treatment drug. One obvious technique in developing a medication for cocaine abuse is to use an agent that blocks the rewarding aspects of cocaine use. This type of drug would, presumably, decrease cocaine use because the rewarding effects are no longer experienced. However, this approach is similar to having opiate addicts use naltrexone, which has not been well accepted by heroin addicts. Clinical work with some treatment agents that were suggested to block the rewarding effects of cocaine did not prove to be useful in the treatment of abuse and dependence. Whether this lack of treatment effect resulted from a flaw in the method or from the limitations in our knowledge of cocaine's effects on neurotransmitter systems is not clear. One problem is that the potential blocking agents for cocaine may produce dysphoria, or an unpleasant feeling.
Another approach to treating cocaine abuse and dependence is based on a premise similar to that of methadone for opiate abuse. That is, a pharmacological agent similar in its effects to cocaine, but one that is not addicting, may be a useful anticraving agent. Just as methadone helps alleviate drug withdrawal, an agent of this type for cocaine abuse may alleviate the distress and craving associated with abstinence from cocaine. Several medications of this type have been tried, including bromocriptine and Amantadine. Thus far, these and other agents have shown some limited treatment promise.
Most of the approaches to developing pharmacological treatments for cocaine abuse have been based on research suggesting that one specific neurotransmitter (Dopamine) is important for cocaine's rewarding effects. Yet other neurotransmitters are activated during cocaine use and may be better targets for developing new treatment drugs. That is, although dopamine is critical for the rewarding aspects of cocaine use, other neurotransmitter systems may be more important in withdrawal distress. Although withdrawal distress from cocaine has been difficult to document, depression is thought to be one aspect of abstaining from chronic cocaine use. Antidepressant medications, such as desipramine and imipramine, have shown some, albeit limited, treatment potential.
Current treatments for sedative dependence include detoxification agents, not anticraving agents. Detoxification is accomplished by tapering the dosage of Benzodiazepines over two to three weeks. More recently, carbamazepine, an antiseizure medication, has been shown to relieve alcohol and sedative withdrawal symptoms, including seizures and delirium tremens. Future work with agents that block the actions of benzodiazepines may yield a maintenance or anticraving agent to help the sedative abuser abstain from drug abuse.
One of the greatest lessons learned from the practice of giving medications to drug-abusing individuals is that these medications must be accompanied by psychological and social treatments and support. Medications do not work on their own. Moreover, medications that are developed based on theoretical principles of altering or blocking the drug's effects in the brain may not be useful in the practice of treating drug abuse and dependence, because the premises of how to develop a pharmacological treatment agent may not be accurate. Yet the largest research challenge is to understand the etiology and mechanisms of drug abuse. Thus more research in many fields is needed to identify potential medications in order to develop more effective treatments for the difficult problem of drug abuse and dependence.
(See also: Addiction: Concepts and Definitions; Imaging Techniques: Visualizing the Living Brain; Treatment/Treatment Types)
Jaffe, J. H. (1985). Drug addiction and drug abuse. In A. G. Gilman et al. (Eds.), Goodman and Gilman's the pharmacological basis of therapeutics (7th ed.). New York: Macmillan.
Kosten, T. R., & Kleber, H.D. (Eds.). (1992). Clinician's guide to cocaine addiction. New York: Guilford Press.
Liebman, J. L., & Cooper, S.J. (Eds.). (1989). The neuropharmacological basis of reward. Oxford: Clarendon Press.
Lowinson, J. H., Ruiz, P., & Millman, R.B. (Eds.). (1992). Substance abuse: A comprehensive textbook. Baltimore: Williams & Wilkins.
Miller, N. S. (Ed.). (1991). Comprehensive handbook of drug and alcohol addiction. New York: Marcel Dekker.
Therese A. Kosten
Drugs as Discriminative Stimuli
Human behavior is influenced by numerous stimuli in the environment. Those stimuli acquire behavioral control when certain behavioral consequences occur in their presence. As a result, a particular behavioral response becomes more or less likely to occur when those stimuli are present. For example, several laboratory experiments have demonstrated that it is possible to increase a particular response during a stimulus (such as a distinctively colored light) by arranging for reinforcement (such as a preferred food or drink) to be given following that response when the stimulus is present; when that stimulus is absent, however, responses do not produce the reinforcer. Over a period of time, responding will then occur when the stimulus is present but not when it is absent. Stimuli that govern behavior in this manner are termed discriminative stimuli; they have been widely used in behavioral and pharmacological research to better understand how behavior is controlled by various stimuli and how those stimuli, in turn, might affect the activity of various drugs.
It is important to recognize that there are differences between discriminative stimuli that merely set the occasion for a response to be reinforced and other types of stimuli that directly produce or elicit responses. Discriminative stimuli do not coerce a response from the individual in the same way that a stimulus such as a sharp pierce evokes a reflexive withdrawal response. Instead, discriminative stimuli may be seen as providing guidance to behavior because of the unique history of reinforcement that has occurred in their presence.
DRUGS AS DISCRIMINATIVE STIMULI
Although the stimuli that typically govern behavior are external (i.e., located in the environment outside the skin), it is also possible for internal or subjective stimuli to influence behavior. One of the more popular methods to emerge in the field of behavioral pharmacology has been the use of drugs as discriminative stimuli. The procedure consists of establishing a drug as the stimulus in the presence of which a particular response is reinforced. Typically, to establish a drug as a discriminative stimulus, a single dose of a drug is selected and, following its administration, one of two responses is reinforced; with rodents or nonhuman primates, this usually entails pressing one of two simultaneously available levers, with reinforcement scheduled intermittently after a fixed number of correct responses. Alternatively, when saline or a placebo is administered, responses on the other lever are reinforced. Over a number of experimental sessions, a discrimination develops between the administration of the drug and of saline, with the interoceptive (subjective) stimuli produced by the drug seen as guiding or controlling behavior in much the same manner as any external stimulus, such as a visual or auditory stimulus. Once the discrimination has been established, as indicated by the selection of the appropriate response following either the training drug or saline administration, it is possible to investigate aspects of the drug stimulus in the same way as one might investigate other physical stimuli. It is thus possible to determine gradients of intensity, or dose-effect functions, with the training drug, as well as generalization functions aimed at determining how similar the training dose is to a different dose or to another drug substituted for the training stimulus.
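The training contingency described above can be sketched in a few lines of code. This is a schematic illustration only: the lever names, session length, and fixed-ratio value are invented for the example, not taken from any particular study.

```python
def run_session(condition, responses, fr=10):
    """Score one two-lever drug-discrimination session.

    condition: 'drug' or 'saline' (which injection preceded the session)
    responses: sequence of lever labels, 'drug_lever' or 'saline_lever'
    fr: fixed ratio -- number of correct presses required per reinforcer

    Returns (reinforcers earned, percent of presses on the correct lever).
    """
    # Only presses on the condition-appropriate lever count toward reinforcement.
    correct_lever = 'drug_lever' if condition == 'drug' else 'saline_lever'
    correct = 0
    reinforcers = 0
    for lever in responses:
        if lever == correct_lever:
            correct += 1
            if correct % fr == 0:   # FR schedule: reinforce every fr-th correct press
                reinforcers += 1
    pct_correct = 100.0 * correct / len(responses)
    return reinforcers, pct_correct

# A well-trained animal after a drug injection: 25 presses, all on the drug lever.
print(run_session('drug', ['drug_lever'] * 25))  # (2, 100.0)
```

After training, the percent of responses on the drug-appropriate lever is the standard index of stimulus control, and it is this measure that is plotted in generalization tests.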
BASIC EXPERIMENTAL RESULTS
One of the more striking aspects of the drug discrimination technique is the strong relationship that has been found between the stimulus-generalization profile and the receptor-binding characteristics of the training drug. For example, animals trained to discriminate between a Benzodiazepine anxiolytic, such as Chlordiazepoxide, and saline solution typically respond similarly to other drugs that also interact with the receptor sites for benzodiazepine ligands. Anxiolytic drugs that produce their effects through other brain mechanisms or receptors do not engender responses similar to those occasioned by benzodiazepines. This suggests that it is activity at a specific Receptor that is established when this technique is used and not the action of the drug on a hypothetical psychological construct such as anxiety (Barrett & Gleeson, 1991).
Several studies have examined the effects of drugs of abuse by using the drug discrimination procedure, and they have established Cocaine and numerous other drugs—such as an Opiate, Phencyclidine (PCP), or Marijuana—as a discriminative stimulus in an effort to help delineate the neuropharmacological or brain mechanisms that contribute to the subjective and abuse-liability effects of these drugs. As an example, Figure 1 shows the results obtained in pigeons trained to discriminate a 1.7 milligram per kilogram (mg/kg) dose of cocaine from saline. The dose-response function demonstrates that doses below the training dose of cocaine yielded a diminished percentage of responses on the key correlated with cocaine administration, which suggests that the lower doses of cocaine were less discernible than the training dose. In addition, other psychomotor stimulants such as Amphetamine and Methamphetamine also produced cocaine-like responses, and this suggests that these drugs share some of the neurochemical properties of cocaine. In contrast, other drugs, such as the α2-adrenoreceptor antagonist yohimbine, along with several other drugs such as morphine, PCP, or marijuana (that are not illustrated) do not produce responding on the key correlated with cocaine administration—thereby suggesting that the mechanisms of action underlying those drugs, as well as their subjective effects, are not similar to those of cocaine and the other psychomotor stimulants in this figure.
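The shape of a dose-generalization test like the one just described can be illustrated with a simple sigmoid: the percentage of responses on the drug-correlated key rises with dose toward the training dose. The logistic form and its parameters below are assumptions chosen for illustration, not values fitted to the Figure 1 data.

```python
import math

def pct_drug_key(dose_mg_kg, ed50=0.6, slope=4.0):
    """Illustrative percent of responses on the drug key as a function of dose.

    ed50: hypothetical dose producing 50% drug-key responding (mg/kg)
    slope: hypothetical steepness of the curve on a log-dose axis
    """
    if dose_mg_kg <= 0:
        return 0.0  # saline sessions produce essentially no drug-key responding
    x = math.log10(dose_mg_kg / ed50)
    return 100.0 / (1.0 + math.exp(-slope * x))

# Doses below the 1.7 mg/kg training dose occasion progressively less
# drug-key responding, as in a typical generalization gradient.
for dose in (0.1, 0.3, 1.0, 1.7):
    print(f"{dose} mg/kg -> {pct_drug_key(dose):.0f}% drug-key responding")
```

A drug that fully substitutes for the training drug (e.g., amphetamine for cocaine) would trace a similar curve; a drug with a different mechanism (e.g., morphine) would stay near zero at all doses.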
The use of drugs as discriminative stimuli has provided a wealth of information on the way drugs are similar to more conventional environmental stimuli in their ability to control and modify behavior. The procedure has also increased our understanding of the neuropharmacological mechanisms that operate to produce the constellation of effects associated with those drugs. The technique has wide generality and has been studied in several species, including humans—in whom the effects are quite similar to those of nonhumans.
Because it is believed that the subjective effects of a drug are critical to its abuse potential, the study of drugs of abuse as discriminative stimuli takes on added significance. A better understanding of the effects of drugs of abuse as pharmacologically subjective stimuli provides a means by which to evaluate possible pharmacological as well as behavioral approaches to the treatment of drug abuse. For example, a drug that prevents or antagonizes the discriminative-stimulus effects (and presumably the neuropharmacological actions) of an abused drug might be an effective medication to permit individuals to diminish their intake of abused drugs, because the stimuli usually associated with its effects will no longer occur. Similarly, although little work has been performed on the manipulation of environmental stimuli correlated with the drug stimulus, it might be possible to design innovative treatment strategies in which other stimuli compete with the subjective discriminative-stimulus effects of the abused drug. Thus, a basic experimental procedure such as drug discrimination has provided a useful experimental tool for understanding the behavioral and neuropharmacological effects of abused drugs.
Further work may help design and implement novel treatment approaches to modifying the behavioral and environmental conditions surrounding the effects of abused drugs and thus result in diminished behavioral control by substances of abuse.
(See also: Abuse Liability of Drugs ; Drug Types ; Research, Animal Model )
Barrett, J. E., & Gleeson, S. (1991). Anxiolytic effects of 5-HT 1A agonists, 5-HT 3 antagonists and benzodiazepines. In R. J. Rodgers & S. J. Cooper (Eds.), 5-HT 1A agonists, 5-HT 3 antagonists and benzodiazepines: Their comparative behavioral pharmacology. New York: Wiley.
Johanson, C. E., & Barrett, J. E. (1993). The discriminative stimulus effects of cocaine in pigeons. Journal of Pharmacology and Experimental Therapeutics, 267, 1-8.
James E. Barrett
Measuring Effects of Drugs on Behavior
People throughout the world take drugs such as Heroin, Cocaine, and Alcohol because these drugs alter behavior. For example, cocaine alters general activity levels; it increases wakefulness and decreases the amount of food an individual eats. Heroin produces drowsiness, relief from pain, and a general feeling of pleasure. Alcohol's effects include relaxation, increased social interaction, marked sedation, and impaired motor function. For the most part, scientific investigation of the ways drugs alter behavior began in the 1950s, when chlorpromazine was introduced as a treatment for Schizophrenia. As a result of this discovery, scientists became interested in the development of new medications to treat behavioral disorders, as well as in the development of procedures for studying behavior in the laboratory.
HOW IS BEHAVIOR STUDIED?
The simplest way to study the effects of drugs on behavior is to pick a behavior, give a drug, and observe what happens. Although this approach sounds very easy, the study of a drug's effect on behavior is not so simple. Like any other scientific inquiry, research in this area requires careful description of the behaviors being examined. If the behavior is not carefully described, it is difficult to determine whether a change in behavior following drug administration is actually due to the drug.
Behavior is best defined by describing how it is measured. By specifying how to measure a behavior, an operational definition of that behavior is developed. For example, to study the way in which a drug alters food intake, the following procedure might be used: First, select several people and present each with a box of cereal, a bowl, a spoon, and some milk after they wake up in the morning. Then measure how much cereal and milk they each consume within the next thirty minutes. To make sure the measurements are correct, repeat the observations several times under the same conditions (i.e., at the same time of day, with the same foods available). From these observations, determine the average amount of milk and cereal consumed by each person. This is the baseline level. Once the baseline level is known, give a small amount of drug and measure changes in the amount of milk and cereal consumed. Repeat the experiment, using increasing amounts of the drug. This concept of baseline level and change from baseline level is common to many scientific investigations.
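The baseline logic just described reduces to two calculations: an average over repeated observations under identical conditions, and the change from that average after the drug. A minimal sketch, with invented intake numbers purely for illustration:

```python
def baseline(observations):
    """Average intake across repeated observations under identical conditions."""
    return sum(observations) / len(observations)

def percent_change(baseline_level, post_drug_level):
    """Change from baseline, expressed as a percentage of baseline."""
    return 100.0 * (post_drug_level - baseline_level) / baseline_level

# Hypothetical cereal-plus-milk intake in grams over five baseline mornings.
intake = [210, 195, 205, 200, 190]
b = baseline(intake)            # 200.0 g
print(percent_change(b, 150))   # -25.0: a 25% drop after the drug dose
```

Repeating the measurement at several doses yields a dose-effect function for the behavior, which is the standard way such results are reported.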
In addition to defining behavior by describing how it is measured, a good behavioral procedure is also (1) sensitive to the ways in which drugs alter behavior and (2) reliable. Sensitivity refers to whether a particular behavior is easily changed as the result of drug administration. For example, food consumption may be altered by using cocaine, but other behaviors may not be. Reliability refers to whether a drug produces the same effect each time it is taken. In order to say that cocaine reliably alters the amount of food consumed, it should decrease food consumption each time it is given, provided that the experimental conditions surrounding its administration are the same.
WHAT FACTORS INFLUENCE A DRUG'S EFFECTS ON BEHAVIOR?
Although good behavioral procedures are necessary for understanding a drug's effects on behavior, pharmacological factors are also important determinants of a drug's effect. Pharmacological factors include the amount of drug given (the dose), how quickly the drug produces its effects (its onset), the time it takes for its effects to disappear (its duration), and whether the drug's effects are reduced (tolerance) or increased (sensitization) if it is taken several times. Although this point may seem obvious, it is often overlooked. It is impossible to describe the behavioral effects of a drug on the basis of just one dose, since drugs can have very different effects depending on how much is taken. Moreover, the probability that a drug will produce an effect also depends on the amount taken. As an example, consider Figure 1, which shows the risk of being involved in a traffic accident as a function of the amount of alcohol in a person's blood.
The way in which a drug is taken is also important. Cocaine can be taken by injection into the veins, by smoking, or by sniffing through the nose. Each of these routes of administration can produce different effects. Environmental factors also influence a drug's effect. Cocaine might change the amount of cereal and milk consumed in the morning but it might not change the amount consumed at a different time of day or if other types of food are available. Finally, individual factors also influence the drug effect. These include such factors as how many times an individual has taken a particular drug; what happened the last time it was taken; or what one may have heard from friends about a drug's effects.
HOW IS BEHAVIOR STUDIED IN THE LABORATORY?
Human behavior is very complex, and it is often difficult to examine. Although scientists do conduct studies on people, many investigations of drug effects on behavior are carried out using animals. With animals, investigators have better control over the conditions in which the behavior occurs, as well as better information about the organism's past experience with a particular drug. Although animal experiments provide a precise, controlled environment in which to investigate drug effects, they also have their limitations. Clearly, they cannot reproduce all the factors that influence human behavior. Nevertheless, many of the effects that drugs produce on behavior in animals also occur in humans. Moreover, behavioral studies sometimes require a large number of subjects with the same genetic makeup or with no previous drug experience. It is easier to meet these requirements in animal studies than in studies with people.
Since animals are often used in research studies, it is important to remember that behavioral scientists are very concerned about the general welfare of their animals. The U.S. Animal Welfare Act set standards for handling, housing, transporting, feeding, and veterinary care of a wide variety of animals. In addition, all animal research in the United States is now reviewed by a committee that includes a veterinarian experienced in laboratory-animal care. This committee inspects animal-research areas and reviews the design of experiments to ensure that the animals are treated well.
WHAT APPROACHES ARE USED TO EXAMINE DRUG EFFECTS?
In general, there are two ways to examine drug effects on behavior in the laboratory. One approach relies on observation of behavior in an animal's home cage or in an open area in which the animal (or person) can move about freely. When observational approaches are used, special precautions are necessary. First of all, the observer's presence should not disrupt the experiment. Television-monitoring systems and videotaping make it possible for the observer to be completely removed from the experimental situation. Second, the observer should not be biased. The best way to ensure that the observer is not biased is to make the observer "blind" to the experimental conditions; that is, the observer does not know what drug is given or which subject received the drug. If the study is done in human subjects, then they also should be blind to the experimental conditions. An additional way to make sure observations are reliable is to use more than one observer and compare observations. If these precautions are taken, observational approaches can produce interesting and reliable data. Indeed, much of what is known about drug effects on motor behavior, food or water intake, and some social behaviors comes from careful observational studies.
Another approach uses the procedures of classical and operant conditioning. This involves training animals to make specific responses under special conditions. For example, in a typical experiment of this sort, a rat is placed in an experimental chamber and trained to press a lever to receive food. The number and pattern of lever presses are measured with an automatic device, and changes in responding are examined following drug administration. These procedures have several advantages. First, they produce a very consistent measure of behavior. Second, they can be used with human subjects as well as with several different animal species. Third, the technology for recording behavior eliminates the need for a trained observer.
WHAT BEHAVIORS DO DRUGS ALTER?
Some of the behaviors that drugs alter are motor behavior, sensory behavior, food and water intake, social behavior, and behavior established with classical and operant conditioning procedures. By combining investigations of these behaviors, scientists classify drugs according to their prominent behavioral effects. For example, drugs such as Amphetamine and cocaine are classified as Psychomotor Stimulants because they increase alertness and general activity in a variety of different behavioral procedures. Drugs such as Morphine are classified as analgesics because they alter the perception of pain, without altering other sensations such as vision or audition (hearing).
Most behaviorally active drugs alter motor behavior in some way. Morphine usually decreases motor activity, whereas with cocaine certain behaviors occur over and over again (that is, repetitively). Other drugs, such as alcohol, may alter the motor skills used in Driving a car or operating various types of machinery. Finally, some drugs alter exploratory behavior, as measured by a decrease in motor activity in an unfamiliar environment. Examination of the many ways in which drugs alter motor behavior requires different types of procedures. Some of these procedures examine fine motor control or repetitive behavior; others simply measure spontaneous motor activity.
Although changes in motor behavior can be observed directly, most studies of motor behavior use some sort of automatic device that does not depend on human observers. One of these devices is the running wheel. The type of running wheel used in scientific investigations is similar to the running wheel in pet cages: it consists of a cylinder that rotates around an axle when an animal walks or runs in it. The only differences between a running wheel in a pet cage and one in the laboratory are its size and the addition of a counter that records the number of times the wheel turns. Another device for measuring motor behavior uses an apparatus that is surrounded by photocells. If the animal moves past one of the photocells, a beam of light is broken and a count is produced. Yet another way to measure motor behavior is with video tracking systems. An animal is placed in an open area, and a tracking system determines when movement stops and starts, as well as its speed and location. This system provides a way to look at unique movement patterns such as repetitive behaviors. For example, small amounts of amphetamine increase forward locomotion, whereas larger amounts produce repetitive behaviors such as head bobbing, licking, and rearing. Until recently, this type of repetitive behavior was measured by direct observation and description.
Although technology for measuring motor behavior is very advanced, it is important to remember that how much drug is given, where it is given, and the type of subject to whom it is given will also influence a drug's effect on motor behavior. Whether a drug's effects are measured at night or during the day is an important factor. The age, sex, species, and strain of the animal is also important. Whether food and water are available is another consideration as well as the animal's previous experience with the drug or test situation. As an example, see Table 1, which shows how the effects of alcohol on motor behavior differ depending on the amount of alcohol in a person's blood.
The integration and execution of every behavior an organism engages in involves one or more of the primary senses, including hearing, vision, taste, smell, and touch. Obviously, a drug can affect sensory behavior and thereby alter a number of different behaviors. For example, drugs such as Lysergic Acid Diethylamide (LSD) produce visual abnormalities and Hallucinations. Phencyclidine (PCP) produces a numbness in the hands and feet. Morphine alters sensitivity to painful stimuli.
It is difficult to investigate drug effects on sensory behavior, since changes in sensory behavior cannot be observed directly. In order to determine whether someone hears a sound, that person must report having heard it. In animal studies, rats or monkeys are trained to press a lever when they hear or see a given stimulus. Then a drug is given and alterations in responding are observed. If the drug alters responding, it is possible that the drug did so by altering sensory behavior; however, care must be taken in coming to this conclusion, since a drug might simply alter the motor response used to measure sensory behavior without changing sensory behavior at all.
One area of sensory behavior that has received considerable attention is pain perception. In most procedures for measuring pain perception, a potentially painful stimulus is presented to an organism and the time it takes the organism to respond to that stimulus is observed. Once baseline levels of responding are determined and considered reliable, a drug is given. If the time it takes the organism to respond to the stimulus is longer following drug administration and if this change is not because the animal is too sedated to make a response, then the drug probably has altered pain perception.
Among the most common procedures used to measure pain perception is the tail-flick procedure in which the time it takes an animal to remove its tail from a heat source is measured prior to and after administration of a drug such as morphine. Another commonly used procedure measures the time it takes an animal to lick its paws when placed on a warm plate or to remove its tail from a container of warm water. Thus, an alteration in pain perception is operationally defined as a change in responding in the presence of a painful stimulus. It is also important to note that the animal, not the experimenter, determines when to respond or remove its tail. Also, these procedures do not produce long-term damage or discomfort that extends beyond the brief experimental session.
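The change in response latency described above is often summarized as a percentage of the maximum possible effect, with a cutoff latency that ends the trial before any tissue damage can occur. The sketch below illustrates that convention; the function name and the example latencies are hypothetical, and other ways of expressing the data are also used.

```python
def percent_mpe(baseline, test, cutoff):
    """Percent of maximum possible effect for a latency measure.

    baseline -- latency (seconds) before drug administration
    test     -- latency after drug administration
    cutoff   -- maximum latency allowed, to prevent tissue damage
    """
    test = min(test, cutoff)  # the trial ends at the cutoff
    return 100.0 * (test - baseline) / (cutoff - baseline)

# Hypothetical tail-flick data: 3-s baseline, 9 s after morphine, 10-s cutoff
effect = percent_mpe(3.0, 9.0, 10.0)  # roughly 86 percent of maximum effect
```

A value near 0 indicates no change from baseline; a value of 100 means the animal reached the cutoff without responding.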
Food and Water Intake.
The simplest way to measure food and water intake is to determine how much an organism eats or drinks within a given period of time. A more thorough analysis might also include counting the number of times an organism eats or drinks in a single day, or measuring the time between periods of eating and periods of drinking. Several factors are important in accurately measuring food and water intake. For example, how much food or water is available to the organism and when is it available? Is it a food the organism likes? When did the last meal occur?
In animals, food intake is often measured by placing several pieces of pelleted food of a known weight in their cages. The food that remains after a period of time is weighed and subtracted from the original amount to get an estimate of how much was actually eaten. Water intake is usually measured with calibrated drinking tubes clipped to the front of the animal's cage or with a device called a drinkometer, which counts the number of times an animal licks a drinking tube. An accurate measure of fluid intake also requires a careful description of the surrounding conditions. For example, was fluid intake measured during the day or during the night? Was food also available? What kind of fluid was available? Was there more than one kind of fluid available? These procedures are also used to examine drug intake. If rats are presented with two different drinking tubes, one with alcohol, another with water, they will generally drink more alcohol than water; however, the amount they drink is generally not sufficient to produce intoxication or physical dependence. Rats will drink a large amount of alcohol as well as other drugs of abuse such as morphine and cocaine when these drugs are the only liquid available. Indeed, most animals will consume sufficient quantities to become physically dependent on alcohol or morphine.
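The arithmetic behind these measurements is simple: intake is the initial amount minus what remains, and in a two-bottle test the relative intake of each fluid can be expressed as a preference ratio. The sketch below uses hypothetical session values; the spillage correction and the preference-ratio convention are common practices assumed here rather than taken from the text.

```python
def intake(initial_weight_g, remaining_weight_g, spillage_g=0.0):
    """Estimate food eaten: initial minus remaining, corrected for spillage."""
    return initial_weight_g - remaining_weight_g - spillage_g

def alcohol_preference(alcohol_ml, water_ml):
    """Two-bottle choice ratio: alcohol intake divided by total fluid intake."""
    total = alcohol_ml + water_ml
    return alcohol_ml / total if total else 0.0

# Hypothetical session: 50 g of pellets offered, 38.5 g left, 0.5 g spilled
eaten = intake(50.0, 38.5, 0.5)        # 11.0 g consumed
pref = alcohol_preference(12.0, 8.0)   # 0.6: more alcohol than water was drunk
```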
Behaviors such as aggression, social interaction, and sexual behavior are usually measured by direct experimenter observation. Aggressive behavior can be measured by observing the number of times an animal engages in attack behavior when another animal is placed into its cage. In some cases, isolation is used to produce aggressive behavior. Sexual behavior is also measured by direct observation. In the male rat or cat, the frequency of behaviors such as mounting, intromission, and ejaculation are observed. Another interesting procedure for measuring social behavior is the social interaction test. In this procedure, two rats are placed together and the time they spend in active social interaction (sniffing, following, grooming each other) is measured under different conditions. In one condition, the rats are placed in a familiar environment; in another condition, the environment is unfamiliar. Rats interact more when they are in a familiar environment than when they are in an unfamiliar environment. Moreover, antianxiety drugs increase social interaction in the unfamiliar area. These observational techniques can produce interesting data, provided that they are carried out under well-controlled conditions, the behavior is well-defined, and care is taken to make sure the observer neither disrupts the ongoing behavior nor is biased.
Classical conditioning was made famous by the work of the Russian scientist Ivan Pavlov in the 1920s. In those experiments, Pavlov used the following procedure. First, dogs were prepared with a tube to measure saliva, as shown in Figure 2. Then Pavlov measured the amount of saliva that was produced when food was given. Salivation increased not only when food was presented but also when the caretaker arrived with the food. From these careful observations, Pavlov concluded that salivation in response to the food represented an inborn, innate response that did not require any learning. Because no learning was required, he called this an unlearned (unconditioned) response and the food itself an unlearned (unconditioned) stimulus. The dogs did not automatically salivate, however, when the caretaker entered the room; but after the caretaker and the food occurred together several times, the presence of the caretaker was paired with (or conditioned to) the food. Pavlov called the caretaker the conditioned stimulus and he called the salivation that occurred in the presence of the caretaker a conditioned response.
Events in the environment that are paired with or conditioned to drug delivery can also produce effects similar to the drug itself, much in the same way that Pavlov's caretaker was conditioned to food delivery. For example, when heroin-dependent individuals stop taking heroin, they experience a number of unpleasant effects, such as restlessness, irritability, tremors, nausea, and vomiting. These are called withdrawal or abstinence symptoms. If an individual experiences withdrawal several times in the same environment, then events or stimuli in that location become paired with (or conditioned to) the withdrawal syndrome. With time, the environmental events themselves can produce withdrawal-like responses, just as the caretaker produced salivation in Pavlov's dogs.
About a decade after Pavlov's discovery of classical conditioning, the American psychologist B. F. Skinner was developing his own theory of learning. Skinner observed that certain behaviors occur again and again. He also observed that behaviors with a high probability of occurrence were behaviors that produced effects on the environment. According to Skinner, behavior "operates" on the environment to produce an effect. Skinner called this process operant conditioning. For example, people work at their jobs because working produces a paycheck. In this situation, working is the response and a paycheck is the effect. In other situations, a person does something to avoid a certain effect. For example, by driving a car within the appropriate speed limit, traffic tickets are avoided and the probability of having a traffic accident is reduced. In this case, the response is driving at a given speed and the effect is avoiding a ticket or an accident.
If the effect that follows a given behavior increases the likelihood that the behavior will occur again, then that event is called a reinforcer. Food, water, and heat are common reinforcers. Drug administration is also a reinforcer. It is well known that animals will respond on a lever to receive intravenous injections of morphine, cocaine, and amphetamine, as well as a number of other drugs. Not all drugs are self-administered, however. For example, animals will respond to avoid the presentation of certain nonabused drugs such as the Antipsychotics (medications used in the treatment of schizophrenia). Because there is a good correlation between drugs that are self-administered by animals and those that are abused by people, the self-administration procedure is often used to examine drug-taking behavior.
In most operant conditioning experiments, animals perform a simple response such as a lever press or a key peck to receive food. Usually the organism has to make a fixed number of responses or to space responses according to some temporal pattern. The various ways of delivering a reinforcer are called schedules of reinforcement. Schedules of reinforcement produce very consistent and reliable patterns of responding. Moreover, they maintain behavior for long periods of time, are easily adapted for a number of different animals, and provide a very accurate measure of behavior. Thus, they provide a well-defined, operational measure of behavior, which is used to examine the behavioral effects of drugs.
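The logic of the simplest such schedule, a fixed-ratio schedule, can be sketched as a counter that delivers a reinforcer after every Nth response. This is an illustrative sketch only, with hypothetical class and method names, not laboratory control software.

```python
class FixedRatioSchedule:
    """Deliver one reinforcer after every `ratio` responses (e.g., FR 5)."""

    def __init__(self, ratio):
        self.ratio = ratio
        self.count = 0          # responses since the last reinforcer
        self.reinforcers = 0    # total reinforcers delivered

    def lever_press(self):
        """Record one response; return True if a reinforcer is delivered."""
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0
            self.reinforcers += 1
            return True
        return False

# On an FR 5 schedule, 12 lever presses earn exactly 2 reinforcers
schedule = FixedRatioSchedule(5)
deliveries = sum(schedule.lever_press() for _ in range(12))
```

Interval schedules, which make reinforcement depend on the passage of time rather than a response count, would replace the counter with a clock but follow the same pattern.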
Motivation, Learning, Memory, and Emotion.
One of the biggest challenges for behavioral scientists is to develop procedures for measuring drug effects on processes such as motivation, emotion, learning, or memory, since these behaviors are very difficult to observe directly. Drugs certainly alter processes such as these. For example, many drugs relieve anxiety. Other drugs produce feelings of pleasure and well-being; still others interfere with memory processes. Given the complexity of the procedures devised for this purpose, they are not described in detail here; however, it is important to emphasize that the approach for examining the effects of drugs on these complex behaviors is the same as it is for any behavior: First, carefully define the behavior and describe the conditions under which it occurs. Second, give a drug and observe changes in the behavior. Third, take special care to consider pharmacological factors, such as how much drug is given, when the drug is given, or the number of times the drug is given. Fourth, consider behavioral factors, such as the nature of the behavior examined, the conditions under which the behavior is examined, as well as the individual's past experience with the behavior.
To find out how drugs alter behavior, several factors must be considered. These include the Pharmacology of the drug itself as well as an understanding of the behavior being examined. Indeed, the behavioral state of an organism, as well as its past behavior and experience with a drug, contributes as much to the final drug effect as do factors such as the dose of the drug and how long its effects last. Thus, the examination of drug effects on behavior requires a careful description of behavior, with special attention to the way in which the behavior is measured. Behavioral studies also require a number of experimental controls, which ensure that changes in behavior following drug administration are actually due to the drug itself and not the result of behavioral variability.
(See also: Addiction: Concepts and Definitions ; Aggression and Drugs ; Causes of Drug Abuse ; Pharmacodynamics ; Psychomotor Effects of Alcohol and Drugs ; Reinforcement ; Research, Animal Model ; Sensation and Perception and Effects of Drugs ; Tolerance and Physical Dependence )
Carlton, P. L. (1983). A primer of behavioral pharmacology. New York: W. H. Freeman.
Domjan, M., & Burkhard, B. (1982). The principles of learning and behavior. Pacific Grove, CA: Brooks/Cole Publishing Co.
Greenshaw, A. J., & Dourish, C.T. (Eds.). (1987). Experimental psychopharmacology. Clifton, NJ: Humana Press.
Grilly, D. M. (1989). Drugs and human behavior. Boston: Allyn & Bacon.
Julien, R. M. (1988). A primer of drug action. New York: W. H. Freeman.
McKim, W. A. (1986). Drugs and behavior. Englewood Cliffs, NJ: Prentice-Hall.
Myers, D. G. (1989). Psychology. New York: Worth.
Ray, O., & Ksir, C. (1987). Drugs, society, & human behavior. St. Louis: Times Mirror/Mosby.
Seiden, L. S., & Dykstra, L. A. (1977). Psychopharmacology: A biochemical and behavioral approach. New York: Van Nostrand Reinhold.
Linda A. Dykstra
Measuring Effects of Drugs on Mood
Subjective effects are feelings, perceptions, and moods that are the personal experiences of an individual. They are not accessible to other observers for public validation and, thus, can be obtained only through reports from the individual. Subjective-effect measures are used to determine whether a drug is perceived and to characterize, both quantitatively and qualitatively, what is experienced. Although subjective effects can be collected in the form of narrative descriptions, standardized questionnaires have greater experimental utility: they collect the reports of individuals in a fashion that is meaningful to outside observers, can be combined across subjects, and provide data that are reliable and replicable. The measurement of subjective effects through the use of questionnaires is scientifically useful for determining the pharmacologic properties of drugs—including time course, potency, abuse liability, side effects, and therapeutic utility. Many of the current methods used to measure subjective effects resulted from research aimed at reducing drug abuse.
Drug abuse and drug addiction are problems that are not new to contemporary society; they have a long-recorded history, dating back to ancient times. For centuries, various drugs including Alcohol, Tobacco, Marijuana, Hallucinogens, Opium, and Cocaine, have been available and used widely across many cultures. Throughout these times, humans have been interested in describing and communicating the subjective experiences that arise from drug administration. Although scientists have been interested in the study of Pharmacology for many centuries, reliable procedures were not developed to measure the subjective effects of drugs until recently.
Throughout the twentieth century, the U.S. Government has become increasingly concerned with the growing problem of drug abuse. To decrease the availability of drugs with significant Abuse Liability, the government has passed increasingly restrictive laws concerning the possession and sale of existing drugs and the development and marketing of new drugs. The pressing need to regulate drugs that have potential for misuse prompted the government to sponsor research for the development of scientific methodologies that would be useful in assessing the abuse liability of drugs.
Two laboratories made major contributions to the development of subjective-effect measures: that of Henry Beecher and his colleagues at Harvard University and the government-operated Addiction Research Center (ARC) in Lexington, Kentucky. Beecher's group conducted a lengthy series of well-designed studies comparing the subjective effects of various drugs—opiates, sedatives, and stimulants—in a variety of subject populations, including patients, substance abusers, and normal volunteers, and highlighted the importance of studying the appropriate patient population. Additionally, this group laid the foundation for conducting studies with solid experimental designs, which include double-blind and placebo controls, randomized dosing, and characterization of dose-response relationships. Investigators at the ARC conducted fundamental studies of both the acute (immediate) and chronic (long-term) effects of drugs, as well as physical dependence and withdrawal symptoms (e.g., Himmelsbach's opiate withdrawal scale). A number of questionnaires and procedures now in use to study the subjective effects of drugs were developed there, including the Addiction Research Center Inventory and the Single Dose Questionnaire. Although many of the tools and methods developed at the ARC are still in use, other laboratories have since modified and expanded subjective-effect measures and their applications.
Subjective-effects measures are usually presented in the form of groups of questions (questionnaires). These questions can be presented in a number of formats, the most frequently used of which are ordinal scales and visual analog scales. The ordinal scale is a scale of ranked values in which the ranks are assigned based upon the amount of the measured effect that is experienced by each individual. Subjects are usually asked to rate their response to a question on a 4- or 5-point scale (e.g., to rate the strength of the drug effect from 0 to 4, with 0=not at all; 1=a little; 2=moderately; 3=quite a bit; and 4=extremely). A visual-analog scale is a continuous scale presented as a line, either unmarked or with tick marks to give some indication of gradations. A subject indicates the response by placing a mark on that line relative to its reference points; for example, lines are usually anchored at the ends with labels such as "not at all" and "extremely." Visual-analog scales can be unipolar (example: "tired," rated from no effect to extremely), or they may be bipolar (example: "tired/alert," with "extremely tired" at one end, "extremely alert" at the other, and "no effect" in the center). Another frequently used format is the binomial scale, usually in the form of yes/no or true/false responses, such as the Addiction Research Center Inventory. A fourth format utilizes a nominal scale, in which the response choices are categorical in nature and mutually exclusive of each other (e.g., drug class questionnaire).
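The four response formats can be represented as simple scoring functions. The function names, item labels, and category list below are hypothetical; only the scale types themselves come from the text.

```python
def ordinal(rating):
    """5-point ordinal scale: 0=not at all ... 4=extremely."""
    assert rating in range(5), "ordinal ratings run from 0 to 4"
    return rating

def visual_analog(mark_mm, line_mm=100.0):
    """Score a VAS mark as its distance along the line, scaled 0-100."""
    assert 0.0 <= mark_mm <= line_mm, "the mark must fall on the line"
    return 100.0 * mark_mm / line_mm

def binomial(answer):
    """True/false item, as on the Addiction Research Center Inventory."""
    return bool(answer)

def nominal(choice, categories=("placebo", "opiate", "stimulant", "other")):
    """Mutually exclusive category choice, as on a drug-class questionnaire."""
    assert choice in categories, "responses must be one of the categories"
    return choice

# A mark placed 63 mm along a 100-mm line scores 63.0
score = visual_analog(63.0)
```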
Frequently used subjective-effect measures include investigator-generated scales, such as adjective-rating scales, and standardized questionnaires, such as the Profile of Mood States and the Addiction Research Center Inventory. A description of a number of questionnaires follows; however, this list is illustrative only and is not meant to be exhaustive.
Adjective Rating Scales.
These are questionnaires on which subjects rate a list of symptoms describing how they feel or the effects associated with drug ingestion. The questionnaires can be presented to subjects with either visual-analog or ordinal scales. Items can be used singly or grouped into scales. Some adjective-type scales are designed to measure global effects, such as the strength of drug effects or the subject's liking of a drug, while other adjective rating scales are designed to measure specific drug-induced symptoms. In the latter case, the adjectives used may depend on the class of drugs being studied and their expected effects. For example, studies of amphetamine include items such as "stimulated" and "anxious," while studies of opioids include symptoms such as "itching" and "talkative." To study physical dependence, symptoms associated with drug withdrawal are used; for example, in studies of opioid withdrawal, subjects might rate "watery eyes," "chills," and "gooseflesh." Most adjective-rating scales have not been formally validated; investigators rely on external validity.
Profile of Mood States (POMS).
This questionnaire was developed to measure mood effects in psychiatric populations and for use in testing treatments for psychiatric conditions such as depression and anxiety. It is a form of an adjective-rating scale. This scale was developed by Douglas McNair, Ph.D., and has been modified several times. It exists in two forms—one consisting of sixty-five and another of seventy-two adjectives describing mood states that are rated on a five-point scale from "not at all" (0) to "extremely" (4). The item scores are weighted and grouped by factor analysis into a number of subscales, including tension-anxiety, depression-dejection, anger-hostility, vigor, fatigue, confusion-bewilderment, friendliness, and elation. This questionnaire has been used to measure acute drug effects, usually by comparing measures collected before and after drug administration. Its use in drug studies has not been formally validated; however, it has been validated by replication studies in normal and psychiatric populations and in treatment studies.
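Scoring such a questionnaire amounts to summing each item's 0-4 rating into its subscale total. The item-to-subscale groupings below are invented for illustration; the real POMS assigns its sixty-five (or seventy-two) adjectives to subscales on the basis of factor analysis and applies its own weights.

```python
# Hypothetical item-to-subscale assignment (the actual POMS groupings differ)
SUBSCALES = {
    "tension-anxiety": ["tense", "on edge", "nervous"],
    "vigor": ["lively", "energetic", "alert"],
}

def score_poms(ratings):
    """Sum 0-4 item ratings into subscale totals; unrated items count as 0."""
    return {
        name: sum(ratings.get(item, 0) for item in items)
        for name, items in SUBSCALES.items()
    }

# Ratings collected before drug administration, for later comparison
before = score_poms({"tense": 3, "nervous": 2, "lively": 1})
```

Comparing such totals before and after drug administration is how the questionnaire is typically used to measure acute drug effects.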
Single Dose Questionnaire.
This was developed in the 1960s at the ARC to quantify the subjective effects of opioids. It has been used extensively and has been modified over time. This questionnaire consists of four parts: (1) a question in which subjects are asked whether they feel a drug effect (a binomial yes/no scale); (2) a question in which subjects are asked to indicate which among a list of drugs or drug classes is most similar to the test drug (a nominal scale); (3) a list of symptoms (checked yes or no); and (4) a question asking subjects to rate how much they like the drug (presented as an ordinal scale). The list of drugs used in the questionnaire includes placebo, opiate, stimulant, marijuana, sedative, and other. Examples of symptoms listed are turning of stomach, skin itchy, relaxed, sleepy, and drunken. While this questionnaire has not been formally validated, it has been used widely to study opioids, and the results have been remarkably consistent over three decades.
Addiction Research Center Inventory (ARCI).
This is a true/false questionnaire containing more than 550 items. The ARCI was developed by researchers at the ARC to measure a broad range of physical, emotive, and subjective drug effects from diverse pharmacological classes. Subscales within the ARCI were developed to be sensitive to the acute effects of specific drugs or pharmacological classes (e.g., morphine, amphetamine, barbiturates, marijuana); feeling states (e.g., tired, excitement, drunk); the effects of chronic drug administration (Chronic Opiate Scale); and drug withdrawal (e.g., the Weak Opiate Withdrawal and Alcohol Withdrawal Scale). The ARCI subscales most frequently used in acute drug-effect studies are the Morphine-Benzedrine Group (MBG) to measure euphoria; the Pentobarbital-Chlorpromazine-Alcohol Group (PCAG) to measure apathetic sedation; and the Lysergic Acid Diethylamide Group (LSDG) to measure dysphoria or somatic discomfort. The use of the MBG, PCAG, and LSDG scales has remained standard in most studies of abuse liability. Subscales on this questionnaire were developed empirically, followed by extensive validation studies.
Observer-Rated Measures.
Observer ratings frequently accompany the collection of subjective effects and are often based on the subjective-effect questionnaires. Ratings are made by an observer who is present with the subject during the study, and items are limited to those drug effects that are observable. Observer-rated measures may include drug-induced behaviors (e.g., talking, scratching, activity levels, and impairment of motor function), as well as other drug signs such as redness of the eyes, flushing, and sweating. Observer-rated measures can be designed using any of the formats used in subject-rated measures. Examples of observer-rated questionnaires that have been used extensively are the Single Dose Questionnaire, which exists in an observer-rated version, and the Opiate Withdrawal Scale developed by Himmelsbach and his colleagues at the ARC.
USES OF SUBJECTIVE-EFFECT MEASURES
The methodology for assessing the subjective effects of drugs was developed, in large part, to characterize the abuse liability, the pharmacological properties, and the therapeutic utility of drugs. Abuse liability is the term for the likelihood that a drug will be used illicitly for nonmedical purposes. The assessment of the abuse-liability profile of a new drug has historically been studied by comparing it with a known drug, whose effects have been previously characterized. Drugs that produce euphoria are considered more likely to be abused than drugs that do not produce euphoria.
Subjective-effects measures may also be used to characterize the time course of a drug's action (such as time to drug onset, time to maximal or peak effect, and the duration of the drug effect). These procedures can provide information about the pharmacological properties of a particular drug, such as its drug class, whether it has Agonist or Antagonist effects, and its similarity to prototypic drugs within a given drug class. Subjective-response reports are also useful in assessing the efficacy (the ability of a drug to produce its desired effects), potency (amount or dose of a drug needed to produce that effect), and therapeutic utility of a new drug. Subjective reports provide information regarding the potency and efficacy of a new drug in comparison to available treatment agents. Subjective-effect measures may be useful in determining whether a drug produces side effects that are dangerous or intolerable to the patient. Drugs that produce unpleasant or dysphoric mood-altering effects may have limited therapeutic usefulness.
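Given ratings collected at successive times after dosing, the time-course parameters named above (peak effect, time to peak, and duration) can be read off directly. The function below is a sketch under that assumption, and the sample ratings are hypothetical.

```python
def time_course(samples, threshold=0):
    """Summarize a drug-effect time course.

    samples   -- list of (minutes_after_dose, effect_rating) pairs
    threshold -- rating above which an effect is considered present

    Returns (peak effect, time of peak, minutes spent above threshold).
    """
    peak_time, peak = max(samples, key=lambda s: s[1])
    above = [t for t, effect in samples if effect > threshold]
    duration = (max(above) - min(above)) if above else 0
    return peak, peak_time, duration

# Hypothetical 0-100 ratings at 0, 30, 60, 120, and 240 minutes after dosing
ratings = [(0, 0), (30, 40), (60, 65), (120, 35), (240, 0)]
peak, t_peak, duration = time_course(ratings)
```

With these data the peak effect of 65 occurs at 60 minutes, and a measurable effect spans the 90 minutes between the first and last nonzero ratings.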
DESCRIPTION OF MAJOR FINDINGS OBTAINED WITH DIFFERENT DRUG CLASSES
Drugs of different pharmacological classes generally produce profiles of subjective effects that are unique to that class of drugs and that are recognizable to individuals. The subjective effects of major pharmacological classes have been characterized using the questionnaires described above. Table 1 lists some major pharmacological classes and their typical effects on various instruments. While global measures provide quantitative information regarding drug effects, they tend not to differentiate among different types of drugs. In contrast, the more specific subjective-effect measures, such as the ARCI and the Adjective Rating Scales, yield qualitative information that can differentiate among drug classes.
Measures of the subjective effects of drugs have been extremely useful in the study of pharmacology. Questionnaires have been developed that are sensitive to both the global effects and the specific effects of drugs; however, research is still underway to develop even more sensitive subjective-effect measures and new applications for their use.
(See also: Abuse Liability of Drugs ; Addiction: Concepts and Definitions ; Causes of Substance Abuse ; Drug Types )
Beecher, H. K. (1959). Measurement of subjective responses: Quantitative effects of drugs. New York: Oxford University Press.
De Wit, H., & Griffiths, R. R. (1991). Testing the abuse liability of anxiolytic and hypnotic drugs in humans. Drug and Alcohol Dependence, 28 (1), 83-111.
Foltin, R. W., & Fischman, M. W. (1991). Assessment of abuse liability of stimulant drugs in humans: A methodological survey. Drug and Alcohol Dependence, 28 (1), 3-48.
Martin, W. R. (1977a). Drug addiction I. Berlin: Springer-Verlag.
Martin, W. R. (1977b). Drug addiction II. Berlin: Springer-Verlag.
Preston, K. L., & Jasinski, D. R. (1991). Abuse liability studies of opioid agonist-antagonists in humans. Drug and Alcohol Dependence, 28 (1), 49-82.
Kenzie L. Preston
Sharon L. Walsh
Motivation is a theoretical construct that refers to the neurobiological processes responsible for the initiation and selection of such goal-directed patterns of behavior as are appropriate to the physiological needs or psychological desires of the individual. Effort or vigor are terms used to describe the intensity of a specific pattern of motivated behavior. Physiological "drive" states, caused by imbalances in the body's homeostatic regulatory systems, are postulated to be major determinants of different motivational states. Deprivation produced by withholding food or water is used routinely in studies with experimental animals to establish prerequisite conditions in which nutrients or fluids can serve as positive reinforcers in both operant and classical conditioning procedures. In more natural conditions, the processes by which animals seek, find, and ingest food or fluids are divided into appetitive and consummatory phases. Appetitive behavior refers to the various patterns of behavior that are used to locate and bring the individual into direct contact with a biologically relevant stimulus such as water. Consummatory behavior describes the termination of approach behavior leading subsequently to ingestion of food, drinking of fluid, or copulation with a mate.
Incentive motivation is the term applied to the most influential psychological theory explaining how the stimulus properties of biologically relevant stimuli, and the environmental stimuli associated with them, control specific patterns of appetitive behavior (Bolles, 1972). According to this theory, the initiation and selection of specific behaviors are triggered by external (incentive) stimuli that also guide the individual toward a primary natural incentive, such as food, fluid, or a mate. Drugs of abuse and electrical brain-stimulation reward can serve as artificial incentives. In a further refinement of this theory, Berridge and Valenstein (1991) defined incentive motivation as the final stage in a three-part process. The first phase involves the activation of neural substrates for pleasure, which in the second phase are associated with the object giving rise to these positive sensations and the environmental stimuli identified with the object. The critical third stage involves processes by which salience is attributed to subsequent perceptions of the natural incentive stimulus and the associated environmental cues. It is postulated that this attribution of "incentive salience" depends upon activation of the mesotelencephalic dopamine systems. The sensation of pleasure and the classical associative learning processes that mediate stages one and two respectively are subserved by different neural substrates.
In the context of drive states as the physiological substrates of motivation, the level of motivation is manipulated by deprivation schedules in which the subject is denied access mainly to food or water for fixed periods of time (e.g., twenty-two hours of food deprivation). An animal's increased motivation can be inferred from measures such as its running speed in a runway to obtain food reward. Under these conditions, speed is correlated with level of deprivation. Another measure of the motivational state of an animal is the amount of work expended for a given unit of food, water, or drug. Work here is defined as the number of lever presses per reinforcer. If one systematically increases the number of presses required per reinforcer, one can identify a specific ratio of responses per reward beyond which the animal is unwilling to work. This final ratio is called the break point. In the context of drug reinforcement, the break point in responding for Cocaine can be increased or decreased in a dose-dependent manner by dopamine agonists and antagonists respectively.
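The break point just described can be sketched as the largest response requirement an animal completes when the requirement is raised step by step. The doubling progression and the session data below are hypothetical, chosen only to illustrate the computation.

```python
def break_point(ratios, responses_emitted):
    """Return the largest response requirement the animal completed.

    ratios            -- ascending requirements (a progressive-ratio series)
    responses_emitted -- responses the animal made at each step, in order
    """
    completed = 0
    for required, emitted in zip(ratios, responses_emitted):
        if emitted < required:   # requirement not met: the animal quit here
            break
        completed = required
    return completed

# Hypothetical session: requirements double each step; the animal completes
# the 16-response requirement but quits partway through the 32-response step
bp = break_point([2, 4, 8, 16, 32], [2, 4, 8, 16, 11])
```

A drug that raises this value is read as increasing the animal's motivation to work for the reinforcer; one that lowers it is read as decreasing that motivation.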
Appetitive behavior also can be measured directly in animal behavior studies, either by measuring an animal's latency (the time it takes) to approach a source of food or water during presentation of a conditioned stimulus predictive of food, or simply by measuring its latency to approach a food dispenser when given access to it. The fact that these appetitive behaviors are disrupted by dopamine antagonists has been interpreted as evidence of the role of mesotelencephalic dopamine pathways in incentive motivation.
In extending these ideas to the neural bases of drug addiction, Robinson and Berridge (1993) emphasized the role of sensitization, or enhanced behavioral responses to fixed doses of addictive drugs, that occurs after repeated intermittent drug treatment. Neurobiological evidence indicates that sensitization is directly related to neuroadaptations in the mesotelencephalic dopamine systems. As a result of these neural changes, a given dose of amphetamine, for example, causes enhanced levels of extracellular dopamine and an increase in the behavioral effects of the drug. Given the role proposed for the mesotelencephalic Dopamine systems in incentive salience, it is further conjectured that craving, or exaggerated desire for a specific object or its mental representation, is a direct result of drug-induced sensitization. In this manner, repeated self-administration of drugs of abuse, such as Amphetamine, produces neural effects that set the stage for subsequent craving for repeated access to the drug.
(See also: Brain Structures and Drugs ; Causes of Substance Abuse ; Research, Animal Model )
Berridge, K. C., & Valenstein, E. S. (1991). What psychological process mediates feeding evoked by electrical stimulation of the lateral hypothalamus? Behavioral Neuroscience, 105, 3-14.
Bolles, R. C. (1972). Reinforcement, expectancy, and learning. Psychological Review, 79, 394-409.
Robinson, T. E., & Berridge, K. C. (1993). The neural basis of drug craving: An incentive-sensitization theory of addiction. Brain Research Reviews, 18, 247-291.
Anthony G. Phillips
Georgine M. Pion
David S. Cordray
Qualitative and Ethnographic
Leann G. Putney
Judith L. Green
Carol N. Dixon
School and Program Evaluation
How do people learn to be effective teachers? What percentage of American students has access to computers at home? What types of assessments best measure learning in science classes? Do college admission tests place certain groups at a disadvantage? Can students who are at risk for dropping out of high school be identified? What is the impact of new technologies on school performance? These are some of the many questions that can be informed by the results of research.
Although research is not the only source used for seeking answers to such questions, it is an important one and the most reliable if executed well. Research is a process in which measurements are taken of individuals or organizations and the resulting data are subjected to analysis and interpretation. Special care is taken to provide as accurate an answer as possible to the posed question by subjecting "beliefs, conjectures, policies, positions, sources of ideas, traditions, and the like … to maximum criticism, in order to counteract and eliminate as much intellectual error as possible" (Bartley, pp. 139–140). In collecting the necessary information, a variety of methodologies and procedures can be used, many of which are shared by such disciplines as education, psychology, sociology, cognitive science, anthropology, history, and economics.
Evidence–The Foundation of Research
In education, research is approached from two distinct perspectives on how knowledge should be acquired. Research using quantitative methods rests on the belief that individuals, groups, organizations, and the environments in which they operate have an objective reality that is relatively constant across time and settings. Consequently, it is possible to construct measures that yield numerical data on this reality, which can then be further probed and interpreted by statistical analyses. In contrast, qualitative research methods are rooted in the conviction that "features of the social environment are constructed as interpretations by individuals and that these interpretations tend to be transitory and situational" (Gall, Borg, and Gall, p. 28). It is only through intensive study of specific cases in natural settings that these meanings and interpretations can be revealed and common themes educed. Although debate over which perspective is "right" continues, qualitative and quantitative research share a common feature–data are at the center of all forms of inquiry.
Fundamentally, data gathering boils down to two basic activities: Researchers either ask individuals (or other units) questions or observe behavior. More specifically, individuals can be asked about their attitudes, beliefs, and knowledge about past or current behaviors or experiences. Questions can also tap personality traits and other hypothetical constructs associated with individuals. Similarly, observations can take on a number of forms: (1) the observer can be a passive transducer of information or an active participant in the group being observed; (2) those being observed may or may not be aware that their behavior is being chronicled for research purposes; and (3) data gathering can be done by a human recorder or through the use of technology (e.g., video cameras or other electronic devices). Another distinction that is applicable to both forms of data gathering is whether the data are developed afresh within the study (i.e., primary data) or stem from secondary sources (e.g., data archives; written documents such as academic transcripts, individualized educational plans, or teacher notes; and artifacts that are found in natural settings). Artifacts can be very telling about naturally occurring phenomena. These can involve trace and accretion measures–that is, "residue" that individuals leave behind in the course of their daily lives. Examples include carpet wear in front of exhibits at children's museums (showing which exhibits are the most popular), graffiti written on school buildings, and websites visited by students.
What should be clear from this discussion so far is that there exists a vast array of approaches to gathering evidence about educational and social phenomena. Although reliance on empirical data distinguishes research-based disciplines from other modes of knowing, decisions about what to gather and how to structure the data gathering process need to be governed by the purpose of the research. In addition, a thoughtful combination of data gathering approaches has the best chance of producing an accurate answer.
Purposes of Research
The array of questions listed in the introductory paragraph suggests that research is done for a variety of purposes. These include exploring, describing, predicting, explaining, or evaluating some phenomenon or set of phenomena. Some research is aimed at replicating results from previous studies; other research is focused on quantitatively synthesizing a body of research. These two types of efforts are directed at strengthening a theory, verifying predictions, or probing the robustness of explanations by seeing if they hold true for different types of individuals, organizations, or settings.
Exploration. Very little may be known about some phenomena such as new types of settings, practices, or groups. Here, the research question focuses on identifying salient characteristics or features that merit further and more concerted examination in additional studies.
Description. Often, research is initiated to carefully describe a phenomenon or problem in terms of its structure, form, key ingredients, magnitude, and/or changes over time. The resulting profiles can either be qualitative or narrative, quantitative (e.g., x number of people have this characteristic), or a mixture of both. For example, the National Center for Education Statistics collects statistical information about several aspects of education and monitors changes in these indicators over time. The information covers a broad range of topics, most of which are chosen because of their interest to policymakers and educational personnel.
Prediction. Some questions seek to predict the occurrence of specific phenomena or states on the basis of one or more other characteristics. Short-and long-term planning are often the main rationale for this type of research.
Explanation. It is possible to be able to predict the occurrence of a certain phenomenon but not to know exactly why this relationship exists. In explanatory research, the aim is not only to predict the outcome or state of interest but also to understand the mechanisms and processes that result in one variable causing another.
Evaluation. Questions of this nature focus on evaluating or judging the worth of something, typically an intervention or program. Of primary interest is to learn whether an organized set of activities that is aimed at correcting some problem (e.g., poor academic skills, low self-esteem, disruptive behavior) is effective. When these efforts are targeted at evaluating the potential or actual success of policies, regulations, and laws, this is often known as policy analysis.
Replication. Some questions revolve around whether a demonstrated relationship between two variables (e.g., predictive value of the SAT in college persistence) can be again found in different populations or different types of settings. Because few studies can incorporate all relevant populations and settings, it is important to determine how generalizable the results of a study are to a particular group or program.
Synthesis. Taking stock of what is known and what is not known is a major function of research. "Summing-up" a body of prior research can take quantitative (e.g., meta-analysis) and qualitative (narrative summaries) forms.
Types of Research Methods
The purpose or purposes underlying a research study guide the choice of the specific research methods that are used. Any individual research study may address multiple questions, not all of which share the same purpose. Consequently, more than one research method may be incorporated into a particular research effort. Because methods of investigation are not pure (i.e., free of bias), several types of data and methods of gathering data are often used to "triangulate" on the answer to a specific question.
Measurement development. At the root of most inquiry is the act of measuring key conceptual variables of interest (e.g., learning strategies, intrinsic motivation, learning with understanding). When the outcomes being measured are important (e.g., grade placement, speech therapy, college admission), considerable research is often needed prior to conducting the main research study to ensure that the measure accurately describes individuals' status or performance. This can require substantial data collection and analysis in order to determine the measure's reliability, validity, and sensitivity to change; for some measures, additional data from a variety of diverse groups must be gathered for establishing norms that can assist in interpretation. With the exception of exploratory research, the quality of most studies relies heavily upon the degree to which the data-collection instruments provide reliable and valid information on the variables of interest.
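The reliability assessment described above can be sketched in code. The short Python example below computes Cronbach's alpha, one common index of a measure's internal consistency; the four-item attitude scale and the five respondents' scores are invented for illustration and do not come from this article.

```python
# Minimal sketch: estimating internal-consistency reliability
# (Cronbach's alpha) for a hypothetical four-item attitude scale.
# All data are fabricated for the example.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """scores: one list of item scores per respondent."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to per-item columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
print(round(cronbach_alpha(responses), 2))   # → 0.96
```

Values near 1 indicate that the items hang together; in practice, reliability would be examined alongside validity evidence and, for high-stakes uses, norming data from diverse groups.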
Survey methodology. Survey research is primarily aimed at collecting self-report information about a population by asking questions directly of some sample of it. The members of the target population can be individuals (e.g., local teachers), organizations (e.g., parent–teacher associations), or other recognized bodies (e.g., school districts or states). The questions can be directed at examining attitudes and preferences, facts, previous behaviors, and past experiences. Such questions can be asked by interviewers either face-to-face or on the telephone; they can also be self-administered by distributing them to groups (e.g., students in classrooms) or delivering them via the mail, e-mail, or the Internet.
High-quality surveys devote considerable attention to reducing as much as possible the major sources of error that can bias the results. For example, the target population needs to be completely enumerated so that important segments or groups are not unintentionally excluded from being eligible to participate. The sample is chosen in a way as to be representative of the population of interest, which is best accomplished through the use of probability sampling. Substantial time is given to constructing survey questions, pilot testing them, and training interviewers so that item wording, question presentation and format, and interviewing styles are likely to encourage thoughtful and accurate responses. Finally, concerted efforts are used to encourage all sampled individuals to complete the interview or questionnaire.
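The probability sampling step described above can be illustrated with a minimal sketch: the target population is fully enumerated, and a simple random sample is drawn so that every member has a known, equal chance of selection. The population of 500 "teachers" and the sample size of 50 are invented for the example.

```python
import random

# Enumerated sampling frame: every member of the (hypothetical)
# target population is listed, so no group is unintentionally excluded.
population = [f"teacher_{i:03d}" for i in range(1, 501)]

rng = random.Random(42)                 # fixed seed for a reproducible draw
sample = rng.sample(population, k=50)   # simple random sample, no replacement

print(len(sample))                      # → 50
print(len(set(sample)))                 # → 50 (no member drawn twice)
```

More complex probability designs (stratified or cluster sampling) follow the same principle: selection probabilities are known in advance, which is what allows the sample to represent the population.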
Surveys are mainly designed for description and prediction. Because they rarely involve the manipulation of independent variables or random assignment of individuals (or units) to conditions, they generally are less useful by themselves for answering explanatory and effects-oriented evaluative questions. If survey research is separated into its two fundamental components–sampling and data gathering through the use of questionnaires–it is easy to see that survey methods are embedded within experimental and quasi-experimental studies. For example, comparing the learning outcomes of students enrolled in traditional classroom-based college courses with those of students completing the course through distance learning would likely involve the administration of surveys that assess student views of the instructor and their satisfaction with how the course was taught. As another illustration, a major evaluation of Sesame Street that randomly assigned classrooms to in-class viewing of the program involved not only administering standardized reading tests to the participating students but also surveying teachers and parents. So, in this sense, many forms of inquiry can be improved by using state-of-the-art methods in questionnaire construction and measurement.
Observational methods. Instead of relying on individuals' self-reports of events, researchers can conduct their own observations. This is often preferable when there is a concern that individuals may misreport the requested information, either deliberately or inadvertently (e.g., they cannot remember). In addition, some variables are better measured by direct observation. For example, compare direct observations of how long teachers lecture in a class with asking teachers to self-report the time they spent lecturing: it should be obvious that the latter could be influenced (biased upward or downward) by how the teachers believe the researcher wants them to respond.
Observational methods are typically used in natural settings, although, as with survey methods, observations can be made of behaviors even in experimental and quasi-experimental studies. Both quantitative and qualitative observation strategies are possible. Quantitative strategies involve either training observers to record the information of interest in a systematic fashion or employing audiotape recorders, video cameras, and other electronic devices. When observers are used, they must be trained and monitored as to what should be observed and how it should be recorded (e.g., the number of times that a target behavior occurs during an agreed-upon time period).
Qualitative observational methods are distinctly different in several ways. First, rather than coding a prescribed set of behaviors, the focus of the observations is deliberately left more open-ended. By using open-ended observation schemes, the full range of individuals' responses to an environment can be recorded. That is, observations are much broader in contrast to quantitative observational strategies that focus on specific behaviors. Second, observers do not necessarily strive to remain neutral about what they are observing and may include their own feelings and experiences in interpreting what happened. Third, observers who employ quantitative methods do not participate in the situations that they are observing. In contrast, observers in qualitative research are not typically detached from the setting being studied; rather, they are more likely to be complete participants, with the researcher becoming a member of the setting that is being observed.
Qualitative strategies are typically used to answer exploratory questions as they help identify important variables and hypotheses about them. They also are commonly used to answer descriptive questions because they can provide in-depth information about groups and situations. Although qualitative strategies have been used to answer predictive, explanatory, and evaluative questions, they are less able to yield results that can eliminate all rival explanations for causal relationships.
Experimental methods. Experimental research methods are ideally suited for examining explanatory questions that seek to ascertain whether a cause-and-effect relationship exists among two or more variables. In experiments, the researcher directly manipulates the cause (the independent variable), assigns individuals randomly to various levels of the independent variable, and measures their responses (the expected effect). Ideally, the researcher has a high degree of control over the presentation of the purported cause–where, when, and in what form it is delivered; who receives it; and when and how the effect is measured. This level of control helps rule out alternative or rival explanations for the observed results. Exercising this control typically requires that the research be done under laboratory or contrived conditions rather than in natural settings. Experimental methods, however, can also be used in real-world settings–these are commonly referred to as field experiments.
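The random assignment that defines the experimental method can be sketched in a few lines of code; the participant names and the two-group design below are invented for illustration.

```python
import random

# Hypothetical pool of twenty participants to be assigned at random
# to a treatment condition or a control condition.
participants = [f"student_{i:02d}" for i in range(1, 21)]

rng = random.Random(7)       # fixed seed so the assignment is reproducible
shuffled = participants[:]   # copy, then shuffle to randomize the order
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]

print(len(treatment), len(control))   # → 10 10
```

Because chance alone determines who lands in each group, the groups are expected to be comparable on all characteristics, measured and unmeasured, which is what rules out rival explanations for a difference in outcomes.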
Conducting experiments in the field is more difficult inasmuch as the chances increase that integral parts of the experimental method will be compromised. Participants may be more likely to leave the study and thus be unavailable for measurement of the outcomes of interest. In a tutoring study, for example, subjects who are randomly assigned to the control group, which receives no tutoring, may decide to obtain help on their own–assistance that resembles the intervention being tested. Such problems work against controlling for rival explanations, and the key elements of the experimental method are sacrificed. Excellent discussions of procedures for conducting field experiments can be found in the 2002 book Experimental and Quasi-Experimental Designs for Generalized Causal Inference, written by William R. Shadish, Thomas D. Cook, and Donald T. Campbell, and in Robert F. Boruch's 1997 book Randomized Field Experiments for Planning and Evaluation: A Practical Guide.
Quasi-experimental methods. As suggested by its name, the methods that comprise quasi-experimental research approximate experimental methodologies. They are directed at fulfilling the same purposes–explanation and evaluation–but may provide more equivocal answers than experimental designs. The key characteristic that distinguishes quasi experiments from experiments is the lack of random assignment. Because of this, researchers must make concerted efforts to rule out the plausible rival hypotheses that random assignment is designed to eliminate.
Quasi-experimental designs constitute a core set of research strategies because there are many instances in which it is impossible to successfully assign participants randomly to different conditions or levels of the independent variable. For example, the first evaluation of Sesame Street that was conducted by Samuel Ball and Gerry Bogatz in 1970 was designed as a randomized experiment where individual children in five locations were randomly assigned to either be encouraged to watch the television program (and be observed in their homes doing it) or not encouraged. Classrooms in these locations were also either given television sets or not, and teachers in classrooms with television sets were encouraged to allow the children to view the show at least three days per week. The study, however, turned into a quasi experiment because Sesame Street became so popular that children in the control group (who were not encouraged to watch) ended up watching a considerable number of shows.
The two most frequently used quasi-experimental strategies are time-series designs and nonequivalent comparison group designs, each of which has some variations. In time-series designs, the dependent variable or expected effect is measured several times before and after the independent variable is introduced. For example, in a study of a zero tolerance policy, the number of school incidents related to violence and substance use is recorded on a monthly basis for twelve months before the policy is introduced and twelve or more months after its implementation. If a noticeable reduction in incidents occurs soon after the new policy is introduced and the reduction persists, one can be reasonably confident that the new policy was responsible for the observed reduction if no other events occurred that could have resulted in a decline and there was evidence that the policy was actually enforced. This confidence may be even stronger if data are collected on schools that have similar student populations and characteristics but no zero tolerance policies during the same period and there is no reduction in illegal substance and violence-related incidents.
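The time-series logic just described can be illustrated with a toy computation; the monthly incident counts below are fabricated, and a real interrupted time-series analysis would also model trends and seasonal patterns rather than compare simple means.

```python
# Minimal sketch of a time-series comparison: monthly counts of
# violence/substance incidents for twelve months before and twelve
# months after a (hypothetical) zero tolerance policy takes effect.
# All counts are invented for the example.

before = [14, 16, 15, 17, 13, 16, 15, 18, 14, 16, 15, 17]
after  = [11, 10,  9,  9,  8,  8,  7,  8,  7,  7,  6,  7]

mean_before = sum(before) / len(before)   # 15.5 incidents per month
mean_after = sum(after) / len(after)      # about 8.1 incidents per month

print(round(mean_before - mean_after, 1))  # → 7.4 (average monthly reduction)
```

Confidence that the policy caused the drop still depends on the design features discussed above: a persistent post-policy decline, no rival events, evidence of enforcement, and ideally comparison schools without the policy.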
Establishing causal relationships with the nonequivalent comparison group design is typically more difficult. This is because when groups are formed in ways other than random assignment (e.g., participant choice), this often means that they differ in other ways that affect the outcome of interest. For example, suppose that students who are having problems academically are identified and allowed to choose to be involved or not involved in an after-school tutoring program. Those who decide to enroll are also those who may be more motivated to do well, who may have parents who are willing to help their children improve, and who may differ in other ways from those who choose not to stay after school. They may also have less-serious academic problems. Such factors all may contribute to these students exhibiting higher academic gains than their nontutored counterparts when testing is completed after the tutoring period. It is difficult, however, to disentangle the contribution of tutoring to any observed improvement from these other features. The use of well-validated measures of these characteristics for both groups prior to receiving or not receiving tutoring can help in this process, but the difficulty is to identify and measure all the key variables other than tutoring receipt that can influence the observed outcomes.
Secondary analysis and meta-analysis. Both secondary analysis and meta-analysis are part of the arsenal of quantitative research methods, and both rely on research data already collected by other studies. They are invaluable tools for informing questions that seek descriptive, predictive, explanatory, or evaluative answers. Studies that rely on secondary analysis focus on examining and reanalyzing the raw data from prior surveys, experiments, and quasi experiments. In some cases, the questions prompting the analysis are ones that were not examined by the original investigator; in other cases, secondary analysis is performed because the researcher disagrees to some extent with the original conclusions and wants to probe the data, using different statistical techniques.
Secondary analyses occupy a distinct place in educational research. Since the 1960s federal agencies have sponsored several large-scale survey and evaluation efforts relevant to education, which have been analyzed by other researchers to re-examine the reported results or answer additional questions not addressed by the original researchers. Two examples, both conducted by the National Center for Education Statistics, include the High School and Beyond Survey, which tracks seniors and sophomores as they progress through high school and college and enter the workplace; and the Schools and Staffing Survey, which regularly collects data on the characteristics and qualifications of teachers and principals, class size, and other school conditions.
The primary idea underlying meta-analysis or research synthesis methods is to go beyond the more traditional, narrative literature reviews of research in a given area. The process involves using systematic and comprehensive retrieval practices for accumulating prior studies, quantifying the results by using a common metric (such as the effect size), and statistically combining this collection of results. In general, the reported results that are used from studies involve intermediate statistics such as means, standard deviations, proportions, and correlations.
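The combining step described above can be sketched with a fixed-effect, inverse-variance weighted average of standardized mean differences, one common way of pooling effect sizes; the three studies' effect sizes and variances below are fabricated for illustration.

```python
import math

# Minimal sketch of fixed-effect meta-analysis: each (invented) study
# contributes a standardized mean difference d and its variance, and
# studies with smaller variances (more precise estimates) get more weight.
studies = [
    (0.30, 0.04),   # (effect size d, variance of d) for study 1
    (0.50, 0.02),   # study 2
    (0.10, 0.05),   # study 3
]

weights = [1 / v for _, v in studies]                 # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))               # standard error of pooled d

print(round(pooled, 3))      # → 0.363
print(round(se_pooled, 3))   # → 0.103
```

Real syntheses would also test whether the studies' results are homogeneous enough to pool and, if not, turn to random-effects models; the point of the sketch is simply the common metric and the weighted combination.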
The use of meta-analysis grew dramatically in the 1990s. Its strength is that it allows one to draw conclusions across multiple studies that addressed the same question (e.g., what have been the effects of bilingual education?) but used different measures, populations, settings, and study designs. The use of both secondary analysis and meta-analysis has increased the longer-term value of individual research efforts, either by increasing the number of questions that can be answered from one large-scale survey or by looking across several small-scale studies that seek answers to the same question. These research methods have contributed much in addressing policymakers' questions in a timely fashion and to advancing theories relevant to translating educational research into recommended practices.
See also: Faculty Performance of Research and Scholarship; Research Methods, subentries on Qualitative and Ethnographic, School and Program Evaluation.
Ball, Samuel, and Bogatz, Gerry A. 1970. The First Year of Sesame Street: An Evaluation. Princeton, NJ: Educational Testing Service.
Bartley, William W., III. 1962. The Retreat to Commitment. New York: Knopf.
Boruch, Robert F. 1997. Randomized Field Experiments for Planning and Evaluation: A Practical Guide. Thousand Oaks, CA: Sage.
Bryk, Anthony S., and Raudenbush, Stephen W. 1992. Hierarchical Linear Models: Applications and Data Analysis Methods. Newbury Park, CA: Sage.
Cook, Thomas D.; Cooper, Harris; Cordray, David S.; Hartmann, Heidi; Hedges, Larry V.; Light, Richard J.; Louis, Thomas A.; and Mosteller, Frederick, eds. 1992. Meta-Analysis for Explanation: A Casebook. New York: Russell Sage Foundation.
Cooper, Harris, and Hedges, Larry V., eds. 1994. The Handbook of Research Synthesis. New York: Russell Sage Foundation.
Gall, Meredith D.; Borg, Walter R.; and Gall, Joyce P. 1996. Educational Research: An Introduction, 6th edition. White Plains, NY: Longman.
Shadish, William R.; Cook, Thomas D.; and Campbell, Donald T. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.
Georgine M. Pion
David S. Cordray
QUALITATIVE AND ETHNOGRAPHIC
A qualitative approach to research generally involves the researcher in contact with participants in their natural setting to answer questions related to how the participants make sense of their lives. Qualitative researchers may observe the participants and conduct formal and informal interviews to further an understanding of what is going on in the setting from the point of view of those involved in the study. Ethnographic research shares these qualitative traits, but ethnographers more specifically seek understanding of what participants do to create the culture in which they live, and how the culture develops over time. This article further explores what it means to conduct qualitative and ethnographic research by looking at them historically and then by describing key characteristics of these approaches.
The Context in Education
Qualitative and ethnographic research developed in education in the late 1970s. Ethnographic researchers drew on theory and methods in anthropology and sociology, creating a distinction between ethnography of education (work undertaken by anthropologists and sociologists) and ethnography in education (work undertaken by educators to address educational issues). Other forms of qualitative research drew on theories from the humanities and other social and behavioral sciences, adapting this work to educational goals and concerns, often creating new forms (e.g., connoisseurship, a field method approach, interview approaches, and some forms of action research).
In the early development of these traditions, educational researchers struggled for acceptance by both other professionals and policymakers. This phase was characterized by arguments over the value of qualitative methods in contrast to the dominant paradigms of the time–quantitative and experimental approaches. Qualitative and ethnographic researchers argued that questions important to education were left unexamined by the dominant paradigms. Some qualitative researchers argued for the need to include and represent the voices of people in their research, particularly voices not heard in other forms of research involving large-scale studies.
Questions asked by qualitative and ethnographic researchers generally focus on understanding the local experiences of people as they engage in their everyday worlds (e.g., classrooms, peer groups, homes, communities). For example, some researchers explore questions about ways in which people gain, or fail to gain, access to ways of learning in a diverse world; others focus on beliefs people hold about education and learning; while still others examine how patterns learned within a group are consequential for participation in other groups and situations.
A broad range of perspectives and approaches exist, each with its own historical tradition and theoretical orientation. A number of common dimensions can be identified across these perspectives and approaches. Qualitative and ethnographic researchers in education are concerned with the positions they take relative to participants and data collected. For example, many qualitative and ethnographic researchers engage in observations over a period of time to identify patterns of life in a particular group.
The theoretical orientation chosen guides the design and implementation of the research, including the tools used to collect (e.g., participant observation, interviewing, and collecting artifacts) and analyze data (e.g., discourse analysis, document analysis, content analysis, and transcribing video/audio data). Theory also guides other decisions, including how to enter the field (e.g., the social group, classroom, home, and/or community center), what types and how much data to collect and records to make (e.g., videotape, audiotape, and/or field notes), who to interview (formally and/or informally), how long to remain in the field (e.g., for ethnography, one or more years), and what literature is relevant. It also influences relationships researchers establish with people in local settings, which in turn influences what can be known. Some theoretical perspectives guide researchers to observe what is occurring from a distance by taking the role of passive observer, recording information for analysis once they leave the field. Such researchers often do not interview participants, preferring to "ground" their observations in patterns in the data, without concern for what members understand. These descriptions are called etic, or outsider descriptions, because the observer is not concerned with members' understandings.
This approach is in contrast with ones in which researchers join the group and become active participant-observers, at times participating directly in events. Such researchers also make videotape records that enable them to step back from what they thought was occurring to examine closely what resulted from those actions. Those not using video or audio records reconstruct events by constructing retrospective field notes, drawing on their memories of what occurred to create a written record to analyze when they leave the field. Just which type of approach and position researchers take depends on their research goal(s) and theoretical orientation(s) as well as what participants permit.
Approaches to Research Questions
Research questions in a qualitative study are generated as part of the research process. Qualitative and ethnographic researchers often begin a study with one or more initiating question(s) or an issue they want to examine. Qualitative and ethnographic research approaches involve a process of interacting with data, reflecting on what is important to members in the local setting, and using this to generate new questions and refine the initial questions. This interactive and responsive process also influences the data that are collected and analyzed throughout the study. Therefore, it is common for researchers to construct more detailed questions that are generated as part of the analysis as they proceed throughout the study, or to abandon questions and generate ones more relevant to the local group or issues being studied.
For example, in one study of a fifth-grade classroom, the initial research questions were open ended and general: (1) What counts as community to the students and teacher in this classroom? (2) How do the participants construct community in this classroom? and (3) How is participating in this classroom consequential for students and the teacher? As the study unfolded, the research questions became more directed toward what the researcher was beginning to understand about this classroom in particular. After first developing an understanding of patterns of interactions among participants, the researcher began to formulate more specific questions: (1) What patterns of practice does the teacher construct to offer opportunities for learning? (2) What roles do the social and academic practices play in the construction of community in this classroom? and (3) What are the consequences for individuals and the collective when a member leaves and reenters the classroom community? This last question was one that could not have been anticipated but was important to understanding what students learned and when student learning occurred as well as what supported and constrained that learning. The shifts in questions constitute this researcher's logic of inquiry and need to be reported as part of the dynamic design of the study.
Approaches to Design and Data Collection
In designing qualitative studies, researchers consider ways of collecting data to represent the multiple voices and actions constituting the research setting. Typical techniques used in qualitative research for collecting data include observing in the particular setting, conducting interviews with various participants, and reviewing documents or artifacts. The degree to which these techniques are used depends on the nature of the particular research study and what occurs in the local group.
Some studies involve in-depth analysis of one setting or interviews of one group of people. Others involve a contrastive design from the beginning, seeking to understand how the practices of one group are similar to or different from another group. Others seek to study multiple communities to test hypotheses from the research literature (e.g., child-rearing practices are the same in all communities). What is common to all of these studies is that they are examining the qualities of life and experiences within a local situation. This is often called a situated perspective.
Entering the Field and Gaining Access to Insider Knowledge
Entering the research setting is one of the first phases of conducting fieldwork. Gaining access to the site is ongoing and negotiated with the participants throughout the study. As new questions arise, the researcher has to renegotiate access. For example, a researcher may find that the outcomes of standardized tests become an important issue for the teachers and students. The researcher may not have obtained permission to collect these data at the beginning of the study and must then negotiate permission from parents, students, teachers, and district personnel to gain access to these scores.
Qualitative research involves a social contract with those participating in the study, and informed consent is negotiated at each phase of the research when new information is needed or new areas of study are undertaken. At such points of renegotiation, researchers need to consider the tools necessary and the ways to participate within the group (e.g., as participant-observer and/or observer-participant, as interviewer of one person or as a facilitator of a focus group, or as analyst of district data or student products). How the researcher conducts observations, collects new forms of data, and analyzes such data is related to shifts in questions and/or theoretical stance(s) necessary to understand what is occurring.
One of the most frequently used tools, in addition to participant observation, is interviewing. For ethnography and other types of field research, interviews occur within the context of the ongoing observations and collection of artifacts. These interviews are grounded in what is occurring in the local context, both within and across time. Some interviews are undertaken to gain insider information about what the researcher is observing or to test out the developing theory that the researcher is constructing.
In contrast, other forms of qualitative research may use interviews as the sole form of data collection. Such interviews also seek the meanings that individuals or groups attach to their own experiences or to observed phenomena. These interviews, however, form the basis for analysis on their own and do not require contextual information from observations. What the people say becomes the basis for exploration, not what was observed.
Other tools used by qualitative and ethnographic researchers include artifact and document analysis (artifacts being anything people make and use). The researcher in a field-based study collects artifacts produced and/or used by members of the group, identifies how these artifacts function for the individual and/or the group, and explores how members talk about and name these artifacts. For some theoretical positions, the artifacts may be viewed as a type of participant in the local event (e.g., computer programs as participants). Some artifacts, such as documents, are examined for links to other events or artifacts. This form of analysis builds on the understanding that the past (and future) is present in these artifacts and that intertextual links between and among events are often inscribed in such documents. In some cases, qualitative researchers may focus solely on a set of artifacts (e.g., student work, linked sets of laws, a photograph collection, or written texts in the environment–environmental print). Such studies seek to examine the range of texts or materials constructed, the patterned ways in which the texts are constructed, and how the choices of focus or discourse inscribe the views that members have of self and others as well as what is possible in their worlds.
Although some qualitative studies focus solely on the documents, field-based researchers generally move between document analysis and an exploration of the relationship of the document to past, present, and future actions of individuals and/or groups. These studies seek to understand the importance of the artifact or document within the lives of those being studied.
Ongoing Data Analysis
While conducting fieldwork, researchers reread their field notes and add to them any relevant information that they were not able to include at the time of first writing the notes. While reviewing their field notes, researchers look for themes and information relevant to the research questions. They note this information in the form of theoretical notes (or write theoretical memos to themselves) that may include questions about repeated patterns, links to other theories, and conceptual ideas they are beginning to develop. They also make methodological notes to reconstruct their thinking and their logic of inquiry. Sometimes they make personal notes that reflect their thoughts and feelings about what they are observing or experiencing. These notes help researchers avoid imposing their own opinions on the data and keep the focus on what is meaningful or important to those with whom they are working.
Researchers constantly use contrast to build interpretations that are grounded in the data, within and across actors, events, times, actions, and activities that constitute the social situations of everyday life. Many qualitative (particularly ethnographic) researchers examine material, activity, semiotic (meaning-carrying), and/or social dimensions of everyday life and its consequences for members. The analytic principles of practice that they use include comparing and contrasting data, methods, theories, and perspectives; examining part-whole relationships between and among actions, events, and actors; seeking insider (emic) understandings of experiences, actions, practices, and events; and identifying through these what is relevant to the local group.
Reporting Research Findings
The final step in qualitative and ethnographic research is writing an account. The researchers make choices about how to represent the data that illustrate what was typical about the particular group being studied. Another choice might be to highlight actions of the group that were illustrative of their particular patterns of beliefs. In some studies, several cases are chosen to make visible comparisons across different activities within the group, or across different groups that may have some activities in common. For example, researchers who study classroom interactions might bring together data from different classrooms to make visible principles of practice that are similar in general terms such as asking students to understand various points of view. However, in each classroom, the actions of juxtaposing points of view will be carried out differently due to the different experiences within each classroom.
Researchers also select genres for writing the report that best enable the intended audience to understand what the study made visible that was not previously known or that extended previous knowledge. The researcher does not seek to generalize from the specific case. Rather, qualitative or ethnographic researchers provide in-depth descriptions that lead to general patterns. These patterns are then examined in other situations to see if, when, and how they occur and what consequences they have for what members in the new setting can know, do, understand, and/or produce. In qualitative and ethnographic studies this is often referred to as transferability, in contrast to generalizability.
See also: Research Methods, subentries on Overview, School and Program Evaluation.
Denzin, Norman, and Lincoln, Yvonna, eds. 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Erickson, Frederick. 1986. "Qualitative Methods in Research on Teaching." In The Handbook of Research on Teaching, 3rd edition, ed. Merlin Wittrock. New York: Macmillan.
Flood, James; Jensen, Julie; Lapp, Diane; and Squire, James, eds. 1990. Handbook of Research on Teaching the English Language Arts. New York: Macmillan.
Gee, James, and Green, Judith. 1998. "Discourse Analysis, Learning, and Social Practice: A Methodological Study." Review of Research in Education 23:119–169.
Gilmore, Perry, and Glatthorn, Allan, eds. 1982. Children In and Out of School: Ethnography and Education. Washington, DC: Center for Applied Linguistics.
Green, Judith; Dixon, Carol; and Zaharlick, Amy. 2002. "Ethnography as a Logic of Inquiry." In Handbook for Methods of Research on English Language Arts Teaching, ed. James Flood, Julie Jensen, Diane Lapp, and James Squire. New York: Macmillan.
Hammersley, Martyn, and Atkinson, Paul. 1995. Ethnography: Principles in Practice, 2nd edition. New York: Routledge.
Kvale, Steinar. 1996. Interviews: An Introduction to Qualitative Research Interviewing. Thousand Oaks, CA: Sage.
LeCompte, Margaret; Millroy, Wendy; and Preissle, Judith, eds. 1992. The Handbook of Qualitative Research in Education. San Diego, CA: Academic Press.
Linde, Charlotte. 1993. Life Stories: The Creation of Coherence. New York: Oxford University Press.
Ochs, Elinor. 1979. "Transcription as Theory." In Developmental Pragmatics, ed. Elinor Ochs and Bambi B. Schieffelin. New York: Academic Press.
Putney, LeAnn; Green, Judith; Dixon, Carol; and Kelly, Gregory. 1999. "Evolution of Qualitative Research Methodology: Looking beyond Defense to Possibilities." Reading Research Quarterly 34:368–377.
Richardson, Virginia, ed. 2001. Handbook of Research on Teaching, 4th edition. Washington, DC: American Educational Research Association.
Spradley, James. 1980. Participant Observation. New York: Holt, Rinehart and Winston.
Strike, Kenneth. 1974. "On the Expressive Potential of Behaviorist Language." American Educational Research Journal 11:103–120.
Van Maanen, John. 1988. Tales of the Field: On Writing Ethnography. Chicago: University of Chicago Press.
Wolcott, Harry. 1992. "Posturing in Qualitative Research." In The Handbook of Qualitative Research in Education, ed. Margaret LeCompte, Wendy Millroy, and Judith Preissle. New York: Academic Press.
LeAnn G. Putney
Judith L. Green
Carol N. Dixon
SCHOOL AND PROGRAM EVALUATION
Program evaluation is research designed to assess the implementation and effects of a program. Its purposes vary and can include (1) program improvement, (2) judging the value of a program, (3) assessing the utility of particular components of a program, and (4) meeting accountability requirements. Results of program evaluations are often used for decisions about whether to continue a program, improve it, institute similar programs elsewhere, allocate resources among competing programs, or accept or reject a program approach or theory. Through these uses program evaluation is viewed as a way of rationalizing policy decision-making.
Program evaluation is conducted for a wide range of programs, from broad social programs such as welfare, to large multisite programs such as the preschool intervention program Head Start, to program funding streams such as the U.S. Department of Education's Title I program that gives millions of dollars to high-poverty schools, to small-scale programs with only one or a few sites such as a new mathematics curriculum in one school or district.
Scientific Research versus Evaluation
There has been some debate about the relationship between "basic" or scientific research and program evaluation. For example, in 1999 Peter Rossi, Howard Freeman, and Mark Lipsey described program evaluation as the application of scientific research methods to the assessment of the design and implementation of a program. In contrast, Michael Patton in 1997 described program evaluation not as the application of scientific research methods, but as the systematic collection of information about a program to inform decision-making.
Both agree, however, that in many circumstances the design of a program evaluation that is sufficient for answering evaluation questions and providing guidance to decision-makers would not meet the high standards of scientific research. Further, program evaluations are often not able to strictly follow the principles of scientific research because evaluators must confront the politics of changing actors and priorities, limited resources, short timelines, and imperfect program implementation.
Another dimension on which scientific research and program evaluation differ is their purpose. Program evaluations must be designed to maximize the usefulness for decision-makers, whereas scientific research does not have this constraint. Both types of research might use the same methods or focus on the same subject, but scientific research can be formulated solely from intellectual curiosity, whereas evaluations must respond to the policy and program interests of stakeholders (i.e., those who hold a stake in the program, such as those who fund or manage it, or program staff or clients).
How Did Program Evaluation Evolve?
Program evaluation began proliferating in the 1960s, with the dawn of social antipoverty programs and the government's desire to hold those programs accountable for positive results. Education program evaluation in particular also expanded because of the formal evaluation requirements of the National Science Foundation–sponsored mathematics and science curriculum reforms that responded to the 1957 launch of Sputnik by the Soviet Union, as well as the evaluation requirements instituted as part of the Elementary and Secondary Education Act of 1965.
Experimentation versus Quasi-experimentation
The first large-scale evaluations in education were the subject of much criticism. In particular, two influential early evaluations were Paul Berman and Milbrey McLaughlin's RAND Change Agent study (1973–1978) of four major federal programs: the Elementary and Secondary Education Act, Title VII (bilingual education), the Vocational Education Act, and the Right to Read Act; and a four-year study of Follow Through, which sampled 20,000 students and compared thirteen models of early childhood education. Among the criticisms of these evaluations were that they were conducted under too short a time frame, used crude measures that did not capture incremental or intermediate change, had statistical inadequacies including invalid assumptions, used poorly supported models and inappropriate analyses, and did not consider the social context of the program.
These criticisms led to the promotion of experiments for program evaluation. Donald Campbell wrote an influential article in 1969 advocating the use of experimental designs in social program evaluation. The Social Science Research Council commissioned Henry Riecken and Robert Boruch to write the 1978 book Social Experimentation, which served as both a "guidebook and manifesto" for using experimentation in program evaluation. The best-known example of experimentation in social research is the New Jersey negative income tax experiment, sponsored by the federal Office of Economic Opportunity.
Experiments are the strongest designs for assessing impact, because through random sampling from the population of interest and random assignment to treatment and control groups, experiments rule out other factors besides the program that might explain program success. There are several practical disadvantages to experiments, however. First, they require that the program be a partial coverage program–that is, there must be people who do not participate in the program, who can serve as the control group. Second, experiments require large amounts of resources that are not always available. Third, they require that the program be firmly and consistently implemented, which is frequently not the case. Fourth, experiments do not provide information about how the program achieved its effects. Fifth, program stakeholders sometimes feel that random assignment to the program is unethical or politically unfeasible. Sixth, an experimental design in a field study is likely to produce no more than an approximation of a true experiment, because of such factors as systematic attrition from the program, which leaves the evaluator with a biased sample of participants (e.g., those who leave the program, or attrite, might be those who are the hardest to influence, so successful program outcomes would be biased in the positive direction).
When experiments are not appropriate or feasible, quasi-experimental techniques are used. Set forth by Donald Campbell and Julian Stanley in 1963, quasi-experimentation comprises a number of research methods that do not require random sampling and random assignment to treatment and control groups. One common example is an evaluation that matches program participants to nonparticipants who share similar characteristics (e.g., race) and measures outcomes of both groups before and after the program. The challenge in quasi-experimentation is to rule out what Campbell and Stanley termed internal validity threats–factors besides the program itself that might offer alternative explanations for program results and thus reduce confidence in the conclusions of the study. Unlike experimental designs, which protect against nearly all possible internal validity threats, quasi-experimental designs generally leave one or several of them uncontrolled.
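The matched pre/post comparison design described above can be sketched in a few lines. Every name and number below is hypothetical; a real evaluation would match on many characteristics (often via propensity scores) and test the estimate for statistical significance rather than report a raw mean difference.

```python
# Hypothetical records: (id, matching characteristic, pretest, posttest).
participants = [
    (1, "urban", 50, 62),
    (2, "rural", 48, 59),
    (3, "urban", 55, 66),
]
nonparticipants = [
    (101, "urban", 51, 55),
    (102, "rural", 47, 52),
    (103, "urban", 54, 58),
    (104, "rural", 49, 51),
]

def match(treated, pool):
    """Pair each participant with an unused nonparticipant who shares
    the same matching characteristic."""
    pairs, used = [], set()
    for t in treated:
        for c in pool:
            if c[0] not in used and c[1] == t[1]:
                pairs.append((t, c))
                used.add(c[0])
                break
    return pairs

pairs = match(participants, nonparticipants)

def gain(record):
    # Pre-to-post change for one person.
    return record[3] - record[2]

# The program-effect estimate is the mean gain of participants minus
# that of their matched comparisons, on the (untestable) assumption
# that both groups would otherwise have changed at the same rate.
effect = sum(gain(t) - gain(c) for t, c in pairs) / len(pairs)
print(effect)  # 7.0 with these hypothetical scores
```

This is exactly the design's weakness as the text notes: any internal validity threat not captured by the matching characteristic (e.g., differential motivation) remains confounded with the estimate.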
In addition to focusing on the relative strengths and weaknesses of experiments and quasi-experiments, criticisms of early large-scale education evaluations highlighted the importance of measuring implementation. For example, Berman and McLaughlin's RAND Change Agent study and the Follow Through evaluation demonstrated that implementation of a specific program can differ a great deal from one site to the next. If an evaluation is designed to attribute effects to a program, varying implementation of the same program reduces the value of the evaluation, because it is unclear how to define the program. Thus, it is necessary for a program evaluation to include a complete description of how the program is being implemented, to allow examination of implementation fidelity to the original design, and to discover any cross-site implementation differences that would affect outcomes.
In 1967 Michael Scriven first articulated the idea that there are two types of evaluation–one focused on evaluating implementation, called formative evaluation, and one focused on evaluating the impact of the program, called summative evaluation. He argued that emerging programs should be the subject of formative evaluations, which are designed to see how well a program is implemented and to improve implementation, and that summative evaluations should be reserved for programs that are well established and have stable, consistent implementation.
Related to the idea of formative and summative evaluation is a controversy over the extent to which the evaluator should be a program insider or an objective third party. In formative evaluations, it can be argued that the evaluator needs to become somewhat of an insider, in order to become part of the formal and informal feedback loop that makes providing program improvement information possible. In contrast, summative evaluations conducted by a program insider foster little confidence in the results, because of the inherent conflict of interest.
Stakeholder and Utilization Approaches
Still another criticism of early education evaluations was that stakeholders felt uninvolved in the evaluations; did not agree with the goals, measures, and procedures; and thus rejected the findings. This discovery of the importance of stakeholder buy-in led to what Michael Patton termed stakeholder or utilization-focused evaluation. Stakeholder evaluation bases its design and execution on the needs and goals of identified stakeholders or users, such as the funding organization, a program director, the staff, or clients of the program.
In the context of stakeholder evaluation, Patton in 1997 took up an idea originally proposed by Michael Scriven: that it is sometimes appropriate to conduct goal-free evaluation. He suggested that evaluators should be open to conducting an evaluation without preconceived goals, because program staff might not agree with the stated goals and because the goals of a program might change over time. Further, he argued that goal-free evaluation avoids missing unanticipated outcomes, removes the negative connotation of side effects, eliminates perceptual biases that occur when goals are known, and helps to maintain evaluator objectivity. Goals are often necessary, however, to guide and focus the evaluation and to respond to the needs of policymakers. As a result, Patton argued that the use of goals in program evaluation should be decided on a case-by-case basis.
Besides stakeholder and goal-free evaluation, Carol Weiss in 1997 advocated theory-based evaluations–evaluations grounded in the program's theory of action. Theory-based evaluation aims to make clear the theoretical underpinnings of the program and to use them to help structure the evaluation. In her support of theory-based evaluation, Weiss wrote that if the program theory is outlined in a phased sequence of cause and effect, then the evaluation can identify weaknesses in the system and the point in the chain of effects to which results can be attributed. Also, articulating a programmatic theory can benefit the program itself, helping the staff address conflicts, examine their own assumptions, and improve practice.
Weiss explained that theory-based approaches have not been widespread because there may be more than one theory that applies to a program and no guidance about which to choose, and because the process of constructing theories is challenging and time consuming. Further, theory-based approaches require large amounts of data and resources. A theory-based evaluation approach does, however, strengthen the rigor of the evaluation and link it more with scientific research, which by design is a theory-testing endeavor.
Data Collection Methods
Within different types of evaluation (e.g., formative, stakeholder, theory-based), there have been debates about which methodology is appropriate, and these debates mirror those in the larger social science community. The "scientific ideal" favors randomized social experiments and the quantification of implementation and outcomes; the contrasting "humanistic ideal" holds that a program should be seen through the eyes of its clients and defies quantification, favoring ethnographic or observational methods.
Campbell believed that the nature of the research question should determine the methodology, and he encouraged evaluations that include both qualitative and quantitative assessments, with the two supporting each other. In the early twenty-first century, program evaluations commonly use a combination of qualitative and quantitative data collection techniques.
Does Evaluation Influence Policy?
Although the main justification for program evaluation is its role in rationalizing policy, program evaluation results rarely have a direct impact on decision-making. This is because of the diffuse and political nature of policy decision-making and because people are generally resistant to change. Most evaluations are undertaken and disseminated in an environment where decision-making is decentralized among several groups and where program and policy choices result from conflict and accommodation across a complex and shifting set of players. In this environment, evaluation results cannot have a single and clear use, nor can the evaluator be sure how the results will be interpreted or used.
While program evaluations may not directly affect decisions, evaluation does play a critical role in contributing to the discourse around a particular program or issue. Information generated from program evaluation helps to frame the policy debate by bringing conflict to the forefront, providing information about trade-offs, influencing the broad assumptions and beliefs underlying policies, and changing the way people think about a specific issue or problem.
Evaluation in the Early Twenty-First Century
In the early twenty-first century, program evaluation is an integral component of education research and practice. The No Child Left Behind Act of 2001 (reauthorization of the U.S. government's Elementary and Secondary Education Act) calls for schools to use "research-based practices." This means practices that are grounded in research and have been proven through evaluation to be successful. Owing in part to this government emphasis on the results of program evaluation, there is an increased call for the use of experimental designs.
Further, as the evaluation field has developed in sophistication and increased its requirements for rigor and high standards of research, the lines between scientific research and evaluation have faded. There is a move to design large-scale education evaluations to respond to programmatic concerns while simultaneously informing methodological and substantive inquiry.
While program evaluation is not expected to drive policy, if conducted in a rigorous and systematic way that adheres to the principles of social research as closely as possible, the results of program evaluations can contribute to program improvement and can provide valuable information to both advance scholarly inquiry as well as inform important policy debates.
See also: Research Methods, subentries on Overview, Qualitative and Ethnographic.
Berman, Paul, and McLaughlin, Milbrey. 1978. Federal Programs Supporting Educational Change, Vol. IV: The Findings in Review. Santa Monica, CA: RAND.
Campbell, Donald. 1969. "Reforms as Experiments." American Psychologist 24:409–429.
Campbell, Donald, and Stanley, Julian. 1963. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Chelimsky, Eleanor. 1987. "What Have We Learned about the Politics of Program Evaluation?" Evaluation News 8 (1):5–22.
Cohen, David, and Garet, Michael. 1975. "Reforming Educational Policy with Applied Social Research." Harvard Educational Review 45 (1):17–43.
Cook, Thomas D., and Campbell, Donald T. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
Cronbach, Lee J. 1982. Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass.
Cronbach, Lee J.; Ambron, Sueann Robinson; Dornbusch, Sanford; Hess, Robert; Phillips, D. C.; Walker, Decker; and Weiner, Stephen. 1980. Toward Reform of Program Evaluation: Aims, Methods, and Institutional Arrangements. San Francisco: Jossey-Bass.
House, Ernest; Glass, Gene; McLean, Leslie; and Walker, Decker. 1978. "No Simple Answer: Critique of the Follow Through Evaluation." Harvard Educational Review 48:128–160.
Patton, Michael. 1997. Utilization-Focused Evaluation, 3rd edition. Thousand Oaks, CA: Sage.
Riecken, Henry, and Boruch, Robert. 1978. Social Experimentation: A Method for Planning and Evaluating Social Intervention. New York: Academic Press.
Rossi, Peter; Freeman, Howard; and Lipsey, Mark. 1999. Evaluation: A Systematic Approach, 6th edition. Thousand Oaks, CA: Sage.
Scriven, Michael. 1967. "The Methodology of Evaluation." In Perspectives of Curriculum Evaluation, ed. Robert E. Stake. Chicago: Rand McNally.
Shadish, William R.; Cook, Thomas; and Leviton, Laura. 1991. Foundations of Program Evaluation: Theories of Practice. Newbury Park, CA: Sage.
U.S. Office of Education. 1977. National Evaluation: Detailed Effects. Volumes II-A and II-B of the Follow Through Planned Variation Experiment Series. Washington, DC: Government Printing Office.
Weiss, Carol. 1972. Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice Hall.
Weiss, Carol. 1987. "Evaluating Social Programs: What Have We Learned?" Society 25:40–45.
Weiss, Carol. 1988. "Evaluation for Decisions: Is Anybody There? Does Anybody Care?" Evaluation Practice 9:5–20.
Weiss, Carol. 1997. "How Can Theory-Based Evaluation Make Greater Headway?" Evaluation Review 21:501–524.
VERBAL PROTOCOLS
Since the early 1900s, researchers have relied on verbal data to gain insights about thinking and learning. Over the years, however, the perceived value of verbal data for gaining such insights has waxed and waned. In 1912 Edward Titchener, one of the founders of structural psychology, advocated the use of introspection by highly trained self-observers as the only method for revealing certain cognitive processes. At the same time, this technique of observing and verbalizing one's own cognitive processes drew much criticism. Researchers questioned the objectivity of the technique and the extent to which people have knowledge of and access to their cognitive processes. With behaviorism as the dominant perspective for studying learning in the United States, verbal data were treated as behavioral products, not as information that might reveal something about cognitive processing. From about the 1920s to the 1950s, most U.S. researchers abandoned the use of introspective techniques, as well as most other types of verbal data such as question answering.
While U.S. learning theorists and researchers were relying almost solely on nonverbal or very limited verbal (e.g., yes/no response) techniques, the Swiss cognitive theorist Jean Piaget was relying primarily on children's verbal explanations for gaining insights into their cognitive abilities and processes. Piaget believed that children's explanations for their responses to various cognitive tasks provided much more information about their thinking than did the task responses alone. United States theorists, however, were not ready to consider Piaget's work seriously until about 1960, when cognitive psychology was beginning to emerge and there was declining satisfaction with a purely behavioral perspective.
With the rise of cognitive psychology beginning in the 1950s and 1960s, educational and experimental psychologists became interested once again in the usefulness of verbal data for providing information about thinking and learning. Cognitive researchers rarely use Titchener's original introspective technique in the early twenty-first century. Since the 1980s, however, researchers have increasingly used verbal protocol analysis, which has roots in the introspective technique, to study the cognitive processes involved in expert task performance, problem solving, text comprehension, science education, second language acquisition, and hypertext navigation.
What Are Verbal Protocols?
Verbal protocols are rich data sources containing individuals' spoken thoughts associated with working on a task. While working on a particular task, subjects usually either think aloud as thoughts occur to them or verbalize their thoughts at intervals specified by the researcher. In some studies, researchers ask subjects to verbalize their thoughts upon completion of the task. The verbalizations are recorded verbatim, usually with a tape recorder, and are then coded according to theory-driven and/or empirically driven categories.
Verbal protocols differ from introspection. Subjects are not instructed to focus on the cognitive processes involved in task completion, nor are they trained in the self-observation of cognitive processing. The goal is for subjects to express out loud the thoughts that occur to them naturally. Researchers use these data in conjunction with logical theoretical premises to generate hypotheses and to draw conclusions about cognitive processes and products.
What Can Verbal Protocols Reveal about Thinking and Learning?
In order to verbalize one's thoughts, individuals must be aware of those thoughts and the thoughts must be amenable to language. Thus, verbal protocol analysis can reveal those aspects of thinking and learning that are consciously available, or activated in working memory, and that can be encoded verbally.
One major advantage of verbal protocol data is that they provide the richest information regarding the contents of working memory during task execution. In studies of reading comprehension, for example, verbal protocols have provided a detailed database of the types of text-based and knowledge-based inferences that might occur during the normal reading of narrative texts. Data from other measures, such as sentence reading times and reaction times to single-word probes, have corroborated some of the verbal protocol findings; there is corroborating evidence, for example, for the generation of causal inferences and goal-based explanations. Verbal protocols have also provided information about the particular knowledge domains that are used to make inferences when reading narratives, and about differences in readers' deliberate strategies for understanding both narrative and informational texts.
Verbal protocols have been used extensively in the study of expert versus novice task performance across a variety of domains (e.g., cognitive-perceptual expertise involved in chess, perceptual-motor expertise such as in sports, science and mathematical problem-solving strategies, skilled versus less-skilled reading). While the specific insights about the differences between expert and novice approaches vary from domain to domain, some generalities across domains can be made. Clearly, experts have more knowledge and more highly organized knowledge structures within their domains than do novices. But the processes by which they solve problems and accomplish tasks within their domains of expertise also differ. Verbal protocols have revealed that experts are more likely to evaluate and anticipate the ever-changing situations involved with many problems and to plan ahead and reason accordingly. Knowledge about expert and novice problem-solving processes has implications for developing and assessing pedagogical practices.
Another advantage of verbal protocol analysis is that it provides sequential observations over time. As such, it reveals changes that occur in working memory over the course of task execution. This has been useful in studies of reading comprehension where the information presented and the individual's representation of the text change over time, in studies of problem solving where multiple steps are involved in reaching a solution and/or where multiple solutions are possible, in studies of expert versus novice task performance, and in studies of conceptual change.
Limitations of Verbal Protocol Data
As is the case with most research methods, verbal protocols have both advantages and limitations. Obviously, subjects can verbalize only thoughts and processes about which they are consciously aware. Thus, processes that are automatic and executed outside of conscious awareness are not likely to be included in verbal protocols, and other means of assessing such processes must be used. Also, nonverbal knowledge is not likely to be reported.
Most authors of articles examining the think-aloud procedure seem to disagree with the 1993 contention of K. Anders Ericsson and Herbert A. Simon that thinking aloud does not usually affect normal cognitive processing. It is thought that the think-aloud procedure may lead to overestimates and/or underestimates of the knowledge and processes used under normal task conditions. The need to verbalize for the think-aloud task itself might encourage subjects to strategically use knowledge or processes that they might not otherwise use. Alternatively, the demands of the think-aloud task might interfere with subjects' abilities to use knowledge and/or processes they might use under normal conditions. Self-presentation issues (e.g., desire to appear smart, embarrassment, introversion/extroversion) might affect subjects' verbal reports. Finally, the pragmatics and social rules associated with the perception of having to communicate one's thoughts to the researcher might also lead to overestimates or underestimates of knowledge and processes typically used.
Unfortunately, it is not possible to know if a verbal protocol provides a complete picture of the knowledge and processes normally used to perform a task. Typically, however, no single research technique provides a complete picture. Only the use of multiple measures for assessing the same hypotheses and for assessing various aspects of task performance can provide the most complete picture possible.
A final limitation of verbal protocol methodology is that it is very labor intensive. The data collection and data coding are extremely time consuming as compared with other methodologies. The amount of potential information that can be acquired about the contents of working memory during task performance, however, is often well worth the time required.
Optimizing the Advantages and Minimizing the Limitations
Several suggestions have been put forth for increasing the likelihood of obtaining verbal protocol data that provide valid information about the contents of working memory under normal task conditions. The most frequent suggestions are as follows:
- Collect verbal protocol data while subjects are performing the task of interest.
- Ask subjects to verbalize all thoughts that occur. One should not direct their thoughts or processing by asking for specific types of information unless one wishes to study the planned, strategic use of that type of information.
- Make it clear to the subjects that task performance is their primary concern and that thinking aloud is secondary. If, however, a subject is silent for a relatively long period as compared to others during task execution, prompts such as "keep talking" may become necessary.
- To minimize as much as possible the conversational aspects of the think-aloud task, the researcher should try to remain out of the subject's view.
See also: Language and Education; Learning, subentry on Conceptual Change; Reading, subentries on Comprehension, Content Areas; Science Learning, subentry on Explanation and Argumentation.
Berk, Laura E. 2000. Child Development, 5th edition. Boston: Allyn and Bacon.
Cote, Nathalie, and Goldman, Susan R. 1999. "Building Representations of Informational Text: Evidence from Children's Think-Aloud Protocols." In The Construction of Mental Representations during Reading, ed. Herre van Oostendorp and Susan R. Goldman. Mahwah, NJ: Erlbaum.
Crutcher, Robert J. 1994. "Telling What We Know: The Use of Verbal Report Methodologies in Psychological Research." Psychological Science 5:241–244.
Dhillon, Amarjit S. 1998. "Individual Differences within Problem-Solving Strategies Used in Physics." Science Education 82:379–405.
Ericsson, K. Anders, and Simon, Herbert A. 1993. Protocol Analysis: Verbal Reports as Data, revised edition. Cambridge, MA: MIT Press.
Hurst, Roy W., and Milkent, Marlene M. 1996. "Facilitating Successful Prediction Problem Solving in Biology through Application of Skill Theory." Journal of Research in Science Teaching 33:541–552.
Long, Debra L., and Bourg, Tammy. 1996. "Thinking Aloud: Telling a Story about a Story." Discourse Processes 21:329–339.
Magliano, Joseph P. 1999. "Revealing Inference Processes during Text Comprehension." In Narrative Comprehension, Causality, and Coherence: Essays in Honor of Tom Trabasso, ed. Susan R. Goldman, Arthur C. Graesser, and Paul van den Broek. Mahwah, NJ: Erlbaum.
Magliano, Joseph P.; Trabasso, Tom; and Graesser, Arthur C. 1999. "Strategic Processing during Comprehension." Journal of Educational Psychology 91:615–629.
Payne, John W. 1994. "Thinking Aloud: Insights into Information Processing." Psychological Science 5 (5):241–248.
Piaget, Jean. 1929. The Child's Conception of the World (1926), trans. Joan Tomlinson and Andrew Tomlinson. London: Kegan Paul.
Pressley, Michael, and Afflerbach, Peter. 1995. Verbal Protocols of Reading: The Nature of Constructively Responsive Reading. Hillsdale, NJ: Erlbaum.
Pritchard, Robert. 1990. "The Evolution of Introspective Methodology and Its Implications for Studying the Reading Process." Reading Psychology: An International Quarterly 11 (1):1–13.
Trabasso, Tom, and Magliano, Joseph P. 1996. "Conscious Understanding during Comprehension." Discourse Processes 21:255–287.
Whitney, Paul, and Budd, Desiree. 1996. "Think-Aloud Protocols and the Study of Comprehension." Discourse Processes 21:341–351.
Wilson, Timothy D. 1994. "The Proper Protocol: Validity and Completeness of Verbal Reports." Psychological Science 5 (5):249–252.
Zwaan, Rolf A., and Brown, Carol M. 1996. "The Influence of Language Proficiency and Comprehension Skill on Situation-Model Construction." Discourse Processes 21:289–327.
Family Measurement (Murray A. Straus, Susan M. Ross)
Methodology (Alan Acock, Yoshie Sano)
A 1964 review of tests and scales used in family research found serious deficiencies (Straus 1964), and subsequent reviews showed very little improvement (Straus 1992; Straus and Brown 1978). However, changes in the nature of the field have contributed to an increase in the use of standardized tests to measure characteristics of the family. This is an important development because standardized tests are vital tools for both clinical assessment and research. New tests tend to produce a flowering of research focused on the newly measurable concept. Examples of tests that have fostered much research include measures of marital satisfaction (Spanier 1976), adequacy of family functioning (Olson, Russell, and Sprenkle 1989), and family violence (Straus 1990a). Hundreds of family measures are abstracted or reproduced in compendiums such as Family Assessment (Grotevant and Carlson 1989), Handbook of Measurements for Marriage and Family Therapy (Fredman and Sherman 1987), and Handbook of Family Measurement Techniques (Touliatos, Perlmutter, and Straus 2001). There is also a growing methodological literature on techniques for constructing measures of family characteristics, such as those by Karen S. Wampler and Charles F. Halverson, Jr. (1993) and Thomas W. Draper and Anastascios C. Marcos (1990). The state of testing in family research, however, is not as healthy as these publications might suggest. In fact, the data indicate that the validity of tests used in family research is rarely known.
For purposes of this entry, the term measure includes test, scale (such as Likert, Thurstone, Guttman, and Semantic Differential scales), index, factor score, scoring system (when referring to methods of scoring social interaction such as Gottman 1994 or Patterson 1982), and latent variables constructed by use of a structural equation modeling program. The defining feature is that they "combine the values of several items [also called indicators, questions, observations, events] into a composite measure . . . used to predict or gauge some underlying continuum which can only be partially measured by any single item or variable" (Nie et al. 1978, p. 529).
Advantages of Multiple-Item Measures
Multiple-item measures are emphasized in this entry because they are more likely to be valid than single-item measures. Although one good question or observation may be enough and thirty bad ones are useless, there are reasons why multiple-item measures are more likely to be valid. One reason is that most phenomena of interest to family researchers have multiple facets that can be adequately represented only by use of multiple items. A single question, for example, is unlikely to represent the multiple facets of marital satisfaction adequately.
A second reason for greater confidence in multiple-item measures is the inevitable risk of error in selecting items. If a single item is used and there is a conceptual error in formulating or scoring it, hypotheses that are tested by using that measure will not be supported even if they are true. When a multiple-item test is used, however, the adverse effect of a single invalid item is limited to a relatively small reduction in validity (Straus and Baron 1990). In a fifteen-item scale, for example, a defective item is only about 6.7 percent of the total, so the findings would closely parallel those obtained if all fifteen items were correct.
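Under classical test theory this claim can be checked with a little arithmetic. The sketch below is our illustration, not Straus and Baron's analysis: it assumes each valid item equals a unit-variance true score plus independent unit-variance noise, a defective item is pure noise, and a scale's validity is its correlation with the true score.

```python
def scale_validity(n_good, n_bad):
    """Correlation between a summed scale and the true score, assuming each
    good item = true score (variance 1) + independent noise (variance 1),
    and each bad item is pure noise (variance 1)."""
    cov_with_true = n_good                  # only good items covary with the true score
    scale_var = n_good**2 + n_good + n_bad  # signal + good-item noise + bad-item noise
    return cov_with_true / scale_var**0.5

all_valid = scale_validity(15, 0)  # about .97
one_bad = scale_validity(14, 1)    # about .96
```

Under these assumptions, replacing one of fifteen valid items with pure noise lowers the scale's validity by less than half a point of correlation, which is the sense in which a single defective item does little damage.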
Multiple items are also desirable because measures of internal consistency reliability are based on the number of items in the measure and the correlation between them. Given a certain average correlation between items, the more items, the higher the reliability. If only three items are used, it is rarely possible to achieve a high level of reliability. Reliability needs to be high because it sets an upper limit on validity.
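The arithmetic behind this point is the Spearman-Brown formula, a standard psychometric result not spelled out in the text: for k items with average inter-item correlation r, reliability equals kr / (1 + (k - 1)r), and classical test theory bounds a measure's validity coefficient by the square root of its reliability.

```python
def spearman_brown(k, r_bar):
    """Internal-consistency reliability of a k-item scale whose items
    have an average inter-item correlation of r_bar (standardized alpha)."""
    return k * r_bar / (1 + (k - 1) * r_bar)

def validity_ceiling(reliability):
    """Classical-test-theory upper bound on a measure's validity coefficient."""
    return reliability ** 0.5

# With a modest average inter-item correlation of .30:
three_item = spearman_brown(3, 0.30)     # about .56
fifteen_item = spearman_brown(15, 0.30)  # about .87
```

Holding the average inter-item correlation fixed at .30, lengthening the scale from three to fifteen items raises reliability from roughly .56 to .87, which in turn raises the ceiling on validity from about .75 to about .93.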
Status and Trends in Family Measurement
To investigate the quality of measurement in family research, all empirical studies published in two major U.S. family journals (Journal of Marriage and the Family and Journal of Family Psychology) were examined. To determine trends in the Journal of Marriage and the Family, issues from 1982 and 1992 were compared. For the Journal of Family Psychology, issues from 1987 (the year the journal was founded) and 1992 were compared. Of the 161 empirical research articles reviewed, slightly fewer than two-thirds used a multiple-item measure. This proportion increased from 46.9 percent initially to 68.1 percent in 1992. A typical article used more than one such instrument, so a total of 219 multiple-item measures were used in these 161 articles. Reliability was reported in 79.4 percent of these articles. Reliability reporting increased from 53.3 percent initially to 90.6 percent in 1992. Six percent of the articles had as their main purpose describing a new measurement instrument or presenting data concerning an existing instrument.
How one interprets these statistics depends on the standard of comparison. Articles in sociology journals and child psychology and clinical psychology journals are appropriate comparisons because these are the disciplines closest to family studies and in which many family researchers were trained. For sociology, the findings listed above can be compared to those reported in a study by Murray A. Straus and Barbara Wauchope (1992), in which they examined empirical articles from the 1979 and 1989 issues of American Sociological Review, American Journal of Sociology, and Sociological Methods and Research. This comparison shows that articles in family journals pay considerably more attention to measurement than articles in leading sociological journals. None of the 185 articles in sociology journals was on a specific measure, whereas 6 percent of the articles in the family journals were devoted to describing or evaluating an instrument. This portends well for family research because it is an investment in tools for future research. Only one-third of the articles in the sociology journals used a multiple-item measure, compared to more than two-thirds (68%) of articles in the family journals. The record of family researchers also exceeds that of sociologists in respect to reporting reliability. Only about 10 percent of the articles in sociology journals, compared to 80 percent of the articles in family journals, reported the reliability of the instruments. The main problem area is validity; only 12.4 percent of the articles in family journals described or referenced evidence of validity. The fact that this is three times more than in sociology is not much consolation because 12 percent is still a small percentage. Moreover, reporting or citing information on validity did not increase from the base period. 
Since validity is probably the most crucial quality of an instrument, the low percentage and the lack of growth indicate that more attention needs to be paid to measurement in family research.
There is no comparable study of measures in child or clinical psychology journals.
Reasons for Underdevelopment of Measures
The limited production of standard and validated measures of family characteristics is probably the result of a number of causes. Conventional wisdom attributes it to a lack of time and other resources for instrument development and validation. This is not an adequate explanation because it is true of all the social sciences. Why do psychologists devote the most resources to developing and validating tests, sociologists the least, and family researchers fall in between?
One likely reason is a difference in rewards for measurement research. A related reason is a difference in the opportunities and constraints. In psychology, there are journals devoted in whole or in part to psychological measures, such as Educational and Psychological Measurement and Journal of Consulting and Clinical Psychology. There are no such journals in sociology or family studies. Moreover, there is a large market for psychological tests, and several major firms specialize in publishing tests. It is a multimillion-dollar industry, and authors of tests can earn substantial royalties. By contrast, sociology lacks the symbolic and economic reward system that underlies the institutionalization of test development as a major specialization in psychology. The field of family studies lies in between. In principle there should be a demand for tests because of the large number of family therapists, but few family therapists actually use tests.
A second explanation for the differences among psychology, family studies, and sociology in attention to measurement is a situational constraint inherent in the type of research done. A considerable amount of family research is done by survey methods—for example, the National Survey of Families and Households. Surveys of this type usually include measures of many variables in a single thirty- to sixty-minute interview. Clinical psychologists, on the other hand, often can use longer and therefore more reliable tests, because their clients have a greater stake in providing adequate data and will tolerate undergoing two or more hours of testing.
Third, most tests are developed for a specific study and there is rarely a place in the project budget for adequate measure development—test/retest reliability, concurrent and construct validity, and construction of normative tables. Even when the author of a measure does the psychometric research needed to enable others to evaluate whether the measure might be suitable for their research, family journals rarely allow enough space to present that material.
Fourth, the optimum procedure is for the author to write a paper describing the test, the theory underlying the test, the empirical procedures used to develop the test, reliability and validity evidence, and norms. This rarely occurs because of the lack of resources indicated above. In addition, most investigators are more interested in the substantive issues for which the project was funded.
Another reason why standardized tests are less frequently used in family research is that many studies are based on cases from agencies. A researcher studying child abuse who draws the cases from child protective services might not need a method of measuring child abuse. However, standardized tests are still needed because an adequate understanding of child abuse cannot depend solely on officially identified cases. It is important also to do research on cases that are not known to agencies, because such cases are much more numerous than cases known to agencies and because general population cases typically differ in important ways from the cases known to agencies (Straus 1990b).
The Future of Family Research Measures
There are grounds for optimism and grounds for concern about the future of family tests. The grounds for concern are, first, that in survey research on the family, concepts are often measured by a single interview question. Second, even when a multiple-item test is used, it is rarely chosen on the basis of empirical evidence of reliability and validity. Third, the typical measure developed for use in a family study is never used in another study. One can speculate that this break in the cumulative nature of research occurs because of the lack of evidence of reliability and validity and because authors rarely provide sufficient information to facilitate use of the instrument by others.
The grounds for optimism are to be found in the sizable and slowly growing number of standardized instruments, as listed in compendiums (e.g., Grotevant and Carlson 1989; Fredman and Sherman 1987; Touliatos, Perlmutter, and Straus 1990). A second ground for optimism is the rapid growth in the number of psychologists doing family research, because psychologists bring to family research an established tradition of test development. Similarly, the explosive growth of family therapy is grounds for optimism, because it is likely that more tests will gradually begin to be used for intake diagnosis. A third ground for optimism is the increasing use of some family measures in cultures other than those in which the measures were initially developed. For example, David H. Olson's Family Adaptability and Cohesion Evaluation Scales (FACES) (1993) have been used to research Chinese families (Philips, West, Shen, and Zheng 1998; Tang and Chung 1997; Wang, Zhang, Li, and Zhao 1998; Zhang et al. 1995), immigrants to Israel (Ben-David and Gilbar 1997; Gilbar 1997), and Ethiopian migrants (Ben-David & Erez-Darvish 1997). The cross-cultural use of measures allows for assessments of validity and reliability outside of the background assumptions of the cultures in which they were developed.
There is a certain irony in the second source of optimism, because basic researchers usually believe that they, not clinicians, represent quality in science. In respect to measurement, clinicians tend to demand instruments of higher quality than do basic researchers because the consequences of using an inadequate measure are more serious. When a basic researcher uses an instrument with low reliability or validity, it can lead to a Type II error—that is, failing to find support for a true hypothesis. This may result in theoretical confusion or a paper not being published. But when a practitioner uses an invalid or unreliable instrument, the worst-case scenario can involve injury to a client. Consequently, clinicians need to demand more evidence of reliability and validity than do researchers. As a result, clinically oriented family researchers tend to produce and make available more adequate measures. Hubert M. Blalock (1982) argued that inconsistent findings and failure to find empirical support for sound theories may be due to lack of reliable and valid means of operationalizing concepts in the theories being tested. It follows that research will be on a sounder footing if researchers devote more attention to developing reliable and valid measures of family characteristics.
See also: Family Assessment; Family Diagnosis/DSM-IV; Marital Quality; Research: Methodology
Ben-David, A., and Erez-Darvish, T. (1997). "The Effect of the Family on the Emotional Life of Ethiopian Immigrant Adolescents in Boarding Schools in Israel." Residential Treatment for Children and Youth 15(2):39–50.
Ben-David, A., and Gilbar, O. (1997). "Family, Migration, and Psychosocial Adjustment to Illness." Social Work in Health Care 26(2):53–67.
Blalock, H. M. (1982). Conceptualization and Measurement in the Social Sciences. Newbury Park, CA: Sage.
Burgess, E. W., and Cottrell, L. S. (1939). Predicting Success or Failure in Marriage. Englewood Cliffs, NJ: Prentice Hall.
Cronbach, L. J. (1970). Essentials of Psychological Testing. New York: Harper & Row.
Draper, T. W., and Marcos, A. C. (1990). Family Variables: Conceptualization, Measurement, and Use. Newbury Park, CA: Sage.
Fredman, N., and Sherman, R. (1987). Handbook of Measurements for Marriage and Family Therapy. New York: Brunner/Mazel.
Gilbar, O. (1997). "The Impact of Immigration Status and Family Function on the Psychosocial Adjustment of Cancer Patients." Families, Systems and Health 15(4):405–412.
Gottman, J. M. (1994). What Predicts Divorce? The Relationship between Marital Process and Marital Outcome. Hillsdale, NJ: Erlbaum.
Grotevant, H. D., and Carlson, C. I. (1989). Family Assessment: A Guide to Methods and Measures. New York: Guilford.
Nie, N. H.; Hull, C. H.; Jenkins, J. G.; Steinbrenner, K.; and Bent, D. H. (1978). SPSS: Statistical Package for the Social Sciences. New York: McGraw-Hill.
Olson, D. H. (1993). "Circumplex Model of Marital and Family Systems: Assessing Family Functioning." In Normal Family Processes, ed. F. Walsh. New York: Guilford Press.
Olson, D. H.; Russell, C. S.; and Sprenkle, D. H. (1989). Circumplex Model: Systemic Assessment and Treatment of Families. New York: Haworth Press.
Patterson, G. R. (1982). Coercive Family Process: A Social Learning Approach. Eugene, OR: Castalia.
Philips, M. R.; West, C. L.; Shen, Q.; and Zheng, Y. (1998). "Comparison of Schizophrenic Patients' Families and Normal Families in China, Using Chinese Versions of FACES-II and the Family Environment Scales." Family Process 37:95–106.
Spanier, G. B. (1976). "Measuring Dyadic Adjustment: New Scales for Assessing the Quality of Marriage and Similar Dyads." Journal of Marriage and the Family 38:15–28.
Straus, M. A. (1964). "Measuring Families." In Handbook of Marriage and the Family, ed. H. T. Christensen. Chicago: Rand McNally.
Straus, M. A. (1990a). "The Conflict Tactics Scales and Its Critics: An Evaluation and New Data on Validity and Reliability." In Physical Violence in American Families: Risk Factors and Adaptations to Violence in 8,145 Families, ed. M. A. Straus and R. J. Gelles. New Brunswick, NJ: Transaction.
Straus, M. A. (1990b). "Injury and Frequency of Assault and the 'Representative Sample Fallacy' in Measuring Wife Beating and Child Abuse." In Physical Violence in American Families: Risk Factors and Adaptations to Violence in 8,145 Families, ed. M. A. Straus and R. J. Gelles. New Brunswick, NJ: Transaction.
Straus, M. A. (1992). "Measurement Instruments in Child Abuse Research." Paper prepared for the National Academy of Sciences Panel on Child Abuse Research. Durham, NH: Family Research Laboratory, University of New Hampshire.
Straus, M. A., and Baron, L. (1990). "The Strength of Weak Indicators: A Response to Gilles, Brown, Geletta, and Dalecki." Sociological Quarterly 31:619–624.
Straus, M. A., and Brown, B. W. (1978). Family Measurement Techniques, 2nd edition. Minneapolis: University of Minnesota Press.
Straus, M. A., and Wauchope, B. (1992). "Measurement Instruments." In Encyclopedia of Sociology, ed. E. F. Borgatta and M. L. Borgatta. New York: Macmillan.
Tang, C. S., and Chung, T. K. H. (1997). "Psychosexual Adjustment Following Sterilization: A Prospective Study on Chinese Women." Journal of Psychosomatic Research 42(2):187–196.
Touliatos, J.; Perlmutter, D.; and Straus, M. A. (2001). Handbook of Family Measurement Techniques, 4th edition. Thousand Oaks, CA: Sage.
Wampler, K. S., and Halverson, C. F., Jr. (1993). "Quantitative Measurement in Family Research." In Sourcebook of Family Theories and Methods: A Contextual Approach, ed. P. G. Boss, W. J. Doherty, R. LaRossa, W. R. Schumm, and S. K. Steinmetz. New York: Plenum.
Wang, Z.; Zhang, X.; Li, G.; and Zhao, Z. (1998). "A Study of Family Environment, Cohesion, and Adaptability in Heroin Addicts." Chinese Journal of Clinical Psychology 6(1):32–34.
Zhang, J.; Weng, Z.; Liu, Q.; Li, H.; Zhao, S.; Xu, Z.; Chen, W.; and Ran, H. (1995). "The Relationship of Depression of Family Members and Family Functions." Chinese Journal of Clinical Psychology 3(4):225–229.
Murray A. Straus (1995); Susan M. Ross (1995); revised by James M. White
Four characteristics shape the research methods that family scholars use. First, family scholarship has conceptual roots in a variety of disciplines, including anthropology, family and consumer science, economics, history, human ecology, psychology, and sociology. Second, the subject matter studied by family scholars overlaps the subject matter studied by a variety of content specialty areas such as women's studies, human development, gerontology, education, nutrition, and counseling. Third, although other fields often focus on isolated individuals, family scholars study individuals who are embedded in family systems. Fourth, families have a shared past and future (Copeland and White 1991). Being responsive to these characteristics requires multiple perspectives from quantitative and qualitative methods, experimental and survey methods, and cross-sectional and longitudinal methods (Schumm and Hemesath 1999).
Some family scholars approach their study of families from a large-scale/historical perspective or a large-scale/comparative perspective. Others approach it from an individual perspective. Some scholars seek to discover family patterns in ancient cultures; others seek to solve current social problems. The unit of analysis—that is, the smallest unit about which a scholar draws a conclusion—may be an individual (child, mother, nonresident father), a dyad (husband and wife, siblings), a family (nuclear, stem, lesbian, single parent), a culture, or a historical period.
A researcher may want to explain how a hyperactive child influences outcomes for families, such as conflict or chance of divorce. Other researchers may explain hyperactivity in children in terms of family or cultural factors. For the first researcher, the child's hyperactivity is the independent variable (predictor). For the second researcher, the child's hyperactivity is the dependent variable (outcome).
The intricate relationship between root disciplines and specialty areas on the one hand, and research methodology of groups of scholars on the other hand, has been detailed in a more complete exposition by Robert E. Larzelere and David M. Klein (1987).
Strategies for Data Collection
Data are the empirical information researchers use for drawing conclusions. Researchers often use a cross-sectional design, in which data are collected only once: a snapshot of how things are at a single time. Less common are longitudinal designs, in which data are collected at least twice. Although each collection point provides only a snapshot, comparing them makes it possible to draw inferences about change. Time-series designs provide many snapshots, often more than thirty data collection points.

Cross-sectional design. A cross-sectional design can be used in a survey, experiment, in-depth interview, or observational study. The justification for this design is usually cost.
Suppose researchers are interested in the effects of divorce on children. A cross-sectional design could take a large sample of children and measure their well-being. The children would be divided by whether they experienced divorce. If the children who had experienced divorce fared worse on well-being, the researcher would conclude that divorce had adverse effects.
Cross-sectional analysis requires the researcher to examine covariates (related variables) to minimize alternative explanations. Children who experienced divorce probably lived in families that had conflict, and may fare worse because of this conflict rather than because their parents divorced. Researchers would ask for retrospective information about marital conflict before the divorce, income before and after the divorce, and so on. These covariates would be controlled to clarify the effects of divorce, as distinct from the effects of these other variables, because each covariate is an alternative explanation for the children's well-being.
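A small simulation can show why such covariate control matters. The numbers below are invented for illustration: divorce is given no true effect on well-being, and both divorce and well-being are driven by prior marital conflict, so the unadjusted estimate is badly biased while the conflict-adjusted estimate is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

conflict = rng.normal(size=n)                               # pre-divorce marital conflict
divorce = (conflict + rng.normal(size=n) > 1.0).astype(float)  # high conflict makes divorce likely
wellbeing = -conflict + rng.normal(size=n)                  # divorce itself has NO true effect

def ols(y, *predictors):
    """Least-squares slopes for the predictors (after an intercept column)."""
    X = np.column_stack([np.ones_like(y), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

naive_effect = ols(wellbeing, divorce)[0]                   # strongly negative (confounded)
adjusted_effect = ols(wellbeing, divorce, conflict)[0]      # near zero once conflict is controlled
```

The unadjusted slope attributes the conflict-driven difference in well-being to divorce; adding conflict as a covariate removes that alternative explanation, exactly the logic described above.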
Longitudinal design. Because the data are collected at different times, causal order is clearer: the variables measured at time one can cause the variables measured at time two, but not the reverse. When variables are measured imperfectly, however, the errors in the first wave are often correlated with the errors in the second and third waves. Therefore, statistical analyses of longitudinal data are typically very complex.
The question concerning the influence of divorce on the well-being of children illustrates advantages and disadvantages of longitudinal strategies. The well-being of children is measured at one time. Five years later the researcher would contact the same children and measure their well-being. Some of the children's parents would have gotten divorced. Children who experienced divorce could have their well-being at time two compared to time one. The difference would be attributed to the effects of divorce. By knowing the well-being of these children five years earlier, some controls for the influence of conflict would be automatically in place.
Although longitudinal designs are very appealing, they present some basic problems. First, after five years the researcher may locate only 60 or 70 percent of the children, and those lost in the interval might have altered the conclusions. Second, five years is a long time in the life of a child, and many other influences could have entered his or her life. Statistically these problems can be minimized, but the analysis is quite complex.
Time-series design. Although some people use the labels time-series and longitudinal interchangeably, time-series analyses require that measures be made many times, usually thirty times or more. By tracking participants over time, researchers can describe changes and attribute them to life events.
Using the example of the effects of divorce on children, a researcher may be interested in how effects vary over time. Perhaps there is an initial negative effect that diminishes over time. Alternatively, initial adverse effects may decrease over time for girls but increase for boys.
Designs for Collecting Data
Researchers have a variety of approaches and designs for collection of data. Three common designs are surveys, experiments and quasi-experiments, and observation and in-depth interviews.
Surveys. The most common data collection strategy is the survey. For example, the National Longitudinal Survey of Youth 1997 (NLSY97) is a sample of nearly 9,000 twelve- to sixteen-year-old adolescents and their families. This survey will be repeated each year for this panel of youth as they become adults. Such surveys allow researchers to generalize to a larger population, such as that of the United States, and to use longitudinal methods such as growth curves. Because these surveys are large, researchers can study special populations such as adolescents in single-parent families, teen mothers, and juvenile delinquents. These are "general-purpose" surveys, and independent scholars who had nothing to do with the data collection may obtain access to the data for their own analyses.
A second type of survey focuses on special populations. Researchers with a particular interest—for example, middle-aged daughters caring for aged mothers—focus all of their resources on collecting data about a special group. In many cases, these surveys are not probability samples. Credibility for generalizing comes from comparing the profile of participants to demographic information. An advantage of these surveys is that they can ask questions the researcher wants to ask. There might be a twenty-item scale to measure the physical dependency of an aged mother. Such detailed measurements are not usually available in general-purpose surveys. Because the subject of specialized studies is focused, it is often possible to include more open-ended questions than would be practical in a general-purpose survey.
Experiments and quasi-experiments. Experimental designs are used when internal validity is critical (Brown and Melamed 1990). Experiments provide stronger evidence of causal relationship than surveys because an experiment involves random assignment of subjects to groups and the manipulation of the independent variable by the researcher. Nevertheless, experimental designs give up some external validity as they gain internal validity. Because of the difficulty or impossibility of locating subjects who will volunteer to be assigned randomly to groups, many experiments are based on "captive" populations such as college students. Captive populations are fairly homogeneous regarding age, education, race, and socioeconomic status, making it difficult to generalize to a broader population. Experiments that involve putting strangers together for a short experience provide groups that differ qualitatively from naturally occurring groups such as families (Copeland and White 1991).
Many research questions are difficult to address using experiments. Suppose a survey result shows a negative correlation between husband-wife conflict and child well-being. A true experiment requires both randomization of subjects and manipulation of the independent variable. The researcher cannot randomly assign children to families. Nor can the level of husband-wife conflict be manipulated.
Observation and in-depth interviews. Both qualitative and quantitative researchers use observation and in-depth interviews. This may be done in a deliberately unstructured way. For instance, a researcher may observe the interaction between an African-American mother and her child when the child is dropped off at a childcare facility, comparing this to the mother-child interaction for other ethnic and racial groups. The researcher may structure this observation by focusing on specific aspects such as counting tactile contact (i.e., touching or hugging). For many qualitative researchers, however, the aspects of interaction that are recorded emerge after a long period of unstructured observation.
A quantitative researcher may have an elaborate coding system for observing family interaction. This may involve videotaping either ordinary (real-life) or contrived situations. A researcher interested in family decision making might give each family a task, such as deciding what they would do with $1,000. Alternatively, the researcher might record family interaction at the dinner table. The videotape would be analyzed using multiple observers and a prearranged system. Observers might record how often each family member spoke, how often each member suggested a solution, how often each member tried to relieve tension, and how often each member solicited opinions from others (Bales 1950).
In-depth interviews are widely used by qualitative researchers. When someone is trying to understand how families work, in-depth interviews are an important resource. In-depth interviews vary in their degree of structure. A white, married researcher with a middle-class background and limited experience in interracial settings may want to understand the relationship between nonresident African-American fathers and their children. Such a researcher would gain much from unstructured in-depth interviews with nonresident African-American fathers and their children, including knowledge to replace assumptions and stereotypes. It may take a series of extended, unstructured interviews before the researcher is competent to develop a structured interview, much less design a survey or an experiment.
Many scholars would limit in-depth interviews and observational studies to areas where knowledge is limited. A major advantage of such designs, however, is that they open up research to new perspectives precisely where survey or experimental researchers naively believe they have detailed knowledge. By grounding research in the behavior and interactions of ordinary people, researchers may be less prone to impose explanations developed by others.
Two major problems are evident with observation and in-depth interviews. First, these approaches are time-consuming and make it costly to have a large or representative sample. Second, there are dangers of the researcher losing objectivity. When a researcher spends months with a group either as a participant or an observer, there is a danger of identifying so much with the group that objectivity is lost.
Selected other strategies. Case studies are used with rare populations, such as families in which a child has AIDS. Content analysis and narrative analysis are used to identify emergent themes. For example, a review of the role of fathers in popular novels of the 1930s, 1960s, and 1990s will tell much about the changing ideology of family roles. Historical analysis has experienced remarkable growth in the past several decades (Lee 1999), as evidenced by a major journal, the Journal of Family History. Demographic analysis is sometimes done to provide background information (economic well-being of continuously single families—see Acock and Demo 1994), to document trends (demographic change of U.S. families—see Teachman, Tedrow, and Crowder 2000), and to support comparative studies (development of close relationships in Japan and the United States—see Rothbaum, Pott, Azuma, Miyake, and Weisz 2000). Increasingly, studies use multiple approaches: quantitative, qualitative, and historical. Using multiple methods is called triangulation.
All methodological orientations share a common need for measurement. Scientific advancement in many fields is built on progress in measurement (Draper and Marcos 1990). Good measurement is critical to family studies because of the complexity of the variables being measured. Most concepts have multiple dimensions and a subjective component. A happy marriage for the husband may be a miserable marriage for the wife. A daughter may have a positive relationship with her father centered on her performance in sports but a highly negative relationship with her father centered on her sexual activity. Ignoring multiple dimensions and the subjective components of measurement is a problem for both quantitative and qualitative researchers.
Scales. The most common scale, the Likert scale, presents the participant with a series of statements about a concept, and the participant checks whether he or she strongly agrees, agrees, does not know, disagrees, or strongly disagrees with each statement. Often fewer than ten items are used, but they are chosen to represent the full domain of the concept. Thus, to measure marital happiness, several items would be used to represent various aspects of the marriage.
The following is becoming a minimum standard for evaluating a scale. First, a factor analysis is done to see whether the several questions converge on a single concept. Second, the reliability of the scale (whether it gives a consistent result when administered again) is measured. This is done either by using the scale twice on the same people and seeing whether their answers are consistent, or by using the alpha coefficient as a measure of reliability. The alpha coefficient indicates the internal consistency of the scale and should have a value of .70 or greater. This minimum standard has been emerging since the early 1980s; few studies met it before 1980. There has been progress, but this is still a problem today.
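As a rough sketch of how the alpha coefficient summarizes internal consistency, the following computes Cronbach's alpha for a small set of invented Likert responses (five items scored 1 to 5 by six hypothetical participants; the data are made up for illustration):

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)
# All response data below are invented for illustration.
import statistics

responses = [          # one row per participant, one column per Likert item
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 4, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
    [1, 2, 2, 1, 2],
]

k = len(responses[0])                        # number of items in the scale
items = list(zip(*responses))                # transpose: per-item columns
item_vars = [statistics.pvariance(col) for col in items]
totals = [sum(row) for row in responses]     # each participant's scale score
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))

print(round(alpha, 2))  # 0.96, comfortably above the .70 threshold
```

When the items all track the same underlying concept, participants' answers are correlated across items, which inflates the variance of the total score relative to the summed item variances and pushes alpha toward 1.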
Additional procedures are done to assess the validity of the scales—that is, whether a scale measures what it is intended to measure (Carmines and McIver 1979). This is most often evaluated by correlating a new scale with various criteria such as existing scales of the same concept or outcomes that are related to the concepts.
Questionnaires and interviews. Questionnaires are the most commonly used methods of measuring the variables in a study. A questionnaire may be designed so that it can be self-administered by the participant, asked in a face-to-face interview, or administered by telephone.
Computer-assisted interviews can be used for all three collection procedures. Self-administered questionnaires are now completed by putting the participant in front of a computer. After the participant answers a question, the computer automatically goes to the next appropriate question. This allows each participant to have an individually tailored questionnaire. The use of Web-based questionnaires is becoming more common.
Difficulties of cross-cultural comparative analysis. Common sources of measurement error stem from insensitivity to gender, race, and culture (Van de Vijver and Leung 1996). Constructing culturally sensitive instruments is particularly salient when a researcher and subjects do not share the same language (Rubin and Babbie 2000; Hambleton and Kanjee 1995). A direct translation of a particular word may not carry the same connotation in another language. Validity of questions can also be an issue. A researcher trying to measure parenting skills in Japan and the United States may ask: "How do you rate your parenting skills? Would you say they are: (a) excellent, (b) good, (c) fair, or (d) not good?" Because of a cultural value on humbleness, Japanese parents may rate themselves lower than do American parents. The findings from this question might be reliable, but certainly not valid for making a comparison between the two cultures. Social desirability and how participants react to particular questions should be carefully examined in an appropriate cultural context.
It is not possible to completely avoid cultural biases, but there are some steps to minimize the effect of them. A rule of thumb for researchers is to become immersed in the culture before selecting, constructing, or administering measures. A researcher may utilize knowledgeable informants in the study population, use translation and back-translation of instruments, and pretest measures for reliability and validity before conducting the study.
Missing data. Regardless of the approach to measurement or research design, missing data is a problem. In longitudinal strategies missing data often comes from subjects dropping out of the studies. In cross-sectional strategies missing data often comes from participants refusing to answer questions. Readers should pay special attention to the amount of missing data. It is not unusual for studies to have 20 percent or more of the cases missing from the analysis. If those who drop out of a study or those who refuse to answer questions are different on the dependent variable, then the results will be biased.
There is no simple solution to missing data. Researchers often impute a value for missing cases. For example, if 10 percent of the participants did not report their income, the researchers might substitute the median income of those who did report it. A slightly better solution is to substitute the median for homogeneous subgroups: instead of using the overall median, the researcher might substitute a different median depending on the participant's gender and education. There are many other imputation methods involving more complex statistical analysis (see Rubin 1987; Acock 1997; Roth 1994; Ward and Clark 1991). In any case, it is important to report information about participants who have missing data.
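A minimal sketch of the subgroup-median approach, using hypothetical records keyed by gender and education (`None` marks a nonresponse on income; all figures are invented):

```python
# Impute missing incomes with the median of the participant's
# gender-by-education subgroup. All data here are hypothetical.
from statistics import median

records = [  # (gender, education, income); None = refused to answer
    ("f", "college", 52000), ("f", "college", 61000), ("f", "college", None),
    ("m", "hs", 34000), ("m", "hs", 38000), ("m", "hs", None),
]

# Collect reported incomes for each subgroup.
groups = {}
for gender, educ, income in records:
    if income is not None:
        groups.setdefault((gender, educ), []).append(income)
medians = {key: median(vals) for key, vals in groups.items()}

# Substitute the subgroup median wherever income is missing.
imputed = [
    (g, e, inc if inc is not None else medians[(g, e)])
    for g, e, inc in records
]
print(imputed[2], imputed[5])  # missing values filled with 56500.0 and 36000.0
```

More principled approaches such as multiple imputation (Rubin 1987) draw several plausible values for each missing case rather than a single median, preserving the variability a single substituted value erases.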
The variety of statistical analysis techniques seems endless. Procedures range from descriptive (e.g., means, standard deviations, percentages) to multivariate (e.g., ANCOVA, MANOVA, logistic regression, principal component and factor analysis, structural equation modeling, hierarchical linear modeling, event history analysis, and latent growth curves). Most analyses involve several independent variables. OLS (ordinary least squares) regression is widely used as a basic statistical model: it allows researchers to include multiple independent variables (predictors) and to control systematically for important covariates. Many procedures are either special cases of OLS regression (e.g., ANOVA, ANCOVA) or extensions of it (e.g., logistic regression, structural equation modeling). Factor analysis procedures and their extensions, such as confirmatory factor analysis, play a major role in evaluating how well variables are measured.
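To illustrate the basic OLS machinery, the toy example below regresses a hypothetical child well-being score on a parental-conflict score while controlling for income, solving the normal equations b = (X'X)^(-1) X'y directly. The data are invented and constructed so the coefficients come out exactly; this is a sketch of the arithmetic, not a substitute for a statistics package.

```python
# OLS by the normal equations, solved with Gauss-Jordan elimination.
# All data are hypothetical; y was built as 10 - 1*conflict + 0.5*income.

def solve(a, b):
    """Solve the linear system a x = b (Gauss-Jordan with partial pivoting)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Design matrix columns: intercept, conflict score, income (in $10,000s).
X = [[1, 2, 5], [1, 4, 2], [1, 1, 6], [1, 5, 1], [1, 3, 3]]
y = [10.5, 7.0, 12.0, 5.5, 8.5]  # well-being scores

p = len(X[0])
xtx = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
b = solve(xtx, xty)

print([round(v, 2) for v in b])  # [10.0, -1.0, 0.5]
```

The negative conflict coefficient is the estimated effect of conflict with income held constant, which is exactly the "controlling for covariates" logic described above.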
Special Problems and Ethical Issues
Family researchers study the issues that concern people the most—factors that enhance or harm the well-being of people and families. This often involves asking sensitive questions. Most studies have a high compliance rate, with 80 percent to 90 percent of the people answering most questions. When studies begin by asking questions that participants are willing to answer, the participants buy into their role and later report intimate information. The reality is that participants will tell interviewers, who are strangers, personal information they would never share with members of their own family.
Although researchers can get people to cooperate with studies, a crucial question is how the researchers should limit themselves in what they ask people to do. All universities have committees that review research proposals where human subjects are involved. Researchers need to demonstrate that the results of their study are sufficiently promising to justify any risks to their subjects. Researchers must take precautions to minimize risks. Sometimes this involves anonymity for the participants (no name or identification associated with participants); sometimes it involves confidentiality (name or identification known only to the project's staff). It also involves informed consent, wherein people agree to participate after they are told about the project. Informed consent is a special problem with qualitative research. The design of qualitative research is emergent in that the researcher does not know exactly what is being tested before going into the field. Consequently, it is difficult to have meaningful informed consent. The participants simply do not know enough about the project when they are asked to participate.
Even with the best intentions, subjects can be put at risk. Asking adolescents about their relationship with a nonresident father may revive problems that had been put to rest. In some cases, the effect of this can be positive; in some cases, it can be negative. Observational studies and participant observation studies are especially prone to risks for subjects. A scholar interested in interaction between family members and physicians when a family member is on an extraordinary life-support system is dealing with very important questions. Who decides to turn the machine off? What is the role of the physician? What are the roles for different family members? All these are important questions. The presence of the researcher may be extremely intrusive and may even influence the decision-making process. This potential influence involves serious ethical considerations.
Another special risk for qualitative work is unanticipated self-exposure (Berg 2001). As the project develops, the participant may reveal information about self or associates that goes beyond the original informed consent agreement.
Feminist methodology is not a particular research design method or data collection method (Nielsen 1990). It is distinguished by directly stating the researchers' values, explicitly recognizing the influence research has on the researcher, being sensitive to how family arrangements are sources of both support and oppression for women, and having the intention of doing research that benefits women rather than simply being about women (Allen and Walker 1993). Given this worldview, feminist methodology presents complex ethical issues to researchers, and it demands that all family scholars be sensitive to these concerns.
The diversity of strategies, designs, and methods of analysis used by marriage and family researchers reflects the equally diverse root disciplines and content areas that overlap the study of marriage and family. Even so, cross-sectional surveys remain the most widely used strategy, and quantitative analysis is dominant in the reporting of research results in the professional literature. Experimental, longitudinal, time-series, and qualitative strategies, however, also remain crucial tools for research.
See also: Research: Family Measurement
Acock, A. C. (1997). "Working with Missing Data." Family Science Review 10(1):76–102.
Acock, A. C., and Demo, D. (1994). Family Diversity and Well-Being. Newbury Park, CA: Sage.
Allen, K. R., and Walker, A. J. (1993). "A Feminist Analysis of Interviews with Elderly Mothers and Their Daughters." In Qualitative Methods in Family Research, ed. J. F. Gilgun, K. Daly, and G. Handel. Newbury Park, CA: Sage.
Bales, R. F. (1950). Interaction Process Analysis: A Method for the Study of Small Groups. Cambridge, MA: Addison-Wesley.
Berg, B. L. (2001). Qualitative Research Methods for the Social Sciences, 4th edition. Boston: Allyn and Bacon.
Brown, S. R., and Melamed, L. (1990). Experimental Design and Analysis. Newbury Park, CA: Sage.
Carmines, E. G., and McIver, J. P. (1979). Reliability and Validity Assessment. Newbury Park, CA: Sage.
Copeland, A. P., and White, K. M. (1991). Studying Families. Newbury Park, CA: Sage.
Draper, T., and Marcos, A. C. (1990). Family Variables: Conceptualization, Measurement, and Use. Newbury Park, CA: Sage.
Hambleton, R. K., and Kanjee, A. (1995). "Increasing the Validity of Cross-Cultural Assessments: Use of Improved Methods for Test Adaptations." European Journal of Psychological Assessment 11(3):147–157.
Larzelere, R. E., and Klein, D. M. (1987). "Methodology." In Handbook of Marriage and the Family, ed. M. B. Sussman and S. K. Steinmetz. New York: Plenum.
Lee, G. R. (1999). "Comparative Perspectives." In Handbook of Marriage and the Family, ed. M. B. Sussman, S. K. Steinmetz, and G. W. Peterson. New York: Plenum Press.
Nielsen, J. M. (1990). "Introduction." In Feminist Research Methods, ed. J. M. Nielsen. Boulder, CO: Westview Press.
Roth, P. L. (1994). "Missing Data: A Conceptual Review for Applied Psychologists." Personnel Psychology 47:537–560.
Rothbaum, F.; Pott, M.; Azuma, H.; Miyake, K.; and Weisz, J. (2000). "The Development of Close Relationships in Japan and the United States: Paths of Symbiotic Harmony and Generative Tension." Child Development 71(5):1121–1142.
Rubin, A., and Babbie, E. (2000). Research Methods for Social Work, 4th edition. Belmont, CA: Wadsworth.
Rubin, D. B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons.
Schumm, W. R., and Hemesath, K. K. (1999). "Measurement in Family Studies." In Handbook of Marriage and the Family, ed. M. B. Sussman, S. K. Steinmetz, and G. W. Peterson. New York: Plenum Press.
Teachman, J. D.; Tedrow, L. M.; and Crowder, K. D. (2000). "The Changing Demography of America's Families." Journal of Marriage and the Family 62(November):1234–1246.
Van de Vijver, F., and Leung, K. (1996). "Methods and Data Analysis of Comparative Research." In Handbook of Cross-Cultural Psychology, 2nd edition, Vol. 3, ed. J. W. Berry, Y. H. Poortinga, and J. Pandey. Needham, MA: Allyn & Bacon.
Ward, T. J., and Clark, H. T. (1991). "A Reexamination of Public- Versus Private-School Achievement: The Case for Missing Data." Journal of Educational Research 84:153–163.
Alan Acock
Yoshie Sano
Methods, Research (In Sociology)
Various sociological methodologies are used when designing and executing research. Each of these methods, including comparative-historical sociology, ethnomethodology, ethnography, evaluation research, qualitative methods, and survey research, has strengths and weaknesses. While debate surrounds qualitative versus quantitative methods, the best sociological research often integrates both kinds of methods to test hypotheses.
Most nineteenth-century social scientists, including Emile Durkheim, Herbert Spencer, and Karl Marx, engaged in analyses of historical data and made cross-cultural comparisons in their studies of human society. The work of these early historical sociologists was guided by the belief that societies were evolving and that the western European societies were the most advanced. The premise was that societies progressed via evolution and that progress was good. Comparisons were used as a tool for the development of social facts based on cross-cultural and/or historical data. In modern times cross-cultural comparisons serve to provide a better understanding of the structures and institutions of different societies.
The primary strength of comparative-historical research is its use of an interdisciplinary approach. If the scope conditions are clear and the criteria are specified and defined, then this approach is an important method for obtaining “social facts.”
Cross-cultural and historical analyses face multiple data hurdles. To illustrate, one must remember that information from a culture is embedded in the language, status sets, and expectations for the use of the data, as well as the time and place where the data were collected. There is always the issue of making sure that data sets are comparable and that the variables are equivalent. One primary limitation noted by Etienne Van de Walle (2005) is that although historical demographers have access to volumes of information, they are frequently limited to information on elite male populations, with little or no information about females or the common man.
QUALITATIVE METHODS AND ETHNOGRAPHY
The primary qualitative methods sociologists use are ethnography, interviews, and direct observations. Interviews with research participants may range from open-ended interviews with flexible content directed by the interviewer to more structured questions asked by multiple researchers; in the latter case there is an obvious requirement for internal consistency, so that all interviewers ask the same questions in the same way and, one hopes, obtain comparable data. Researchers engaged in direct observations may have varying levels of participation, ranging from covert observation to participant observation, in which the researcher becomes an active member of the group.
Many ethnographers agree that to fully understand a complex social situation, one must enter into unbiased observation of, or interaction with, the society being studied. William Foote Whyte (1955) argued in his classic Street Corner Society: The Social Structure of an Italian Slum that the only way to describe a society is to live in it, learn to speak the language, and participate in its social events and everyday life. Some of the best-known ethnographies have been guided by similar principles, for example, Elliot Liebow’s Tally’s Corner (1967), Margery Wolf’s The House of Lim (1968), and Laud Humphreys’s Tearoom Trade (1975), to mention only a few. A quantitative count using preconceived survey questions provides answers only to those questions, and it may be biased by the selection process as well as by the social desirability of the responses as perceived by the researcher. In contrast, a qualitative analysis provides detailed description, information, and new perspectives necessary for hypothesis development. An issue that should be addressed is the role of the researcher in ethnographic research and whether an external observer can really study the internal workings of a society without bias. Another weakness of ethnographic field studies is generalization, but this weakness is often resolved by integrating the qualitative results of the fieldwork with quantitative results obtained from research in which a large population is systematically and randomly sampled and surveyed (see the work of Knodel, Chamratrithirong, and Debavalya 1987 as an example of such integration).
The term ethnomethodology was first used in the 1960s by Harold Garfinkel (1967) in research determining how people make sense of their worlds. Garfinkel noted that for interactions to be smooth, everyday communication and interpersonal interactions have to be based on prior assumptions. Ethnomethodologists commonly study the normal through the use of techniques such as conversation analysis and breaching experiments, which force an examination of the usual, accepted, and unquestioned. The documented reactions of others to these experiments confirm which behaviors are normative (Cohen 2006).
The strength of ethnomethodology is that it permits the researcher to analyze the normal. For example, Allen Smith and Sherry Kleinman (1989) use narratives to demonstrate the patterns of discourse in conversations, which can be used to train medical personnel in the delivery of bad news and in desexualizing gynecological exams. The weakness is that assumptions about what is normal and what is expected are in continual flux, so that generalizability is sometimes limited.
Organizational sociologists, following a long-standing positivistic agenda, often use evaluation research to determine whether the programs and routines of such groups as corporate organizations, social agencies, and educational institutions actually perform as planned. Evaluation research involves formative research, which sets the agenda, goals, and strategies for the organization and determines how these can be quantified and hence evaluated, and summative evaluation, which determines whether the quantifiable outcomes of both the steps and the goals meet the predetermined standards. Evaluation researchers usually use multiple techniques, including ethnography and survey instruments (see Rossi, Lipsey, and Freeman 2003). The strength of evaluation research is that it can minimize expenses while improving quality relative to the standards set by formative research. One weakness is that organizations have multiple systems, and the research may not target the critical part of those systems. Organizations are in continual flux, so their evaluations must be ongoing and easily modifiable to respond to changing conditions.
Survey research involves the “systematic gathering of information on a defined social group” (Rapley and Hansen 2006, p. 616). The group is typically sampled from a larger population; information is obtained by asking standard questions about previously operationalized variables. One reason for the use of survey research is its simplicity: if one wants to know something, ask. Questions may be either closed-ended or open-ended. The strengths of the analysis and the generalizations from the findings are determined in part by the sample size and selection. Samples may range from small convenience samples to large randomized representative samples. Surveys may be administered via interviews, mailed questionnaires, telephone calls, or online. Don Dillman (2000) argues that mail surveys using the “tailored design method,” a detailed methodology of multiple contacts ensuring compliance, often have the greatest likelihood of being understood, completed, and returned; these are all characteristics necessary for the survey results to be truly representative of the population.
Surveys are usually used after one has developed hypotheses to be tested quantitatively. The best-known surveys have an efficient methodology and obtain accurate and current information about the population. Examples include the U.S. Current Population Survey, the World Fertility Surveys, and the U.S. National Surveys of Family Growth.
A strength of survey research, if done correctly, is its potential for strong and generalizable statistical analysis. However, the best surveys require randomization, adequate sample size, and a high completion rate. It must be remembered, however, that survey methods provide only a “partial description of complex social issues.… They are but one tool, of many, in the [social scientist’s] armamentarium” (Rapley and Hansen 2006, p. 617).
SEE ALSO Chicago School; Communication; Conversational Analysis; Discourse; Ethnography; Ethnomethodology; Hypothesis and Hypothesis Testing; Methods, Qualitative; Methods, Quantitative; Observation, Participant; Positivism; Sampling; Sociology; Survey; Tally’s Corner
Agar, Michael. 1996. The Professional Stranger: An Informal Introduction to Ethnography. New York: Academic Press.
Cohen, Ira. 2006. Ethnomethodology. In The Cambridge Dictionary of Sociology, ed. Bryan S. Turner, 177–180. New York: Cambridge University Press.
Denzin, Norman K., and Yvonna S. Lincoln. 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications.
Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method, 2nd ed. New York: Wiley.
Fowler, Floyd J. Jr. 1995. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications.
Garfinkel, Harold. 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Humphreys, Laud. 1975. Tearoom Trade: Impersonal Sex in Public Places. Chicago: Aldine.
Knodel, John, Aphichat Chamratrithirong, and Nibhon Debavalya. 1987. Thailand’s Reproductive Revolution. Madison: University of Wisconsin Press.
Liebow, Elliot. 1967. Tally’s Corner: A Study of Negro Streetcorner Men. Boston: Little, Brown.
Rapley, Mark, and Susan Hansen. 2006. Surveys. In The Cambridge Dictionary of Sociology, ed. Bryan S. Turner, 616–617. New York: Cambridge University Press.
Rossi, Peter, Mark W. Lipsey, and Howard E. Freeman. 2003. Evaluation: A Systematic Approach, 7th ed. Thousand Oaks, CA: Sage Publications.
Smith, Allen C., and Sherry Kleinman. 1989. Managing Emotions in Medical School: Students’ Contacts with the Living and the Dead. Social Psychology Quarterly 52: 56–69.
Van de Walle, Etienne. 2005. Historical Demography. In The Handbook of Population, eds. Dudley L. Poston, Jr., and Michael Micklin, 577–600. New York: Kluwer Academic/Plenum Publishers.
Whyte, William Foote. 1955. Street Corner Society: The Social Structure of an Italian Slum. Chicago: University of Chicago Press.
Wolf, Margery. 1968. The House of Lim: A Study of a Chinese Farm Family. Englewood Cliffs, NJ: Prentice-Hall.
Mary Ann Davis
Dudley L. Poston Jr.
Drugs affect the brain's chemistry, and as a result of these effects a person who uses drugs can become a slave to them. But the brain has its own processes that set the stage for addiction, and research attempts to identify what those processes are and how they work. Addiction is commonly thought of as a uniquely human problem, yet its qualities can be modeled in animals: research shows that even rats can be turned into drug addicts.
When it comes to choosing an addicting drug, rats are not all that different from people. Scientists can set up experiments in which rats will give themselves repeated doses of drugs that humans find appealing and addicting. Research into human drug addiction that relies on animals may reveal new ways to treat, and even prevent, human problems with addictive drugs.
Experiments have shown that rats with a tiny wire, called an electrode, placed in the pleasure-sensing region of the brain will repeatedly press a bar that sends a slight electrical current out of the wire's microscopic tip. The electrode is designed to deliver this small amount of electricity to a brain region that the researcher selects, based on available maps of the brain.
Self-stimulation experiments in rats have repeatedly demonstrated the presence of a reward center in the brain. Scientists observe the behavior of the rats fitted with electrodes. They count the number of bar presses that result when the electrode is in the reward area and compare it to the number that results when the electrode is just outside that area.
To train a rat to press a bar, scientists use food as the reward. Before the experiment begins, electrodes are placed in the rat's brain under surgical anesthesia. Then, for a few days, the rats are given less food than they usually would eat. Next, they are put into a box (called a Skinner box after the behavioral scientist B. F. Skinner, who invented it in the 1930s). The box has a bar that, when pressed, delivers food. The rat does what rats naturally do in a new place—it explores the box by sniffing around and rearing up on its hind legs. The rat can smell that there is food somewhere near, and the rat is hungry. By chance, when coming down from a rearing position, the rat's paw will hit the bar. Suddenly, food appears. The rat learns fairly quickly to repeat this behavior to get more food. It becomes very efficient at pressing that bar to get food, an action called self-administration.
Now, the mechanism that gives food when the rat presses the bar is changed so that no food is delivered, and the electrode is activated. The question is, will the rat continue to press the bar? In other words, will the feeling created by the current inside the brain be rewarding in and of itself, and substitute for food? If the electrode stimulates the reward circuitry inside the brain, the answer is yes. However, if the electrode missed its mark, the rat will soon stop pressing the bar. The bar presses no longer deliver anything of interest to the rat.
How Research Proceeds
Based on this rat model of reward, scientists concluded that the brain has a specific place where rewarding feelings are generated. Researchers were able to determine that any area stimulated by an electrode and giving a sustained rate of bar pressing was in fact a reward area. They could count the number of times that a rat pressed the bar, comparing a weakly rewarding area to a strongly rewarding area. By interpreting the data—the rates of bar pressing—region by region, the researchers were able to map a set of places in the brain that consistently generate reward.
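The mapping logic described above, ranking brain sites by their sustained bar-press rates, can be sketched in a few lines of Python. The site labels and counts below are hypothetical, for illustration only, not actual experimental data:

```python
# Hypothetical bar-press counts per session for three electrode placements.
presses_by_site = {
    "site A (strongly rewarding)": [620, 580, 640],
    "site B (weakly rewarding)": [300, 340, 310],
    "site C (outside reward circuit)": [12, 8, 15],
}

def mean(counts):
    return sum(counts) / len(counts)

# Rank sites by mean response rate; high, sustained rates mark reward areas.
ranked = sorted(presses_by_site,
                key=lambda site: mean(presses_by_site[site]),
                reverse=True)
for site in ranked:
    print(site, round(mean(presses_by_site[site]), 1))
```

Repeating this comparison placement by placement is, in essence, how a map of the reward circuit is assembled from behavioral data alone.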
Researchers collected these measurements of reward regions from their rat experiments, and compared the active reward areas to the existing maps of brain functions, including areas where they know which neurotransmitters are present. The researchers predicted that the electrode had to be in regions that are rich in the neurotransmitter dopamine for the rats to press the bar repeatedly.
In another method of experimenting, scientists used a tiny, hollow tube to deliver minute amounts of drugs directly into the reward areas of the brain. The animals pressed a bar to the point of exhaustion in order to deliver substances such as heroin and cocaine into the specific brain structures making up the reward system. Rats self-administer the most popular drugs of abuse to the point of ignoring food and even sex in some instances—just as people do.
There are several variables in these experiments that must be considered before the results, or data, can be interpreted. These variables include the location of the electrode inside the brain; the amount of current used; and, in the case of drug delivery directly into the brain, the amount, or dose, of drug given. Checking that the electrode is working properly is essential to conducting a good experiment; if it is not, this variable can make it impossible to interpret the data correctly. The electrodes are intended to deliver current only at their tips, and the researchers locate just where the tips are placed in the brain in order to make their maps of the reward circuit. They also make sure that the insulation surrounding the wire up to the tip is intact, so that current cannot leak out along the shaft of the electrode; if it does leak, some other brain area that the electrode passes through will be stimulated.
Even the sex of the tested animal is important to take into consideration when interpreting results. For some drugs, certain experiments reveal differences between males and females. For instance, female rats appear more sensitive to morphine-like drugs, and continue to self-administer these types of drugs long after male rats stop.
Rat Research and Brain Opiates
Opiates, such as morphine, are drugs made from the opium poppy that have powerful effects on the brain. One of these effects is to stop pain. Opiates are used as analgesics to make a patient better able to tolerate pain after surgery, or for conditions in which pain itself is the problem, such as cancer. In the 1970s, the results of a set of experiments showed that the brain is somehow able to control pain on its own. The brain can regulate how it perceives pain by releasing its own painkiller molecules. These simple protein-like materials, produced within an animal's nerve cells, work in much the same way as the molecules of the opium poppy plant. Scientists had discovered the brain's own opiates.
An experiment with rats that had electrodes placed in a core region of the brain showed that they could undergo surgery without any anesthesia if these electrodes were activated. Other scientists then analyzed the region, called the central grey region, using opiate drugs as tracers. These opiate drugs were tagged with a chemical or radioactive label that would show up under the microscope. The opiate drugs stuck to the central grey region in large amounts—in other words, this region was rich in opiate receptors, specific binding sites that take up the drugs. Teams of scientists at Johns Hopkins University in Baltimore, Stanford University in Stanford, California, New York University in New York City, and at a research center in Sweden all helped demonstrate that certain receptors specifically accept morphine and related drugs. The drugs "dock" on these opiate receptors and act to stop pain.
Other researchers examined brain regions that were rich in opiate receptors and found that they also contained a substance that would bind to those receptors. In 1975 John Hughes and Hans Kosterlitz in Aberdeen, Scotland, showed that pig brains contained two similar small molecules that acted much like morphine when they were extracted, purified, and injected into lab animals. The shapes of these previously unknown molecules closely resembled the opiates from poppies and other synthetic narcotic drugs. Named enkephalins, meaning "in the head," they unfortunately proved to be addictive, just like morphine: the effect of the purified enkephalins in animals would lessen unless increasingly higher doses were used.
Years before, Choh Hao Li, at the University of California in San Francisco, had looked at a substance he purified from a gland at the base of the brain. Li believed that this pea-sized gland, called the pituitary gland, might have a substance that aided the body's handling of fat (fat metabolism). Because the gland is so tiny, Li used material from 500 camels to find and purify a single molecule. But this molecule did not do much in his experiments on fat metabolism. Puzzled by his lack of results, he put the molecule in storage.
Sometimes, science involves luck and timing. But it always requires a prepared mind to recognize the significance of a finding and how to fit it into an existing or developing theory. When Li read of the work on the brain's own opiates, he retested his molecule. He found that it, too, could lessen the perception of pain. It was a bigger messenger molecule, more closely resembling other messenger molecules, called hormones, that come from the pituitary; the small enkephalin molecules, by contrast, are similar in size to many of the neurotransmitters found in neurons.
Li's molecule obviously was doing something other than handling fat. Scientists quickly renamed Li's hormone endorphin, meaning "the brain's own morphine." Endorphin lasts longer when injected than the much smaller enkephalins, which are nearly instantly broken down by the body. Yet it, too, is addicting. The discovery of the brain's own opiates has not yet led to the design of painkillers that are free from addicting qualities. But it has deepened researchers' understanding of how the brain works, especially at the level of molecular signals.
Experiments throughout the 1970s and 1980s mapped out an opiate-rich circuit within the brain. Many of the areas loaded with the enkephalins or endorphins correspond to regions known to carry pain messages, and to control the brain's ability to acknowledge pain, or to ignore it. And parts of this opiate circuitry overlap with the brain's reward pathways.
Imaging the Living Brain
By 2000 researchers had confirmed that the findings about brain chemistry and addictions in rats apply to brain chemistry and addictions in humans. The same areas mapped in the rats light up when the brains of humans, either using or recovering from addicting drugs, are scanned. Positron emission tomography (PET) is a technique that shows the use of energy by the brain. PET scans can show differences in brain activity in the reward circuitry when cocaine is taken, compared to a normal pattern of brain activity. Also, PET scans suggest that the drug ecstasy can alter the workings of a transmitter called serotonin in the brain of someone who takes the drug repeatedly. In fact, PET scanning reveals that dopamine, the messenger molecule of the reward circuit, is involved in nearly all addictive behaviors that scientists examine. Even alcoholics and obese people have a detectable difference in their dopamine activity on PET scans.
Now that scientists can see inside a living, working human brain, they are trying to uncover the details of addiction. Craving for a drug once addiction develops turns out to be a separate phenomenon, and one that may be key to interrupting the process of addiction.
Surprisingly, rats that were once addicted to cocaine did not relapse when stimulated in the reward area of the brain. Instead, findings reported in 2001 show that the hippocampus region of the brain, involved in forming memories, was crucial for seeking more drug. The hippocampus, with its rich connections to the reward centers, and the cerebral cortex—the outer region of the brain—may be prompting the drug-seeking behavior.
The brain remains the most complicated frontier for advancing our comprehension of ourselves. Formulating new models, and revising prior ones, will help those who seek release from addictive behavior.
see also Brain Chemistry; Brain Structures; Drug Testing in Animals: Studying Potential for Abuse; Drug Testing in Humans: Studying Potential for Abuse; Imaging Techniques: Visualizing the Living Brain.
WANT TO KNOW MORE?
For an illustration of PET scans, check out the National Institute on Drug Abuse's site, at http://www.drugabuse.gov/Teaching3/teaching3.html, slides 10 and 11. Another good web site is Brookhaven Lab's site, at http://www.bnl.gov/pet/studies.htm.
Successful human exploration of space depends on continued scientific research and innovative technology development. Advances in scientists' understanding of propulsion systems, power generation, resource utilization, and the physiological and psychological effects on humans of living in space are required if humans are to explore space and other planets or establish settlements on other planets.
Exploration to develop knowledge about Earth and planetary evolution in general, and the origins and conditions for life, will continue to lead us to search for life throughout the solar system and beyond. An initial reconnaissance of all of the planets in the solar system will ultimately be completed with a robotic mission to Pluto. Scientists are also keen to send spacecraft to Europa, one of the moons of Jupiter, to search for signs of life in a liquid ocean thought to exist below its icy crust. And the search will continue for other planetary systems beyond our own in order to answer questions such as: How typical is the solar system? How numerous are solar systems?
At present, Earth- and space-based telescopes are used to conduct the search for other planetary systems, but in the future, squadrons of miniature spacecraft may be sent on interstellar journeys of exploration to help answer some of life's most demanding questions: Are we alone, or is there other life out there? Are there other planets that could support humankind?
All rockets in use in the early twenty-first century are propelled by some form of chemical rocket engine. Rockets with sufficient power to place a satellite in orbit use at least two stages. However, one long-term goal has been the reusable "single-stage to orbit" engine design. This would provide quick turnaround, much like a conventional aircraft, and greatly reduce the cost of getting to orbit because of reduced processing and flight preparation. An interim step may be a two-stage vehicle with boosters that fly back and land at the spaceport for refurbishment after each launch.
Once a spacecraft is in orbit, other forms of propulsion are necessary. Several exotic propulsion systems have been proposed and investigated over the years. Orion was a project to design and construct a propulsion system using small atomic bombs. While this sounds impractical, many scientists think that such a propulsion system would have allowed humans to get to the Moon more quickly at a much lower cost than the Saturn V launch system. A variation of this type of propulsion is the nuclear thermal rocket. This system uses a nuclear reactor to heat a gas, which is then expelled through a nozzle, providing thrust.
The crew of a rocket ship powered by a nuclear rocket engine would need to be shielded from the reactor. One proposed solution is to place the engine at a large distance from the crew quarters, connecting the two compartments by a long truss. In this design, distance substitutes for heavy shielding.* Many scientists believe that if humans are to move beyond Earth orbit, some version of a nuclear rocket engine will be necessary.
Between 2002 and 2007, NASA plans to develop an improved radioisotope power system for use in robotic planetary exploration and targets the first use of this power system for a Mars mission in 2009. During the period between 2003 and 2013, significant funding will be dedicated to the development of a nuclear-electric-propulsion system to enable a new class of planetary missions with multiple targets, to reduce spacecraft travel time, and to decrease mission cost.
Nuclear-electric propulsion systems use the nuclear reactor only to generate electricity; the rocket engine itself is electrically powered. There are three classes of electric rocket engines: electrothermal, electrostatic, and electromagnetic. In electrothermal propulsion, a gas is raised to a high temperature and expelled through a rocket nozzle. Electrostatic propulsion systems first convert the gas to a plasma (highly ionized material) and then use electric fields to accelerate it to high velocity. Electromagnetic propulsion uses magnetic fields to accelerate a plasma.
Other propulsion systems include various configurations of solar sails, ion propulsion systems, and laser propulsion. Several systems involve the use of stationary high-powered infrared pulsed lasers. In one interesting system, the laser is fired at a parabolic reflector on the back of the spacecraft. This reflector focuses the laser energy, explosively heating the air behind the craft and propelling it forward. In space, the reflector would be jettisoned and the laser would fire pulses at a block of propellant (ice would work), heating it to vapor.
Space Power Generation
Spacecraft currently use solar power, hydrogen fuel cells, or radioisotope thermoelectric generators to generate electrical power and rechargeable batteries to store electrical energy. The International Space Station uses solar panels and rechargeable batteries. Solar power is converted to electrical power in large panels containing photovoltaic cells. These cells convert light directly into electricity using a semiconductor such as silicon or gallium arsenide. Solar panels are relatively low cost and simple. However, they are fragile, take up a lot of space, and become less effective as a spacecraft travels away from the Sun. For future missions that penetrate deeper into the solar system, and beyond, alternative power sources will be essential.
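The falloff in solar panel output with distance follows the inverse-square law, which a short Python sketch can illustrate. The solar-constant value and the planetary distances (in astronomical units) are approximate, for illustration only:

```python
# Solar irradiance falls off with the square of the distance from the Sun.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU, Earth's mean distance

def irradiance(distance_au):
    """Approximate solar irradiance (W/m^2) at a given distance in AU."""
    return SOLAR_CONSTANT / distance_au ** 2

for body, au in [("Earth", 1.0), ("Mars", 1.52),
                 ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(f"{body}: {irradiance(au):.0f} W/m^2")
```

At Jupiter a panel receives under 4 percent of the sunlight it would collect at Earth, and at Pluto well under 0.1 percent, which is why deep-space missions turn to alternatives such as RTGs.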
Fuel cells combine hydrogen and oxygen to make water. When hydrogen combines with oxygen, energy is released. A fuel cell converts this energy directly into electricity. Fuel cells are relatively compact and produce usable by-products, but they are complicated and expensive to produce.
Radioisotope thermoelectric generators (RTGs) convert the heat produced by the natural decay of radioactive materials to electrical power by solid-state thermoelectric converters. RTGs are lightweight, compact, robust, reliable, and relatively inexpensive. These devices allow spacecraft to operate at large distances from the Sun or where solar power systems would be impractical. They remain unmatched for power output, reliability, and durability.
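Because an RTG's heat source is radioactive decay, its output declines predictably with the fuel's half-life (about 87.7 years for the plutonium-238 commonly used). The sketch below is illustrative only; the 470-watt starting figure is a hypothetical value, and real generators also lose power as their thermoelectric converters degrade:

```python
# RTG output decays with the half-life of its Pu-238 fuel (~87.7 years).
HALF_LIFE_YEARS = 87.7

def rtg_power(initial_watts, years):
    """Approximate power remaining after a given number of years."""
    return initial_watts * 0.5 ** (years / HALF_LIFE_YEARS)

# A hypothetical generator starting at 470 W still retains roughly
# 79% of its output after a 30-year mission.
print(round(rtg_power(470.0, 30.0)))
```

This slow, predictable decline is part of what makes RTGs so reliable for decades-long missions far from the Sun.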
If a human colony is to be established on Earth's Moon, Mars, or elsewhere in the solar system, some means of transporting large amounts of materials to the colony site must be developed. It would be prohibitively expensive and impractical to transport materials from Earth in sufficient quantity to build a base on the Moon or Mars. However, this is not necessary, since both Earth's Moon and Mars have an abundance of raw materials that could be used for construction.
The Moon may have a substantial amount of water locked in permafrost at the bottoms of deep craters near its poles, where sunlight never reaches, or bound in clays. Although it would be expensive to mine this water, it would be far cheaper than transporting water from Earth. The Moon also has surface rocks rich in light materials such as aluminum and silicon dioxide. Producing pure aluminum or glass from Moon rocks would require large amounts of electrical power, but solar energy is abundant there because of the lack of atmosphere.
The Moon may even have sufficient quantities of helium-3 to make a lunar settlement economically self-supporting. The helium-3 would be extracted from lunar soil, packaged as a compressed gas or liquid, and returned to Earth for use in fusion reactors. Due to the lower gravity, launching a rocket from the surface of the Moon for return to Earth is far less costly than launching a rocket from Earth.
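The launch-cost advantage of the Moon's lower gravity can be made concrete with the escape-velocity formula v = √(2GM/r). A short Python sketch, using standard physical constants and approximate masses and radii:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Escape velocity v = sqrt(2GM/r), in meters per second."""
    return math.sqrt(2 * G * mass_kg / radius_m)

earth = escape_velocity(5.972e24, 6.371e6)  # roughly 11,200 m/s
moon = escape_velocity(7.342e22, 1.737e6)   # roughly 2,400 m/s
print(round(earth), round(moon))
```

Because kinetic energy scales with the square of velocity, reaching escape speed from the Moon takes on the order of (11,200 / 2,400)², roughly 22 times, less energy per kilogram than from Earth, which is why returning lunar material such as helium-3 is comparatively cheap.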
Mars also has significant resources available. The red color of Martian soil is due to the presence of large quantities of iron oxide. Other minerals and elements are also present. In addition, Mars is thought to have vast quantities of subsurface water. Asteroids have long been recognized as accessible, mineral-rich bodies in the solar system and are a ready target for resource mining.
see also Asteroid Mining (volume 4); Ion Propulsion (volume 4); Lightsails (volume 4); Lunar Bases (volume 4); Lunar Outposts (volume 4); Mars Bases (volume 4); Mars Missions (volume 4); Power, Methods of Generating (volume 4); Solar Power Systems (volume 4).
NASA Life Sciences Strategic Planning Study Committee. Exploring the Living Universe: A Strategy for Space Life Sciences. Washington, DC: National Aeronautics and Space Administration, 1988.
National Commission on Space. Pioneering the Space Frontier, The Report of the National Commission on Space. New York: Bantam Books, 1986.
Office of Technology Assessment. Exploring the Moon and Mars: Choices for the Nation, OTA-ISC-502, Washington, DC: U.S. Government Printing Office, 1991.
O'Neill, Gerard K. The High Frontier: Human Colonies in Space. New York: William Morrow and Co., 1977.
Space Science Board. Life Beyond the Earth's Environment: The Biology of Living Organisms in Space. Washington, DC: National Academy of Sciences, 1979.
Space Science in the Twenty First Century: Imperatives for the Decades 1995 to 2015—Life Sciences. Washington, DC: National Academy Press, 1988.
Wilhelms, Don E. To a Rocky Moon: A Geologist's History of Lunar Exploration. Tucson: University of Arizona Press, 1993.
Astronomy Resources from STScI. Space Telescope Science Institute. <http://www.stsci.edu/resources/>.
NASA Human Spaceflight. <http://spaceflight.nasa.gov/history/>.
Space Science. National Aeronautics and Space Administration. <http://spacescience.nasa.gov/>.
* This design and engine type was portrayed in the movie 2001: A Space Odyssey.
re·search / ˈrēˌsərch; riˈsərch/ • n. the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions: we are fighting meningitis by raising money for medical research. ∎ (researches) acts or periods of such investigation: his pathological researches were included in official reports. ∎ [as adj.] engaged in or intended for use in such investigation and discovery: a research student | a research paper. • v. [tr.] investigate systematically: the biographer spent 25 years researching Stalin's life | [intr.] the team has been researching into flora and fauna. ∎ discover facts by investigation for use in (a book, program, etc.): I was in New York researching my novel | [as adj.] (researched) this is a well-researched and readable account. DERIVATIVES: re·search·a·ble adj. re·search·er n.