Survey Research

Survey research is the method most frequently used by sociologists to study American society and other large societies. Surveys allow sociologists to move from a relatively small sample of individuals who are accessible as carriers of information about themselves and their society to the broad contours of a large population, such as its class structure and dominant values. Surveys conform to the major requirements of the scientific method by allowing a considerable (though by no means perfect) degree of objectivity in approach and allowing tests of the reliability and validity of the information obtained.

Like many other important inventions, a survey is composed of several more or less independent parts: sampling, questioning, and analysis of data. The successful combination of those elements early in the twentieth century gave birth to the method as it is known today. (Converse 1987 provides a history of the modern survey).


SAMPLING

The aspect of a survey that laypersons usually find the most mysterious is the assumption that a small sample of people (or other units, such as families or firms) can be used to generalize about the much larger population from which that sample is drawn. Thus, a sample of 1,500 adults might be drawn to represent the population of approximately 200 million Americans over age 18 in the year 2000. The sample itself is then used to estimate the extent to which numerical values calculated from it (for example, the percentage of the sample answering "married" to a question about marital status) are likely to deviate from the values that would have been obtained if the entire population over age 18 had been surveyed. That estimate, referred to as "sampling error" (because it is due to having questioned only a sample, not the full population), seems even stranger from the standpoint of common sense, much like pulling oneself up by one's own bootstraps: the sample is used to gauge its own accuracy.
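As a rough illustration of how such an estimate is formed, the sketch below computes the familiar 95 percent margin of error for a sample proportion under simple random sampling; the 1,500-person sample and the 50 percent figure are illustrative assumptions, not features of any particular survey.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a sample proportion p
    estimated from a simple random sample of size n."""
    standard_error = math.sqrt(p * (1 - p) / n)
    return z * standard_error

# A sample of 1,500 in which 50 percent answer "married" carries a margin
# of error of roughly plus or minus 2.5 percentage points.
print(round(margin_of_error(0.50, 1500) * 100, 1))  # -> 2.5
```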

Although a sample of only 1,500 may be needed to obtain a fairly good estimate for the entire U.S. adult population, this does not mean that a much smaller sample is equally adequate for, say, a city of only 100,000 population. It is the absolute size of the sample that primarily determines the precision of an estimate, not the proportion of the population that is drawn for the sample—another counterintuitive feature of sampling. This has two important implications. First, a very small sample, for example, two or three hundred, is seldom useful for surveys, regardless of the size of the total population. Second, since it is often subparts of the sample, for example, blacks or whites, that are of primary interest in a survey report, it is the size of each subpart that is crucial, not the size of the overall sample. Thus, a much larger total sample may be required when the goal is to look separately at particular demographic or social subgroups.
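A small extension of the same arithmetic shows why absolute sample size, not the sampled fraction of the population, governs precision: under simple random sampling the finite population correction barely changes the standard error even when the population shrinks from 200 million to 100,000. The figures below are again purely illustrative.

```python
import math

def standard_error(p, n, population):
    """Standard error of a proportion under simple random sampling,
    including the finite population correction."""
    fpc = math.sqrt((population - n) / (population - 1))
    return math.sqrt(p * (1 - p) / n) * fpc

# The same sample of 1,500 is almost equally precise for a city of
# 100,000 and a nation of 200 million.
for population in (100_000, 200_000_000):
    print(f"N = {population:>11,}: SE = {standard_error(0.5, 1500, population):.5f}")
```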

All the estimates discussed in this article depend on the use of probability sampling, which implies that at crucial stages the respondents are selected by means of a random procedure. A nonprobability sampling approach, such as the proverbial person-in-the-street set of interviews, lacks scientific justification for generalizing to a larger population or estimating sampling error. Consumers of survey information need to be aware of the large differences in the quality of sampling that occur among organizations that claim to do surveys. It is not the case in this or other aspects of survey research that all published results merit equal confidence. Unfortunately, media presentations of findings from surveys seldom provide the information needed to evaluate the method used in gathering the data.

The theory of sampling is a part of mathematics, not sociology, but it is heavily relied on by sociologists, and its implementation with real populations of people involves many nonmathematical problems that sociologists must try to solve. For example, it is one thing to select a sample of people according to the canons of mathematical theory and quite another to locate those people and persuade them to cooperate in a social survey. To the extent that intended respondents are missed, which is referred to as the problem of nonresponse, the scientific character of the survey is jeopardized. The degree of jeopardy (technically termed "bias") is a function of both the amount of nonresponse and the extent to which the nonrespondents differ from those who respond. If, for example, young black males are more likely to be missed in survey samples than are other groups in the population, as often happens, the results of the survey will not represent the entire population adequately. Serious survey investigators spend a great deal of time and money to reduce nonresponse to a minimum, and one measure of the scientific adequacy of a survey report is the information provided about nonresponse. In addition, an active area of research on the survey method consists of studies both of the effects of nonresponse and of possible ways to adjust for them. (For an introduction to sampling in social surveys, see Kalton 1983; for a more extensive classic treatment, see Kish 1965.)
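The standard textbook approximation of this bias can be written in a few lines; the response rate and the two group percentages below are hypothetical, chosen only to make the relationship between nonresponse and bias concrete.

```python
def nonresponse_bias(response_rate, respondent_value, nonrespondent_value):
    """Bias of the respondent-based estimate: the nonresponse rate times
    the difference between respondents and nonrespondents."""
    return (1 - response_rate) * (respondent_value - nonrespondent_value)

# If 70 percent respond, and 60 percent of respondents but only 40 percent
# of nonrespondents hold a given view, the survey overstates the population
# figure by about 6 percentage points.
print(round(nonresponse_bias(0.70, 0.60, 0.40), 3))  # -> 0.06
```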

QUESTIONS AND QUESTIONNAIRES

Unlike sampling, the role of questions as a component of surveys often is regarded as merely a matter of common sense. Asking questions is a part of all human interaction, and it is widely assumed that no special skill or experience is needed to design a survey questionnaire. This is true in the sense that questioning in surveys is seldom very different from questioning in ordinary life but incorrect in the sense that many precautions are needed in developing a questionnaire for a general population and then interpreting the answers.

Questionnaires can range from brief attempts to obtain factual information (for example, the number of rooms in a sample of dwelling units) or simple attitudes (the leaning of the electorate toward a political candidate) to extensive explorations of the respondents' values and worldviews. Assuming that the questions have been framed with a serious purpose in mind—an assumption not always warranted because surveys are sometimes initiated with little purpose other than a desire to ask some "interesting questions"—there are two important principles to bear in mind: one about the development of the questions and the other about the interpretation of the answers.

The first principle is the importance of carrying out as much pilot work and pretesting of the questions as possible, because not even an experienced survey researcher can foresee all the difficulties and ambiguities a set of questions holds for the respondents, especially when it is administered to a heterogeneous population such as that of the United States. For example, a frequently used question about whether "the lot of the average person is getting worse" turned out on close examination to confuse the respondents about the meaning of "lot," with some taking it to refer to housing lots. Of course, it is still useful to draw on expert consultation where possible and to become familiar with discussions of questionnaire design in texts, especially the classic treatment by Payne (1951) and more recent expositions such as that by Sudman and Bradburn (1987).

Pilot work can be done in a number of ways, for example, by having a sample of respondents think aloud while answering, by listening carefully to the reactions of experienced interviewers who have administered the questionnaire in its pretest form, and, perhaps best of all, by having investigators do a number of practice interviews. The distinction between "pilot" and "pretest" questionnaires is that the former refer to the earlier stages of questionnaire development and may involve relatively unstructured interviewing, while the latter are closer to "dress rehearsals" before the final survey.

The main principle in interpreting answers is to be skeptical of simple distributions of results often expressed in percentage form for a particular question, for example, 65 percent "yes," 30 percent "no," 5 percent "don't know." For several reasons, such absolute percentages suggest a meaningfulness to response distributions that can be misleading. For one thing, almost any important issue is really a cluster of subissues, each of which can be asked about and may yield a different distribution of answers. Responses about the issue of "gun control" vary dramatically in the United States depending on the type of gun referred to, the amount and method of control, and so forth. No single percentage distribution or even two or three distributions can capture all this variation, nor are such problems confined to questions about attitudes: Even a seemingly simple inquiry about the number of rooms in a home involves somewhat arbitrary definitions of what is and is not to be counted as a room, and more than one question may have to be asked to obtain the information the investigator is seeking. By the same token, care must be taken not to overgeneralize the results from a single question, since different conclusions might be drawn if a differently framed question were the focus. Indeed, many apparent disagreements between two or more surveys disappear once one realizes that somewhat different questions had been asked by each even though the general topic (e.g., gun control) may look the same.

Even when the substantive issue is kept constant, seemingly minor differences in the order and wording of questions can change percentage distributions noticeably. Thus, a classic experiment from the 1940s showed a large difference in the responses to a particular question depending on whether a certain behavior was said to be "forbidden" rather than "not allowed": To the question, "Do you think the United States should forbid public speeches against democracy?" 54 percent said yes (forbid), but to the question, "Do you think the United States should allow public speeches against democracy?" 75 percent said no (do not allow). This is a distinction in wording that would not make a practical difference in real life, since not allowing a speech would have the same consequence as forbidding it, yet the variation in wording has a substantial effect on answers. Experiments of this type, which are called "split-ballot experiments," frequently are carried out by dividing a national sample of respondents in half and asking a different version of the question to each half on a random basis. If the overall sample is large enough, more than two variations can be tested at the same time, and in some cases more complex "factorial designs" are employed to allow a larger number of variations (see Rossi and Nock [1982] for examples of factorial surveys).
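A sketch of how such a split-ballot difference is commonly evaluated appears below, using a standard two-sample test for a difference between independent proportions. The 54 percent and 75 percent figures follow the forbid/allow experiment described above; the half-sample sizes of 600 are hypothetical, chosen only to make the arithmetic concrete.

```python
import math

# Proportions favoring restriction of the speeches under each wording.
n1, p1 = 600, 0.54   # "forbid" form: 54 percent would forbid
n2, p2 = 600, 0.75   # "allow" form: 75 percent would not allow

# Two-sample z-test for a difference between independent proportions.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"wording effect = {(p2 - p1) * 100:.0f} points, z = {z:.1f}")
```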

The proportion of people who answer "don't know" to a survey question also can vary substantially—by 25 percent or more—depending on whether that answer is explicitly legitimized for respondents by mentioning it along with the other alternatives ("yes," "no," "don't know") or is omitted. In other instances, the location of a question in a series of questions has been shown to affect answers even though the wording of the question is not changed. For example, a widely used question about allowing legalized abortion in the case of a married woman who does not want more children produces different answers depending entirely on its position before or after a question about abortion in the case of a defective fetus. Thus, the context in which a question is asked can influence the answers people give. These and a large number of other experiments on the form, wording, and context of survey questions are reported by Schuman and Presser (1981) (see Turner and Martin [1984] for several treatments of survey questioning, as well as more recent volumes by Schwarz and Sudman [1996] and Sudman et al. [1996] with a cognitive psychological emphasis).


ANALYSIS

Although questioning samples of individuals may seem to capture the entire nature of a survey, a further component is vital to sociologists: the logical and statistical analysis of the resulting data. Responses to survey questions do not speak for themselves, and in most cases even the simple distribution of percentages for a single question calls for explicit or implicit comparison with another distribution, real or ideal. To report that 60 percent of a sample is satisfied with the actions of a particular leader may be grounds for either cheering or booing, depending on the level of satisfaction typical for that leader at other times or for other individuals or groups in comparable leadership positions. Thus, reports of survey data should include these types of comparisons whenever possible. This is why for sociologists the collection of a set of answers is the beginning, not the end, of a research analysis.

More generally, most answers take on clear meaning primarily when they are used in comparisons across time (for example, responses of a sample this year compared with responses of a sample from the same population five years ago), across social categories such as age and education, or across other types of classifications that are meaningful for the problem being studied. Moreover, since any such comparison may produce a difference that is due to chance factors because only a sample was drawn rather than to a true difference between time points or social categories, statistical testing is essential to create confidence that the difference would be found if the entire population could be surveyed. In addition, individual questions sometimes are combined into a larger index to decrease idiosyncratic effects resulting from any single item, and the construction of this type of index requires other preliminary types of statistical analysis.
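As one example of the preliminary analysis that index construction involves, the sketch below sums three attitude items into a single index and computes Cronbach's alpha, a common check on whether the items hang together well enough to be combined. The five respondents and their item scores are entirely hypothetical.

```python
from statistics import pvariance

# Hypothetical scores of five respondents on three items, each scored 1-5.
items = [
    [4, 2, 5, 3, 4],
    [5, 2, 4, 3, 5],
    [4, 1, 5, 2, 4],
]

# Summed index, one total per respondent.
index = [sum(scores) for scores in zip(*items)]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
k = len(items)
alpha = k / (k - 1) * (1 - sum(pvariance(item) for item in items) / pvariance(index))
print(index, round(alpha, 2))  # -> [13, 5, 14, 8, 13] 0.94
```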

As an example of survey analysis, sociologists often find important age differences in answers to survey questions, but since age and education are negatively associated in most countries—that is, older people tend to have less education than do younger people—it is necessary to disentangle the two factors in order to judge whether age is a direct cause of responses or only a proxy for education. Moreover, age differences in responses to a question can represent changes resulting from the aging process (which in turn may reflect physiological, social, or other developmental factors) or reflect experiences and influences from a particular historical point in time ("cohort effects"). Steps must be taken to distinguish these explanations from one another. At the same time, a survey analyst must bear in mind and test the possibility that a particular pattern of answers is due to "chance" because of the existence of sampling error.
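The control-table logic behind disentangling age from education can be sketched in a few lines, as below; the respondents, categories, and resulting percentages are entirely hypothetical, and a real analysis would use far larger subgroups and appropriate statistical tests.

```python
from collections import defaultdict

# Hypothetical respondents: (age group, education, 1 if "agree" else 0).
respondents = [
    ("under 40", "college", 1), ("under 40", "college", 0),
    ("under 40", "no college", 1), ("under 40", "no college", 1),
    ("40 and over", "college", 1), ("40 and over", "college", 0),
    ("40 and over", "no college", 0), ("40 and over", "no college", 1),
]

# Compare age groups *within* each education level, so that an apparent age
# difference is not simply an education difference in disguise.
cells = defaultdict(list)
for age, education, agree in respondents:
    cells[(education, age)].append(agree)

for (education, age), answers in sorted(cells.items()):
    percent = 100 * sum(answers) / len(answers)
    print(f"{education:10} {age:12} {percent:3.0f}% agree")
```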

Thus, the analysis of survey data can be quite complex, well beyond, though not unrelated to, the kinds of tables seen in newspaper and magazine presentations of poll data. (The terms "poll" and "survey" are increasingly interchangeable, with the main difference being academic and governmental preference for "survey" and media preference for "poll.") However, such thorough analysis is important if genuine insights into the meaning of answers are to be gained and misinterpretations are to be avoided. (A comprehensive but relatively nontechnical presentation of the logic of survey analysis is provided by Rosenberg [1968]. Among the many introductory statistical texts, Agresti and Finlay [1997] leans in a survey analytic direction.)


MODE OF ADMINISTRATION

Although sampling, questioning, and analysis are the most fundamental components, decisions about the mode of administering a survey are also important. A basic distinction can be made between self-administered surveys and those in which interviewers are used. If it is to be based on probability sampling of some sort, self-administration usually is carried out by mailing questionnaires to respondents who have been selected through a random procedure. For instance, a sample of sociologists might be chosen by taking every twentieth name from an alphabetical listing of all regular members of the American Sociological Association, though with the recognition that any such listing would be incomplete (e.g., not everyone with an advanced degree in sociology belongs to the association).
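A minimal sketch of that every-twentieth-name selection follows; the membership list and interval are illustrative, and the random starting point within the first interval is what keeps the procedure a probability sample.

```python
import random

def systematic_sample(names, interval=20):
    """Select every twentieth name, starting at a random point within
    the first interval (a minimal sketch of systematic sampling)."""
    start = random.randrange(interval)
    return names[start::interval]

# A hypothetical membership list of 12,000 names yields a sample of 600.
membership = [f"member_{i:05d}" for i in range(12_000)]
print(len(systematic_sample(membership)))  # -> 600
```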

The major advantage of mail surveys is their relatively low cost, which is limited to payments to clerical employees, postage, and perhaps a financial incentive for respondents. One disadvantage of mail surveys is that they traditionally have produced low response rates; many obtain only 25 percent or less of the target sample. However, Dillman (1978) argues that designing mail surveys in accordance with the principles of exchange theory can yield response rates at or close to those of other modes of administration. Whether this is true for a sample of the U.S. population remains in doubt for the reason given below, although Dillman has implemented some of his strategies in government census-type surveys. It is clear from numerous experiments that the use of two specific features—monetary incentives (not necessarily large) provided in advance and follow-up "reminders"—can almost always improve mail questionnaire response rates appreciably. However, another important disadvantage of mail surveys in the United States is the absence of a centralized national listing of households from which to draw a sample; because of this, it is difficult to say what response rate could be obtained from a nongovernmental national mail sample in this country.

Mail surveys generally are used when a prior list is available, such as an organization's membership, and this practice may add the benefit of loyalty to the organization as a motive for respondent cooperation. Other disadvantages of mail surveys are lack of control over exactly who answers the questions (it may or may not be the target respondent, assuming there is a single target), lack of control over the order in which the questionnaire is filled out, and the unavailability of an interviewer for respondents who cannot read well or do not understand the questions. One compensating factor is the greater privacy afforded respondents, which may lead to more candor, although evidence of this is still limited. Sometimes similar privacy is attempted in an interview survey by giving a portion of the questionnaire to respondents to fill out themselves and even providing a separate sealed envelope to mail back to the survey headquarters, thus guaranteeing that the interviewer will not read the answers. This strategy was used by Laumann et al. (1994) in a major national survey of sexual behavior, but no comparison with data obtained in a more completely private setting was provided. Tourangeau and Smith (1996) provide a different type of evidence by showing that respondents who answer directly into a computer appear more candid than do respondents who give answers to interviewers. Recently, the Internet has been investigated as a vehicle for self-administered surveys, although there are formidable problems of sampling in such cases.

Because of these difficulties, most surveys aimed at the general population employ interviewers to locate respondents and administer a questionnaire. Traditionally, this has been done on a face-to-face (sometimes called "personal") basis, with interviewers going to households, usually after a letter of introduction has been mailed describing the survey. The sample ordinarily is drawn by using "area probability" methods: To take a simple example, large units such as counties may be drawn first on a random basis, then from the selected counties smaller units such as blocks are drawn, and finally addresses on those blocks are listed by interviewers and a randomly drawn subset of the listed addresses is designated for the actual sample, with introductory letters being sent before interviewing is attempted. In practice, more than two levels would be used, and other technical steps involving "stratification" and "clustering" would be included to improve the efficiency of the sampling and data collection.
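The sketch below strips area probability sampling down to the logic just described: counties are drawn first, then blocks within the selected counties, then addresses within the selected blocks. The counts are arbitrary, and a real design would add stratification, clustering, and selection with probability proportional to size.

```python
import random

# A toy frame: 30 counties, each with 100 blocks of 50 listed addresses.
counties = {
    f"county_{c}": {
        f"block_{c}_{b}": [f"address_{c}_{b}_{a}" for a in range(50)]
        for b in range(100)
    }
    for c in range(30)
}

sample = []
for county in random.sample(list(counties), 5):              # stage 1: counties
    for block in random.sample(list(counties[county]), 4):   # stage 2: blocks
        sample.extend(random.sample(counties[county][block], 10))  # stage 3: addresses

print(len(sample))  # 5 counties x 4 blocks x 10 addresses = 200
```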

A major advantage of face-to-face interviewing is the ability of the interviewer to find the target respondent and persuade her or him to take part in the interview. Face-to-face interviewing has other advantages: Graphic aids can be used as part of a questionnaire, interviewers can make observations of a respondent's ability to understand the questions and of other behavior or characteristics of a respondent, and unclear answers can be clarified. The major disadvantage of face-to-face interviewing is its cost, since much of the time of interviewers is spent locating respondents (many are not at home on a first or second visit). For every actual hour spent interviewing, five to ten hours may be needed for travel and related effort. Furthermore, face-to-face surveys require a great deal of total field time, and when results are needed quickly, this is difficult to accomplish and may add more expense. Another disadvantage is the need for an extensive supervisory staff spread around the country, and yet another is that survey administrators must rely on the competence and integrity of interviewers, who are almost always on their own and unsupervised during interviews. This makes standardization of the interviewing difficult.

Increasingly since the early 1970s, face-to-face interviewing has been replaced by telephone interviewing, usually from a centralized location. Telephone surveys are considerably less expensive than face-to-face surveys, though the exact ratio is hard to estimate because they also are normally shorter, usually under forty-five minutes in length; the expense of locating people for face-to-face interviews leads to hourlong or even lengthier interviews, since these usually are tolerated more readily by respondents who are interviewed in person. Telephone surveys can be completed more rapidly than can face-to-face surveys and have the additional advantage of allowing more direct supervision and monitoring of interviewers. The incorporation of the computer directly into interviewing—known as computer-assisted telephone interviewing (CATI)—facilitates questionnaire formatting and postinterview coding, and this increases flexibility and shortens total survey time. Still another advantage of telephone surveys is the relative ease of probability sampling: Essentially random combinations of digits, ten at a time, can be created by computer to sample any telephone number in the United States (three-digit area code plus seven-digit number). There are a variety of practical problems to be overcome (e.g., many of the resulting numbers are nonworking, account must be taken of multiple phones per household, and answering machines and other devices often make it difficult to reach real people), but techniques have been developed that make such samples available and inexpensive to a degree that was never true of the area sampling required for face-to-face interviewing. Perhaps the largest problem confronting survey research is the proliferation of telemarketing, which makes many potential respondents wary of phone calls and reluctant to devote time to a survey interview.
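A bare-bones sketch of random-digit generation appears below; actual random-digit-dialing designs restrict numbers to assigned area codes and working exchanges and adjust for households with multiple phones, none of which is attempted here.

```python
import random

def random_telephone_number():
    """Generate one ten-digit number in the U.S. pattern: a three-digit
    area code plus a seven-digit local number. Area codes and exchanges
    beginning with 0 or 1 are not assigned, hence the 200-999 ranges."""
    area_code = random.randint(200, 999)
    exchange = random.randint(200, 999)
    line = random.randint(0, 9999)
    return f"({area_code}) {exchange}-{line:04d}"

print([random_telephone_number() for _ in range(3)])
```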

Because speaking on the telephone seems so different from speaking face to face, survey methodologists initially thought that the results from the two types of survey administration might be very different. A number of experimental comparisons, however, have failed to find important differences, and those that do occur may have more to do with different constraints on sampling (telephone surveys obviously miss the approximately 8 percent of American households without telephones and produce somewhat higher levels of refusal by the intended respondents). Thus, the remaining reasons for continuing face-to-face surveys have to do with the need for longer interviews and special additions such as graphic demonstrations and response scales. (Groves [1989] discusses evidence on telephone versus face-to-face survey differences, and Groves et al. [1988] present detailed accounts of methodological issues involving telephone surveys.)

Face-to-face and telephone surveys share one important feature: the intermediate role of the interviewer between the questionnaire and the respondent. Although this has many advantages, as was noted above, there is always the possibility that some behavior or characteristic of the interviewer will affect responses. For example, as first shown by Hyman (1954) in an effort to study the interview process, a visible interviewer characteristic such as racial appearance can have dramatic effects on answers. This is probably the largest of all the effects discovered, no doubt because of the salience and tension that racial identification produces in America, but the possibility of other complications from the interview process—and from the respondent's assumption about the sponsorship or aim of the survey—must be borne in mind. This is especially true when surveys are attempted in societies in which the assumption of professional neutrality is less common than in the United States, and some recent failures by surveys to predict elections probably are due to bias of this type.


THE SEQUENCE OF A SURVEY

Surveys should begin with one or more research problems that determine both the content of the questionnaire and the design of the sample. The two types of decisions should go hand in hand, since each affects the other. A questionnaire that is intended to focus on the attitudes of different ethnic and racial groups makes sense only if the population sampled and the design of the sample will yield enough members of each group to provide sufficient data for adequate analysis. In addition, decisions must be made early with regard to the mode of administration of the survey—whether it will be conducted through self-administration or interviewing and, if the latter, whether in person, by telephone, or in another way—since these choices also influence what can be asked. Each decision has its trade-offs in terms of quality, cost, and other important features of the research.

After these planning decisions, the development of the questionnaire, the pretesting, and the final field period take place. The resulting data from closed, or fixed-choice, questions can be entered directly in numerical form (e.g., 1 = yes, 2 = no, 3 = don't know) into a computer file for analysis. If open-ended questions—questions that do not present fixed alternatives—are used and the respondents' answers have been recorded in detail, an intermediate step is needed to code the answers into categories. For example, a question that asks the respondents to name the most important problems facing the country today might yield categories for "foreign affairs," "inflation," "racial problems," and so forth, though the words used by the respondents ordinarily would be more concrete. Finally, the data are analyzed in the form of tables and statistical measures that can form the basis for a final report.
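A minimal sketch of that coding step follows; the numeric codes and category labels are illustrative rather than a standard scheme, and in practice open-ended answers are assigned to categories by trained coders working from a written codebook.

```python
# Closed (fixed-choice) answers map directly to numeric codes.
CLOSED_CODES = {"yes": 1, "no": 2, "don't know": 3}

# Open-ended answers are first assigned to analyst-defined categories.
OPEN_CODEBOOK = {
    "foreign affairs": 1,
    "inflation": 2,
    "racial problems": 3,
}

closed_answers = ["yes", "don't know", "no"]
print([CLOSED_CODES[a] for a in closed_answers])  # -> [1, 3, 2]

verbatim = "prices just keep going up"
category = "inflation"                            # judgment made by a coder
print(OPEN_CODEBOOK[category])                    # -> 2
```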


MODIFICATIONS AND EXTENSIONS OF THE SURVEY METHOD

This discussion has concerned primarily the single cross-sectional or one-shot survey, but more informative designs are increasingly possible. The most obvious step now that surveys of the national population have been carried out for more than half a century is to study change over time by repeating the same questions at useful intervals. The General Social Survey (GSS) has replicated many attitude and factual questions on an annual or biennial basis since 1972, and the National Election Study (NES) has done the same thing in the political area on a biennial basis since the 1950s. From these repeated surveys, sociologists have learned about substantial changes in some attitudes, while in other areas there has been virtually no change (see Niemi et al. [1989] and Page and Shapiro [1992] for examples of both change and stability). An important variant on such longitudinal research is the panel study, in which the same respondents are interviewed at two or more points in time. This has certain advantages; for example, even where there is no change for the total sample in the distribution of responses, there may be counterbalancing shifts that can be best studied in this way.

Surveys are increasingly being carried out on a cross-national basis, allowing comparisons across societies, though usually with the additional obstacle of translation to be overcome. Even within the framework of a single survey in one country, comparisons across different types of samples can be illuminating, for example, in an important early study by Stouffer (1955) that administered the same questionnaire to the general public and to a special sample of "community leaders" in order to compare their attitudes toward civil liberties. Finally, it is important to recognize that although the survey method often is seen as entirely distinct from or even opposite to the experimental method, the two have been usefully wedded in a number of ways. Much of what is known about variations in survey responses caused by the form, wording, and context of the questions has been obtained by means of split-ballot experiments, while attempts to study the effects of policy changes sometimes have involved embedding surveys of attitudes and behaviors within larger experimental designs.


ETHICAL AND OTHER PROBLEMS

As with other social science approaches to the empirical study of human beings, surveys raise important ethical issues. The success of survey sampling requires persuading individuals to donate their time to being interviewed, usually without compensation, and to trust that their answers will be treated confidentially and used for purposes they would consider worthwhile. A related issue is the extent to which respondents should be told in advance and in detail about the content and aims of a questionnaire (the issue of "informed consent"), especially when this might discourage their willingness to answer questions or affect the kinds of answers they give (Singer 1993). The purely professional or scientific goal of completing the survey thus can conflict with the responsibility of survey investigators to the people who make surveys possible: the respondents. These are difficult issues, and there probably is no simple overall solution. There is a need in each instance to take seriously wider ethical norms as well as professional or scientific goals.

From within sociology, reliance on surveys has been criticized on several grounds. Sociologists committed to more qualitative approaches to studying social interaction often view surveys as sacrificing richness of description and depth of understanding to obtain data amenable to quantitative analysis. Sociologists concerned with larger social structures sometimes regard the survey approach as focusing too much on the individual level, neglecting the network of relations and institutions of societies. Finally, some see the dependence of surveys on self-reporting as a limitation because of the presumed difference between what people say in interviews and how they behave outside the interview situation (Schuman and Johnson 1976). Although there are partial answers to all these criticisms, each has some merit, and those doing survey research need to maintain a self-critical stance toward their own approach. However, the survey is the best-developed and most systematic method sociologists have to gather data. Equally useful methods appropriate to other goals have yet to be developed.


REFERENCES

Agresti, Alan, and Barbara Finlay 1997 Statistical Methods for the Social Sciences. Upper Saddle River, N.J.: Prentice Hall.

Converse, Jean M. 1987 Survey Research in the United States: Roots and Emergence, 1890–1960. Berkeley: University of California Press.

Dillman, Don A. 1978 Mail and Telephone Surveys: The Total Design Method. New York: Wiley.

Groves, Robert M. 1989 Survey Errors and Survey Costs. New York: Wiley.

——, Paul P. Biemer, Lars E. Lyberg, James T. Massey, William L. Nicholls II, and Joseph Waksberg 1988 Telephone Survey Methodology. New York: Wiley.

Hyman, Herbert H. 1954 Interviewing in Social Research. Chicago: University of Chicago Press.

Kalton, Graham 1983 Introduction to Survey Sampling. Beverly Hills, Calif.: Sage.

Kish, Leslie 1965 Survey Sampling. New York: Wiley.

Laumann, Edward O., Robert T. Michael, John H. Gagnon, and Stuart Michaels 1994 The Social Organization of Sexuality: Sexual Practices in the United States. Chicago: University of Chicago Press.

Niemi, Richard, John Mueller, and Tom W. Smith 1989 Trends in Public Opinion: A Compendium of Survey Data. Westport, Conn.: Greenwood Press.

Page, Benjamin I., and Robert Y. Shapiro 1992 The Rational Public: Fifty Years of Trends in Americans' Policy Preferences. Chicago: University of Chicago Press.

Payne, Stanley L. 1951 The Art of Asking Questions. Princeton, N.J.: Princeton University Press.

Rosenberg, Morris 1968 The Logic of Survey Analysis. New York: Basic Books.

Rossi, Peter H., and Steven L. Nock, eds. 1982 Measuring Social Judgments: The Factorial Survey Approach. Beverly Hills, Calif.: Sage.

Schuman, Howard, and Michael P. Johnson 1976 "Attitudes and Behavior." Annual Review of Sociology, vol. 2. Palo Alto, Calif.: Annual Reviews.

—— and Stanley Presser 1981 Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. New York: Academic Press.

Schwarz, Norbert, and Seymour Sudman, eds. 1996 Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. San Francisco: Jossey-Bass.

Singer, Eleanor 1993 "Informed Consent in Surveys: A Review of the Empirical Literature." Journal of Official Statistics 9:361–375.

Stouffer, Samuel A. 1955 Communism, Conformity, and Civil Liberties. Garden City, N.Y.: Doubleday.

Sudman, Seymour, and Norman M. Bradburn 1987 Asking Questions: A Practical Guide to Question Design. San Francisco: Jossey-Bass.

——, ——, and Norbert Schwarz 1996 Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Tourangeau, Roger, and Tom W. Smith 1996 "Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Content." Public Opinion Quarterly 60:275–304.

Turner, Charles, and Elizabeth Martin, eds. 1984 Surveying Subjective Phenomena, 2 vols. New York: Russell Sage Foundation.


Howard Schuman
