A survey is a data collection instrument that generates data through respondents’ answers to survey questions. Some researchers connect the term sample with the term survey, arguing that the primary use of surveys is to draw inferences about a population from which a sample of respondents has been surveyed. Surveys administered to relatively small, random samples permit the drawing of inferences about large populations.
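The inferential logic of random sampling can be made concrete with a short calculation. Under simple random sampling, the uncertainty of an estimated proportion depends on the sample size rather than the population size, which is why roughly a thousand respondents can describe a population of millions. The sketch below (Python, with hypothetical poll figures) computes the conventional 95 percent margin of error for a proportion.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a proportion estimated from a simple random sample."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 52% of 1,000 randomly sampled respondents favor a proposal.
moe = margin_of_error(0.52, 1000)
print(round(moe, 3))  # about 0.031, i.e. roughly +/- 3 percentage points
```

Note that quadrupling the sample size only halves the margin of error, which is why national polls rarely exceed a few thousand respondents.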
According to Jean Converse’s Survey Research in the United States (1987), historians generally believe the first surveys were used by governments to determine whom to tax and whom to draft into military service. Later surveys were used to document conditions of poverty, using open-ended questions and convenience samples. In the early twentieth century, opinion polls were conducted by newspaper organizations; again, convenience samples were employed.
As interest developed in measuring opinions and other subjective states, surveyors became interested in question wording. Since the 1930s, systematic efforts have been made to develop questions and answer choices on questionnaires that are relevant, understandable, and unambiguous.
Types of data collected by sample surveys include, for example: beliefs, attitudes, values, resource possession, socioeconomic status, time use, characteristics of members of social networks, and self-reports of behavior. Surveys frequently contain scales of items designed to measure concepts that are multidimensional. Many of these scales have been validated in prior research and have a solid track record in terms of reliability. Survey data also lend themselves to sophisticated types of statistical analyses.
All surveys, however, contain a certain amount of error. The term measurement error refers to those errors directly associated with the questions in the questionnaire. Two types of measurement error exist: systematic error (e.g., a respondent falsely reports drinking alcohol infrequently) and random error (e.g., a fatigued respondent mistakenly checks the wrong answer to a question).
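The practical difference between the two error types can be illustrated with a small simulation (a hypothetical sketch; the quantities and figures are invented). Systematic error biases the survey estimate no matter how many respondents are surveyed, whereas random error adds noise that largely averages out across a large sample.

```python
import random

random.seed(42)

# Hypothetical true values, e.g. actual drinks consumed per month.
true_values = [random.gauss(10, 2) for _ in range(10_000)]

# Systematic error: every respondent under-reports by 3 units (a constant bias).
systematic = [v - 3 for v in true_values]

# Random error: fatigue adds zero-mean noise (no bias, extra variance).
noisy = [v + random.gauss(0, 2) for v in true_values]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(true_values) - mean(systematic), 2))  # prints 3.0: the bias does not average out
print(round(mean(true_values) - mean(noisy), 2))       # near 0: random errors largely cancel
```

No increase in sample size removes the systematic component; only better question design or validation against external records can.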
Survey modes include: (1) self-administered questionnaires; (2) telephone and other communication-assisted interviewing surveys; and (3) personal or face-to-face interviews. These modes differ in terms of the amount of motivation a respondent must have in order to complete a given survey, cost, research staff requirements, and length and complexity. Personal interviews, for example, can be longer and more complex than self-administered questionnaires. Additional complexity is added to this picture by technological intermediary delivery devices, such as Internet/Web site surveys and computer-assisted interviewing. In addition, mixed-mode surveys, which use two or more survey types, are used to increase response rates across differing subpopulations and differing data collection needs. Thus some respondents fill out questionnaires while others are interviewed by telephone or in person. Mixed modes can also be used with the same respondents: for example, subjects are interviewed about their health and other topics and then asked to maintain a time diary over a several-day period. Optical scanning technology permits wider-scale distribution of surveys because of the labor savings it provides.
The type of survey employed depends on factors such as: (1) cost; (2) goals of the survey; (3) length and complexity of the instrument; and (4) accessibility of the population (related to cost). Personal interviews require trained, well-paid interviewers. Such skilled labor is relatively expensive when compared with the minimum wages offered to student workers on college campuses, a frequent source of interviewers for university surveys.
Surveys are conducted by individuals and research teams; often these surveys are conducted with a small sample, limiting generalization to larger populations. At the same time, federally funded national surveys have increased in number since the mid-twentieth century; these surveys are conducted across multiple points in time, making possible longitudinal studies and panel studies. The General Social Survey (GSS) has provided data on key social issues since 1972. The Current Population Survey (CPS), an excellent data source regarding employment issues, has collected data monthly since 1940. A time expenditure recall drawn from the CPS sample has been added in order to study trends in time use. Other surveys provide data on crime victimization (National Crime Victimization Survey), families (National Survey of Families and Households), and health (National Health Interview Survey; National Health and Nutrition Examination Survey).
Researchers are often unsure which answer choices to a given question cover the full range of possible valid answers. In addition, the provision of such answer choices by researchers is thought by some to bias the answers respondents select. Under such circumstances, open-ended questions are used. Such questions pose an inquiry but provide no guidance as to what the researcher considers an acceptable answer. The weaknesses associated with open-ended questions include the following: respondents must be highly motivated in order to fully articulate their thoughts, and respondents may ask for assistance in providing an answer. Interviewers are therefore trained in probing, a technique designed to guide the respondent toward an answer. However, if the probing comes across as directive, the respondent may be biased toward providing an answer that does not represent his or her actual view. There is also debate over the issue of standardized interviewing, in which the interviewer is required to stick to the “survey text” as written, versus flexible interviewing, which permits the interviewer to diverge from the survey text and reword a question when respondents are unable to understand it.
Early-twenty-first-century criticism observes that respondents’ expressed attitudes often conflict with their behavior. One of the issues is the discrepancy between the liberal attitudes that respondents frequently express toward ethnic groups and the behavior shown by, for example, whites toward blacks. Social desirability bias may account for this discrepancy. Equally plausible is the observation from social psychology that norms may conflict with attitudes unless those norms result from membership in a salient in-group, according to Katherine White and her colleagues in their 2002 study. Eduardo Bonilla-Silva, in his work Racism without Racists (2003), suggests the most important reason is that while it is less socially acceptable to express “prejudicial attitudes,” whites still discriminate against minority groups when their material interests are threatened. Other problems relate to cognitive issues such as memory: respondents typically have difficulty remembering details of mundane activities such as eating, and recalling activities that took place over longer time periods. Much experimental work is taking place in universities and other research settings to reduce survey error, including work on enhancing memory and reducing social desirability bias. Still other problems have to do with response rates: people are less willing than in the past to participate in survey research.
Social scientists, including those who test hypotheses using survey data, are frequently accused of making causal claims inappropriately. Surveys are most often used in a cross-sectional setting; that is, data are collected from respondents at one point in time. In his research published in 2004, Andrew Abbott describes this in terms of a “vision of causality quite dominant in U.S. quantitative research … arrays causes by their impact on that outcome, explicitly separating the immediate from the distant both in social time and social space” (Abbott 2004, p. 398). At the same time, survey data can be collected at multiple points in time, which allows the study of change. A number of large-scale surveys, including the General Social Survey and the Current Population Survey, permit such studies.
Time (temporal process) and context are frequently not captured by survey instruments. However, certain aspects of time can be captured by having respondents maintain time diaries. The Current Population Survey includes a twenty-four-hour time recall. In addition, survey data may be supplemented with information from other data sources (observations made of groups and neighborhoods; secondary data sources such as the Census of Population) that describe the group, neighborhood, and societal context, as research with multilevel statistical models has demonstrated.
Surveys generally fail to tap issues that may be relevant to the researcher’s goals but are not anticipated in the survey’s questions. Qualitative methods, including participant observation and open-ended interviewing, offer a means for capturing those issues. In some studies, researchers begin with open-ended interviews and then use that information (respondents’ word choices; issues that are important to respondents) to construct a survey instrument. Surveys also fail to capture historical context. To some degree this problem can be alleviated by using historical materials (including censuses and other surveys done in the past) to place survey findings in a broader context.
With the development of new national surveys designed to study socioeconomic circumstances and other aspects of society, the prospects for the availability of national-level data, collected at regular intervals, are good. Survey methodologists continue to find ways to reduce survey error, as Robert M. Groves and Don Dillman describe in their works. Other trends, however, suggest problems for survey researchers: refusal rates for traditional types of surveys have increased over time, and technology-assisted techniques, while offering new avenues for obtaining data from hard-to-reach respondents, appear to have a poor track record in terms of response rates.
SEE ALSO Data; Methods, Qualitative; Methods, Research (in Sociology); Survey
Abbott, Andrew. 2004. Process and Temporality in Sociology: The Idea of Outcome in Sociology. In The Politics of Method in the Human Sciences: Positivism and Its Epistemological Others, ed. George Steinmetz. Durham, NC: Duke University Press.
Bonilla-Silva, Eduardo. 2003. Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States. Lanham, MD: Rowman and Littlefield.
Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New York: John Wiley and Sons.
Groves, Robert M., Floyd J. Fowler Jr., Mick P. Couper, et al. 2004. Survey Methodology. Hoboken, NJ: Wiley.
House, James S., F. Thomas Juster, Robert L. Kahn, et al. 2004. A Telescope on Society: Survey Research and Social Science at the University of Michigan and Beyond. Ann Arbor: University of Michigan Press.
White, Katherine M., Michael A. Hogg, and Deborah J. Terry. 2002. Improving Attitude-Behavior Correspondence through Exposure to Normative Support from a Salient Ingroup. Basic and Applied Social Psychology 24 (2): 91–103.
Wm. Alex McIntosh