Polling involves gathering information by asking people to report their beliefs, attitudes, and behaviors, and it serves several major purposes. First, it is one of the most widely used techniques in social science research, of particular interest to political scientists, who use surveys such as the University of Michigan’s American National Election Studies to analyze relationships between political attitudes and other attitudes, voting behavior, and participation. Second, polling has numerous applications in business through market research studies to assess customer satisfaction, identify new markets, and identify new prospective customers. Third, polling is applied in campaign politics. Party and campaign organizations frequently use tracking polls to identify candidates’ standing, strengths, and weaknesses during campaigns. Fourth, polling is used in policy analysis and program evaluation. Nonprofit organizations and local, state, and national governments are often interested in their clients’ opinions of their services and programs, information that is useful in program assessment and evaluation. The political scientists Barbara Bardes and Robert Oldendick (2007) provide an especially extensive discussion of the uses of opinion polls. Throughout this entry, the terms poll and survey are used interchangeably.
During the twentieth century polling techniques became much more scientific, spurred in part by pollsters’ failed efforts to predict the presidential election results of 1936 and 1948. As noted by the political scientists Robert Erikson and Kent Tedin (2005), in 1936 a straw poll from Literary Digest magazine predicted that Republican candidate Alf Landon (1887–1987) would defeat Democratic president Franklin D. Roosevelt (1882–1945). The poll was off by nearly 20 percentage points, as Roosevelt won handily. Responses to the survey heavily overrepresented the wealthiest (and heavily Republican) groups of Americans: respondents were drawn from automobile registration lists and telephone directories, but during the Great Depression most automobile and telephone owners were wealthy. In 1948 preelection polls predicted that Republican candidate Thomas Dewey (1902–1971) would defeat Democratic president Harry S. Truman (1884–1972). These polls, too, relied on flawed sampling that overrepresented the better-off and, as a result, Republicans. Chastened by these mistakes and seeking to avoid future ones, pollsters developed more rigorous, scientific polling methods.
Scientific polls have several major characteristics that distinguish them from nonscientific surveys. First and foremost, scientific polls use samples of respondents that mirror the larger population under study. In a large, diverse nation like the United States, interviewing every American adult is impossible owing to prohibitive costs, resource constraints, and time limits. Thus pollsters rely on samples of adults, selected randomly so that all individuals have an equal chance of being included in the sample. This random sampling process usually yields a sample that closely reflects the characteristics of the larger population; that is, the sample is representative of the larger population. Statistically, samples are most likely to be representative when random sampling methods are used and the sample size (the number of completed surveys) approaches or exceeds one thousand, with larger samples producing more precise results.
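The statistical logic behind random sampling can be illustrated with a short simulation. The sketch below uses an invented population of one million adults, 52 percent of whom hold some opinion; a simple random sample of one thousand respondents usually lands within a few percentage points of that true share.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical population of 1,000,000 adults; 52% hold a given opinion.
# (These figures are invented for illustration.)
population = [1] * 520_000 + [0] * 480_000

# Simple random sample: every individual has an equal chance of selection.
sample = random.sample(population, 1000)
sample_share = sum(sample) / len(sample)

# The sample proportion typically falls close to the true 0.52.
print(round(sample_share, 3))
```

Rerunning with different seeds shows the sample share fluctuating around 0.52, with the fluctuation shrinking as the sample size grows.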
Second, scientific polls use survey questions that are carefully constructed, clear and nonconfusing, and free of biased or “leading” language. Confusing language can sometimes produce major polling surprises. A 1993 Gallup Poll sponsored by the U.S. Holocaust Memorial Museum in Washington, D.C., revealed that 22 percent of Americans either were unsure of or doubted the Nazi extermination of the Jews during the Holocaust. This stunning finding was soon called into question, however. Close examination revealed a confusing double negative in the survey question. Follow-up polling using a revised question showed more comforting results: Only 1 percent of Americans actively doubted the Holocaust had happened, with an additional 8 percent unsure. The sociologists Howard Schuman and Stanley Presser (1996) offer extensive evidence of the impact of question wording on survey results. Their research and others’ shows that tone of wording, question ordering, question context (or lack thereof), and differences in response formats (e.g., a three-point scale versus a seven-point scale) can all significantly affect survey responses. How a question is asked, then, definitely shapes the answers received.
A third element of scientific polling is accurate and thorough reporting of results. Scientific surveys include a statement of how the poll was conducted and what its limitations are. Such a disclosure statement should include the number of completed surveys; the interviewing techniques used (in-person, telephone, or mail questionnaires); how respondents were selected (random sampling is best); the survey’s margin of error and confidence level, two numbers indicating how well results are likely to extend to the general population; any additional survey techniques, such as weighting of respondents, variations in question wording, or interviewer characteristics; and the limitations of the survey. Above all else, scientific polling means that the pollster seeks to accurately measure attitudes, opinions, and behaviors, not influence them.
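The margin of error in such a disclosure statement follows directly from the sample size. A minimal sketch of the standard calculation for a sample proportion, assuming simple random sampling and a 95 percent confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    n: number of completed surveys
    p: assumed proportion (0.5 is the conservative worst case)
    z: critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-respondent poll carries a margin of error of about 3.1 points.
print(round(100 * margin_of_error(1000), 1))  # 3.1
```

This is why sample sizes near one thousand are so common: quadrupling the sample to four thousand only halves the margin of error, a steep cost for modest extra precision.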
Some surveys are not scientific, for varying reasons. Some news media, such as MSNBC, have Web sites where viewers can answer an online survey. These are not scientific surveys, as they lack the key ingredient of random sampling. Scientific polls do not allow people to self-select into completing the survey. Similarly, mail-in surveys found in some magazines and call-in polls used on some television shows are unscientific. Other “surveys” sponsored by political parties, campaigns, and interest groups are unscientific because they frequently use “loaded” questions that (usually not subtly) encourage some responses over others. At best these qualify as “pseudo-polls” because their objective is not to accurately measure opinion but to arouse support for the sponsoring party or group or anger at political opponents. A 1993 TV Guide survey sponsored by Ross Perot contained clearly loaded questions, including “Should laws be passed to eliminate all possibilities of special interests giving huge sums of money to candidates?” Such loaded questions provide virtually no meaningful information, but they do provide examples of how some surveys fall well short of scientific standards for measuring public opinion.
An additional limitation of polling is that some respondents face questions they prefer not to answer, resulting in self-censorship, which can take several forms. Someone contacted by a pollster may refuse to answer some questions, refuse the entire survey, or give insincere answers. Insincere responses are especially likely on sensitive subjects, such as past drug use, sexual activity, or racial attitudes, where some respondents answer falsely to give more “socially desirable” responses. The sociologist Eduardo Bonilla-Silva (2006) studied white Americans’ discourse on racial issues and found a prevalent “color-blind racism” in which many whites deny holding racist attitudes, contending that racism is “a thing of the past” and that race does not affect their attitudes and behaviors. This discourse perpetuates white dominance: it denies continuing racial discrimination, reveals negative racial stereotypes (such as beliefs that minorities, especially blacks, tend to be lazy, violent, and lacking in self-restraint), and attributes racial-group differences in income, housing, education, crime, and other areas to individual choices or market forces that have nothing to do with race. The political scientist Martin Gilens (1999) found that white Americans’ opposition to welfare is frequently driven by racial stereotypes that welfare recipients are usually black and that blacks are often lazy and shiftless, preferring to collect handouts rather than work.
The political scientists Jon Hurwitz and Mark Peffley (1997) found that white attitudes favoring punitive anti-crime policies are often driven by stereotypes of blacks as violent and disposed to criminal acts. Similarly the political scientists Joe Soss, Laura Langbein, and Alan Metelko (2003) studied white Americans’ attitudes toward the death penalty. They found that racial prejudices were by far the single strongest explanation for whites’ death penalty attitudes, especially in areas where blacks comprise a larger share of the population. In all these cases, white attitudes on issues that appear non-race-related on the surface are suffused with racial stereotypes. But few whites in the early twenty-first century would admit they hold negative racial attitudes (“I’m not racist”) or that those attitudes influence policy preferences. Social scientists must often use creative methods, such as unobtrusive survey questions on racial attitudes or experiments that vary question wording within surveys, to demonstrate the racial components underlying these attitudes.
Polling has a central place in political science research. Academic survey research centers exist at major universities, such as the University of Chicago, the University of Michigan, and the University of California at Berkeley; these frequently sponsor nationwide scientific surveys. Survey research centers in many states conduct additional polling. Collectively the polling conducted by these centers yields invaluable data for political scientists. For example, a researcher wishing to examine how racial stereotypes or beliefs in biblical inerrancy affect voting can use data from the University of Michigan’s American National Election Studies, which measure these and many other social science variables. Statewide surveys, such as the Arkansas Poll sponsored by the University of Arkansas, or regional surveys, such as the Southern Focus Poll sponsored by the University of North Carolina at Chapel Hill, provide further data that political scientists find useful in researching attitudes in a state or region of the country. Although polling outside the United States presents many additional challenges, there is increasing demand for cross-national polling data, including data from Middle Eastern, Asian, and African nations. The World Values Survey, sponsored by multiple universities worldwide, has provided polling data from more than eighty nations since 1981. These data are opening new avenues for political scientists to better understand public opinion not just in the United States but worldwide.
SEE ALSO Attitudes; Attitudes, Political; Attitudes, Racial; Elections; Hypothesis and Hypothesis Testing; Polls, Opinion; Psychometrics; Public Opinion; Survey; Surveys, Sample; Voting
Bardes, Barbara A., and Robert W. Oldendick. 2007. Public Opinion: Measuring the American Mind. 3rd ed. Belmont, CA: Thomson Wadsworth.
Bonilla-Silva, Eduardo. 2006. Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States. 2nd ed. Lanham, MD: Rowman and Littlefield.
Erikson, Robert S., and Kent L. Tedin. 2005. American Public Opinion: Its Origins, Content, and Impact. 7th ed. New York: Pearson Longman.
Gilens, Martin. 1999. Why Americans Hate Welfare: Race, Media, and the Politics of Antipoverty Policy. Chicago: University of Chicago Press.
Hurwitz, Jon, and Mark Peffley. 1997. Public Perceptions of Race and Crime: The Role of Racial Stereotypes. American Journal of Political Science 41 (2): 375–401.
Schuman, Howard, and Stanley Presser. 1996. Questions and Answers in Attitude Surveys. Thousand Oaks, CA: Sage.
Soss, Joe, Laura Langbein, and Alan R. Metelko. 2003. Why Do White Americans Support the Death Penalty? Journal of Politics 65 (2): 397–421.
POLLING is a form of surveying conducted by canvassing or questioning a universe. A universe can consist of a particular group, such as steelworkers, or a more general population, such as the public in an election year. Polling dates back to 1824 in the United States, when two newspapers, the Harrisburg Pennsylvanian and the Raleigh Star, attempted to predict the presidential election by use of "show votes." By the twentieth century, polls were being taken by magazines, such as Farm Journal (1912) and Literary Digest (1916). However, these polls were mostly local in scope. The first major national poll was conducted during World War I and asked participants whether the United States should become involved in the war.
In 1936, the process of polling would change forever. George H. Gallup, founder of the American Institute of Public Opinion (1935), had issued a challenge to Literary Digest, claiming that he could more accurately predict the outcome of that year's presidential election. At the time, this seemed foolhardy, for the Literary Digest had correctly predicted every presidential election since 1916. Confident in his methods, Gallup had developed a system of interviewing based on quota samples, which employed a relatively small number of people to mathematically determine the views of the public at large. He came up with fifty-four different questions and considered each question demographically, with such key determinants as age, sex, income, and region. The Literary Digest, meanwhile, had conducted an old-fashioned straw poll, based on telephone and car-buyer lists. In the past, such lists had been serviceable, but in 1936, during the Great Depression, they proved to be heavily biased in favor of the Republican candidate, Alfred M. Landon. For this reason, the Literary Digest, having predicted a landslide victory for Landon, was upstaged by the audacious Gallup, who correctly predicted another victory for Franklin D. Roosevelt. Other, lesser-known forecasters, such as Elmo Roper and Archibald Crossley, had also predicted Roosevelt's victory using similar sampling methods.
However, even Gallup would be proven wrong in the presidential election of 1948. In that year, all major pollsters predicted that the Republican candidate, Thomas E. Dewey of New York, would defeat Harry S. Truman, the incumbent president. Gallup had used the same sampling techniques as before but made a critical mistake by concluding his poll weeks before Election Day. Furthermore, Gallup had made the incorrect assumption that the "undecided" votes would split in much the same way as the "decided" ones. This proved untrue, as most of the "undecideds" either voted for Truman or did not vote at all.
Gallup would learn from his mistakes and emerge stronger after the election. He improved his sampling techniques, taking into account a greater number of influences and reanalyzing the effects of the inner city and other regions that, to his undoing, his interviewers had neglected in 1948. Gallup also made certain that polling was done continuously through Election Day, a process known as tracking, and that the likelihood of a person actually voting was taken into consideration.
With these improvements, Gallup's organization was able to accurately predict future elections. Between 1952 and 2000, the Gallup poll, as it came to be known, achieved an average deviation of just over 2 percentage points in presidential elections. In the controversial 2000 election between Al Gore and George W. Bush, Gallup's final preelection prediction called the race a "statistical dead heat," meaning that the candidates were separated by less than the poll's margin of error of plus or minus 3 percentage points.
Gallup also introduced polling to social issues. He conducted polls on such complicated topics as the Vietnam War and education. He felt that polls gave the people a voice, and that they were therefore an important aspect of democracy.
Many, however, criticize polling, claiming that it has a "bandwagon effect" on voters, and too much control over politicians. Yet polling continues to play an important role in the political process.
Polling techniques are also extensively used in industry to conduct market research. Companies use sampling in order to determine what products consumers are willing to buy. Such techniques may include random sampling, in which everyone is a potential candidate for an interview; stratified sampling, in which sampling candidates are divided into nonoverlapping groups; quota sampling, random sampling subject to demographic controls; and cluster sampling, in which "clusters," or groups, are selected from various sectors of the population, such as the middle class or working class.
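These sampling designs can be sketched in a few lines of code. The example below illustrates proportional stratified sampling; the stratum names and sizes are invented for illustration only.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Hypothetical population of 1,000 consumers divided into
# non-overlapping strata (sizes invented for illustration).
strata = {
    "urban":    list(range(0, 600)),    # 60% of the population
    "suburban": list(range(600, 900)),  # 30%
    "rural":    list(range(900, 1000)), # 10%
}

def stratified_sample(strata, total):
    """Proportional stratified sample: each stratum contributes
    respondents in proportion to its share of the population,
    drawn at random within the stratum."""
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        k = round(total * len(members) / pop_size)
        sample.extend(random.sample(members, k))
    return sample

s = stratified_sample(strata, 100)
print(len(s))  # 100 respondents: 60 urban, 30 suburban, 10 rural
```

Compared with pure random sampling, stratification guarantees that each group is represented in its correct proportion rather than merely in expectation.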
Polling is a form of time-division multiplexing. The precise polling strategy used depends on the application. In roll-call polling the primary station addresses each secondary station in turn. Some stations may be addressed more often than others if their response-time requirements or traffic loads are heavier. Hub polling is used to minimize line turnaround delays on half-duplex multidrop lines. The primary station polls the station at the opposite end of the line, which transmits any data it has and then polls the next closest station. This process repeats until control reaches the primary station again. Since data flows in one direction only, from the outermost nodes toward the primary station, the only turnaround delays occur when the primary station wishes to transmit.
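Roll-call polling can be sketched as a simple loop. In this toy simulation the station names, queue contents, and the double-weighting of one station are all invented for illustration; the point is that the primary addresses each secondary in a fixed order, with a heavily loaded station appearing more than once per cycle.

```python
from collections import deque

# Polling order for one cycle; station "A" carries heavier traffic,
# so the primary addresses it twice per cycle.
poll_order = ["A", "B", "A", "C"]

# Pending outbound frames queued at each secondary station.
queues = {"A": deque(["a1", "a2"]), "B": deque(["b1"]), "C": deque()}

def run_cycle():
    """One roll-call cycle: the primary polls each station in turn;
    a polled station transmits one queued frame, or nothing if idle."""
    received = []
    for station in poll_order:
        if queues[station]:
            received.append(queues[station].popleft())
    return received

frames = run_cycle()
print(frames)  # ['a1', 'b1', 'a2'] -- "C" had nothing to send
```

Hub polling differs only in who issues the next poll: instead of control returning to the primary after each station, each secondary passes the poll directly to its neighbor, saving line turnarounds.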
Polling is not suitable for situations where the response delay is large, as is the case in satellite transmission systems.