Introduction and Summary

A unique survey research experiment finds that public opinion polls, as they are typically conducted, do not understate conservative opinions or support for the Republican Party. Conservative critics have charged that such surveys are politically biased. The Pew Research Center's methodological study finds little evidence of this, but it does suggest that white hostility toward blacks and other minorities may be understated in surveys conducted over just a few days, as most opinion polls are.

In recent presidential election campaigns, poll critics have charged that media-sponsored public opinion surveys produce biased and inaccurate results. These polls, critics claim, are based on skewed samples that do not fully represent certain kinds of people or points of view. They say it is increasingly difficult for pollsters to get people to participate in telephone surveys now that the American public is beset by telemarketers and harried by time pressures. As a result, critics charge, national opinion polls are less reliable than they once were.

Most recently, for example, critics argued that polls taken during the last presidential campaign regularly overstated President Clinton’s lead over Republican challenger Bob Dole. Political analyst Michael Barone wrote that one theory explaining this bias is that “conservatives are more likely than others to refuse to respond to polls, particularly those polls taken by media outlets that conservatives consider biased.” (The Weekly Standard, March 10, 1997.) New York Times columnist William Safire added that most media polls were “grievously misleading,” not only exaggerating President Clinton’s lead in 1996 but also depressing turnout among dispirited Republicans. (The New York Times, December 17, 1997.)

Criticism of the national polls comes from other quarters as well. Scholars argue that the national polls cut too many corners, fielding surveys over a short period — often just a few days — to deliver immediate results, in contrast to the more rigorous and exhaustive surveys mounted by university research centers.

Few pollsters would dispute that it is increasingly difficult to conduct public opinion surveys, and most would readily admit that time pressures and reduced news media budgets compel them to make a number of methodological compromises. But they would also argue that their time-tested methods produce stable and reliable measures of public opinion, and that their record in forecasting elections, including the last one, is pretty good.

Designed to shed light on the debate, the Pew Research Center conducted two surveys that asked exactly the same questions. The first — the “standard survey” — used typical polling techniques, contacting 1,000 adults by phone in a five-day period beginning June 18. The second — the “rigorous survey” — was conducted over eight weeks beginning June 18. The longer time frame allowed for an exhaustive effort to interview highly mobile people and to gain the cooperation of people who were initially reluctant to participate in the survey.

In addition, many of the respondents in the rigorous survey received an advance letter announcing that an interviewer would be calling and offering a small monetary gift as a token of appreciation. The rigorous survey also used a strictly random method for selecting the person in each household to be interviewed, while the standard survey used a systematic, but non-random technique.
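The report does not spell out the two selection techniques here. A common strictly random approach is to choose uniformly from a roster of the household's adults, while a common systematic shortcut is a rule such as "the adult with the most recent birthday." The sketch below illustrates the contrast with a hypothetical roster; the specific rules shown are illustrative assumptions, not necessarily the ones used in either survey.

```python
import random

# Hypothetical household roster: name and most recent birthday (month, day).
adults = [("Alice", (3, 14)), ("Ben", (7, 2)), ("Cruz", (11, 30))]

def random_selection(roster):
    """Strictly random: every adult has an equal chance of selection."""
    return random.choice(roster)[0]

def last_birthday_selection(roster, today=(8, 15)):
    """Systematic: pick the adult whose birthday most recently passed.
    Deterministic given the roster and the date, hence non-random."""
    ordinal = lambda md: md[0] * 31 + md[1]            # rough day-of-year order
    days_back = lambda bday: (ordinal(today) - ordinal(bday)) % 372
    return min(roster, key=lambda a: days_back(a[1]))[0]

print(random_selection(adults))         # any of the three, at random
print(last_birthday_selection(adults))  # always "Ben" for this roster and date
```

The systematic rule is easy to administer over the phone, but because the outcome is fixed by the roster and the calendar, it can introduce subtle selection patterns that a truly random draw avoids.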

The Findings

A leading criticism of media polls is that they miss some people. If a survey fails to interview some segments of society, then those people’s opinions may not be fully reflected in the poll results. Today, most major survey organizations use a statistical procedure known as weighting to mathematically correct their poll results by compensating for those segments of society that they know to be underrepresented.
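To illustrate the mechanics, here is a minimal sketch of cell-based post-stratification weighting with hypothetical population shares; pollsters typically weight on several demographics at once, often by iterative raking, so this is a simplification rather than any organization's actual procedure.

```python
from collections import Counter

# Hypothetical population benchmarks (e.g., drawn from Census figures).
population_share = {"college": 0.25, "no_college": 0.75}

# Hypothetical sample: college graduates overrepresented relative
# to the population benchmark above.
respondents = ["college"] * 350 + ["no_college"] * 650

counts = Counter(respondents)
sample_share = {group: n / len(respondents) for group, n in counts.items()}

# A respondent's weight is the group's population share divided by its
# sample share: overrepresented groups get weights below 1,
# underrepresented groups get weights above 1.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

print(weights)  # {'college': ~0.714, 'no_college': ~1.154}
```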

These adjustments are typically designed to bring a survey sample in line with national figures on the basis of demographic measures. Much of the criticism of media polls, however, suggests they are unrepresentative of the nation in their measurement of political attitudes. The Pew Research Center experiment was designed to see who gets left out of a standard poll — and, more importantly, whether the excluded segment of the population differs politically from those included in a more rigorous survey.

The rigorous survey did a better job than the standard five-day poll in two ways: by reaching more households and by getting people in those households to participate in the survey. The rigorous survey was successful in making contact with 92% of the working telephone numbers in its sample. In contrast, the standard survey reached only 67%. The rigorous survey also completed more interviews among the people it reached, in many cases because people who initially refused to take part in the poll were called again and persuaded to participate. The rigorous survey achieved a cooperation rate of 79%, compared to a 65% cooperation rate in the standard survey. (Still, while the rigorous survey represented a substantial improvement, neither survey was successful in reaching everyone, since some people repeatedly refused and others were not available or did not answer the telephone.)
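One rough way to combine the two stages is to multiply the contact rate by the cooperation rate. This is a back-of-the-envelope approximation of our own; formal response-rate definitions, such as AAPOR's, account for the various case dispositions more carefully.

\[
0.92 \times 0.79 \approx 0.73 \ \text{(rigorous)}, \qquad 0.67 \times 0.65 \approx 0.44 \ \text{(standard)},
\]

suggesting the rigorous design ultimately yielded interviews at roughly three-quarters of the working numbers in its sample, versus well under half for the standard design.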

But as it turned out, the standard and rigorous surveys produced strikingly similar results. Despite the differences in the way the surveys were administered, the findings of the two polls barely differed. The surveys included more than 85 questions concerning media use, lifestyle and a range of political and social issues. Excluding several time-sensitive measures, just five questions showed statistically significant differences between the two surveys.1

On the majority of questions, the responses given by each sample differed by only 3 percentage points or less (see chart). The average difference was just 2.7 percentage points. To put this in perspective, the margin of error for each of the surveys — the amount of error that is likely to occur simply by chance — is 4 percentage points. This means the average difference between the two surveys on a typical question was actually less than the margin of error for either survey.2
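For reference, the textbook margin of error for a simple random sample of size \(n\) at the 95% confidence level, for a proportion near \(p = 0.5\), is

\[
\text{MOE} = 1.96\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.25}{1000}} \approx 3.1 \ \text{points}.
\]

The 4-point figure cited here is presumably rounded up and allows for design effects from weighting and clustering; that is our assumption, as the report does not show the calculation.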

Other differences between the two surveys were equally slight:

- The rigorous sample was slightly more affluent, somewhat better educated and included slightly more whites than the standard sample. But in most respects the two groups were the same — and, more importantly, basically representative of the U.S. population as a whole.

- Politically, there were few significant differences between the two groups. Those in the rigorous sample had slightly higher opinions of the Republican Party and were less sympathetic to racial minorities. But on a number of other questions — including party identification and vote in the 1996 presidential election — the rigorous sample was no more conservative than the standard sample.

- The people in the rigorous and standard samples did not differ in their media use, daily activities or feelings toward others.

In a few instances, significant differences between the two samples seem to reflect actual changes in public opinion between June, when both surveys began, and August, when the rigorous survey was completed. These differences underscore one of the main advantages of the standard five-day survey: shorter-term surveys are able to take a relatively quick snapshot of American opinion that is not affected by changes in public attitudes over time.

For example, 34% of those in the standard sample said Republicans and Democrats have been working together more to solve problems, rather than “bickering and opposing one another.” In contrast, significantly more — 40% — in the rigorous sample said Republicans and Democrats have been working together to solve problems. But this difference may reflect an actual change in public attitudes over the course of the summer, following the passage of a balanced budget bill in July. A separate Pew Research Center survey conducted in August found fully 43% saying the two parties have been working together more.

Overall, however, the two surveys consistently offer the same picture of American public opinion in the summer of 1997. The numbers may differ by two or three percentage points, but the basic story is the same. According to the rigorous survey, for example, 57% held a favorable opinion of Congress; according to the standard survey, 52%. Fully 58% said government is “wasteful and inefficient” in the rigorous survey; 59% agreed in the standard survey.

Race and Reluctant Respondents

These findings suggest that, for most topics, typical media polls do a good job of gauging public opinion. But results based on questions about racial issues may be more problematic. In fact, the Pew experiment suggests that accurately measuring racial antagonisms may be a problem in all survey research. This may help explain why pre-election polls have overestimated white support for black candidates in biracial elections.

On two of four questions involving racial issues, white respondents in the rigorous sample were noticeably less sympathetic toward blacks. For example, 64% of whites in the rigorous sample said blacks who can’t get ahead are responsible for their own condition, while just 26% blamed racial discrimination. This compares with a narrower 56% to 31% division on the question among whites in the standard sample.

These differences hint at what may be the biggest challenge facing pollsters who seek to measure public opinion on racial issues accurately. People who are reluctant to participate in telephone surveys appear to be somewhat less sympathetic to blacks and other minorities than those willing to respond to poll questions. This suggests that to improve the accuracy of surveys focusing extensively on racial issues, pollsters need to make an extra effort to obtain interviews with people who initially refuse to participate.

On race-related issues, the differences between white respondents who agreed to be interviewed when first called and those who initially refused are striking. Some 22% of those who cooperated at first contact held a “very favorable” opinion of blacks, compared with just 15% of those who initially refused. The pattern is similar for other minority groups as well.

The remainder of this report outlines the findings of the Pew Research Center experiment. The next four sections provide a detailed comparison of the standard and rigorous surveys, focusing on demographic differences, political attitudes, lifestyles and attitudes toward public opinion surveys. The report concludes with a more extensive analysis focusing on the structure of opinion within the two samples.

A number of survey researchers contributed to the planning and design of this experiment. We are particularly grateful to Scott Keeter, Robert Groves, Stanley Presser, Mark Schulman, Carolyn Miller and Mary McIntosh for their assistance.