Some General Questions

  1. Why am I never called to be polled?

    You have roughly the same chance of being called as anyone else living in the United States who has a telephone. This chance, however, is only about 1 in 154,000 for a typical survey by the Pew Research Center for the People & the Press. To obtain that rough estimate, we divide the current adult population of the U.S. (about 235 million) by the typical sample size of our polls (usually around 1,500 people). Telephone numbers for Pew Research Center polls are generated through a process that attempts to give every household in the population a known chance of being included. Of course, if you don’t have a telephone at all (about 2% of households), then you have no chance of being included in our telephone surveys.

    Once we’ve completed a survey, we adjust the data to correct for the fact that some individuals (e.g., those with both a cell phone and a landline) have a greater chance of being included than others. For more on how that’s done, see the discussion of weighting in our detailed methodology statement.
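
    The logic of that adjustment can be illustrated with a minimal sketch (hypothetical respondents and a deliberately simplified rule, not Pew Research's actual weighting code; real weights also account for household size, demographics and other factors):

    ```python
    # Respondents reachable through both the landline and the cell samples had
    # roughly twice the chance of being selected, so they get half the base weight.
    respondents = [
        {"id": 1, "has_landline": True,  "has_cell": False},  # landline-only
        {"id": 2, "has_landline": True,  "has_cell": True},   # dual user
        {"id": 3, "has_landline": False, "has_cell": True},   # cell-only
    ]

    for r in respondents:
        frames = int(r["has_landline"]) + int(r["has_cell"])
        r["base_weight"] = 1.0 / frames  # inverse of relative selection chance

    print(respondents)
    ```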

  2. Can I volunteer to be polled?

    While we appreciate people who want to participate, we can’t base our polls on volunteers. A survey of volunteers is a “non-probability sample,” and its results cannot be generalized to the public as a whole. The key to survey research is a random sample, in which every type of person has a known chance of having their views captured. Polls of volunteers violate this principle because only the people who happen to hear about a poll and choose to take part have any chance of being included. (See Why probability sampling for more information.) More specifically, the kinds of people who would volunteer for our polls are likely to be very different from the average American – at the least, they would probably be more politically interested and engaged.

  3. Why don’t your surveys ever reflect the opinions of people I know?

    Chances are you don’t hang out with a group of friends that represents everyone in America. Your friends, coworkers and family are probably like you in many ways. If you were to have a group of friends that represents all of the country, you would have acquaintances who are black, white, Asian, rich, poor, Muslim, Catholic, from the South, Northeast, etc., or any combination of those attributes. Few of us are lucky enough to have such a diverse group of friends.

  4. Why should I participate in surveys?

    You should participate in surveys for many reasons. Polls are a way for you to express your opinions to the nation’s leaders and the country as a whole. Public officials and other leaders pay attention to the results of polls and often take them into account in their decision-making. If certain kinds of people do not participate in the surveys, then the results won’t represent the full range of opinions in the nation.

  5. What good are polls?

    Polls seek to measure public opinion and document the experiences of the public on a range of subjects. The results provide information for academics, researchers, and government officials and help to inform the decision-making process for policy makers and others. Much of what the country knows about its media usage, labor and job markets, educational performance, crime victimization, and social conditions is based on data collected through polls.

  6. I’m on a “Do Not Call” list. Doesn’t that prevent you from calling me?

    No. Legitimate survey research is exempt from the Telemarketing Sales Rule, which was adopted by the Federal Trade Commission to fight fraud and protect consumers from harassment. The rule covers marketing but not opinion polling or market research that does not involve an effort to sell you something. Nonetheless, our telephone survey interviewing centers will honor any request not to be called.

  7. Do pollsters have a code of ethics? If so, what is in the code?

    The major professional organizations of survey researchers have very clear codes of ethics for their members. These codes cover the responsibilities of pollsters with respect to the treatment of respondents, their relationships with clients and their responsibilities to the public when reporting on polls.

    Most of the Pew Research Center’s pollsters belong to the American Association for Public Opinion Research (AAPOR) and subscribe to AAPOR’s code.

    Some good examples of a pollster’s Code of Ethics include:

    American Association for Public Opinion Research (AAPOR)
    Council of American Survey Research Organizations (CASRO)

  8. How are political polls different from market research?

    There are many similarities, but the main difference is the subject matter. Market research explores opinions about products and services, and measures your buying patterns, your awareness of particular products and services, and your willingness to buy them. Political polls typically focus on public policy issues and views about elected officials. Political polls also try to measure how voters are reacting to candidates in political campaigns and what issues are important to them in elections.

Collecting Survey Data

  1. How did you get my number?

    Most good telephone surveys of the general public use what is called a random digit dial (or “RDD”) sampling technique to generate the sample of phone numbers used in the survey. The goal is to ensure that your telephone has the same chance of being dialed as any other telephone in the United States. When using this type of telephone sample, pollsters do not know the names of the people who are called. For more information on our method of selecting telephone numbers, see Random digit dialing – our standard method.
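
    The idea behind RDD can be sketched in a few lines of code (the area-code/exchange blocks below are hypothetical placeholders; a real RDD frame is built from telephone-industry data on which blocks contain working numbers):

    ```python
    import random

    # Hypothetical working "blocks" (area code + exchange); illustrative only.
    blocks = ["202555", "312555", "415555"]

    def rdd_number():
        """Pick a block at random, then append four random digits, so every
        number within a covered block has the same chance of being dialed."""
        return random.choice(blocks) + f"{random.randrange(10000):04d}"

    print([rdd_number() for _ in range(5)])
    ```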

  2. How are people selected for your polls?

    Once numbers are selected through random digit dialing, the process of selecting respondents differs for landline and cell phone numbers. When interviewers reach someone on a landline phone, in a randomly chosen half of the sample they ask to speak with “the youngest male, 18 years of age or older, who is now at home,” and in the other half they ask to speak with “the youngest female, 18 years of age or older, who is now at home.” If no eligible person of the requested gender is at home, interviewers ask to speak with the youngest adult of the opposite gender who is now at home. This method of selecting respondents within each household improves participation among young people, who are often more difficult to interview than older people because of their lifestyles. Unlike a landline phone, a cell phone is assumed in Pew Research polls to be a personal device. This means that, for those in the cell sample, interviewers simply ask whether the person who answers the cell phone is 18 years of age or older to determine if that person is eligible to complete the survey.
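
    A simplified sketch of that within-household selection rule (illustrative code, not the actual interviewing software):

    ```python
    import random

    def landline_request():
        """Half the landline sample starts with the youngest male, half with the youngest female."""
        first = random.choice(["male", "female"])
        other = "female" if first == "male" else "male"
        return (f"Ask for the youngest {first}, 18 or older, now at home; "
                f"if none is available, ask for the youngest {other}, 18 or older, now at home.")

    def cell_request():
        """Cell phones are treated as personal devices, so only an age screen is needed."""
        return "Confirm the person answering is 18 or older, then interview that person."

    print(landline_request())
    print(cell_request())
    ```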

  3. What if I only have a cell phone – am I represented in your surveys?

    Nearly all of the surveys conducted by the Pew Research Center now include people who only have cell phones (see About Our Survey Methodology in Detail for more information). As the proportion of Americans who rely solely or mostly on cell phones has continued to grow, sampling both landline and cell phone numbers helps to ensure that Pew Research surveys represent nearly all adults. However, there are several challenges and extra costs associated with sampling cell phones and conducting cell phone surveys.

  4. Don’t you have trouble getting people to answer your polls?

    Yes. The percentage of people we interview – out of all those we try to interview – has been declining over the past decade or more. There are many reasons for this. Some of the decline stems from the fact that people are busier and harder to reach at home. Some has to do with the use of technologies such as caller identification, voice mail and privacy managers. And some is the result of a growing unwillingness on the part of some people to be interviewed. We have done a great deal of research on whether declining response rates harm the accuracy of polls. Fortunately, there is, as yet, little evidence that nonresponse is creating a serious problem for the validity of polls. (Also see The problem of declining response rates for more information.)

  5. What about people who don’t have any telephone service?

    Unfortunately, for most of our surveys, people who do not have telephones are not included in the sampling frame. Because of this, they have no chance of being included in telephone surveys. Only about 2% of households have no telephone service. Without using in-person interviewing or a mail survey, there is no way to reach these phoneless households. Statistical weighting of our telephone samples helps to correct for the omission of households without any telephone service, but some bias undoubtedly remains for certain kinds of questions, especially for surveys focused on low-income populations. Because people in households with no telephone service are less likely than others to vote, their omission has not seriously damaged the accuracy of pre-election polls. This is an issue of continuing concern to pollsters. It is the subject of a great deal of ongoing research.

Election Polling

  1. Are election polls accurate?

    Interest in election polling surged during the highly competitive presidential elections of 2004 and 2008. In the last several election cycles, most national telephone polls have been very accurate. The National Council on Public Polls compiles the election forecasts of the major national polls, and in both 2004 and 2008, these estimates were very good predictors of the final vote.

  2. How do you know who is really going to vote?

    One of the most difficult aspects of conducting election polls is determining whether a respondent will actually vote in the election. Different pollsters use different sets of questions to help identify likely voters. Overall, the aim of defining likely voters is not to predict whether individuals will vote, but to accurately estimate the preferences of the electorate. The analysis Understanding Likely Voters provides more detail about the likely voter scale that the Pew Research Center for the People & the Press used during the 2008 election.

    Screening Likely Voters: A Survey Experiment describes a study conducted by the Pew Research Center to identify the questions that best predict whether an individual will turn out to vote and how best to model the electorate.
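
    As a rough illustration, cutoff-style likely voter scales work along these lines (the items, scoring and cutoff below are hypothetical, not the actual Pew Research scale described in the links above):

    ```python
    def likely_voter_score(resp):
        """Add a point for each turnout-related answer; higher scores = more likely to vote."""
        return (resp["registered"]
                + resp["voted_in_last_election"]
                + resp["follows_campaign_closely"]
                + resp["certain_to_vote"])

    respondent = {"registered": 1, "voted_in_last_election": 1,
                  "follows_campaign_closely": 0, "certain_to_vote": 1}

    # Respondents at or above a cutoff chosen to match expected turnout are treated
    # as "likely voters" when estimating the preferences of the electorate.
    CUTOFF = 3
    print("likely voter" if likely_voter_score(respondent) >= CUTOFF else "not a likely voter")
    ```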

  3. Does an early lead in the polls usually hold up?

    This commentary provides an analysis of whether early front-runners are likely to capture their party’s nomination.

    Does an early lead in the polls usually hold up? March 4, 1999

  4. What is a post-convention “bounce”?

    The following commentaries discuss how a candidate’s ratings may rise during or just after his or her party’s convention, and why these gains are often short-lived.

    The Bounce Effect September 11, 2008

    Beware of the Bounce August 4, 2000

  5. So who’s ahead in the polls?

    This commentary addresses how different polls on the presidential horse race can produce different results, particularly early in an election year.

    How Reliable Are the Early Presidential Polls? February 14, 2007

    So Who’s Ahead? April 14, 2000

  6. What is the “generic ballot” test?

    The generic ballot question asked by the Pew Research Center is: “If the elections for U.S. Congress were being held TODAY, would you vote for the Republican Party’s candidate or the Democratic Party’s candidate for Congress in your district?” The order of “Republican Party’s candidate” and “Democratic Party’s candidate” is randomized. Respondents who are unsure are asked a follow-up question about which party’s candidate they lean toward.

    This commentary discusses the “generic ballot,” a measure of voters in national surveys who say they would vote for either the Republican or Democratic candidate for the U.S. House of Representatives in their district.

    Why the Generic Ballot Test? October 1, 2002

  7. Are generic Congressional vote measures less accurate in presidential years?

    This commentary addresses how the generic measure of partisan support (see What is the “generic ballot” test?) can be less accurate in presidential election years than in off-years.

    Generic Congressional Measures Less Accurate in Presidential Years September 18, 1996

Questionnaire Design

  1. Do people lie to pollsters?

    We know that not all survey questions are answered accurately but it’s impossible to say that any given inaccurate answer necessarily involves lying. People may simply not remember their behavior accurately.

    More people say they voted in a given election than voting records indicate actually cast ballots. In some instances, researchers have actually verified the voting records of people who were interviewed and found that some of them said they voted but did not. Voting is generally considered a socially desirable behavior, just like attending church or donating money to charity. Studies suggest these kinds of behaviors are overreported. Similarly, socially undesirable behaviors such as illegal drug use, certain kinds of sexual behavior or driving while intoxicated are underreported.

    We take steps to minimize errors related to questions about socially desirable or undesirable activities. For example, questions about voter registration and voting usually acknowledge that not everyone takes part in elections. The Pew Research Center voter registration question is worded this way:

    “These days, many people are so busy they can’t find time to register to vote, or move around so often they don’t get a chance to re-register. Are you NOW registered to vote in your precinct or election district or haven’t you been able to register so far?”

  2. Do people really have opinions on all of those questions?

    People have opinions or attitudes on just about everything. Still, “I don’t know” is a legitimate answer, and people who are unsure, have no opinion or choose not to answer a question for whatever reason are always provided that opportunity.

  3. Why do you typically ask presidential approval first in the survey?

    The presidential approval question is a very important political indicator. It is a useful summary measure of the president’s standing with the public, and as such, can influence his power in dealing with the Congress, business leaders and foreign countries. We typically ask it first in the survey because we do not want any other questions to affect respondents’ answers to that question.

    For example, if the survey first asks about the economy and then asks about presidential approval, the respondent may still be thinking about the economy when answering the latter question. While economic conditions may be important in assessing the president’s overall performance, so are many other issues. If the respondent is only thinking about the economy because we brought up the issue, his or her response about the president may be biased by what we call a context effect: in this case, we would be priming the respondent to consider the economy in an assessment of the president.

    However, asking presidential approval first can also affect later measures in the survey. We often stop asking presidential approval first in election years because of the potential influence it may have on measures of the horserace. We have also found that asking presidential approval before general satisfaction can affect people’s opinions about the way things are going in the country. The reverse can also be true – people’s assessment of their general satisfaction can affect their rating of the president (see Question order for more information about this experiment).

  4. Why are demographic questions asked at the end of the survey?

    Demographic questions tend to be boring to respondents and also can seem inappropriate and threatening if asked before a level of trust is established in the interview. The interviewer wants to engage the respondent from the beginning of the conversation so that the respondent is interested in the survey and will continue to answer questions. If the interviewer started the survey by asking the respondent’s age or sex, the respondent might get bored and decide not to continue with the survey. In addition, if someone called you and first started asking how much money you make, your race, how many children you have, etc., you might be put off by these personal questions and decide not to continue with the survey.

    You can view the most commonly asked demographic questions in Pew Research Center for the People & the Press surveys, in the order we ask them, here.

  5. What’s all this rotating and randomizing going on in your questionnaires?

    Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. We know that answers to questions are sometimes affected by the questions that precede them. By presenting questions in a different order to each respondent, we ensure that, across the whole sample, each question is asked in each position (first, last or anywhere in between) about the same number of times. This does not eliminate the potential impact of earlier questions on a given question, but it does ensure that this bias is spread randomly across all of the questions or items in the list.

    The same principle applies to the order of response options within a single question. For many questions, we randomize the order in which the answer choices are presented. That way, any effect that the order of the answer choices has on responses is spread randomly across the options.

    Also see Question order and Order of answer categories for more information.
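
    In practice, rotation and randomization amount to something like the following sketch (hypothetical items; the actual interviewing system handles this automatically):

    ```python
    import random

    questions = ["economy", "health care", "immigration", "environment"]
    options = ["Republican Party's candidate", "Democratic Party's candidate"]

    def build_interview():
        """Give each respondent an independently shuffled question order and
        answer-option order, so any order effects are spread randomly across the sample."""
        return random.sample(questions, len(questions)), random.sample(options, len(options))

    print(build_interview())
    ```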

  6. How is form 1 different from form 2?

    We often write two versions of a question and ask half of the survey sample one version and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form 1 or form 2, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, a significant difference in the answers between form 1 and form 2 tells us that the difference is a result of how we worded the two versions.

    For example, in January 2003, we asked this question on form 1: “Would you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule?” On form 2, we asked: “Would you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties?” In this experiment, the form 1 question found 68% in favor of removing Hussein from power. But the mention of thousands of U.S. casualties in form 2 led to far fewer respondents supporting military action: only 43% said they favored removing Hussein under those circumstances. For more information on question wording experiments we have conducted, see Question wording.
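
    A minimal sketch of how such a split-form assignment works (illustrative code only; the percentages above come from the actual January 2003 survey):

    ```python
    import random

    FORM_1 = ("Would you favor or oppose taking military action in Iraq "
              "to end Saddam Hussein's rule?")
    FORM_2 = ("Would you favor or oppose taking military action in Iraq "
              "to end Saddam Hussein's rule even if it meant that U.S. forces "
              "might suffer thousands of casualties?")

    def assign_form():
        """Randomly give each respondent one wording, so the two half-samples are
        essentially identical and any gap in results reflects the wording itself."""
        return random.choice([("form 1", FORM_1), ("form 2", FORM_2)])

    print(assign_form())
    ```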

    We also have different forms of the questionnaire so we can ask more questions than we would otherwise be able to ask. If we determine that half of the sample will include enough interviews for a reliable estimate, we will often ask some questions of only half of the sample. That allows us to include more questions on the survey without burdening any individual respondent with a longer interview.