Determining voter preference among the candidates running for office would appear to be a relatively simple task: just ask voters whom they plan to support on Election Day. In fact, differences in how this question is asked and where it is placed in the questionnaire can affect the results. While most voters have usually made up their minds and are not likely to be affected by how the question is posed, many others have given less thought to the campaign or are genuinely ambivalent about the choices. For these voters, certain features of the question can make a difference.

The questions shown in blue type in Pew Research Election Questions were used by the Pew Research Center for the People & the Press in its final poll of the 2008 presidential election to measure voter preference. The particular features of these questions reflect several choices:

These features are an effort to make the presentation of the options as similar as possible to what voters would actually experience when casting their ballots. Because Nader and Barr were not on the ballot in all states, respondents were asked whether they favored these candidates only in the states where they were listed as an option. And the randomization of the order of the two major party tickets reflects the fact that the order of the ballot may vary in different locations. This effort is not perfect, however, since not all states list the party affiliation of the ticket and not all states randomize the ballot order. In addition, there are often other candidates on the ballot. The Pew Research Center and most other national polling organizations make a judgment as to which third-party tickets should be included in their survey questions.
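The two practices described above can be sketched in code. This is a minimal, hypothetical illustration of how a survey script might assemble the answer options for one respondent: the two major-party tickets are presented in random order, and third-party tickets are offered only in states where they appear on the ballot. The candidate strings and the ballot-access table are illustrative placeholders, not Pew's actual question wording or the real 2008 ballot-access lists.

```python
import random

# Major-party tickets, read in randomized order to each respondent.
MAJOR_TICKETS = [
    "Barack Obama and Joe Biden, the Democrats",
    "John McCain and Sarah Palin, the Republicans",
]

# Illustrative ballot-access data (NOT the actual 2008 lists):
# which states list each third-party candidate on the ballot.
THIRD_PARTY_ACCESS = {
    "Ralph Nader": {"CA", "TX", "OH"},
    "Bob Barr": {"TX", "GA"},
}

def candidate_options(state, rng=random):
    """Return the answer options read to a respondent in `state`."""
    options = MAJOR_TICKETS[:]
    rng.shuffle(options)  # ballot order varies by location
    for name, states in THIRD_PARTY_ACCESS.items():
        if state in states:
            options.append(name)
    return options
```

For a respondent in, say, Georgia, this sketch would yield the two major tickets in random order followed by Bob Barr, while omitting Nader, mirroring the state-by-state filtering the text describes.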

Pollsters face an even more difficult challenge in primary elections, where the number of candidates is often very large and the list may include names unfamiliar to voters. Long lists of candidates can create difficulties in a telephone survey, and the effect of a candidate's position on the list can be more consequential than in a general election contest, where fewer candidates are listed. In general, our practice in primary elections is to read all of the candidates that remain in the race, randomizing the order of the names.

Two other choices in the Pew Research Center’s election questions are important to note:

The remaining questions in the series shown in Pew Research Election Questions are used to gauge strength of support (Q3b) and certainty of support (Q5 and Q6). We also sometimes distinguish between “positive” and “negative” voting by asking whether someone’s vote is more a vote for their selected candidate or against the other candidate.

The Pew Research Center also reports the size of the so-called “swing vote” — defined as voters who are either undecided, only leaning to a candidate, or who say they might change their mind before Election Day. In addition to discussing the swing vote in many of our election reports, Swing Voters Slow to Decide, Still Cross-Pressured describes a more extensive analysis, conducted in the late stages of the 2004 campaign, of the size of the swing vote and how swing voters who were identified in earlier surveys responded when re-contacted in mid-October.
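The swing-vote definition given above is a simple three-part classification, which can be expressed directly. The sketch below is a hypothetical illustration of that definition, not Pew's actual coding scheme; the field names (`choice`, `leaning_only`, `might_change`) are invented for the example.

```python
# A respondent counts as a "swing" voter if they are undecided,
# only leaning toward a candidate, or say they might change their
# mind before Election Day.

def is_swing_voter(response):
    """Classify one survey response under the definition above."""
    return (
        response.get("choice") is None            # undecided
        or response.get("leaning_only", False)    # leans, no firm choice
        or response.get("might_change", False)    # could still switch
    )

def swing_vote_share(responses):
    """Share of the sample that qualifies as swing voters."""
    if not responses:
        return 0.0
    return sum(is_swing_voter(r) for r in responses) / len(responses)
```

A sample in which one respondent is undecided, one only leans, and one might change their mind would thus have all three counted toward the swing-vote share, regardless of which candidate each currently favors.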

One final issue in determining voter preference is whether respondents will always answer honestly when asked for their choice in an election. For the most part, this has not proven to be a problem, as most election polling has been very accurate. However, a small percentage of people – typically less than 5% – will refuse to answer the vote choice question.

A pattern of polling errors during the 1980s and 1990s in elections involving African American candidates raised the question of whether some people are reluctant to say that they are voting against a black candidate. Alternatively, there is the possibility that members of some demographic groups that are more likely to be racially conservative are also disproportionately likely to refuse to participate in surveys. If so, this could potentially produce a bias in the poll’s estimate of the outcome of the election. Regardless of what caused it, polls in many of these elections tended to understate the level of support for the white candidate.

This phenomenon is sometimes called the “Bradley effect” because it was first observed in the 1982 California gubernatorial election between Tom Bradley, a black Democrat, and George Deukmejian, a white Republican. The Pew Research Center has examined the question of whether polling in elections continues to understate support for white candidates when they are running against black candidates. While the pattern was clear in the 1980s and earlier in the 1990s, more recent elections in 2006 showed little sign of the so-called “Bradley effect” (see Can You Trust What Polls Say about Obama’s Electoral Prospects? for more information).

Concerns about the Bradley effect had obvious relevance for the 2008 presidential election, both in the Democratic primaries and in the general election contest between Barack Obama and John McCain. There was, however, no evidence of systematic polling errors consistent with the Bradley effect in either the primaries or the general election (see Perils of Polling in Election ’08 for more information).