Released: May 18, 2001
Screening Likely Voters: A Survey Experiment
The data used in this analysis are from two telephone surveys (Wave 1, Oct 13-21, 1999; Wave 2, Oct 27-30, 1999) conducted in the City of Philadelphia by the Pew Research Center for the People & the Press, under the direction of Schulman, Ronca and Bucuvalas, Inc. Each wave of the survey consists of approximately 1,600 interviews, drawn from two distinct samples (see below for details). Roughly two-thirds of respondents in each wave were drawn from a standard random-digit sample of telephone numbers selected from telephone exchanges in the City of Philadelphia. The random digit aspect of the sample is used to avoid “listing” bias and provide representation of both listed and unlisted numbers (including not-yet-listed). The design of the sample ensures this representation by random generation of the last two digits of telephone numbers selected on the basis of their telephone exchange and bank number.
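The RDD generation step described above can be sketched in a few lines. This is a minimal illustration, not Pew's actual sample-generation software; the function name and the 8-digit "area code + exchange + bank" prefix format are assumptions made for the example.

```python
import random

def generate_rdd_numbers(banks, n, seed=None):
    """Draw n random-digit-dial numbers from known exchange/bank prefixes.

    Each entry in `banks` is a hypothetical 8-digit prefix (area code +
    exchange + 2-digit bank, e.g. "21555512"). Only the last two digits
    are generated at random, so listed, unlisted, and not-yet-listed
    numbers within those banks are all equally reachable.
    """
    rng = random.Random(seed)
    return [rng.choice(banks) + f"{rng.randrange(100):02d}" for _ in range(n)]
```

Because the randomization is confined to the final two digits, every generated number falls inside a working exchange/bank, which keeps the share of non-working numbers manageable while still avoiding listing bias.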
The other third of each wave was drawn directly from voter registration lists maintained by local government agencies. Registration lists were used to identify households containing at least one registered voter, with standard household randomization applied once telephone contact was made. This alternative sampling methodology was used to test whether “matching” survey respondents to voter registration lists is more or less efficient under different sampling techniques. Though there are many possible sources of bias in the voter-list sample (not all registered voters provide a phone number when registering, and registration records may not be completely up to date), respondents drawn from the two sampling procedures were similar in most demographic and political characteristics.
For both RDD and listed samples, numbers were released for interviewing in replicates. Using replicates to control the release of sample to the field ensures that the complete call procedures are followed for the entire sample. At least 10 attempts were made to complete an interview at every sampled telephone number. Calls were staggered over times of day and days of the week to maximize the chances of making contact with a potential respondent. All interview breakoffs and refusals were recontacted at least once in an attempt to convert them to completed interviews. In each contacted household, interviewers asked to speak with the “youngest male 18 or older who is at home.” If no eligible man was at home, interviewers asked to speak with “the oldest woman 18 or older who is at home.” This systematic respondent selection technique has been shown empirically to produce samples that closely mirror the population in age and gender.
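The within-household selection rule ("youngest male 18 or older at home," else "oldest woman 18 or older at home") can be expressed as a short sketch. The data structure (a list of person records with `sex`, `age`, and `at_home` keys) is an assumption for illustration, not part of the survey instrument.

```python
def select_respondent(household):
    """Pick the survey respondent within a contacted household:
    the youngest male 18 or older who is at home; failing that,
    the oldest female 18 or older who is at home; otherwise None.

    `household` is a list of dicts with hypothetical keys
    'sex' ("M"/"F"), 'age', and 'at_home'.
    """
    eligible = [p for p in household if p["age"] >= 18 and p["at_home"]]
    males = [p for p in eligible if p["sex"] == "M"]
    if males:
        return min(males, key=lambda p: p["age"])   # youngest eligible male
    females = [p for p in eligible if p["sex"] == "F"]
    if females:
        return max(females, key=lambda p: p["age"])  # oldest eligible female
    return None
```

The rule deliberately tilts toward young men and older women, the two groups hardest to reach by phone, which is why it tends to balance the completed sample on age and gender.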
Survey respondents were matched to voter registration lists after election day to validate their voting behavior. The matching process took into account five parameters: phone number, first name, last name, address, and the respondent’s age. Overall, we successfully matched 70% of respondents who identified themselves as registered voters to the voter registration lists (68% from the combined RDD samples, 75% from the combined listed samples). The inability to match 30% of respondents who claim to be registered reflects three factors, each of which might affect the representativeness of the sample. First, many respondents overreport voter registration. Second, many respondents refused to give their name and address, making matching difficult or impossible. Third, registration lists maintained by local government agencies may not be completely up-to-date.
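A record-linkage step on the five parameters above could look like the following sketch. The field names, the one-year age tolerance, and the 4-of-5 agreement threshold are all assumptions for illustration; the report does not specify Pew's actual matching rule.

```python
def match_score(resp, reg, age_tolerance=1):
    """Count how many of the five matching parameters (phone, first
    name, last name, address, age) agree between a survey respondent
    and a voter registration record. Booleans sum as 0/1."""
    score = 0
    score += resp["phone"] == reg["phone"]
    score += resp["first"].strip().lower() == reg["first"].strip().lower()
    score += resp["last"].strip().lower() == reg["last"].strip().lower()
    score += resp["address"].strip().lower() == reg["address"].strip().lower()
    score += abs(resp["age"] - reg["age"]) <= age_tolerance
    return score

def best_match(resp, registry, min_score=4):
    """Return the registration record that best matches `resp`, or None
    if no record agrees on at least `min_score` of the five parameters
    (the threshold is an assumption, not the study's actual rule)."""
    best = max(registry, key=lambda reg: match_score(resp, reg), default=None)
    if best is not None and match_score(resp, best) >= min_score:
        return best
    return None
```

A multi-field score like this is what makes the second failure mode above so damaging: a respondent who withholds name and address can at best match on phone and age, which falls below any reasonable threshold.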
Non-response in telephone interview surveys produces some known biases in survey-derived estimates because participation tends to vary across subgroups of the population. Both responding and non-responding households were matched to voter registration lists in order to gauge the relationship between survey participation and turnout.
Though each wave of telephone interviewing was drawn from two separate sampling frames, the analysis of likely voter methodology is based only on matched cases, which, in effect, are all drawn from the same sampling frame: the registration lists of local government agencies. As a result, all analysis is conducted on the combined listed and RDD samples. Data are not weighted to census parameters because the registration lists do not represent a random distribution of the city’s population.