The Opinion Leaders Survey Sample

The results of the opinion leaders survey are based on Americans who are influential in their chosen fields. The sample was designed to represent these influentials in eight professional areas of expertise: media; foreign affairs; national security; state and local government; university administration and think tanks; religious organizations; science and engineering; and military. Every effort was made to make the sample as representative as possible of the leadership in each field. However, because the goal of the survey was to identify people of particular power or influence, the sampling was purposive in overall design, though systematic with regard to respondent selection wherever possible.

The final selected sample was drawn from eight subsamples. Subsamples were split into replicates, and quotas were set for the number of completed interviews from each subsample because the size of the sampling frame varied a great deal from group to group; without quotas, the smaller groups would not have been adequately represented in the final set of completed interviews. The subsamples and final completed interviews for each are listed below:

The specific sampling procedures for each subsample are outlined below.

News Media
The media sample included people from all types of media: newspapers, magazines, television and radio. Editors (including top editors, editorial page editors, and managing editors) and D.C. bureau chiefs were selected from: the top daily newspapers (based on circulation); additional newspapers selected to round out the geographic representation of the sample; news services; and different types of magazines, including news, literary, political, and entertainment and cultural magazines.

For the television sample, people such as D.C. bureau chiefs, news directors or news editors, anchors, news executives, and executive producers were selected from television networks, chains and news services.

The radio sample included news directors and/or D.C. bureau chiefs at several top radio stations.

Top columnists listed in the Leadership Directories’ News Media Yellow Book and Bacon’s MediaSource were also selected as part of the media subsample.

In each part of the media subsample, it is possible that more than one individual at an organization was interviewed.

Foreign Affairs
The Foreign Affairs sample was randomly selected from the membership roster of the Council on Foreign Relations.

Security
The Security sample was randomly selected from a list of American members of the International Institute for Strategic Studies.

State and Local Government
The governors of all 50 states were included in the sample, along with a random sample of mayors of cities with populations of 80,000 or more.

Academic and Think Tank Leaders
The heads of various influential think tanks listed in National Journal’s The Capital Source were selected. For the academic sample, officers (President, Provost, Vice-President, Dean of the Faculty) of the most competitive schools overall and the most competitive state schools (as identified in Peterson’s Guide to Four-Year Colleges 2006) in the United States were selected.

Religious Leaders
For the religion sample, leaders of Protestant, Catholic, Jewish and Muslim organizations with membership over 700,000 each were sampled. Top U.S. figures in each national body were selected in addition to the leading people at the National Council of the Churches of Christ in the U.S.A.

Scientists and Engineers
The science sample was a random sample of scientists drawn from the membership of the National Academy of Sciences.

The engineering sample was a random sample of engineers drawn from the membership of the National Academy of Engineering.

Military
The military leaders sample was drawn from a Lexis-Nexis search of retired generals and admirals quoted in American news sources in the past year. Also included was a sample of outstanding officers selected to participate in the Council on Foreign Relations’ Military Fellowship program since 2000.

The Opinion Leaders Survey Process

Each person sampled for this survey was mailed an advance letter on joint Pew Research Center for the People & the Press and Council on Foreign Relations letterhead, signed by Andrew Kohut and Richard Haass. These letters were intended to introduce the survey to prospective respondents, describe its nature and purpose, and encourage participation.

Unlike previous America’s Place in the World surveys, which were conducted by telephone only, the 2005 survey gave respondents the option of participating via the Internet. The advance letter contained a URL and a password for completing the survey online, a toll-free number to call to do the survey by phone, and notification that interviewers would also be calling. As soon as the letters were mailed, a website was available for respondents to complete the interview online.

A follow-up email invitation was sent six days after letters were mailed to those for whom email addresses were available, repeating the substance of the letter and providing a URL to click to take the survey.

Approximately one week after the letter was mailed, interviewers began calling sample members who had not yet taken the survey online and had not been sent an email invitation. Interviewers attempted to conduct the survey over the telephone or to set up appointments to conduct it at a later date. Approximately four days later, interviewers began calling sample members who had been sent an email invitation but had not yet taken the survey online.

For groups not meeting the target number of interviews, follow-up letters and emails were sent to those who had refused, encouraging them to reconsider. Another letter was sent to those who had not participated but had not explicitly refused. Interviewers also continued to call respondents in the remaining groups who had not explicitly refused, in an attempt to complete the interview.

In the telephone survey, the “Don’t know/Refused” response category was accepted only when volunteered by respondents; in the online mode, not selecting a response category and clicking ahead to the next question was recorded as a “No answer” response.

The telephone survey was administered by experienced executive interviewers who were specially trained to ensure familiarity with the questionnaire and professionalism in dealing with respondents at this level. The interviewing was conducted from September 5 through October 31, 2005.

About the General Public Survey

Results for the general public survey are based on telephone interviews conducted under the direction of Princeton Survey Research Associates International among a nationwide sample of 2,006 adults, 18 years of age or older, during the period October 12 – 24, 2005. For results based on the total sample, one can say with 95% confidence that the error attributable to sampling and other random effects is plus or minus 2.5 percentage points. For results based on either Form 1 (N=1003) or Form 2 (N=1003), the sampling error is plus or minus 3.5 percentage points. For Q.42 the forms are further divided into Form 1A, 1B, 2A and 2B (N is approximately 500) with a sampling error of plus or minus 5 percentage points.
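As a rough illustration of how margins of this size relate to sample size, the sketch below computes the half-width of a 95% confidence interval for a proportion near 50%, inflating the required sample size by a design effect to allow for weighting. The design effect value of 1.3 is not taken from the report; it is an assumption used here only because it approximately reproduces the margins quoted above.

```python
import math

def margin_of_error(n, p=0.5, design_effect=1.3, z=1.96):
    """Approximate 95% margin of error (in percentage points) for a proportion.

    design_effect is an assumed inflation factor for weighting and other
    design features; the survey's actual design effect is not stated here.
    """
    effective_n = n / design_effect
    return 100 * z * math.sqrt(p * (1 - p) / effective_n)

for label, n in [("Total sample", 2006), ("Form 1 or Form 2", 1003), ("Form 1A/1B/2A/2B", 500)]:
    print(f"{label} (N={n}): +/- {margin_of_error(n):.1f} points")
```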

In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.

General Public Survey Methodology in Detail

The sample for this survey is a random digit sample of telephone numbers selected from telephone exchanges in the continental United States. The random digit aspect of the sample is used to avoid “listing” bias and provides representation of both listed and unlisted numbers (including not-yet-listed). The design of the sample ensures this representation by random generation of the last two digits of telephone numbers selected on the basis of their area code, telephone exchange, and bank number.

The telephone exchanges were selected with probabilities proportional to their size. The first eight digits of the sampled telephone numbers (area code, telephone exchange, bank number) were selected to be proportionally stratified by county and by telephone exchange within county. That is, the number of telephone numbers randomly sampled from within a given county is proportional to that county’s share of telephone numbers in the U.S. Only working banks of telephone numbers are selected. A working bank is defined as 100 contiguous telephone numbers containing one or more residential listings.
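A minimal sketch of the number-generation step is shown below. It assumes the list of working banks has already been drawn with probability proportional to size and screened for residential listings, as described above; the bank prefixes in the example are hypothetical.

```python
import random

def rdd_numbers(working_banks, n, seed=0):
    """Generate n random-digit-dial telephone numbers.

    working_banks: 8-digit prefixes (area code + exchange + bank) assumed to
    have already been sampled in proportion to county and exchange telephone
    counts and screened to contain at least one residential listing.
    The last two digits are generated at random, so both listed and unlisted
    numbers can be reached.
    """
    rng = random.Random(seed)
    return [
        f"{rng.choice(working_banks)}{rng.randrange(100):02d}"
        for _ in range(n)
    ]

# Hypothetical banks: area code 202, exchange 555, banks 01 and 02.
print(rdd_numbers(["20255501", "20255502"], n=5))
```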

The sample was released for interviewing in replicates. Using replicates to control the release of sample to the field ensures that the complete call procedures are followed for the entire sample. The use of replicates also ensures that the regional distribution of numbers called is appropriate. Again, this works to increase the representativeness of the sample.
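One simple way to implement this kind of controlled release is sketched below, assuming the sample is a flat list of telephone numbers; the replicate count and the numbers in the usage example are illustrative only.

```python
import random

def split_into_replicates(sample, n_replicates, seed=0):
    """Randomly partition a sample of telephone numbers into replicates
    that can be released to interviewers one at a time."""
    shuffled = list(sample)
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i::n_replicates] for i in range(n_replicates)]

# Hypothetical usage: release the first replicate, hold the rest in reserve.
replicates = split_into_replicates([f"202555{i:04d}" for i in range(100)], 10)
first_release = replicates[0]
```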

As many as 10 attempts were made to complete an interview at every sampled telephone number. The calls were staggered over times of day and days of the week to maximize the chances of making contact with a potential respondent. All interview breakoffs and refusals were re-contacted at least once in an attempt to convert them to completed interviews. In each contacted household, interviewers asked to speak with the “youngest male, 18 years of age or older, who is now at home.” If no eligible man was at home, interviewers asked to speak with “the youngest female, 18 years of age or older, who is now at home.” This systematic respondent selection technique has been shown empirically to produce samples that closely mirror the population in terms of age and gender.
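The selection rule itself is straightforward to express in code. The sketch below assumes a hypothetical household roster of (age, sex, at-home) records and simply encodes the youngest-male, then youngest-female preference described above.

```python
def select_respondent(household):
    """Within-household selection: youngest male 18+ now at home; if none,
    the youngest female 18+ now at home. The roster format (age, sex, at_home)
    is hypothetical and only illustrates the rule."""
    adults_at_home = [p for p in household if p[0] >= 18 and p[2]]
    for sex in ("M", "F"):                        # males first, then females
        candidates = sorted(p for p in adults_at_home if p[1] == sex)
        if candidates:
            return candidates[0]                  # youngest eligible of this sex
    return None                                   # no eligible adult at home

# Hypothetical household: a 45-year-old woman and a 19-year-old man at home.
print(select_respondent([(45, "F", True), (19, "M", True), (17, "M", True)]))
```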

Non-response in telephone interview surveys produces some known biases in survey-derived estimates because participation tends to vary for different subgroups of the population, and these subgroups are likely to vary also on questions of substantive interest. In order to compensate for these known biases, the sample data are weighted in analysis.

The demographic weighting parameters are derived from a special analysis of the Census Bureau’s most recently available Current Population Survey (March 2004). This analysis produced population parameters for the demographic characteristics of households with adults 18 or older, which are then compared with the sample characteristics to construct sample weights. The analysis included only households in the continental United States that contain a telephone.

The weights are derived using an iterative technique that simultaneously balances the distributions of all weighting parameters.
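A procedure of this kind is commonly known as raking, or iterative proportional fitting. The sketch below is an illustrative implementation; the weighting parameters (sex and age group) and the target proportions shown are hypothetical and are not the parameters actually used in the survey.

```python
def rake(weights, categories, targets, iterations=50):
    """Iteratively adjust case weights so the weighted distribution of each
    parameter matches its population target (raking).

    weights:    list of initial case weights
    categories: dict mapping parameter name -> list of each case's category
    targets:    dict mapping parameter name -> {category: target proportion}
    """
    w = list(weights)
    for _ in range(iterations):
        for param, cats in categories.items():
            total = sum(w)
            # Current weighted share of each category for this parameter.
            shares = {}
            for wi, c in zip(w, cats):
                shares[c] = shares.get(c, 0.0) + wi / total
            # Scale each case so its category's share hits the target.
            w = [wi * targets[param][c] / shares[c] for wi, c in zip(w, cats)]
    return w

# Hypothetical example: balance sex and age group simultaneously.
weights = [1.0] * 6
categories = {
    "sex": ["M", "M", "M", "M", "F", "F"],
    "age": ["18-49", "50+", "18-49", "50+", "18-49", "50+"],
}
targets = {
    "sex": {"M": 0.48, "F": 0.52},
    "age": {"18-49": 0.55, "50+": 0.45},
}
print(rake(weights, categories, targets))
```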