Complementing Random-Digit-Dial Telephone
Surveys with Other Approaches to Collecting
Sensitive Data
Mirta Galesic, PhD, Roger Tourangeau, PhD, Mick P. Couper, PhD
From the Joint Program in Survey Methodology, University of Maryland (Galesic, Tourangeau), College Park, Maryland; and Survey Research Center, University of Michigan (Tourangeau, Couper), Ann Arbor, Michigan
Address correspondence and reprint requests to: Mirta Galesic, PhD, 1218 LeFrak Hall, University of Maryland, College Park, MD 20742. E-mail: email@example.com.
Surveys of sensitive topics, such as the Injury Control and Risk Surveys (ICARIS) or the
Behavioral Risk Factor Surveillance System (BRFSS), are often conducted by telephone
using random-digit-dial (RDD) sampling methods. Although this method of data collection
is relatively quick and inexpensive, it suffers from growing coverage problems and falling
response rates. In this paper, several alternative methods of data collection are reviewed,
including audio computer-assisted interviews as part of personal visit surveys, mail surveys,
web surveys, and interactive voice response surveys. Their strengths and weaknesses are
presented regarding coverage, nonresponse, and measurement issues, and compared with
RDD telephone surveys. The feasibility of several mixed-mode designs is discussed; none of
them stands out as clearly the right choice for surveys on sensitive issues, which implies an
increased need for methodologic research.
(Am J Prev Med 2006;31(5):437–443) © 2006 American Journal of Preventive Medicine
The National Center for Injury Prevention and Control of the Centers for Disease
Control and Prevention routinely conducts surveys on topics related to injury
prevention and control. These surveys are typically conducted by telephone using
random-digit-dial (RDD) sampling methods. Key examples include the Sexual Violence
and Intimate Partner Violence Surveys, the Behavioral Risk Factor Surveillance
System (BRFSS), the National Violence against Women Survey, and the Injury Control
and Risk Surveys. An important characteristic of all of these surveys—one that they
share with many other federal surveys—is that they ask sensitive questions about
people’s behavior and experiences.
Telephone surveys have been an attractive alternative
to face-to-face surveys for decades. The principal rea-
sons have been savings in costs and time, with relatively
high levels of coverage of the general population. In
addition, there is some evidence that reducing the
physical presence of the interviewer may increase re-
porting of sensitive behaviors. However, telephone
surveys have come under increasing threat in the past
few years in terms of both coverage and nonresponse.
In this paper, the issues relating to telephone surveys
and the possible alternatives for accurately measuring
sensitive topics among probability samples of the gen-
eral public are reviewed, beginning with a brief review
of the prevailing threats to inference from telephone
surveys. Selected alternative modes of data collection
are then reviewed, followed by a discussion of their
strengths and weaknesses relative to RDD telephone surveys.
Problems with Random-Digit-Dial Surveys
Telephone coverage of households in the United States
has remained relatively steady in the range of about
95% for several decades now, and the coverage prop-
erties of telephone surveys are well established.1 As of
the first half of 2005, Blumberg et al.2 estimated that
91.6% of adults in the United States live in households
with landline telephone service. This leaves 6.7% of the
adults who live in households with wireless service only
and 1.7% without any service at all. However, while the
under-coverage of the poor, those in rural areas, and
those with less access to health care has been shown to
have limited impact on estimates for a wide variety
of topics, the fact that certain at-risk groups are dispro-
portionately excluded from telephone surveys may bias
estimates regarding domestic violence and injury, espe-
cially for certain subgroups.
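The size of such noncoverage bias can be made concrete with the standard identity for the bias of an estimate computed only from the covered population. The sketch below is illustrative only; the prevalence figures in it are hypothetical, not estimates from any of the surveys discussed here.

```python
# Standard noncoverage-bias identity:
#   bias = p_excluded * (mean_covered - mean_excluded)
# All numbers below are hypothetical, for illustration only.

def coverage_bias(p_excluded, mean_covered, mean_excluded):
    """Bias of a mean estimated only from the covered population."""
    return p_excluded * (mean_covered - mean_excluded)

# Suppose 8.4% of households lack landline service, and a risk behavior
# has 15% prevalence among covered households but 25% among excluded ones.
print(coverage_bias(0.084, 0.15, 0.25))  # -0.0084: an ~0.8-point underestimate
```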
The problem of noncoverage in telephone surveys is
again receiving attention from survey researchers, pri-
marily because of the trend away from fixed landline
phones to mobile or cellular phones. The United States
is in the midst of an upheaval in the structure of
telephone service, following similar changes in Europe
and Asia in the last decade.2–6 While the current
situation does not appear too problematic, it seems
inevitable that the trend toward mobile phones will
force survey researchers to re-think their approach to telephone sampling.
Cell phone users without landlines pose a special
challenge for estimates from RDD surveys, because
RDD samples exclude cell phone numbers. Various
sources converge on the estimate that about 6% of U.S.
households can be reached only by cell
phones.2,5,7 Compared with adults living in landline
households, adults living in cell phone–only house-
holds are more often younger (almost 20% of the
group aged 15 to 24 years lives in cell phone–only
households), Hispanic, not married, and renters.5 In
addition to demographic differences, the cell phone–
only people may be different in some health-related
behaviors. For example, data from the National Health
Interview Survey show that cell phone–only adults
consume more alcoholic drinks and are more likely to
smoke and to be uninsured than adults who live in
households with a landline.2 This can induce bias in
prevalence estimates of these behaviors obtained
through RDD surveys. A recent experiment conducted
by the Joint Program in Survey Methodology8 has
shown that although cell phone numbers can be in-
cluded in RDD samples, there are three drawbacks:
response rates are lower (e.g., 22.1% vs 34.0% for a
10-minute questionnaire via cell phone and landline
telephone, respectively), incentives are needed to com-
pensate for the cost to the respondent, and respon-
dents are more likely to be involved in other activities
while completing the survey.
The paper by Johnson et al.9 deals with nonresponse in
RDD surveys in greater depth, but we briefly review a
few key issues here. First, it is generally accepted that
telephone response rates are declining, and the pace of
decline may have accelerated in recent years.10,11 Many
blame the rise of telemarketing for this trend, although
evidence of this link is hard to come by. The introduc-
tion of the Federal Communications Commission
(FCC)’s National Do-Not-Call Registry may help, but we
have not yet seen any evidence of a slowing or reversal
in the decline in telephone response rates, and the
registry may have come too late. People have already
changed their answering behavior and adopted a vari-
ety of tools to screen unwanted telephone calls.12 The
decline in telephone response rates may also be related
to emerging trends in time away from home, compet-
ing demands, and a host of other factors outside the
control of survey researchers.13
Second, there may not be a direct relationship be-
tween response rates and nonresponse bias. For exam-
ple, Keeter et al.14 compared a standard RDD design
with a relatively low response rate to a more rigorous
design with a higher response rate and found few
differences on a variety of attitude measures. Curtin et
al.10 assessed the impact of excluding initial refusals
and harder-to-reach respondents from samples ob-
tained in the Survey of Consumer Attitudes from 1979
to 1996 and found only minimal effects on survey
estimates. However, surveys with either high or low
response rates may suffer from substantial nonresponse
bias, as recently demonstrated by Groves.15
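Groves’s point can be seen in the usual decomposition of the bias of the respondent mean, sketched below with purely hypothetical numbers: what matters is not the response rate alone but how strongly respondents and nonrespondents differ on the survey variable.

```python
# Bias of the respondent mean:
#   bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)
# The rates and prevalences below are hypothetical.

def nonresponse_bias(response_rate, mean_resp, mean_nonresp):
    return (1.0 - response_rate) * (mean_resp - mean_nonresp)

# A 70%-response-rate survey whose nonrespondents differ sharply...
print(nonresponse_bias(0.70, 0.20, 0.35))  # -0.045: a 4.5-point bias
# ...can be more biased than a 40%-response-rate survey whose
# nonrespondents differ little.
print(nonresponse_bias(0.40, 0.20, 0.22))  # -0.012: a 1.2-point bias
```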
Several comparisons of telephone surveys with face-to-
face surveys have generally concluded that the two yield
similar results for nonsensitive items.16–18 However,
telephone interviews appear to be less effective than
personal interviewing in eliciting sensitive information,
and the data typically show a higher social desirability
bias for data collected by telephone.17,19–21 While some
studies find the opposite effect,22,23 the de Leeuw and
van der Zouwen study17 is particularly telling since it is
a meta-analysis based on a large number of mode
comparisons.
To an increasing degree, both face-to-face and tele-
phone surveys use self-administration rather than an
interviewer for sensitive items. In the case of face-to-
face interviewing, audio computer-assisted self-
interviewing (ACASI) is used. In telephone surveys, the
corresponding technology is called telephone ACASI
or interactive voice response (IVR). Both methods are
discussed in more detail below.
Alternatives to Telephone Surveys
Audio Computer-Assisted Self-Interviewing
Of all the methods of data collection currently available
to survey researchers, the combination of ACASI, face-
to-face contact with the household by an interviewer,
and area probability sampling may be the closest thing
to a “gold standard” for measuring sensitive topics.
Coverage and nonresponse issues. Area probability
sampling may not yield complete coverage of the
population. For instance, it necessarily misses the
homeless, typically misses some fraction of dwelling
units, and omits some people within partially enumer-
ated households. In high-quality surveys, such as the
Current Population Survey, the coverage is probably
close to 95% of the population,24 far exceeding that of
any other method. For example, 10% or more of all
households are typically omitted from telephone sur-
veys (because 8.4% of the households have no landline
service,2 and about 4% are in “zero banks” [groups of
100 consecutive possible numbers, none of which is a listed number25]).
Surveys in which interviewers contact the household
in person also typically have higher response rates than
those in which the household is contacted by a telephone
interviewer or is mailed a questionnaire.26,27 Despite
recent declines in response rates, several federal surveys
that are done face-to-face still have response rates of
about 90%.28 Thus, face-to-face recruitment of mem-
bers of an area probability sample is likely to minimize
both coverage and nonresponse errors compared to
telephone, mail, or web surveys.
Measurement issues. The ACASI approach also has
very desirable measurement properties. An
interviewer is present to establish the legitimacy and
importance of the survey and to instruct the respon-
dent in the use of the laptop computer that administers
the questions. The questions are presented both visu-
ally onscreen and aurally via earphones so that even
respondents with low levels of literacy can take part.29
ACASI combines the power, flexibility, and standard-
ization of automation with the privacy of self-adminis-
tration. At least five experiments have compared ACASI
with some other method of self-administration (such as
a paper self-administered questionnaire), and all indi-
cate that ACASI is at least as good as the alternative
methods for eliciting reports about sensitive informa-
tion.29–33 For example, Lessler et al.31 reported higher
levels of reporting of illicit drug use with ACASI than
with a self-administered paper questionnaire.
Because of these desirable properties, several na-
tional surveys that collect sensitive information have
switched to ACASI in conjunction with area probability
samples; these include the National Survey of Drug Use
and Health and the National Survey of Family Growth.
However, one major drawback to this methodology is
the expense involved. Currently, face-to-face interviews
with national area probability samples can run as high
as $1000 per completed case, depending on the length
of the interview, the need for initial screening to
identify members of rare populations, the target re-
sponse rate, and other factors. Clearly, for many studies
these costs will be prohibitive.
Surveys on domestic violence raise particular issues
in that it is important to keep other family members,
especially the potential abuser, from learning the topic
of the survey. Aquilino et al.34 showed that computer-
assisted self-administration reduced the impact of the
presence of other people, apparently because the an-
swer “disappeared” in the computer, rather than leav-
ing a paper trail. However, it is not clear which modes
provide the greatest confidentiality, because other fam-
ily members can open a mail survey, look over the
shoulder in an ACASI or web survey, or listen in on a
telephone interview.
Mail Surveys
Mail surveys have some advantages over other methods
of data collection. They are considerably cheaper, have
relatively stable response rates, and may improve re-
porting of sensitive issues. On the other hand, there is
little control over who completes the questionnaire,
whether the instructions are being followed, and
whether the questions are understood as intended.
Coverage and nonresponse issues. Mail surveys usually
require a list of addresses to which the questionnaires
can be sent. Such lists may exist for limited populations,
such as the employees of an organization or subscribers
to a certain service, but they are not available for most
nationally targeted surveys. Nevertheless, once a satis-
factory frame is available, it is relatively easy to select
good quality samples. The inherently lower cost of mail
surveys (e.g., half or less than half of the cost of a
completed telephone interview35) allows for more geo-
graphically dispersed and larger samples than in com-
parable interviewer-administered surveys.
Although mail surveys are often perceived as suffer-
ing from low response rates, some studies suggest that
response rates of 60% may be possible in mail sur-
veys.26,27,35–37 Furthermore, the response rates to mail
surveys have remained relatively stable during the pe-
riod of significant decline in response rates for tele-
phone and face-to-face surveys.27 However, nonrespon-
dents to mail surveys may be different from the
respondents in some important ways, such as in gender,
education, or cognitive abilities.38
A further challenge for mail surveys is lack of control
over selecting the target respondent among all persons
who can be reached at a given address. While in
face-to-face and telephone surveys interviewers can
implement various respondent selection procedures, in
mail surveys one has to rely on the good judgment and
conscientiousness of survey recipients. Even with spe-
cific written instructions about the appropriate selec-
tion procedure, there is no guarantee that this proce-
dure will actually be followed.39,40
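To make the selection problem concrete, the sketch below shows two common within-household selection rules. It is an illustration, not code from any of the surveys discussed, and the household data are invented: an interviewer can enforce a random pick, whereas a mail survey can only ask the recipient to apply a rule such as the next-birthday method.

```python
# Two within-household respondent-selection rules (illustrative only).
import random

def next_birthday_rule(adults):
    """adults: list of (name, days_until_next_birthday) pairs.
    Select the adult whose birthday comes soonest."""
    return min(adults, key=lambda a: a[1])[0]

def random_adult(adults, rng=random.Random(2006)):
    """A fully random selection, as an interviewer could enforce."""
    return rng.choice(adults)[0]

household = [("Ana", 40), ("Ben", 200), ("Cleo", 120)]
print(next_birthday_rule(household))  # Ana
print(random_adult(household))
```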
Measurement issues. Because they are self-adminis-
tered, mail surveys have some important advantages
over interviewer-assisted modes. Most notably, many
studies have shown an overwhelming effect of self-
administration on levels of reporting sensitive behav-
iors.41 For example, Schober et al.42 showed that paper
self-administered questionnaires sharply increased the
reported rate of illicit drug use compared to inter-
viewer administration. A major disadvantage of mail
surveys is that the researcher has little control over the
response process. For example, there is no way to know
whether the respondents are reading the questions and
instructions thoroughly, whether they understand the
question in the intended way, or whether they look up
records when asked to do so.
Web Surveys
Web surveys have several advantages over other survey
modes. The most prominent ones are lower costs and
higher speed of data collection, elimination of geo-
graphic limitations, ease of use of audio and video
elements, and automated features that can improve
data quality. On the other hand, web surveys are
threatened by serious coverage and sampling difficul-
ties and generally have relatively high nonresponse rates.
Coverage and nonresponse issues. Internet surveys of
the general population are prone to serious coverage
biases, since a significant portion of the population still
does not have access to the Internet, and there are large
differences between those with access and those with-
out. According to Nielsen/NetRatings, only 75% of the
U.S. population had access to the Internet from home
in 2004,44 and less than 50% used the Internet at least
once in the last month.44 Furthermore, people without
access to the Internet are significantly different from
those who do have access, with the former more likely
to be older, less educated, poorer, and black or Hispanic.45–47
Nonresponse can arise in different phases of an
Internet survey. Some of the e-mail invitations never
reach sample members, some who receive the e-mail
never read it, some who start the online survey give up
immediately and some later, and often
only a relatively small portion of the sample completes
the questionnaire (e.g., only 35% of the sample de-
scribed by Lozar Manfreda et al.48 and Vehovar et
al.49). An advantage of online surveys is that respon-
dents’ behavior can be well documented even when
they break off before the end of the survey.50
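Because each of these stages loses part of the sample, the overall completion rate is roughly the product of the per-stage survival rates. The sketch below uses hypothetical stage rates, chosen only to mimic the roughly one-third completion figure cited above.

```python
# Overall web-survey completion as a product of per-stage rates
# (all rates hypothetical, for illustration).
from math import prod

stages = {
    "e-mail delivered": 0.95,
    "invitation read":  0.60,
    "survey started":   0.70,
    "survey completed": 0.85,  # 15% break off partway
}
print(f"{prod(stages.values()):.2f}")  # 0.34 of the sample yields a complete
```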
Overall, response rates in web surveys vary widely and
depend on factors such as the type of population, the
sampling procedure, and whether incentives are of-
fered. In his review of web surveys, Schonlau51 reported
response rates that ranged from less than 1% in a
convenience sample of web users contacted by web site
advertising to 75% in a U.S. Census Bureau establish-
ment survey. For list samples, web surveys usually get
lower response rates than mail surveys,43,51 although
there are some exceptions.52–54
As in mail surveys, a researcher cannot be sure who
completes the web survey. In pre-recruited web survey
panels, participants usually complete a screening ques-
tionnaire and provide a personal e-mail address
through which they can be invited to surveys. While
there is no guarantee that the invitees will not forward
the survey to others, in practice this appears to be of
minor concern.
Measurement issues. Like all computer-assisted sur-
veys, web surveys allow for complex routing and skips,
automated edits and checks, randomizations, and feed-
back and assistance to the respondents. These auto-
mated routines can improve the quality of data,55,56
although some require that a respondent’s browser can
support them. Web surveys also enable easy implementation of
various visual elements, such as pictures, symbols, and
animation. While this
can make a web questionnaire more interesting to the
respondents and improve their comprehension of the
questions, the added visual elements can also change
the measurement properties of questions.57,58
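A minimal sketch of the kind of automated routing and edit checking meant here follows; it is generic illustrative Python, not drawn from any particular web survey package, and the question wording is invented.

```python
# Automated skip logic and edit checks of the kind web surveys can
# enforce but paper questionnaires cannot (illustrative only).

def ask(prompt, valid):
    answer = input(prompt + " ")
    while answer not in valid:          # automated edit check
        answer = input("Please answer " + "/".join(sorted(valid)) + ": ")
    return answer

smoked = ask("Have you smoked in the past 30 days? (yes/no)", {"yes", "no"})
if smoked == "yes":                     # automated skip: only smokers see this
    ask("On how many of the past 30 days? (1-30)",
        {str(d) for d in range(1, 31)})
```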
Like mail surveys, web surveys are inherently self-
administered. That can enhance reporting of sensitive
behaviors, although it also reduces the researcher’s
control over how respondents understand and answer
the questions. The absence of interviewers may reduce
social desirability bias in web surveys, in accord with
comparable findings from computer-assisted self-inter-
viewing (CASI) and computer-assisted personal inter-
viewing (CAPI) questionnaires,59 but these gains have
yet to be clearly demonstrated. Some studies suggest
that computerization itself may have similar effects to
those of self-administration.60–63
Interactive Voice Response
The telephone counterpart to ACASI is known vari-
ously as interactive voice response, touchtone data
entry, or telephone ACASI. We will refer to this mode
of data collection as IVR, the most widely used term.
Regardless of the label, the method involves an auto-
mated telephone interview, in which the computer
plays a recording of the questions and the respondents
indicate their answers by pressing one of the number
keys on their telephone handset or, increasingly, by
saying the number corresponding to their answer
aloud. Tourangeau et al.64 provide a review of most of
the studies examining this method of data collection.
Relative to computer-assisted telephone interviewing
(CATI), IVR should produce some cost savings from
reduced interviewer time, but we know of no studies
that estimate these savings.
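The interaction pattern just described can be sketched as a simple question loop. The code below is a toy illustration with stubbed-out audio playback and tone detection; the question wording is invented.

```python
# Toy sketch of the IVR pattern: play a recorded question, accept a
# keypress, re-prompt on invalid input (audio and DTMF handling stubbed).

QUESTIONS = [
    ("In the past year, were you injured in a fall? Press 1 for yes, 2 for no.",
     {"1": "yes", "2": "no"}),
]

def play(recording):       # stand-in for audio playback
    print(recording)

def get_keypress(valid):   # stand-in for DTMF tone detection
    digit = input("> ")
    return digit if digit in valid else None

for recording, choices in QUESTIONS:
    answer = None
    while answer is None:  # re-prompt until a valid key is pressed
        play(recording)
        answer = get_keypress(choices)
    print("recorded answer:", choices[answer])
```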
Coverage and nonresponse issues. Since the members
of an IVR sample are usually recruited via RDD sam-
pling, the coverage problems for an IVR survey are typically
the same as those for telephone surveys generally—that
is, more than 10% of all households are likely to be
excluded because they lack a landline telephone or
their telephone number falls into a bank that is ex-
cluded from the frame.
In a typical IVR study, the respondents are initially
contacted by a telephone interviewer, who collects
some demographic information and switches the re-
spondent to the IVR system. The response rates to the
initial portion of the interview are similar to those of
other RDD surveys; in addition, however, respondents
may drop out during the switch to IVR or part way
through the IVR portion of the interview.64 These
American Journal of Preventive Medicine, Volume 31, Number 5www.ajpm-online.net
dropout rates are often substantial (e.g., 24% in Cooley
et al.,65 and 18% in Gribble et al.66). Tourangeau et
al.64 found that asking a few innocuous questions
before the switch can significantly reduce the dropout rate.
Measurement issues. Advocates of IVR point to its
positive impact on reports about sensitive behaviors.
For example, Turner et al.67 report that respondents
are more likely to admit risky sexual behaviors in an
IVR interview than in a conventional CATI. Gribble et
al.66 found that the IVR respondents were more likely
to report illicit drug use than CATI respondents. Tou-
rangeau et al.64 show reduced positivity bias in cus-
tomer satisfaction surveys done via IVR compared to
the same questions administered in CATI interviews.
More generally, IVR seems to bring some of the gains
from self-administration into telephone interviews, al-
though it is not yet clear whether the gains from IVR
are as large as those from other forms of self-administration.
Mixed-Mode Designs
Many survey researchers see mixed-mode designs as a
promising means to offset the rising costs of survey data
collection and to counter declining response rates and
coverage. Mixed-mode designs come in many different
flavors, but three are relevant to surveys of sensitive
topics: dual-frame designs, single-frame designs, and
panel designs. Each of these approaches is reviewed
briefly in turn.
Dual-Frame Designs
The idea behind this approach is to use more than one
frame to compensate for the coverage weaknesses of a
single frame. Such designs may be attractive for re-
peated cross-sectional surveys, where relatively infre-
quent surveys conducted using more costly and higher-
quality methods (e.g., face-to-face or telephone) are
supplemented with more frequent measurement using
another mode (e.g., Internet, mail). The data from the
high-quality surveys are then used to calibrate the
estimates from the lower-quality surveys with larger
sample sizes to produce trends that can be projected to
the larger population. Perhaps the most typical dual
frame design combines a large-scale RDD survey with
smaller face-to-face surveys targeted at those areas with
low telephone coverage.68 This approach is used on the
National Survey of America’s Families.69 It is also
possible to use a second frame to increase efficiency in
finding members of a specific subgroup. For example,
Census data could be used to identify areas with a high
percentage of minority residents (e.g., Hispanics), and
an address list could be used to sample from those areas.
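A textbook way to combine two frames is a composite estimator that splits the overlap domain between them. The sketch below shows this classical form with invented totals; it is not a formula taken from the surveys cited here.

```python
# Classical dual-frame composite estimator (illustrative numbers):
# units found only in frame A or only in frame B are counted once, and
# the overlap domain is a lambda-weighted blend of the two estimates.

def dual_frame_total(t_a_only, t_ab_from_a, t_ab_from_b, t_b_only, lam=0.5):
    return t_a_only + lam * t_ab_from_a + (1 - lam) * t_ab_from_b + t_b_only

# e.g., an RDD frame (A) supplemented by an area frame (B) that also
# reaches non-telephone households:
print(dual_frame_total(0.0, 900.0, 950.0, 120.0, lam=0.6))  # 1040.0
```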
Single-Frame Designs with Mode Option
This approach has been used in several surveys in which
the main mode of data collection is mail. Two of the
more prominent examples are the Decennial Census70
and the American Community Survey (ACS).71 In each
case, the mailed questionnaire contains an invitation to
complete the survey over the Internet. The goal is to
reduce the costs of data collection, but relatively few
ACS or Census respondents availed themselves of the
opportunity to provide the data via the web. This
method is based on the assumption that measurement
error is invariant across modes for the basic demo-
graphic items that these surveys collect. This assump-
tion is less likely to hold in the case of surveys on
sensitive topics.
In a variant of this approach, Link and Mokdad40
explored the feasibility of the web and mail as options
for the BRFSS. In two separate experiments conducted
in four states, a subset of an RDD sample that could be
matched to addresses was invited, by mail, to complete
a web or mail survey. Telephone follow-up was used for
nonrespondents. Link and Mokdad40 found that over-
all response rates were increased using this strategy,
relative to the telephone-only approach, but they found
significant demographic differences in those who com-
pleted the survey using the different modes.40 For
example, mail respondents were significantly older
than those who responded on the web, and both of the
self-administered groups were older than the telephone
respondents. They also found significant differences
between web and telephone modes in responses to
several questions on health conditions and risk behav-
iors. For example, even after controlling for differences
in demographic characteristics, web respondents were
consistently more likely than both mail and telephone
respondents to report that they had five or more drinks
on at least one occasion within the last 30 days.72
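Analyses of this kind typically regress the report on a mode indicator plus demographic controls; the residual mode coefficient is the estimated mode effect. The sketch below does this on synthetic data with invented variable names (mode, age, male, binge); it illustrates the general approach, not the Link and Mokdad analysis itself.

```python
# Mode-effect test controlling for demographics, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "mode": rng.choice(["telephone", "mail", "web"], size=n),
    "age": rng.integers(18, 85, size=n),
    "male": rng.integers(0, 2, size=n),
})
# Build a binge-drinking outcome with a deliberate web-mode effect.
lin = -1.5 - 0.02 * (df["age"] - 45) + 0.4 * df["male"] + 0.5 * (df["mode"] == "web")
df["binge"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# The mode coefficients, net of age and sex, estimate the mode effects.
fit = smf.logit("binge ~ C(mode, Treatment('telephone')) + age + male", data=df).fit()
print(fit.params)
```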
Panel Designs
There are also many ways to mix modes in the context
of panel studies. We focus here on the recruitment of
respondents via the telephone, with follow-up via mail
or Internet. This approach does not solve the coverage
or nonresponse concerns about the telephone mode,
but uses the pool of telephone respondents as a frame
for later surveys.
In a recent example, Fricker et al.73 conducted a
short telephone screener using RDD sampling meth-
ods. Those who reported Internet access were ran-
domly assigned to a web survey or a telephone inter-
view. The response rate for the screener was 42.5%. Of
those assigned to the web mode, 51.6% completed the
online survey, compared with 98.1% of those assigned
to the telephone interview, which immediately followed
the screener. Two similar studies74,75 had even lower
overall response rates. In another example of this
approach, called the Knowledge Networks panel, mem-
bers are recruited through RDD surveys and then
provided with Internet access in exchange for partici-
pation in online surveys.76,77
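As a simple check on what such two-stage designs deliver, the overall response rate is the screener rate times the conditional completion rate. The short calculation below uses the Fricker et al. figures quoted above.

```python
# Overall response rates implied by the Fricker et al. figures above.
screener = 0.425
for mode, completion in [("web", 0.516), ("telephone", 0.981)]:
    print(f"{mode}: {screener * completion:.1%} overall")
# web: 21.9% overall; telephone: 41.7% overall
```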
To summarize, while mixed-mode designs are an
attractive option for certain types of studies (particu-
larly for panel surveys and list-based samples), they are
not a panacea for the problems that ail RDD surveys of
the general population. In particular, the use of web
data collection as part of a mixed-mode strategy be-
comes an attractive option for survey researchers under
several conditions: (1) enough members of the sample
are both willing and able to respond via the web to
justify the investment in this mode; (2) the data quality
differences across the modes are negligible, permitting
the merging of data from different sources; and (3) the
introduction of the alternative mode offers some tan-
gible benefit such as increased overall response rates,
greater timeliness, or reduced costs.
Conclusion
Traditionally, survey researchers have relied on three
main methods of data collection—telephone interviews
with members of RDD samples, face-to-face interviews
with members of area probability samples, and mail
questionnaires sent to members of list samples. We
discussed these three methods (with ACASI used in
place of the traditional face-to-face interviews), plus two
relatively recent additions to the list—IVR interviews
and web surveys. Finally, we also briefly discussed
mixed-mode surveys that combine two or more of these methods.
Each of these methods of data collection has its pros
and cons, but no method stands out as clearly the right
choice for surveys on sensitive issues. As societal trends
make people increasingly harder to reach and more
difficult to persuade to take part in surveys,78 research
organizations will likely need to make considerable investments
in methodologic research just to maintain the current
levels of coverage and response rates. We believe such
investments are well worth the cost if surveys are to
continue to yield accurate information for important
public health decisions.
No financial conflict of interest was reported by the authors of
this paper.
References
1. Thornberry OT, Massey JT. Trends in United States telephone coverage
across time and subgroups. In: Groves RM, Biemer PP, Lyberg LE, Massey
JT, Nicholls WL II, Waksberg J, eds. Telephone survey methodology. New
York: Wiley, 1988:25–50.
2. Blumberg SJ, Luke JV, Cynamon ML. Telephone coverage and health
survey estimates: evaluating the need for concern about wireless substitu-
tion. Am J Public Health 2006;96:926–31.
3. Callegaro M, Poggio T. Where can I call you? The “mobile (phone)
revolution” and its impact on survey research and coverage error: a
discussion of the Italian case. Paper presented at RC33 6th International
Conference on Social Science Methodology, Amsterdam, August 16–20,
2004. Available at: www.websm.org/uploadi/editor/Callegaro_Poggio-
4. Kuusela V, Simpanen M. Effects of mobile phones on telephone surveys
practice and results. Paper presented at the International Conference on
Improving Surveys, Copenhagen, August 25–28, 2002.
5. Tucker C, Brick JM, Meekins B. Household telephone service and usage
patterns in the U.S. in 2004: implications for telephone samples. Paper
presented at the Annual Conference of the American Association for
Public Opinion Research, Miami FL, May 12–15, 2005.
6. Vehovar V, Belak E, Batagelj Z, Cikic S. Mobile phone surveys: the
Slovenian case study. Metodoloski zvezki 2004;1:1–19. Available at: http://
7. Link MW, Battaglia M, Frankel MR, Giambo P, Mokdad AH. Effectiveness
of address-based sampling frame alternative to RDD: BRFSS mail survey
experiment results. Paper presented at the Joint Statistical Meeting,
Minneapolis MN, August 7–11, 2005.
8. Yuan YA, Allen B, Brick M, et al. Surveying cell phone households—results
and lessons? Paper presented at the Annual Conference of the American
Association for Public Opinion Research, Miami FL, May 12–15, 2005.
9. Johnson TP, Holbrook AL, Cho YI, Bossarte RM. Nonresponse error in
injury risk surveys. Am J Prev Med 2006;31:427–36.
10. Curtin R, Presser S, Singer E. The effects of response rate changes on the
Index of Consumer Sentiment. Public Opinion Q 2000;64:413–28.
11. Curtin R, Presser S, Singer E. Changes in telephone survey nonresponse
over the past quarter century. Public Opinion Q 2005;69:87–98.
12. Tuckel P, O’Neill H. A profile of answering machine owners and screeners.
In: 1995 Proceedings of the Section on Survey Research Methods. Alexan-
dria VA: American Statistical Association, 1995:1157–62.
13. Groves RM, Couper MP. Nonresponse in household interview surveys. New
York: Wiley, 1998.
14. Keeter S, Miller C, Kohut A, Groves RM, Presser S. Consequences of
reducing nonresponse in a national telephone survey. Public Opinion Q
2000;64:125–48.
15. Groves RM. Research synthesis: nonresponse rates and nonresponse error
in surveys. Paper presented at the 16th International Workshop on
Household Survey Nonresponse, Tällberg, Sweden, August 28–31, 2005.
16. Groves RM, Kahn RL. Surveys by telephone: a national comparison with
personal interviews. New York: Academic Press, 1979.
17. de Leeuw ED, van der Zouwen J. Data quality in telephone and face to face
surveys: a comparative meta-analysis. In: Groves RM, Biemer PP, Lyberg LE,
Massey JT, Nicholls WL II, Waksberg J, eds. Telephone survey methodol-
ogy. New York: Wiley, 1988:273–99.
18. Smith TW. A comparison of telephone and personal interviewing. Chicago:
National Opinion Research Center, 1984 (GSS Methodological Report 28).
19. Aquilino WS. Telephone versus face-to-face interviewing for household
drug use surveys. Int J Addict 1992;27:71–91.
20. Johnson T, Hoagland J, Clayton R. Obtaining reports of sensitive behaviors:
a comparison of substance use reports from telephone and face-to-face
interview. Soc Sci Q 1989;70:174–83.
21. Aquilino WS. Interview mode effects in surveys of drug and alcohol use: a
field experiment. Public Opinion Q 1994;58:210–40.
22. McQueen DV. Comparison of results of personal interview and telephone
surveys of behavior related to risk of AIDS: advantages of telephone
techniques. In: Proceedings of the Health Survey Research Methods
Conference. Washington DC: U.S. Department of Health and Human Services.
23. Sykes W, Collins M. Effects of mode of interview: experiments in the UK.
In: Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL II, Waksberg
J, eds. Telephone survey methodology. New York: Wiley, 1988:301–20.
24. Fay R. An analysis of within-household undercoverage in the Current Population
Survey. Proceedings of the Bureau of the Census Fifth Annual Research Confer-
ence. Washington DC: U.S. Bureau of the Census, 1989:156–75.
25. Brick JM, Waksberg J, Kulp D, Starer A. Bias in list-assisted telephone
samples. Public Opinion Q 1995;59:218–35.
26. Goyder J. The silent minority. Boulder CO: Westview Press, 1987.
27. Hox JJ, de Leeuw E. A comparison of nonresponse in mail, telephone, and
face-to-face surveys. Qual Quant 1994;28:329–44.
28. Atrostic K, Bates N, Burt G, Silberstein A. Nonresponse in U.S. government
household surveys: consistent measures, recent trends, and new insights. J
Official Stat 2001;17:209–26.
29. O’Reilly J, Hubbard M, Lessler J, Biemer P, Turner CF. Audio and video
computer assisted self-interviewing: preliminary tests of new technology for
data collection. J Official Stat 1994;10:197–214.
30. Couper MP, Singer E, Tourangeau R. Understanding the effects of
audio-CASI on self-reports of sensitive behavior. Public Opinion Q
2003;67:385–95.
31. Lessler JT, Caspar RA, Penne MA, Barker PR. Developing computer
assisted interviewing (CAI) for the National Household Survey on Drug
Abuse. J Drug Issues 2000;30:9–34.
32. Tourangeau R, Smith TW. Asking sensitive questions: the impact of data
collection mode, question format, and question context. Public Opinion Q
1996;60:275–304.
33. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL.
Adolescent sexual behavior, drug use, and violence: increased reporting
with computer survey technology. Science 1998;280:867–73.
34. Aquilino W, Wright D, Supple A. Response effects due to bystander
presence in CASI and paper-and-pencil surveys of drug use and alcohol use.
Substance Use Misuse 2000;35:845–67.
35. Dillman DA. Mail and Internet surveys: the tailored design method. New
York: John Wiley, 2000.
36. Heberlein TA, Baumgartner R. Factors affecting response rates to mailed
surveys: a quantitative analysis of the published literature. Am Sociol Rev
1978;43:447–62.
37. Dillman DA, Tarnai J. Mode effects of cognitively-designed recall questions:
a comparison of answers to telephone and mail surveys. In: Biemer PP,
Groves RM, Lyberg LE, Mathiowetz NA, Sudman S, eds. Measurement
errors in surveys. New York: Wiley, 1991:73–93.
38. Hauser RM. Survey response in the long run: the Wisconsin Longitudinal
Study. Field Methods 2005;17:3–29.
39. Biemer PB, Lyberg LE. Introduction to survey quality. Hoboken NJ: Wiley, 2003.
40. Link MW, Mokdad A. Are web and mail modes feasible options for the
Behavioral Risk Factor Surveillance Survey? In: Cohen SB, Lepkowski JM,
eds. Proceedings of the Eighth Conference on Health Survey Research
Methods. Hyattsville MD: National Center for Health Statistics.
41. Turner CF, Lessler JT, Devore J. Effects of mode of administration and
wording on reporting of drug use. In: Turner C, Lessler J, Gfroerer J, eds.
Survey measurement of drug use: methodological studies. Rockville MD:
National Institute on Drug Abuse, 1992:177–220.
42. Schober S, Caces MF, Pergamit M, Branden L. Effects of mode of
administration on reporting of drug use in the National Longitudinal
Survey. In: Turner C, Lessler J, Gfroerer J, eds. Survey measurement of
drug use: methodological studies. Rockville MD: National Institute on Drug Abuse, 1992.
43. Couper MP. Web surveys: a review of issues and approaches. Public
Opinion Q 2000;64:464–94.
44. Nielsen/NetRatings. Three out of four Americans have access to the
Internet, 2004. Available at: http://direct.www.nielsen-netratings.com/
45. Nielsen/NetRatings. Active Internet users by country, July 2004. Available
46. Nielsen/NetRatings. Internet users earning $150k in household income
grow 20 percent year-over-year, leading all income groups, 2005. Available
47. U.S. Department of Commerce. A nation online: entering the broadband
age. Washington DC: Economics and Statistics Administration, National
Telecommunications and Information Administration, 2004.
48. Lozar Manfreda K, Vehovar V, Batagelj Z. Participation in solicited web
surveys: who comes farthest? Paper presented at the Fifth International
Conference on Social Science Methodology, Cologne, Germany, October
3–6, 2000. Available at: www.ris.org/si/ris2000/pub/Cologne2000_
49. Vehovar V, Batagelj Z, Lozar Manfreda K, Zaletel M. Nonresponse in web
surveys. In: Groves RM, Dillman D, Eltinge JL, Little JAR, eds. Survey
nonresponse. New York: Wiley, 2002:229–42.
50. Bosnjak M. Participation in nonrestricted web surveys: a typology and
explanatory model for item nonresponse. In: Reips U-D, Bosnjak M, eds.
Dimensions of Internet Science. Lengerich, Germany: Pabst Science Publishers, 2001.
51. Schonlau M. Conducting research surveys via e-mail and the web. Santa
Monica CA: RAND, 2002.
52. Cobanoglu C, Warde B, Moreo PJ. A comparison of mail, fax, and
web-based survey methods. Int J Market Res 2000;43:441–52.
53. McCabe SE, Boyd C, Couper MP, Crawford S, d’Arcy H. Mode effects for
collecting alcohol and other drug data: web and U.S. mail. J Stud Alcohol
2002;63:755–61.
54. Wygant S, Lindorf R. Surveying collegiate net surfers: web methodology or
mythology. Quirk’s Market Res Rev, July 1999. Available at: www.quirks.
55. Conrad FG, Couper MP, Tourangeau R, Peytchev A. Use and non-use of
clarification features in web surveys. J Official Stat. In press.
56. Peytchev A, Couper MP, McCabe SE, Crawford S. Web survey design: paging vs.
scrolling. Paper presented at the Annual Conference of the American Association for
Public Opinion Research, Phoenix AZ, May 13–16, 2004.
57. Couper MP, Tourangeau R. Picture this! Exploring visual effects in web
surveys. Public Opinion Q 2004;68:255–66.
58. Couper MP, Tourangeau R, Conrad F, Crawford S. What they see is what we
get: response options for web surveys. Soc Sci Comput Rev 2004;22:111–27.
59. Tourangeau R, Smith TW. Collecting sensitive information with different
modes of data collection. In: Couper MP, Baker RP, Bethlehem J, Clark
CZF, Martin J, Nicholls WL II, O’Reilly J, eds. Computer assisted survey
information collection. New York: Wiley, 1998:431–54.
60. Entin EE, Kerrigan C, Berbaum M, Lancey P, McCallum D. Issues of adaptive
Research Institute for the Behavioral and Social Sciences, 2001.
61. Kiesler S, Sproull L. Response effects in the electronic survey. Public
Opinion Q 1986;50:402–13.
62. Martin C, Nagao H. Some effects of computerized interviewing on job
applicant responses. J Appl Psychol 1989;74:72–80.
63. Moon Y. Intimate exchanges: using computers to elicit self-disclosure from
consumers. J Consum Res 2000;26:323–37.
64. Tourangeau R, Steiger DM, Wilson D. Self-administered questions by
telephone: evaluating interactive voice response. Public Opinion Q
2002;66:265–78.
65. Cooley PC, Miller HG, Gribble JN, Turner CF. Automating telephone
surveys: using T-ACASI to obtain data on sensitive topics. Comput Hum
Behav 2000;16:1–11.
66. Gribble JN, Miller HG, Cooley PC, Catania JA, Pollack L, Turner CF. The
impact of T-ACASI interviewing on reporting drug use among men who
have sex with men. Substance Use Misuse 2000;35:869–90.
67. Turner CF, Miller HG, Smith TK, Cooley PC, Rogers SM. Telephone audio
computer-assisted self-interviewing (T-ACASI) and survey measurement of
sensitive behaviors: preliminary results. In: Banks R, Fairgrieve J, Gerrard L,
Orchard T, Payne C, Westlake A, eds. Survey and statistical computing.
Chesham, Bucks, UK: Association for Statistical Computing, 1996:121–30.
68. Groves RM, Lepkowski JM. Dual frame, mixed mode survey designs. J
Official Stat 1985;622–7.
69. Brick JM, Ferraro D, Strickler T, Liu B. NSAF sample design. Washington
DC: Urban Institute, 2002.
70. Whitworth EM. Implementation and results of the Internet response mode for
Census 2000. Paper presented at the Annual Meeting of the American
Association for Public Opinion Research, Montreal, Canada, May 17–20, 2001.
71. Griffin DH, Fischer DP, Morgan MT. Testing an Internet response option
for the American Community Survey. Paper presented at the Annual
Meeting of the American Association for Public Opinion Research, Mon-
treal, Canada, May 17–20, 2001.
72. Link MW, Mokdad A. Effects of survey mode on self-reports of adult alcohol
consumption: a comparison of mail, web and telephone approaches. J Stud
Alcohol 2005;66:239–45.
73. Fricker S, Galesic M, Tourangeau R, Yan T. An experimental comparison of
web and telephone surveys. Public Opinion Q 2005;69:370–92.
74. Flemming G, Sonner M. Can Internet polling work? Strategies for conduct-
ing public opinion surveys online. Paper presented at the Annual Meeting
of the American Association for Public Opinion Research, St. Petersburg
Beach FL, May 13–16, 1999.
75. Neubarth W, Kaczmirek L, Bandilla W, Bosnjak M. Feasibility of a random
sample. Paper presented at RC33 6th International Conference on Social
Science Methodology, Amsterdam, August 16–20, 2004.
76. Huggins VJ, Dennis MJ, Seryakova K. An evaluation of nonresponse bias in
internet surveys conducted using the Knowledge Networks panel. In: 2002
Proceedings of the Survey Research Methods Section. Alexandria VA:
American Statistical Association, 2002:1525–30.
77. Pineau V, Slotwiner D. Probability samples v. volunteer respondents in
Internet research. Knowledge Networks, Inc., 2003. Available at: www.
78. Tourangeau R. Survey research and societal change. Annu Rev Psychol
2004;55:775–801.