Complementing Random-Digit-Dial Telephone Surveys with Other Approaches to Collecting Sensitive Data

Joint Program in Survey Methodology, University of Maryland, College Park, Maryland 20742, USA.
American Journal of Preventive Medicine 12/2006; 31(5):437-43. DOI: 10.1016/j.amepre.2006.07.023
Source: PubMed


Surveys of sensitive topics, such as the Injury Control and Risk Surveys (ICARIS) or the Behavioral Risk Factor Surveillance System (BRFSS), are often conducted by telephone using random-digit-dial (RDD) sampling methods. Although this method of data collection is relatively quick and inexpensive, it suffers from growing coverage problems and falling response rates. In this paper, several alternative methods of data collection are reviewed, including audio computer-assisted self-interviews as part of personal-visit surveys, mail surveys, web surveys, and interactive voice response surveys. Their strengths and weaknesses regarding coverage, nonresponse, and measurement are presented and compared with those of RDD telephone surveys. The feasibility of several mixed-mode designs is discussed; none stands out as clearly the right choice for surveys on sensitive issues, implying an increased need for methodologic research.

Available from: Roger Tourangeau, Aug 13, 2014
    • "As stated earlier, IVR approaches do not utilize live interviewers, who provide barriers to dropping out through psychological factors such as authority (e.g., people usually find it rude to hang up on an interviewer once engaged) and reciprocity (e.g., interviewers can provide additional encouragement and/or feedback to respondents to keep them engaged in the process) (Groves et al., 1992). Respondent fatigue on longer IVR surveys may also increase dropout rates due to the lack of interviewer barriers and/or respondent boredom with an automated system (Galesic et al., 2006). Survey completion patterns observed with IVR typically show an initial dropout during the transition to the automated system, followed by continued dropout throughout the survey (compared with only an initial dropout observed with CATI), suggesting that respondent fatigue and/or reaction to the IVR modality may result when interviewer barriers are removed (Galesic et al., 2006; Kreuter et al., 2008; Tourangeau et al., 2002). "
    ABSTRACT: Few methods estimate the prevalence of child maltreatment in the general population because of concerns about socially desirable responding and mandated-reporting laws. Innovative methods, such as interactive voice response (IVR), may obtain better estimates that address these concerns. This study examined the utility of IVR for surveying child maltreatment behaviors by assessing differences between respondents who completed and did not complete a survey using IVR technology. A mixed-mode telephone survey was conducted in English and Spanish in 50 cities in California during 2009. Caregivers (n=3,023) self-reported abusive and neglectful parenting behaviors for a focal child under the age of 13 using computer-assisted telephone interviewing and IVR. We used hierarchical generalized linear models to compare survey completion by caregivers nested within cities, for the full sample and for age-specific ranges. Among demographic characteristics, caregivers born in the United States were more likely to complete the survey when controlling for covariates. Parenting stress, provision of physical needs, and provision of supervisory needs were not associated with survey completion in the full multivariate model. For caregivers of children 0-4 years (n=838), those reporting they could often or always hear their child from another room had a higher likelihood of survey completion. The findings suggest IVR could prove useful for future surveys that aim to estimate abusive and/or neglectful parenting behaviors, given the limited bias observed for demographic characteristics and problematic parenting behaviors. Further research should expand on its utility to improve prevalence estimation.
    Child Abuse & Neglect 05/2014; 38(10). DOI:10.1016/j.chiabu.2014.04.001
    • "Despite studies that support findings from self-reported information [3,10], some scholars and practitioners perceive self-reported data to be unreliable estimates of health-factor prevalence. Moreover, in recent years telephone survey response rates have declined [5,11]. BRFSS response rates have also declined, from medians of 70–75 percent in the 1980s to a median of 57 percent in 2010 [2], resulting in targeted efforts to improve coverage and reach nonrespondents through the use of new contact methods, including cell phones [12,13], and to reduce nonresponse bias through the introduction of new weighting techniques [14]. "
    ABSTRACT: Background In recent years, response rates for telephone surveys have been declining. Rates for the Behavioral Risk Factor Surveillance System (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of BRFSS estimates in the context of these changes. As the BRFSS changes its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods To assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published between 2004 and 2011 dealing with tests of the reliability and validity of BRFSS measures was compiled and organized by health risk behavior topic. The quality of each publication was assessed using a categorical rubric. Higher rankings were given to authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests that compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data, and literature that undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to those of other national surveys that rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to those of surveys that utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others. Conclusions Limitations of this examination of the BRFSS stem from question differences among the surveys used as comparisons, as well as from differences in mode of data collection. As the BRFSS moves to incorporate cell phone data and changed weighting methods, the review of reliability and validity research indicated that past landline-only BRFSS data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data that include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on future estimates.
    BMC Medical Research Methodology 03/2013; 13(1):49. DOI:10.1186/1471-2288-13-49
    • "But this does not quite answer the question about the actual accuracy of what is provided. Galesic et al. (2006) suggested that the absence of interviewers in web surveys may reduce social desirability bias, but again there is no guarantee that respondents will report accurately, and there is even less of a chance for survey administrators to perform validation and verification. "
    ABSTRACT: Surveys have long been a standard data collection tool. More recently, web surveys have become particularly popular given their cost-effectiveness on one hand and the ubiquitous nature of the internet and the World Wide Web on the other. Regardless of the type of survey, a number of issues still challenge survey researchers and practitioners. The accuracy of data collected for socio-demographic and other factual-type research questions is of utmost importance if the researcher is to make any claim about the data collected. Accuracy, as a characteristic of data quality, is perhaps the most important issue of all. This paper reviews the body of literature on data quality and identifies and analyzes studies on the accuracy and reliability of data. It critically examines the most significant published research that addresses the issue of the accuracy and reliability of survey data. Specifically, this review critiques existing research methods, identifies and discusses important limitations, and concludes with a discourse on key questions to be answered and suggestions for future research.