To assess whether a mixed-mode survey design reduced bias and enhanced a method commonly used to correct for it (poststratification weighting).
The data for this paper are from a study of 1,900 adult patients enrolled in a randomized controlled trial to promote repeat treatment for relapsed smokers at five Veterans Affairs Medical Centers. A sequential mixed-mode design was used for data collection, in which the initial attempt used telephone administration, with mail follow-up for nonresponders. Analyses examined demographic, health, and smoking cessation treatment-seeking differences between telephone responders, mail responders, and nonresponders, and compared the relative effectiveness of global vs. targeted poststratification weighting adjustments for correcting response bias.
The findings suggest (1) that responders to the additional survey mode (mail) did not significantly differ from responders to the first mode (phone) or nonresponders and (2) that poststratification weighting adjustments that take this additional information into account perform better than the standard global adjustments.
A mixed-mode design can improve survey representativeness and enhance the performance of poststratification weighting adjustments.
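The poststratification idea referenced above can be illustrated with a minimal sketch. This is not the paper's actual procedure or data; the cell labels, population shares, and sample are hypothetical, and the sketch shows only the core mechanic: each respondent in a cell receives a weight equal to that cell's population share divided by its sample share, so the weighted sample matches the population margins. A "targeted" adjustment in this spirit would simply define cells more finely (e.g., crossing demographics with response mode).

```python
from collections import Counter

def poststratify(sample_cells, population_props):
    """Poststratification weights: a respondent in cell c gets
    weight = (population share of c) / (sample share of c), so the
    weighted sample reproduces the population distribution over cells."""
    n = len(sample_cells)
    sample_props = {c: k / n for c, k in Counter(sample_cells).items()}
    return [population_props[c] / sample_props[c] for c in sample_cells]

# Hypothetical data: population is 50% "young" / 50% "old", but the
# sample over-represents "old" respondents (8 of 10).
sample = ["young"] * 2 + ["old"] * 8
weights = poststratify(sample, {"young": 0.5, "old": 0.5})

# Weighted share of "young" respondents is restored to the population's 0.5.
young_share = sum(w for c, w in zip(sample, weights) if c == "young") / sum(weights)
```

Here the two "young" respondents each receive weight 0.5/0.2 = 2.5 and the eight "old" respondents 0.5/0.8 = 0.625, so the weighted composition matches the population.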
"In the second and third rounds of phone calls (reminders), respondents were also offered face-to-face surveys. The mixed-mode approach is accepted as a useful method to improve the response rate of surveys and their representativeness (Dillman et al., 2009; Baines et al., 2007)."
ABSTRACT: Many proposals to improve biodiversity governance target the stage of policy formulation. In this paper we highlight the importance of the subsequent policy realization stage, which is mostly carried out by sub-national administrative levels. We explore the differences in the opinions of practitioners representing regional and local public institutions in conservation policy design and implementation. The research was conducted through surveying a representative sample of local and regional practitioners within Małopolska, Poland. The results illustrate a cross-level mismatch between the regional and local practitioners. That is, practitioners operating at different administrative levels have significantly different opinions on nature conservation system performance, system effectiveness, the distribution of power among actors, and on the allocation of costs and benefits stemming from nature conservation. Local level representatives are generally more pleased with overall nature conservation performance and its outcomes, while regional level representatives are more skeptical, especially toward local level performance and the overall effectiveness of nature conservation. Also, local level respondents are more critical, while regional practitioners hold more positive images of the procedures involved during policy implementation. We highlight the practical implications of this kind of research, and the importance of quantitative data in evaluating the overall performance of conservation policy.
"This indicates there is no selective attrition of the most vulnerable T1-hard-to-recruit participants across the four measurement waves. We may conclude that extensive recruitment effort not only increases the representativeness of the sample at the initial assessment waves [5,11,12], but does so eight years later as well. This is an important finding."
ABSTRACT: Background
Extensive recruitment effort at baseline increases representativeness of study populations by decreasing non-response and associated bias. First, it is not known to what extent increased attrition occurs during subsequent measurement waves among subjects who were hard-to-recruit at baseline and what characteristics the hard-to-recruit dropouts have compared to the hard-to-recruit retainers. Second, it is unknown whether characteristics of hard-to-recruit responders in a prospective population based cohort study are similar across age group and survey method.
First, we compared first wave (T1) easy-to-recruit with hard-to-recruit responders of the TRacking Adolescents’ Individual Lives Survey (TRAILS), a prospective population based cohort study of Dutch (pre)adolescents (at first wave: n = 2230, mean age = 11.09 (SD 0.56), 50.8% girls), with regard to response rates at subsequent measurement waves. Second, easy-to-recruit and hard-to-recruit participants at the fourth TRAILS measurement wave (n = 1881, mean age = 19.1 (SD 0.60), 52.3% girls) were compared with fourth wave non-responders and earlier stage drop-outs on family composition, socioeconomic position (SEP), intelligence (IQ), education, sociometric status, substance use, and psychopathology.
First, over 60% of the hard-to-recruit responders at the first wave were retained in the sample eight years later at the fourth measurement wave. Hard-to-recruit dropouts did not differ from hard-to-recruit retainers. Second, extensive recruitment efforts for the web-based survey convinced a population of nineteen-year-olds with characteristics similar to those of the hard-to-recruit eleven-year-olds who were persuaded to participate in a school-based survey. Some characteristics associated with being hard-to-recruit (as compared to being easy-to-recruit) were more pronounced among non-responders, resembling the baseline situation (De Winter et al. 2005).
First, extensive recruitment effort at the first assessment wave of a prospective population based cohort study has long lasting positive effects. Second, characteristics of hard-to-recruit responders are largely consistent across age groups and survey methods.
BMC Medical Research Methodology 07/2012; 12(1):93. DOI:10.1186/1471-2288-12-93
ABSTRACT: To examine the impact of response rate variation on survey estimates and costs in three health telephone surveys.
Three telephone surveys of noninstitutionalized adults in Minnesota and Oklahoma conducted from 2003 to 2005.
We examine differences in demographics and health measures by number of call attempts made before completion of the survey or whether the household initially refused to participate. We compare the point estimates we actually obtained with those we would have obtained with a less aggressive protocol and subsequent lower response rate. We also simulate what the effective sample sizes would have been if less aggressive protocols were followed.
Unweighted bivariate analyses reveal many differences between early completers and those requiring more contacts and between those who initially refused to participate and those who did not. However, after making standard poststratification adjustments, no statistically significant differences were observed in the key health variables we examined between the early responders and the estimates derived from the full reporting sample.
Our findings demonstrate that for the surveys we examined, larger effective sample sizes (i.e., more statistical power) could have been achieved with the same amount of funding using less aggressive calling protocols. For some studies, money spent on aggressively pursuing high response rates could be better used to increase statistical power and/or to directly examine nonresponse bias.
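The trade-off described above, between pursuing high response rates and preserving effective sample size, can be sketched with Kish's standard approximation for the design effect of unequal weights. This is a general textbook formula, not the simulation the authors performed; the weight vectors below are hypothetical.

```python
def kish_effective_n(weights):
    """Kish's approximation: n_eff = (sum w)^2 / sum(w^2).
    The more unequal the weights (e.g., from aggressive nonresponse
    adjustment), the smaller the effective sample size."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Equal weights: no precision loss, effective n equals nominal n.
equal = kish_effective_n([1.0] * 100)

# Unequal weights (half the sample weighted up threefold) shrink n_eff
# below the nominal 100, illustrating the lost statistical power.
unequal = kish_effective_n([1.0] * 50 + [3.0] * 50)
```

In this illustration the nominal sample of 100 with weights of 1 and 3 behaves like an equal-weight sample of only 80, which is the sense in which money spent chasing reluctant responders can cost statistical power.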
Health Services Research 10/2010; 45(5 Pt 1):1324-44. DOI:10.1111/j.1475-6773.2010.01128.x