“Money Will Solve the Problem”:
Testing the Effectiveness of Conditional Incentives for Online Surveys
Whitney DeCamp
Western Michigan University
Matthew J. Manierre
Clarkson University
The data used in this research were collected using funding made available by Western Michigan
University through the Arts and Sciences Teaching and Research Award. The authors wish to
thank Western Michigan University and its College of Arts and Sciences for their generous
support.
DeCamp, Whitney, and Matthew J. Manierre. (2016). “Money Will Solve the Problem”: Testing
the Effectiveness of Conditional Incentives for Online Surveys. Survey Practice, 9.
“Money Will Solve the Problem”:
Testing the Effectiveness of Conditional Incentives for Online Surveys
Abstract
Cash and other rewards are often used to encourage survey participation, typically in the hope of increasing response rates and, therefore, the representativeness of the responding sample. The effect of incentives has generally been shown to be positive, but results have been mixed for conditional incentives and for online surveys. Using an experimental design, this study draws on a random sample of undergraduate students to estimate group differences, incorporating both official and self-report data. Participants were randomly assigned to one of three groups with varying incentives: five dollars, two dollars, or nothing. Results indicate that a five-dollar conditional reward credited to students’ campus ID card account increases participation rates and improves the representativeness of the sample, but does not appear to significantly change substantive conclusions.
Introduction
Having a representative sample is important for virtually any quantitative study. Often, the
response rate is a key indicator used to determine if it is plausible that the sample might have a
bias (Groves 2006), causing researchers to seek methods for improving response rates to ensure
better representation. Financial incentives have often been used to increase response rates and
have been regarded as a successful approach in field-based studies where cash can be offered in-
person (Dillman et al. 2009; Singer and Ye 2013). Increased use of Internet-based surveys has
made it burdensome to deliver up-front incentives, however. Therefore, an important question is whether an incentive provided upon completion increases response rates for electronic surveys. In addition, a better understanding is needed of whether these economic incentives alter sample characteristics by changing who responds to the incentive, thereby increasing nonresponse bias. This study tests the effectiveness of conditional monetary incentives for web surveys through an experimental design in which participants are randomly assigned to an incentive group or a no-incentive group.
Groves and colleagues’ (2000) leverage-saliency theory of survey participation provides the theoretical framework for the present study. This framework posits that survey participation depends in part on the survey request’s emphasis on particular elements of the survey (salience) and on the weight respondents attach to those elements (leverage); promised incentives are hypothesized to work because they add a salient element (payment) whose leverage becomes increasingly positive as compensation increases (Singer and Ye 2013). Regardless of the amount offered, the leverage attributed to the survey incentive depends on the individual; not everyone will react the same way (Groves et al. 2000). Therefore, it is also important to consider the possibility that sample demographics and responses may be biased because some individuals are more motivated by financial incentives than others.
Monetary incentives typically fall into three categories: 1) prepaid incentives, given to respondents before they have completed the survey; 2) conditional incentives, given to respondents after they have completed the survey; and 3) lotteries, in which only a few respondents are rewarded, used for their low cost. The literature has explored the effectiveness of each of these incentives, though research on incentives in web surveys is still uncommon. The prepaid cash incentive is widely regarded as the best method of improving survey response for mail and in-person surveys (Church 1993; Kypri and Gallagher 2003; Birnholtz et al. 2004; Dillman et al. 2009), but is frequently infeasible for online surveys (Porter and Whitcomb 2003; Hoonakker and Carayon 2009). Conditional incentives, in which the respondent is guaranteed a reward after completing an online survey, are less researched and have yielded mixed results, ranging from null effects to clear improvements (Birnholtz et al. 2004; Göritz 2004; Göritz 2006; Patrick et al. 2013).
Implicit in the pursuit of high response rates is the assumption that a higher response rate corresponds to a better sample, but even surveys with high response rates can suffer from bias. When incentives are used to motivate response, it is hoped that they improve response for every member of the population, but different groups may interpret the “weight” of incentives differently (Groves et al. 2000). In mail surveys, a variety of incentives have been found to produce differences in demographic characteristics (Ryu et al. 2005; Teisl et al. 2006). Few studies have examined the effect of incentives on nonresponse bias in online surveys, and none has explored the effect of conditional incentives. Two studies have found that lottery incentives disproportionately attract female respondents to web surveys (Heerwegh 2006; Laguilles et al. 2011). A similar result was found for prepaid incentives in online surveys: a group receiving a $2 prepaid incentive was less reflective of administrative records than the control group, which matched official records for gender, whereas the incentivized group was disproportionately female (Parsons and Manierre 2013). Some research also suggests that the gender gap is more pronounced with smaller dollar amounts, but persists with higher values nonetheless (Boulianne 2013).
Overall, evidence is inconsistent regarding how effective conditional incentives are for
online surveys. Whether this potential improvement in survey response actually makes the
sample more representative is also in question. The present study examines these lingering
questions using an experimental design with official data to analyze for nonresponse bias. The
following hypotheses are posed based on the literature and leverage-salience theory: a) offering
these incentives will result in a better response rate, and increasing the size of the incentive
promised will further improve the response rate for the web survey, b) a better response rate will
result in different demographic characteristics, which may be more or less accurate in
representing the population, and c) incentive groups will have significantly different responses to
substantive items on the survey.
Methods
An experimental design is used to test the effectiveness of conditional incentives. For the sample,
1,000 full-time undergraduates were randomly selected at a large Midwestern American university.
The selection process was performed with coordination from the university’s research office
under the approval of the institutional review board. These students were randomly assigned to
one of three groups and were invited to participate in a survey on “college behaviors.” The email
invitation contained a unique link to the survey with no additional login credentials required,
which made access simple and convenient. The unique link stopped working after survey
completion, preventing duplicate responses from the same link. The students were promised
confidentiality and their names were removed from all datasets.
Those assigned to the control group (n=600) were sent an invitation without any
discussion of incentives. Students assigned to the first experimental group (n=250) or the second
(n=150) were given the same invitation, but with a promise of a $2 or $5 (respectively) credit
being applied to their student ID card. To maximize the utility of the incentive, they were told
that they would be able to select the type of credit: for use at the bookstore (which has a large
selection of merchandise) or for use at dining services (including various options across the
campus). All students equally benefited from such a credit, as all students are required to have
student ID cards. The invitation emails were sent on the Monday of the third week of classes in
the fall 2012 semester. Reminder emails were sent on the fourth, ninth, and seventeenth days to
potential participants who had not yet responded or unsubscribed (only 3.9 percent of students opted out).
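
For readers who want to see the assignment mechanics concretely, the following Python sketch mirrors the design described above (a simple random sample of 1,000 students split into groups of 600, 250, and 150). It is an illustration of the procedure, not the authors' code, and the frame of student IDs is hypothetical.

```python
import random

def assign_groups(student_ids, seed=2012):
    """Randomly draw 1,000 students and split them into the three invitation groups."""
    rng = random.Random(seed)
    sample = rng.sample(student_ids, 1000)   # simple random sample of 1,000 students
    return {
        "control": sample[:600],             # invitation with no incentive mentioned
        "two_dollar": sample[600:850],       # invitation promising a $2 ID-card credit
        "five_dollar": sample[850:],         # invitation promising a $5 ID-card credit
    }

# Hypothetical sampling frame of enrolled full-time undergraduates.
frame = [f"student_{i:05d}" for i in range(20000)]
groups = assign_groups(frame)
print({name: len(ids) for name, ids in groups.items()})
# {'control': 600, 'two_dollar': 250, 'five_dollar': 150}
```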
In addition to simply comparing the response rates based on the incentives offered,
official individual-level data were provided by the university for use in determining whether any
response biases are present among the groups. The official data include: gender, race (non-
Hispanic white or other), age, nationality (American or international), state residency (in-state
tuition rate or not), campus residency (lives on-campus or not), class status in years completed (0
= freshman, 1 = sophomore, etc.), and GPA. The substantive questions on the survey can also
provide some insight into differences that might occur across incentive groups. The variables
used from the survey include measures of personal characteristics, substance use, victimization,
and offending. The true population proportions for these indicators are unknown, but bias can be
identified by comparing the three study groups against one another. Comparisons between
groups are performed using chi-square tests or t-tests as appropriate for the level of measurement.1
Results
During the three weeks in which the survey was available, 322 students (32.2 percent)
responded, including 182 (30.4 percent) in the control group, 77 (30.8 percent) in the two-dollar
group, and 63 (42.0 percent) in the five-dollar group. Students in the five-dollar group were
significantly more likely to respond than those in either the control or two-dollar groups (p = .021). The graph in Figure 1 illustrates that the incentive group has a higher response rate only during the first week, suggesting that the incentive has a stronger effect on early respondents.
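
As a rough check on these figures, the response counts can be arranged in a 3 x 2 contingency table and tested with a standard chi-square test of independence. The paper does not report which specific test produced p = .021, so the sketch below is one plausible reconstruction that yields a p-value close to the one reported.

```python
from scipy.stats import chi2_contingency

# Rows: control, two-dollar, five-dollar; columns: responded, did not respond.
table = [
    [182, 600 - 182],
    [77, 250 - 77],
    [63, 150 - 63],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # p comes out close to .02
```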
The differences between the population parameters (official data) and the sample estimate
are displayed in Table 1. The control group has significant deviations from administrative records
in the form of higher observed GPAs, a larger proportion of females, and a larger proportion of
students living on-campus. The two-dollar group similarly had a larger proportion of females
than the original sampling frame. Conversely, the five-dollar group was virtually identical to the
sampling frame. Overall, none of the groups were radically different from the population, though
the five-dollar group was closest to the original sampling frame.
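
To make the Table 1 comparisons concrete, the sketch below tests one of the starred entries: the control group's share of female respondents against the share in the full control sample. The authors describe chi-square and t-tests, so the exact binomial test used here is a simple stand-in rather than their exact procedure.

```python
from scipy.stats import binomtest

n_respondents = 182                       # control-group respondents
n_female = round(0.5824 * n_respondents)  # 106 women among respondents (58.24%)
frame_share = 0.4891                      # share of women in the full control sample

result = binomtest(n_female, n_respondents, frame_share)
print(f"observed share = {n_female / n_respondents:.4f}, p = {result.pvalue:.3f}")
# p falls below .05, consistent with the asterisk in Table 1
```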
The self-reported question responses are displayed in Table 2. Although the percentages
do vary from group to group, there were no significant differences for any of these measures with
the sole exception of academic cheating on exams. Caution should be used in interpreting the significance here, as these analyses included 22 chi-square tests and the significance level (p < .10) allows a one-in-ten chance of incorrectly rejecting the null hypothesis on any single test. Given 22 tests and a 10 percent error rate, it is likely, even expected, that something would be significant by chance alone. Thus, there is more evidence suggesting similarities than dissimilarities.
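
A quick calculation illustrates this multiple-testing point, under the simplifying assumption that the 22 tests are independent:

```python
alpha, n_tests = 0.10, 22
expected_false_positives = alpha * n_tests        # 2.2 spurious "significant" results expected
prob_at_least_one = 1 - (1 - alpha) ** n_tests    # roughly 0.90
print(expected_false_positives, round(prob_at_least_one, 2))
```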
1 Marginal significance (p < .10) is reported in the analyses given the moderate sample sizes of the groups. As will be demonstrated in the results, this is not done to over-report significant differences. Rather, given that very few tests show significance falling in this gray area, it is done to provide further evidence that differences are negligible.
Figure 1: Responses as a Percent of Total Group Sample by Day
Table 1: Sample characteristics compared to administrative data by group

                      Control Group           Two-Dollar Group        Five-Dollar Group
                      Full       Respondents  Full       Respondents  Full       Respondents
                      Sample     Only         Sample     Only         Sample     Only
Mean Age              21.04      21.10        21.16      20.79        20.90      21.06
Mean Class Year        1.54       1.49         1.60       1.56         1.55       1.62
Mean GPA               2.96       3.12**       3.09       3.20         3.06       3.20
% Female              48.91      58.24*       47.20      58.44*       50.00      49.21
% Racial Minority     23.95      20.81        17.65      20.00        25.17      23.33
% State Resident      91.49      91.76        88.80      88.31        90.00      90.48
% International        4.01       4.40         4.40       2.60         4.67       4.76
% On-Campus           32.05      41.21*       26.80      29.87        30.67      33.33
Sample Size           n=600      n=182        n=250      n=77         n=150      n=63

Note: p < .10; * p < .05; ** p < .01
Table 2: Self-report characteristics (percentages) by group

                                  Control   Two-Dollar   Five-Dollar
Athlete                            32.97      29.87        30.16
Religion (Christian)               63.74      59.74        58.73
Religion (Other Religion)           7.14       5.19         7.94
Religion (Atheist/Agnostic/None)   29.12      35.06        33.33
Sexual Activity                    65.56      67.53        61.90
Unprotected Sex                    28.81      36.84        36.51
Cigarette Use                      28.89      19.74        25.40
Alcohol Use                        77.35      76.62        69.84
Binge Drinking                     52.49      48.68        49.21
Marijuana Use                      22.91      22.67        26.98
Other Illegal Drugs                28.73      29.73        31.75
Non-Prescribed Rx Use              22.10      26.32        28.57
Theft Victim                       31.64      29.87        33.33
Assault Victim                     18.08      14.29        20.63
Robbery Victim                      9.09       5.19         3.17
Academic Cheating                  10.17       9.21        20.63
Assault                            11.30      13.16        12.70
Theft                               5.65       2.63         3.23
Forgery / Fake ID                   3.98       6.58         4.84
DUI Alcohol                        12.99      14.47        12.70
DUI Drugs                          18.18      18.42        14.52
DUI Overall                        22.60      25.00        25.40
The $5 group is significantly different from the control (p < .05) and $2 (p < .10) groups
for cheating. No other variables are significantly different between groups.
Worth considering is that a lack of significance in a relationship does not necessarily indicate that there is no relationship. Significance tests examine single relationships while ignoring trends across multiple tests. In other words, although the groups are not significantly different, they could be consistently different. To examine this possibility, the deviation of each group from the others was computed for each variable. When the deviations for each variable are averaged, the two-dollar group has an average deviation from the control group of -0.27 percentage points. The five-dollar group has an average deviation from the control group of 0.32 and from the two-dollar group of 0.59. The average deviation is therefore less than one percentage point for each comparison. Thus, rather than reflecting any trend toward bias, substantive differences between groups appear to be more akin to random noise that averages nearly zero when taken as a whole; incentive groups do not have significantly or substantively different responses to items on the survey.
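
The average-deviation calculation can be illustrated with a small Python sketch. The subset of Table 2 rows used below is purely illustrative; the figures reported in the text (-0.27, 0.32, and 0.59) average across all 22 items, and minor discrepancies can arise from rounding in the published percentages.

```python
# Illustrative subset of Table 2 (percentages: control, two-dollar, five-dollar).
rows = {
    "Alcohol Use":    (77.35, 76.62, 69.84),
    "Binge Drinking": (52.49, 48.68, 49.21),
    "Marijuana Use":  (22.91, 22.67, 26.98),
    "Theft Victim":   (31.64, 29.87, 33.33),
    "Assault":        (11.30, 13.16, 12.70),
}

def mean_deviation(idx_a, idx_b):
    """Average percentage-point difference between two groups across items."""
    diffs = [vals[idx_a] - vals[idx_b] for vals in rows.values()]
    return sum(diffs) / len(diffs)

print("two-dollar vs control:", round(mean_deviation(1, 0), 2))
print("five-dollar vs control:", round(mean_deviation(2, 0), 2))
print("five-dollar vs two-dollar:", round(mean_deviation(2, 1), 2))
```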
Discussion
In line with leverage-saliency theory, this experimental design supports the assertion that offering
a conditional reward as payment for completing a survey does have an impact on response rates.
This is evidenced by the increase in response rates from 30% in the control group to 42% in the
five-dollar group. However, the lack of significant improvement from the two-dollar group
implies that the impact is tied to the amount of the reward. These findings lend support to
leverage-saliency theory’s assertion that this type of incentive functions as a payment rather than
a trigger for reciprocity, as reciprocity would have resulted in both incentives increasing response
rates.
Research has demonstrated that conditional incentives increase response rates in some
situations (Dillman et al. 2009; Singer and Ye 2013), but the empirical support for that effect is
mixed for online surveys (e.g., Birnholtz et al. 2004; Singer and Ye 2013). It is possible that this study contradicts prior null findings because the reward was provided electronically and respondents could choose their compensation type, increasing the leverage
of what is often a static and delayed reward.
It also appears that the higher response rate of the five-dollar group coincided with
improved representativeness compared to both the control group and small incentive group. This
improvement likely indicates that the incentive has higher leverage among college students who
are normally unlikely to respond, such as off-campus students and men. This is a key finding
given that some prior studies of college students have found that increasing the response rate
through prepaid and lottery incentives may further bias responses in web surveys towards certain
groups (Heerwegh 2006; Laguilles et al. 2011; Parsons and Manierre 2013; Boulianne 2013).
Contrary to prior studies, these data suggest that a five-dollar conditional incentive may increase
representativeness while also improving response.
Just as response rates are used as a proxy for representativeness, accurate demographics are assumed to correspond to more representative measurement of dependent and
independent variables. When examining substantive survey items, the vast majority of measures
indicated no significant differences based on the incentive offered and the response rate
achieved, suggesting that neither has a significant impact on substantive conclusions.
Although offering several advances, the present study has a few limitations. First, the
novel reward distribution system used here to provide electronic deposits to ID cards may affect
generalizability, as cash or other types and sizes of rewards might yield different effects. Second,
this study focused exclusively on college students, and the effectiveness of incentives may vary
based on the target audience. Finally, it is possible that the groups in this study are too small to
detect statistically significant deviations from administrative records in some cases. Most of the
deviations were substantively small, however, so it is unlikely that the core conclusions would be
changed by a larger sample.
In addition to replicating this study, future research should expand on these findings. It
remains unclear whether the use of conditional rewards for web surveys is more or less effective
than the use of raffle designs or a prepaid cash incentive. It would also be beneficial to examine the effect of allowing respondents to choose their reward, as they could in this study, since this may increase the appeal of the incentive. This choice element may help to explain this study’s
somewhat surprising findings, which contradict much of the literature on promised incentives
and mail surveys.
Until future research can clarify these and other unanswered questions, the case for conditional incentives remains open. In sum, this study suggests that a sufficiently large promised incentive may help to improve web survey response among college students while also improving the representativeness of the data. However, it remains unclear whether there are sufficient substantive gains to justify this investment. On one hand, the increased response rate and representativeness suggest that there is a benefit worth the increased cost. On the other hand, the minimal differences on substantive items imply that conclusions are not necessarily affected. Overall, this research suggests that incentives are effective, but also that failing to use incentives may not necessarily result in “bad data” that are substantively less valuable. Therefore, it is advised that web survey researchers base their decision on whether to use a conditional incentive on whether demographically accurate data or additional statistical power are required, rather than on the prospect of enhanced data on substantive issues.
References
Birnholtz, J. P., Horn, D. B., Finholt, T. A., & Bae, S. J. (2004). The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of technologically sophisticated respondents. Social Science Computer Review, 22, 355-362.
Boulianne, S. (2013). Examining the gender effects of different incentive amounts in a web survey. Field Methods, 25(6), 91-104.
Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). New York: Wiley.
Göritz, A. S. (2004). The impact of material incentives on response quantity, response quality, sample composition, survey outcome, and cost in online access panels. International Journal of Market Research, 46(3), 327-345.
Göritz, A. S. (2006). Incentives in web studies: Methodological issues and a review. International Journal of Internet Science, 1, 58-70.
Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646-675.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64, 299-308.
Heerwegh, D. (2006). An investigation of the effect of lotteries on web survey response rates. Field Methods, 18, 205-220.
Hoonakker, P., & Carayon, P. (2009). Questionnaire survey nonresponse: A comparison of postal mail and internet surveys. International Journal of Human-Computer Interaction, 25, 348-373.
Kypri, K., & Gallagher, S. J. (2003). Incentives to increase participation in an internet survey of alcohol use: A controlled experiment. Alcohol & Alcoholism, 38(5), 437-441.
Laguilles, J. S., Williams, E. A., & Saunders, D. B. (2011). Can lottery incentives boost web survey response rates? Findings from four experiments. Research in Higher Education, 52, 537-553.
Parsons, N. L., & Manierre, M. J. (2013). Investigating the relationship among prepaid token incentives, response rates, and nonresponse bias in a web survey. Field Methods, 1-14.
Patrick, M. E., Singer, E., Boyd, C. J., Cranford, J. A., & McCabe, S. E. (2013). Incentives for college student participation in web-based substance use surveys. Addictive Behaviors, 38(8), 1710-1714.
Porter, S. R., & Whitcomb, M. E. (2003). The impact of lottery incentives on student survey response rates. Research in Higher Education, 44, 389-407.
Ryu, E., Couper, M. P., & Marans, R. W. (2005). Survey incentives: Cash vs. in-kind; face-to-face vs. mail; response rate vs. nonresponse error. International Journal of Public Opinion Research, 18(1), 89-106.
Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science, 645, 112-141.
Teisl, M. F., Roe, B., & Vayda, M. (2006). Incentive effects on response rates, data quality, and survey administration costs. International Journal of Public Opinion Research, 18(3), 364-373.