“Money Will Solve the Problem”:
Testing the Effectiveness of Conditional Incentives for Online Surveys
Whitney DeCamp
Western Michigan University
Matthew J. Manierre
Clarkson University
The data used in this research were collected using funding made available by Western Michigan
University through the Arts and Sciences Teaching and Research Award. The authors wish to
thank Western Michigan University and its College of Arts and Sciences for their generous
support.
DeCamp, Whitney, and Matthew J. Manierre. (2016). “Money Will Solve the Problem”: Testing
the Effectiveness of Conditional Incentives for Online Surveys. Survey Practice, 9.
“Money Will Solve the Problem”:
Testing the Effectiveness of Conditional Incentives for Online Surveys
Abstract
Cash and other rewards are often used to incentivize survey participation, typically in the hope of
increasing response rates and, therefore, the representativeness of the responding sample. The
effectiveness of incentives has generally been shown to be positive, but results have been mixed for
conditional incentives and for online surveys. Using an experimental design, this study draws on a
random sample of undergraduate students to estimate group differences, incorporating both official
and self-report data. Participants were randomly assigned to one of three groups with incentives of
five dollars, two dollars, or nothing. Results indicate that a five-dollar conditional reward credited
to students’ campus ID card accounts increases participation rates and improves the
representativeness of the sample, but does not appear to significantly change substantive
conclusions.
Introduction
Having a representative sample is important for virtually any quantitative study. Often, the
response rate is a key indicator used to determine if it is plausible that the sample might have a
bias (Groves 2006), causing researchers to seek methods for improving response rates to ensure
better representation. Financial incentives have often been used to increase response rates and
have been regarded as a successful approach in field-based studies where cash can be offered in-
person (Dillman et al. 2009; Singer and Ye 2013). Increased use of Internet-based surveys has
made it burdensome to deliver up-front incentives, however. Therefore, an important question is
whether an incentive provided upon completion increases response rates for electronic surveys.
In addition, a better understanding is needed of whether these economic incentives alter sample
characteristics by changing who responds to the incentive, thereby increasing non-response bias.
This study tests the effectiveness of conditional monetary incentives for web surveys
through an experimental design in which participants are randomly assigned to an incentive or
incentive-less group.
Groves and colleagues’ (2000) leverage-saliency theory of survey participation provides
the theoretical framework for the present study. This framework posits that survey participation
depends in part on the survey request’s emphasis on particular elements of the survey (salience)
and leverage; the promised incentives are hypothesized to work because they add a salient
element (payment) with increasingly positive leverage as compensation increases (Singer and
Ye 2013). Regardless of the amount offered, the leverage attributed to the survey incentive depends
on the individual: not everyone will react the same way (Groves et al. 2000). Therefore, it is also
important to consider the possibility that survey demographics and responses may be biased
because some individuals are more motivated by financial incentives than others.
Monetary incentives typically fall into three categories: 1) prepaid incentives are given to
the respondent before they have completed the survey, 2) conditional incentives are given to
respondents after they have completed the survey, and 3) lottery incentives reward only a few
randomly selected respondents, keeping costs low. The literature has explored the
effectiveness of each of these incentives, though research on incentives in web surveys is still
uncommon. The prepaid cash incentive is widely regarded as the best method of improving
survey response for mail and in-person surveys (Church 1993; Kypri and Gallagher 2003;
Birnholtz 2004; Dillman et al. 2009), but is frequently infeasible for online surveys (Porter and
Whitcomb 2003; Hoonakker and Carayon 2009). Conditional incentives, in which the respondent
is guaranteed a reward after completing an online survey, are less researched and have
yielded mixed results, ranging from null effects to clear improvements (Birnholtz et al. 2004;
Göritz 2004; Göritz 2006; Patrick et al. 2013).
Implicit in the pursuit of high response rates is the assumption that higher response rates
correspond to a better sample, but even surveys with high response rates can suffer from bias.
When incentives are used to motivate response, it is hoped that they improve response for every
member of the population, but different groups may interpret the “weight” of incentives
differently (Groves et al. 2000). In mail surveys, it has been found that a variety of incentives
produce differences in demographic characteristics (Ryu et al. 2005; Teisl et al. 2006). Few
studies have examined the effect of incentives on nonresponse bias in online surveys, none of
which have explored the effect of conditional incentives. Two studies have found that lottery
incentives attract female respondents to web surveys disproportionately (Heerwegh 2006;
Laguilles et al. 2011). A similar result was found for prepaid incentives in online surveys: a group
receiving a $2 prepaid incentive was less reflective of administrative records than the control
group, which matched official records for gender, whereas the incentivized group was
disproportionately female (Parsons and Manierre 2013). Some research also suggests that the
gender gap is more pronounced with smaller dollar amounts but persists at higher values
(Boulianne 2013).
Overall, evidence is inconsistent regarding how effective conditional incentives are for
online surveys. Whether this potential improvement in survey response actually makes the
sample more representative is also in question. The present study examines these lingering
questions using an experimental design with official data to analyze for nonresponse bias. The
following hypotheses are posed based on the literature and leverage-saliency theory: a) offering
these incentives will result in a better response rate, and increasing the size of the incentive
promised will further improve the response rate for the web survey, b) a better response rate will
result in different demographic characteristics, which may be more or less accurate in
representing the population, and c) incentive groups will have significantly different responses to
substantive items on the survey.
Methods
An experimental design is used to test the effectiveness of conditional incentives. For the sample,
1,000 full-time undergraduates were randomly selected at a large Midwestern American university.
The selection process was performed with coordination from the university’s research office
under the approval of the institutional review board. These students were randomly assigned to
one of three groups and were invited to participate in a survey on “college behaviors.” The email
invitation contained a unique link to the survey with no additional login credentials required,
which made access simple and convenient. The unique link stopped working after survey
completion, preventing duplicate responses from the same link. The students were promised
confidentiality and their names were removed from all datasets.
Those assigned to the control group (n=600) were sent an invitation without any
discussion of incentives. Students assigned to the first experimental group (n=250) or the second
(n=150) were given the same invitation, but with a promise of a $2 or $5 (respectively) credit
being applied to their student ID card. To maximize the utility of the incentive, they were told
that they would be able to select the type of credit: for use at the bookstore (which has a large
selection of merchandise) or for use at dining services (including various options across the
campus). All students equally benefited from such a credit, as all students are required to have
student ID cards. The invitation emails were sent on the Monday of the third week of classes in
the fall 2012 semester. Reminder emails were sent on the fourth, ninth, and seventeenth days to
potential participants who had not yet responded or unsubscribed (only 3.9 percent of students
opted out).
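As a rough illustration of the assignment step described above, the sketch below (in Python) shuffles a hypothetical list of 1,000 sampled student identifiers into groups of 600, 250, and 150. The identifiers and random seed are placeholders; the actual assignment was carried out in coordination with the university’s research office.

```python
import random

# Illustrative only: assign a hypothetical list of 1,000 sampled student IDs
# to the control (n=600), two-dollar (n=250), and five-dollar (n=150) groups.
student_ids = [f"S{i:04d}" for i in range(1000)]  # placeholder identifiers

random.seed(2012)            # fixed seed keeps the example reproducible
random.shuffle(student_ids)

groups = {
    "control":     student_ids[:600],
    "two_dollar":  student_ids[600:850],
    "five_dollar": student_ids[850:],
}

for name, members in groups.items():
    print(name, len(members))  # control 600, two_dollar 250, five_dollar 150
```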
In addition to simply comparing the response rates based on the incentives offered,
official individual-level data were provided by the university for use in determining whether any
response biases are present among the groups. The official data include: gender, race (non-
Hispanic white or other), age, nationality (American or international), state residency (in-state
tuition rate or not), campus residency (lives on-campus or not), class status in years completed (0
= freshmen, 1 = sophomore, etc.), and GPA. The substantive questions on the survey can also
provide some insight into differences that might occur across incentive groups. The variables
used from the survey include measures of personal characteristics, substance use, victimization,
and offending. The true population proportions for these indicators are unknown, but bias can be
identified by comparing the three study groups against one another. Comparisons between
groups are performed using chi-square tests or t-tests, as appropriate for the level of measurement.1
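The sketch below illustrates, in Python with SciPy, how such comparisons might be carried out: a chi-square test for a categorical measure and an independent-samples t-test for a continuous one. The counts and GPA values shown are placeholders rather than the study data.

```python
from scipy import stats

# Chi-square test for a categorical measure (e.g., female vs. male)
# compared between two groups; the 2x2 counts below are placeholders.
observed = [[106, 76],   # group A respondents: female, male
            [45, 32]]    # group B respondents: female, male
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed)

# Independent-samples t-test for a continuous measure (e.g., GPA);
# the score lists below are placeholders.
gpa_group_a = [3.1, 2.8, 3.5, 3.0, 3.4]
gpa_group_b = [3.2, 3.3, 2.9, 3.6, 3.1]
t_stat, p_t = stats.ttest_ind(gpa_group_a, gpa_group_b)

print(f"chi-square p = {p_chi2:.3f}, t-test p = {p_t:.3f}")
```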
Results
During the three weeks in which the survey was available, 322 students (32.2 percent)
responded, including 182 (30.4 percent) in the control group, 77 (30.8 percent) in the two-dollar
group, and 63 (42.0 percent) in the five-dollar group. Students in the five-dollar group were
significantly more likely to respond than those in either the control or two-dollar groups (p = .021).
The graph in Figure 1 illustrates that the incentive group has a higher response rate only during
the first week, suggesting that the incentive has a stronger effect on early respondents.
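One plausible way to reproduce this comparison is a chi-square test on the 3x2 table of responders and non-responders by group, sketched below using the counts reported above. The resulting p-value is close to the reported value, though the exact test specification used by the authors is not stated.

```python
from scipy.stats import chi2_contingency

# Responders vs. non-responders by incentive group (from the figures above)
counts = [[182, 600 - 182],   # control
          [77,  250 - 77],    # two-dollar
          [63,  150 - 63]]    # five-dollar

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # p is roughly .02
```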
The differences between the population parameters (official data) and the sample estimates
are displayed in Table 1. The control group has significant deviations from administrative records
in the form of higher observed GPAs, a larger proportion of females, and a larger proportion of
students living on-campus. The two-dollar group similarly had a larger proportion of females
than the original sampling frame. Conversely, the five-dollar group was virtually identical to the
sampling frame. Overall, none of the groups were radically different from the population, though
the five-dollar group was closest to the original sampling frame.
The self-reported question responses are displayed in Table 2. Although the percentages
do vary from group to group, there were no significant differences for any of these measures with
the sole exception of academic cheating on exams. Caution should be used in interpreting the
significance here, as these analyses included 22 chi-square tests, and the significance level (p < .10)
implies a one-in-ten chance of incorrectly rejecting each true null hypothesis. Given 22 tests
and a 10 percent error rate, it is likely, even expected, that something would appear significant by
chance alone. Thus, there is more evidence suggesting similarities than dissimilarities.
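To make the multiple-testing point concrete, the following back-of-the-envelope calculation (assuming 22 independent tests at the p < .10 threshold) gives the expected number of chance findings and the probability of observing at least one.

```python
alpha, n_tests = 0.10, 22

expected_false_positives = alpha * n_tests        # 2.2 tests expected by chance
p_at_least_one = 1 - (1 - alpha) ** n_tests       # about 0.90

print(f"expected false positives: {expected_false_positives:.1f}")
print(f"P(at least one false positive): {p_at_least_one:.2f}")
```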
1 Marginal significance (p < .10) is reported in the analyses given the moderate sample sizes of the groups. As the
results demonstrate, this is not done to over-report significant differences. Rather, because very few tests fall into
this gray area, reporting marginal significance provides further evidence that differences between groups are
negligible.
Figure 1: Responses as a Percent of Total Group Sample by Day
Table 1: Sample characteristics compared to administrative data by group

                      Control Group            Two-Dollar Group         Five-Dollar Group
                      Full      Respondents    Full      Respondents    Full      Respondents
                      Sample    Only           Sample    Only           Sample    Only
Mean Age              21.04     21.10          21.16     20.79          20.90     21.06
Mean Class Year       1.54      1.49           1.60      1.56           1.55      1.62
Mean GPA              2.96      3.12**         3.09      3.20           3.06      3.20†
% Female              48.91     58.24*         47.20     58.44*         50.00     49.21
% Racial Minority     23.95     20.81          17.65     20.00          25.17     23.33
% State Resident      91.49     91.76          88.80     88.31          90.00     90.48
% International       4.01      4.40           4.40      2.60           4.67      4.76
% On-Campus           32.05     41.21*         26.80     29.87          30.67     33.33
Sample Size           n=600     n=182          n=250     n=77           n=150     n=63

† p < .10   * p < .05   ** p < .01
Table 2: Self-report characteristics (percentages) by group

                                   Control   Two-Dollar   Five-Dollar
Athlete                            32.97     29.87        30.16
Religion (Christian)               63.74     59.74        58.73
Religion (Other Religion)          7.14      5.19         7.94
Religion (Atheist/Agnostic/None)   29.12     35.06        33.33
Sexual Activity                    65.56     67.53        61.90
Unprotected Sex                    28.81     36.84        36.51
Cigarette Use                      28.89     19.74        25.40
Alcohol Use                        77.35     76.62        69.84
Binge Drinking                     52.49     48.68        49.21
Marijuana Use                      22.91     22.67        26.98
Other Illegal Drugs                28.73     29.73        31.75
Non-Prescribed Rx Use              22.10     26.32        28.57
Theft Victim                       31.64     29.87        33.33
Assault Victim                     18.08     14.29        20.63
Robbery Victim                     9.09      5.19         3.17
Academic Cheating†                 10.17     9.21         20.63
Assault                            11.30     13.16        12.70
Theft                              5.65      2.63         3.23
Forgery / Fake ID                  3.98      6.58         4.84
DUI Alcohol                        12.99     14.47        12.70
DUI Drugs                          18.18     18.42        14.52
DUI Overall                        22.60     25.00        25.40

† The $5 group is significantly different from the control (p < .05) and $2 (p < .10) groups
for cheating. No other variables are significantly different between groups.
Worth considering is that a lack of significance in a relationship does not necessarily
indicate that there is no relationship. Significance tests assess single relationships while ignoring
trends across multiple tests. In other words, although the groups are not significantly different, they
could be consistently different. To examine this possibility, each group's deviation from the others
was calculated for every variable. When these deviations are
averaged, the two-dollar group has an average deviation from the control group of -0.27. The
five-dollar group has an average deviation from the control group of 0.32 and from the two-
dollar group of 0.59. The average deviation is therefore less than one percentage point for each
comparison. Thus, rather than reflecting any trend toward bias, substantive differences between
groups appear to be random noise that averages to nearly zero when taken as a whole; incentive
groups do not have significantly or substantively different responses to items on the survey.
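The deviation measure described here can be computed directly from the Table 2 percentages. The sketch below illustrates the calculation on a small subset of items; the full analysis would use all 22 measures, and averages computed from the rounded table values may differ slightly from those reported.

```python
# Percentages for a few Table 2 items (control, two-dollar, five-dollar);
# the full analysis would include all 22 self-report measures.
items = {
    "Athlete":       (32.97, 29.87, 30.16),
    "Cigarette Use": (28.89, 19.74, 25.40),
    "Alcohol Use":   (77.35, 76.62, 69.84),
    "Theft Victim":  (31.64, 29.87, 33.33),
}

def mean_deviation(index_a, index_b):
    """Average percentage-point difference (group b minus group a) across items."""
    diffs = [vals[index_b] - vals[index_a] for vals in items.values()]
    return sum(diffs) / len(diffs)

print(f"two-dollar vs. control:     {mean_deviation(0, 1):+.2f}")
print(f"five-dollar vs. control:    {mean_deviation(0, 2):+.2f}")
print(f"five-dollar vs. two-dollar: {mean_deviation(1, 2):+.2f}")
```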
Discussion
In line with leverage-saliency theory, this experimental design supports the assertion that offering
a conditional reward as payment for completing a survey does have an impact on response rates.
This is evidenced by the increase in response rates from 30% in the control group to 42% in the
five-dollar group. However, the lack of significant improvement from the two-dollar group
implies that the impact is tied to the amount of the reward. These findings lend support to
leverage-saliency theory’s assertion that this type of incentive functions as a payment rather than
a trigger for reciprocity, as reciprocity would have resulted in both incentives increasing response
rates.
Research has demonstrated that conditional incentives increase response rates in some
situations (Dillman et al. 2009; Singer and Ye 2013), but the empirical support for that effect is
mixed for online surveys (e.g., Birnholtz et al. 2004; Singer and Ye 2013). It is possible that the
reason this study contradicts prior null findings is that the reward was provided
electronically and respondents could choose their compensation type, increasing the leverage
of what is often a static and delayed reward.
It also appears that the higher response rate of the five-dollar group coincided with
improved representativeness compared to both the control group and small incentive group. This
improvement likely indicates that the incentive has higher leverage among college students who
are normally unlikely to respond, such as off-campus students and men. This is a key finding
given that some prior studies of college students have found that increasing the response rate
through prepaid and lottery incentives may further bias responses in web surveys towards certain
groups (Heerwegh 2006; Laguilles et al. 2011; Parsons and Manierre 2013; Boulianne 2013).
Contrary to prior studies, these data suggest that a five-dollar conditional incentive may increase
representativeness while also improving response.
Just as response rates are used as a proxy for representativeness, accurate
demographics are assumed to correspond to more representative measurement of dependent and
independent variables. When examining substantive survey items, the vast majority of measures
indicated no significant differences based on the incentive offered and the response rate
achieved, suggesting that neither has a significant impact on substantive conclusions.
Although offering several advances, the present study has a few limitations. First, the
novel reward distribution system used here to provide electronic deposits to ID cards may affect
generalizability, as cash or other types and sizes of rewards might yield different effects. Second,
this study focused exclusively on college students, and the effectiveness of incentives may vary
based on the target audience. Finally, it is possible that the groups in this study are too small to
detect statistically significant deviations from administrative records in some cases. Most of the
deviations were substantively small, however, so it is unlikely that the core conclusions would be
changed by a larger sample.
In addition to replicating this study, future research should expand on these findings. It
remains unclear whether the use of conditional rewards for web surveys is more or less effective
than the use of raffle designs or a prepaid cash incentive. It would also be beneficial to examine
the effect of allowing respondents to choose their reward, as they could in this study, since this may
increase the appeal of the incentive. This choice element may help to explain this study’s
somewhat surprising findings, which contradict much of the literature on promised incentives
and mail surveys.
In sum, this study suggests that a sufficiently large promised incentive may help to improve
web survey response among college students while also improving the representativeness of those
data. However, until future research can clarify these and other unanswered questions, it remains
unclear whether there are sufficient substantive gains to justify this investment. On one hand, the
increased response rate and representativeness suggest that there is a benefit for the increased
cost. On the other hand, the minimal differences on substantive items imply that conclusions
are not necessarily affected. Overall, this research suggests that incentives are effective, but also
that failing to use incentives may not necessarily result in “bad data” that are substantively less
valuable. Therefore, it is advised that web survey researchers base their decision on whether to
use a conditional incentive on whether demographically accurate data or additional statistical
power is required, rather than on the prospect of enhanced data on substantive issues.
References
Birnholtz, J. P., Horn, D. B., Finholt, T. A., Bae, S.J. (2004). The effects of cash, electronic,
and paper gift certificates as respondent incentives for a web-based survey of
technologically sophisticated respondents. Social Science Computer Review, 22, 355-62.
Boulianne, S. (2013). Examining the gender effects of different incentive amounts in a web
survey. Field Methods, 25(6), 91-104.
Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-
analysis. Public Opinion Quarterly, 57, 62-79.
Dillman, D. A., Smyth, J.D., Christian, L.M. (2009). Internet, mail, and mixed-mode surveys:
The tailored design method (3rd ed.). New York: Wiley.
Göritz, A. S. (2004). The impact of material incentives on response quantity, response quality,
sample composition, survey outcome, and cost in online access panels. International
Journal of Market Research. 46(3), 327-345.
Göritz, A. S. (2006). Incentives in web studies: Methodological issues and a review.
International Journal of Internet Science, 1, 58-70.
Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public
Opinion Quarterly, 70(5), 646-675.
Groves, R. M., Singer, E., and Corning, A. (2000). Leverage-saliency theory of survey
participation: Description and an illustration. Public Opinion Quarterly, 64, 299-308.
Heerwegh, D. (2006). An investigation of the effect of lotteries on web survey response rates.
Field Methods, 18, 205-220.
Hoonakker, P. & Carayon, P. (2009). Questionnaire survey nonresponse: A comparison of postal
mail and internet surveys. International Journal of Human-Computer Interaction, 25,
348-73.
Kypri, K. & Gallagher, S.J. (2003). Incentives to increase participation in an internet survey of
alcohol use: A controlled experiment. Alcohol & Alcoholism, 38(5), 437-441.
Laguilles, J. S., Williams, E.A., and Saunders, D.B. (2011). Can lottery incentives boost web
survey response rates? Findings from four experiments. Research in Higher Education,
52, 537-53.
Parsons, N. L., & Manierre, M. J. (2013). Investigating the relationship among prepaid token
incentives, response rates, and nonresponse bias in a web survey. Field Methods, 1-14.
Patrick, M.E., Singer, E., Boyd, C.J., Cranford, J.A., McCabe, S.E. (2013). Incentives for college
student participation in web substance use surveys. Addictive Behaviors, 38(8), 1710-1714.
Porter, S. R., & Whitcomb, M. E. (2003). The impact of lottery incentives on student survey
response rates. Research in Higher Education, 44, 389-407.
Ryu, E., Couper, M.P., Marans, R.W. (2005). Survey incentives: Cash vs. in-kind; face-to-face
vs. mail; response rate vs. nonresponse error. International Journal of Public Opinion
Research, 18(1), 89-106.
Singer, E. & Ye, C. (2013). The Use and Effects of Incentives in Surveys. The ANNALS of the
American Academy of Political and Social Science, 645, 112-141.
Teisl, M.F., Roe, B., Vayda, M. (2006) Incentive effects on response rates, data quality, and
survey administration costs. International Journal of Public Opinion Research, 18(3),
364-373.