Financial incentives, personal information and drop-out rate in online studies

A. Frick1, M. T. Bächtiger & U.-D. Reips
Whereas in a classical laboratory setting participants often feel compelled to stay and
finish the experiment, participants in online studies can leave the session at any time.
Although this is an advantage of online studies from an ethical point of view, it
might pose methodological problems. Web experimenters naturally want their
participants to stay until the end of the experiment, and they use special techniques
to ensure this.
One such technique is to make web pages shorter and more attractive the further
the participant gets. If a web page at the beginning has a long loading time,
participants with little interest or little time leave right away without even starting
the experiment. This “high hurdle technique” is particularly effective in combination
with a warm-up phase (Reips, 1996, 1999).
A second frequently used technique to prevent participants from leaving is to
announce at the outset a lottery with prizes, in which only those who finish the
experiment can take part. Whether this procedure successfully reduces the drop-out
rate has never been examined experimentally. One might argue that the promise of
financial incentives has a negligible effect on drop-out or might even reduce the
intrinsic motivation of the potential participant (Deci, 1975).
A survey among 21 web experimenters recently conducted by Musch and Reips
(in press) suggests that this is not the case. Contrary to the expectation of a purely
intrinsic motivation to participate in online studies, they found a clear link between
a lack of financial incentives and drop-out rate. A monetary prize might diminish the
drop-out tendency whenever intrinsically motivating factors are not sufficient. The
present web experiment was conducted to further investigate the causal nature of the
relationship between financial incentives and drop-out.
The experiment was also designed to test the hypothesis that asking participants
for personal information early in the experiment would lead to increased drop-out as
well as to different answering behavior in questions that are likely to be influenced by
social desirability. Participants’ answers might be more strongly influenced by social
norms if they believe they could be identified (e.g., by their e-mail address). Or they
might discontinue participation in the experiment if they realize that their behavior
would force them to answer contrary to what is usually desired or accepted. The
question whether personal or demographic data should be assessed at the beginning
of an experiment is of high relevance in online research if the perceived anonymity
affects the participants’ answers.

1 Andrea Frick; a.frick@access.unizh.ch;
Universität Zürich, Allgemeine und Entwicklungspsychologie, Attenhoferstr. 9, CH-8032 Zürich,
Design
The experiment was conducted in the Web Experimental Psychology Lab at Zürich
University2. At the lab, and again on the first page of the experiment, the participants
had the option to choose between a German and a comparable English version.
On the first page of our experiment the participants were informed that all collected
data would be used for scientific purposes only, that all information would be
treated confidentially, and that the results of the study would be published on the
WWW.
A Common Gateway Interface (CGI) program randomly assigned the participants
to one of two versions of the first page. One of the two groups received additional
information about a lottery in which those participants who answered all questions
could take part and win either $40, $25, or $15.
From these starting pages, another randomizing CGI led the participants
to one of four experimental conditions. In each of these conditions, three
forms were presented in a different order (see Figure 1).
One form assessed personal information (PI) like gender, age, e-mail address or
telephone number, and nationality. In two of the four conditions this personal
information was assessed at the beginning of the experiment, and in two conditions it
was assessed at the end.
Figure 1: Experimental conditions.
The remaining two forms (TV and CO) contained two questions that are likely to be
influenced by social desirability: “How many hours per week do you watch
television?” (TV); “How many hours per week would you work free of charge for a
charitable organization?” (CO). The order of these two questions was manipulated.
The independent variables were:
information about the lottery: provided or not provided
personal information (PI): assessed at the beginning or at the end
order of presentation of TV and CO
the language of the completed version: German or English
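As a minimal sketch of the server-side random assignment described above (our own reconstruction in Python, not the original CGI program; all names are illustrative):

```python
import random

def assign_condition():
    """Randomly assign a participant to one cell of the design
    (illustrative reconstruction; the original CGI is not available)."""
    lottery = random.choice([True, False])    # lottery information on the first page?
    pi_first = random.choice([True, False])   # personal information at the beginning?
    tv_first = random.choice([True, False])   # TV question before the CO question?
    questions = ["TV", "CO"] if tv_first else ["CO", "TV"]
    forms = ["PI"] + questions if pi_first else questions + ["PI"]
    return {"lottery": lottery, "forms": forms}

# Simulate 1,000 participants; each factor should come out roughly balanced.
assignments = [assign_condition() for _ in range(1000)]
```

With assignment of this kind, roughly half of the simulated participants receive the lottery information, and each of the four form orders occurs in about a quarter of the cases.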
The dependent variables were:
the answers to the two questions concerning television (TV) and charitable
organization (CO) and
number of participants leaving the experiment before finishing it completely
(drop-out rate)
Participants
A total of 804 visits from unique IP addresses were recorded on the first page of the
experiment: 482 in the English version and 322 in the German version. Of the
responders to the demographic questions (n = 686), 61.4% reported they were female,
33.8% reported they were male, and 4.8% did not report their gender. 3.2% did not
report their age; Figure 2 shows the reported ages of the remaining 96.8%.
Figure 2: Reported age.
8.9% of those participants who requested the page with demographic questions did
not report their nationality. The most frequently indicated nationalities (reported by 4
or more participants) were as follows:
Table 1: Most frequently indicated nationalities.

Nation                     Reports
USA (american)                 275
Germany                        173
Switzerland                     29
Canada                          24
“white”                         18
England                         13
Austria                          9
Ireland                          9
“caucasian”                      9
Australia / New Zealand          8
India                            7
“hispanic”                       6
Netherlands                      5
Africa                           5
China                            5
Italy                            4
Korea                            4
“black”                          4
Results
Incentives
The data supported the hypothesis that announcing a lottery at the beginning of a study
results in a reduced drop-out rate. As in the Musch and Reips survey, drop-out was
found to be about twice as high in the no-lottery information condition as in the
lottery information condition (18.5% versus 9.5%). This difference in drop-out did
not emerge immediately after the first page (which contained the lottery information)
but was distributed over the whole length of the experiment. This suggests that the
lottery information does not result in additional motivation to start the
experiment, but diminishes the drop-out tendency caused by other factors.
The overall drop-out was relatively low, supporting the notion that, with a proper
design, drop-out rates should not pose much of a methodological problem even in
online studies without financial incentives.
We conducted a 2 (lottery) x 2 (PI) x 2 (order) x 2 (language) ANOVA on the
data. Except for an unexpected three-way interaction between lottery, PI, and order,
none of the interactions were significant. Incentives did not significantly affect the
answers to the two questions concerning television (TV) and the charitable
organization (CO).
Main effects for factors other than incentives are reported in the sections
below.
Personal Information
Asking participants for personal information (PI) early in the experiment did not
increase drop-out. Surprisingly, drop-out in the condition with PI at the
beginning was even lower (10.3% versus 17.5%). Apparently, the tendency to leave
the experiment when PI is requested is higher once the main part of the experiment
has already been completed.
Asking participants for personal information early in the experiment did not lead
to different answering behavior in the questions about television and charitable
organization.
A detailed analysis examined how many questions were left unanswered by those
who requested the page with demographic questions. On average, the group with PI
at the beginning left 4.2% of the demographic questions unanswered, whereas the
group with PI at the end left 11.8% unanswered. For example, when the e-mail
address was requested at the beginning, 9.5% (33 out of 349) did not give that
information, whereas 20.5% (69 out of 337) did not give their e-mail address when it
was requested at the end of the experiment. (The corresponding figures for the other
demographic questions were: gender: 2.1% (7) versus 7.7% (26); age: 0.6% (2)
versus 5.9% (20); nation: 4.9% (17) versus 13.0% (44).)
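The reported e-mail non-response rates follow directly from the raw counts given above; a quick check in Python:

```python
# Non-response to the e-mail address item, from the counts reported above.
begin_missing, begin_total = 33, 349   # PI requested at the beginning
end_missing, end_total = 69, 337       # PI requested at the end

begin_rate = round(100 * begin_missing / begin_total, 1)  # 9.5
end_rate = round(100 * end_missing / end_total, 1)        # 20.5
```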
[Figure 3: bar chart of drop-out in percent — lottery information: 5.7 (PI at
beginning), 13.2 (PI at end); no lottery information: 14.9 (PI at beginning), 21.9
(PI at end).]
Figure 3: Effect of financial incentive information and location of demographic questionnaire
on drop-out.
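As a consistency check, the marginal drop-out rates reported in the text can be recovered (up to rounding and unequal cell sizes) from the four cell values in Figure 3. The assignment of values to cells below is our reading of the figure's legend and axis labels:

```python
# Cell drop-out percentages from Figure 3; the mapping of values to cells
# is inferred from the figure's legend ("PI at beginning" / "PI at end")
# and x-axis ("Lottery information" / "No lottery information").
cells = {
    ("lottery", "PI_begin"): 5.7,
    ("lottery", "PI_end"): 13.2,
    ("no_lottery", "PI_begin"): 14.9,
    ("no_lottery", "PI_end"): 21.9,
}

def marginal(level, index):
    """Unweighted mean of the cells at one level of one factor."""
    vals = [v for key, v in cells.items() if key[index] == level]
    return sum(vals) / len(vals)

print(marginal("lottery", 0))    # ≈ 9.5, as reported for the lottery condition
print(marginal("PI_begin", 1))   # ≈ 10.3, as reported for PI at the beginning
```

The unweighted cell means match the rates reported in the Results section to within rounding; exact agreement would require the (unreported) cell sizes.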
Order
The order of the two questions (TV/CO vs. CO/TV) had a significant effect on the
answers to the TV question, F(1, 664) = 5.00, p < .05. Participants who
answered the question about their weekly television consumption first reported
watching more television (M = 679 minutes per week, SD = 31.2) than those who
answered the question about the charitable organization first (M = 580 minutes per
week, SD = 31.5).
Figure 4: Effect of order on answers to TV- and CO-questions.
Language
There was also a significant effect of language on the answers about the charitable
organization, F(1, 664) = 22.78, p < .05. Participants who completed the
German version showed less readiness to work free of charge for a charitable
organization (M = 203 minutes per week, SD = 15.1) than those who completed the
English version (M = 321 minutes per week, SD = 19.6).
Figure 5: Effect of language on answers to TV- and CO-questions.
Language also had a significant effect on the drop-out rate: drop-out in the
English version was lower than in the German version (9.7% versus 20.4%).
[Figure 6: bar chart of drop-out in percent — lottery information: 6.2 (English),
13.4 (German); no lottery information: 14.7 (English), 25.9 (German).]
Figure 6: Effect of financial incentive information and language of completed version on
drop-out.
Discussion
Incentives
The fact that the presence of incentives affected the number of participants leaving
the experiment before finishing it, but not the answers about TV and CO, shows that
this procedure might be a very promising means of reducing general drop-out in online
studies. Other factors that potentially influence drop-out in online studies are the
design of the web pages, loading time, and the impression made by the institution
conducting the online study. These factors should also be examined experimentally.
Personal Information
The results suggest that whether demographic data or personal information is
requested at the beginning or at the end of an experiment does not systematically
influence the data. This result is reassuring because, if demographic data are
indispensable for a study, it may be better to place the request at the beginning:
as our results have shown, participants are much less likely to provide that
information if it is assessed at the end of the experiment.
Order
A possible explanation for the order effect on the two answers (TV and
CO) is that those who answered the TV question first rated their consumption
more accurately. Participants who answered the CO question first probably
reported a rather small value there, and then adjusted their rating in the TV question
accordingly. This supports the assumption that the answers to these questions are
influenced by social desirability.
Language
The significant difference between the German and the English version might imply
that, on average, people in English-speaking societies show a higher devotion to
social volunteer work, while in German-speaking societies the social welfare system
is more professionally organized.
Of course, an alternative explanation would be a slight difference in the meaning
of the German and English terms. ‘Charitable organization’ is not a clearly defined
term in either language, and even within one language or culture people might
interpret it differently.
Conclusions
This experiment has shown that (1) financial incentives can reduce drop-out; (2)
assessing participants’ personal information at the beginning of an experiment can
reduce drop-out and may lead to more complete demographic data about the
participants; (3) these positive effects can be reached without biasing the data; (4) the
order and language of presentation of items can play a significant role.
References
Deci, E. L. (1975). Intrinsic Motivation. New York, NY: Plenum Press.
Musch, J., & Reips, U.-D. (in press). The Brief History of Web Experimenting: A
Survey. In M. H. Birnbaum (Ed.), Psychology Experiments on the Internet. San
Diego, CA: Academic Press.
Reips, U.-D. (1996, October). Experimenting in the World Wide Web. Paper
presented at the 1996 Society for Computers in Psychology conference,
Chicago.
Reips, U.-D. (1999). Theorie und Techniken des Web-Experimentierens [Theory and
techniques of web experimenting]. In B. Batinic, A. Werner, L. Gräf, & W.
Bandilla (Eds.), Online Research: Methoden, Anwendungen und Ergebnisse.
Göttingen: Hogrefe.
Reips, U.-D. (in press). The Web Experiment Method: Advantages, Disadvantages,
and Solutions. In M. H. Birnbaum (Ed.), Psychology Experiments on the
Internet. San Diego, CA: Academic Press.