Gamification in crowdsourcing: A review
Benedikt Morschheuser
CR, Robert Bosch GmbH and
Karlsruhe Institute of Technology
benedikt.morschheuser@de.bosch.com
Juho Hamari
Game Research Lab
School of Information Sciences
University of Tampere
juho.hamari@uta.fi
Jonna Koivisto
Game Research Lab
School of Information Sciences
University of Tampere
jonna.koivisto@uta.fi
Abstract
This study investigates how different gamification
implementations can increase crowdsourcees’
motivation and participation in crowdsourcing (CS).
To this end, we review empirical literature that has
investigated the use of gamification in crowdsourcing
settings. Overall, the results of the review indicate that
gamification has been an effective approach for
increasing crowdsourcing participation. When
comparing crowdcreating, -solving, -processing and -
rating CS approaches, the results show differences in
the use of gamification across CS types.
Crowdsourcing initiatives that provide more
monotonous tasks most commonly used mere points
and other simpler gamification implementations,
whereas CS initiatives that seek diverse and creative contributions have employed gamification in more manifold ways, drawing on a richer set of mechanics. These findings provide insights for
designers of gamified systems and further research on
the topics of gamification and crowdsourcing.
1. Introduction
During recent years modern ICT technologies have
spawned two interwoven phenomena: gamification and
crowdsourcing (CS). Today, a multitude of organizations employ crowdsourcing as a way to outsource various tasks to be carried out by ‘the crowd’: a mass of people reachable through the internet (see [24]). The rapid diffusion of these technologies can be seen both in industry and in academia [13, 24]. Business analysts estimated that more than 50% of organizations would have gamified some of their processes by 2015 [14, 26]. As illustrated in Figure 1, the body of
literature on both CS and gamification has been rapidly
growing. Moreover, these technologies appear together
frequently: crowdsourcing is one of the major
application areas for gamification [20]. Naturally, the
main goals of CS in general are either cost savings or
the possibility to innovate solutions that would be
difficult to cultivate in-house. However, CS relies on
the existence of a reserve of people who are willing to take on tasks for free or for small monetary compensation. Following this reasoning, CS tasks are
increasingly gamified, that is, organizations attempt to
make the work activity more like playing a game in
order to provide other motives for working than just
the monetary compensation.
However, while the union of these novel
technological phenomena seems intuitively appealing,
there has still been a dearth of coherent understanding
of the use of gamification in CS. Although scattered empirical studies on the topic exist, efforts
have not yet been made to collate and synthesize this
body of knowledge. Moreover, both CS and
gamification can take a variety of forms and it would
be short-sighted to assume that differing gamification
implementations would function similarly across
different CS approaches.
Figure 1. Search hits (Scopus, all fields, CS left axis, gamification right axis)
Citation: Morschheuser, B., Hamari, J., & Koivisto, J. (2016). Gamification in Crowdsourcing: A Review. In Proceedings of the 49th Annual Hawaii
International Conference on System Sciences (HICSS), Hawaii, USA, January 5-8, 2016.
Copyright: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Therefore, in this paper we conduct a review of
studies that have investigated the use of differing
gamification implementations across different types of
crowdsourcing initiatives. We review the results
reported in the analyzed literature, the research
methods used and the investigated motivations.
Furthermore, we examine how gamification has been
implemented to provide insights for designers of
gamified crowdsourcing systems.
2. Background
Crowdsourcing refers to outsourcing work, tasks or
problem solving to people online rather than to
employees or traditional suppliers [13, 24]. It has been considered a particularly useful way to coordinate work for tasks that can benefit from collective
intelligence [37] or that are difficult to process by
computers and are therefore outsourced to humans
(also see “human computation” [1]).
Based on Geiger & Schader [15], crowdsourcing can broadly be divided into four archetypes (see Figure 2). First, crowdsolving approaches use the diversity of the crowd to find a large number of heterogeneous solutions to a given problem. The value of this approach results directly from each isolated contribution (non-emergent). Crowdsolving is often used for very complex problems (e.g. Foldit, a game-based approach to optimizing protein folding; see [6]) or
if no pre-definable solution exists (e.g. ideation
contests). Second, crowdcreation solutions aim to
create comprehensive (emergent) artefacts based on a
variety of heterogeneous contributions. Typical
examples include all kinds of user-generated content
(e.g. YouTube) or knowledge derived from
collaborative aggregation (e.g. Wikipedia). Third,
crowdrating systems commonly attempt to harness the
so-called “wisdom of crowds” [51] to perform
collective assessments or predictions. In this case, the
emergent value arises from a large number of homogeneous “votes” (e.g. NASA Clickworkers, in which the clicks/votes of a crowd were used to identify craters on Mars [31]). Fourth, crowdprocessing
approaches rely on the crowd to perform large
quantities of homogeneous tasks. Identical contributions serve as a quality indicator for the validity of the work. The value is derived directly from each
isolated contribution (non-emergent) (e.g. Mechanical
Turk or Galaxy Zoo [38]).
Since an active crowd of participants is crucial for the success of crowdsourcing, the motivation of the crowdsourcees is of great importance. Although a relatively large amount of research has been done in the area of crowdsourcing, only a small portion of studies actually investigates participants’ motivation [57]. Existing studies have shown that there are
several reasons for people to participate in
crowdsourcing and related online work, ranging from
intrinsic to extrinsic motivations [21, 32, 50, 58, 59].
For example, intrinsic motivation, created by tasks that allow participants to be creative, experience autonomy, develop their skills and feel competent, enjoy passing the time or achieve social recognition, can in some cases dominate extrinsic motivation evoked by financial payoffs or external social reasons [32].
Further, task characteristics [32, 59], task granularity
[58] or perceived motivational affordances [58] can
have an influence on the individual’s motivation.
Figure 2. Four archetypes of crowdsourcing systems based on Geiger & Schader [15]
Therefore, one major challenge in motivating
people to participate is to design a crowdsourcing
system that promotes and enables the formation of
positive motivations towards CS work and fits the type of the activity. For instance, whereas some
crowdsourcing approaches aim for systematically
derived contributions, other crowdsourcing types may
call for incentive structures that promote creativity. In
other words, as the CS activities can differ
dramatically, so can the means by which to motivate
crowdsourcees in a given CS initiative.
In the area of incentive design in the information systems field, one of the most popular developments of recent years has commonly been titled gamification [18, 20]. Gamification refers to design that attempts to,
firstly, increase the intrinsic motivation of users or
participants to engage in a given activity or behavior
and, secondly, to increase or otherwise change the
given behavior. The term gamification stems from the notion that games are, if anything, a pinnacle form of hedonic, self-purposeful systems [19]. Most gamification applications borrow design patterns from (video) games and, consequently, aim to give rise to experiences similar to those games commonly provide, e.g. feelings of mastery, autonomy, flow and suspense (see e.g. [25]). If we consider gamification in the context of
CS, gamification can be seen as an attempt to redirect
crowdsourcees’ motivations from purely rational gain-seeking to self-purposeful, intrinsically motivated activity: “transforming Homo Economicus into Homo Ludens” [17]. In other words, elements known from
games act as motivational affordances [25, 29, 56] for
the intrinsic motivations. Points, badges, leaderboards,
avatars, and stories are some of the most often used
motivational affordances in gamification [20]. Previous
literature has conceptualized gamification into a few
main aspects: 1) the design (affordances), 2) the
psychological mediators/outcomes of gamification, and
3) the (behavioral) outcomes of gamification [25].
Existing empirical works also suggest that contextual factors [17] and factors related to the user [34] have an effect.
3. Literature review process
Following the guidelines of Webster & Watson
[54] and Ellis [11], the analysis procedure started with a literature search. We decided to use the Scopus database as
our source of data, as it is the largest abstract and
citation database of scholarly literature [12]. Scopus
includes, for example, the AIS, ACM, IEEE and
Science Direct libraries among many others.
As in this study we are particularly focusing on the
use of gamification in crowdsourcing, the literature
search in the Scopus database was conducted using the
search query TITLE-ABS-KEY(GAMIF* AND
CROWD*). The search resulted in all Scopus entries
that include any permutation of the terms gamification
and crowdsourcing in the entry metadata (title, abstract
or keywords). We intentionally limited the search to
the metadata since searching for the terms in all the
text would result in a relatively large number of false
positives as many papers refer to gamification and/or
crowdsourcing in passing. The search procedure was
undertaken in March 2015.
The search query resulted in 50 hits. These 50
papers were then screened for inclusion and relevance
using the following criteria: 1) The full paper can be
acquired, 2) the paper is in English (and has been published in an international venue), 3) the paper is
about gamification and crowdsourcing instead of the
terms just being mentioned in the metadata, 4) the
paper is not a duplicate reporting the same study in
several papers, and 5) the paper contains empirically
derived results.
Of the initial 50 hits, one paper was excluded due
to the full paper not being available, and one paper due
to not being in English. Furthermore, four papers were
excluded from the review due to criterion 3).
Moreover, one duplicate was found. [39] and [40]
describe the same experiment and report similar
results. Therefore, we have merged the information of
the two papers and treat them in the analyses as one entity ([40]). [42] and [46] also analyze the same case, but these papers were not considered duplicates because different data were gathered, differing methods were used and, consequently, different results were reported. Finally, 28 papers matching the criteria
were identified for the review.
In the second step (see [54]) of the literature
analysis, the included papers were coded. Two
researchers carried out this process independently.
After coding, the two individual coding sheets were compared, discussed and combined. For all papers, information
pertaining to A) crowdsourcing (crowdsourcing type
(see [15]) and financial incentives), B) gamification
(affordances, psychological mediators/outcomes,
behavioral outcomes (see [20]) and scoring rules) and
C) the results of the empirical studies was gathered.
In the third step, the 28 empirical papers included
in the review were further categorized as either
containing results regarding the effects of gamification
or not containing any results about the effects of
gamification. In the latter case, the papers simply
described a gamified crowdsourcing implementation.
4. Results
4.1. How is gamification used?
The reviewed body of literature employed 11 types of gamification affordances¹, which indicates that
gamification is used in a variety of ways in CS (see
Table 1). However, points (in 22 cases) and
leaderboards (in 20 cases) were clearly the most frequently implemented gamification mechanics. Commonly,
these two affordances were combined to create
competition between the participants. Understandably,
our results indicate that points and scores are employed
in CS contexts where the task is more easily
enumerable such as crowdprocessing and crowdrating,
and which strive for a large number of homogeneous
contributions. The richest employment of gamification
with the largest variety of affordances can be found in
solving-related CS work, whereas crowdprocessing
and crowdrating are more focused on simpler forms of
gamification such as points and leaderboards. CS types
of crowdcreating and crowdsolving differ from
crowdrating and crowdprocessing in that the
participation depends on a variety of heterogeneous
contributions. Crowdsourcing related to creative and
diverse contributions therefore might benefit from
more manifold gamification solutions.
CS types of creating and rating differ from solving
and processing in that the end-goal of the
crowdsourced work is an emergent value from the
collective of contributions. Therefore, it could be
assumed that designers of gamified CS systems with
emergent outcomes would rather use cooperative
gamification designs compared to designs of non-
emergent approaches. However, when analyzing the affordances used across these types, no notable differences could be found. Competition-based designs with
points and leaderboards that encourage individual work
rather than cooperative work were used very often in
crowdprocessing, solving and rating approaches.
However, the scoring approaches differed based upon
how points were awarded and from which actions they
could be earned. In crowdprocessing approaches,
where the sheer number of contributions is more
important than quality [15], users were commonly rewarded for general participation (e.g. number of
completed tasks [28], number of correct answers [27], or number of visited locations [52]). In crowdrating approaches, by contrast, where the output is more emergent, users were rewarded for the quality of their contributions (e.g. the quality of a contribution as rated by others [9], or similarity/agreement with the contributions of other crowdsourcees [10, 16, 22, 47]). In crowdsolving approaches both forms occurred equally often (e.g. number of completed tasks [40, 55]; quality of contribution as rated by others [35, 53]). Unfortunately, the small number of studies investigating gamification in crowdcreation approaches limits the identification of a clear pattern in their gamification implementations.

¹ For the analysis of how gamification is used in CS, we collected and categorized the affordances mentioned in the reviewed studies. It is noteworthy that we did not evaluate how a certain affordance was implemented in any given study but instead relied on what was reported in the reviewed papers and categorized the elements based on the information provided by the authors. Neither did we compare the affordances reported across the studies. Therefore, variance is bound to exist within the reported affordance categories.
In addition to the different point-awarding logics, the points and scoring affordances were combined with further elements in diverse ways across implementations: for instance, with time limits (e.g. [22, 30]), as a basis for calculating the level of crowdsourcees (e.g. [36, 47]), with the ability to compare scores between peers and teams (e.g. [36, 47]), and with badges and missions that visualize specific goals (e.g. [2, 35, 42, 46, 53]).
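To make these point-awarding logics concrete, the following minimal Python sketch (our illustration, not code from any reviewed system; the function names and point values are invented) contrasts a quantity-based rule of the kind common in crowdprocessing with an agreement-based rule of the kind common in crowdrating:

```python
# Illustrative sketch only: two point-awarding rules observed in the
# reviewed literature; names and values are hypothetical.
from collections import Counter

def processing_score(completed_tasks: int, points_per_task: int = 10) -> int:
    """Quantity-based rule: reward sheer participation (cf. crowdprocessing)."""
    return completed_tasks * points_per_task

def rating_score(vote: str, peer_votes: list, max_points: int = 10) -> int:
    """Agreement-based rule: reward similarity with the crowd (cf. crowdrating)."""
    if not peer_votes:
        return 0
    agreement = Counter(peer_votes)[vote] / len(peer_votes)
    return round(agreement * max_points)

# A worker who completes 25 tasks earns 250 points; a rater who matches
# 8 of 10 peer votes earns 8 of 10 points, nudging contributors to
# "think and act like the community".
print(processing_score(25))                                        # 250
print(rating_score("crater", ["crater"] * 8 + ["no crater"] * 2))  # 8
```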
In most of the studies the incentive system was
solely based on gamification (Table 1). Some studies
additionally employ financial rewards, e.g. a small
monetary compensation or a prize for the leaders on a
high score list, to motivate the participants. Although
studies suggest that extrinsic rewards (such as money)
can potentially crowd out intrinsic motivation ([7, 8]), [42] and [46] found in their experiment that gamification in combination with financial rewards can in fact increase participation compared to gamification alone. However, the authors investigated this only in the short term and indicated that financial rewards, in comparison to gamification, may reduce participation in the long term.
Table 1: Incentive orchestration

Incentive | Literature | #
gamification | [2]¹, [4], [9], [16], [23], [27]*, [28], [30], [35], [36], [40], [41], [43], [44], [45], [47], [48], [49], [52], [53], [55], ([42]*, [46]*) | 21
gamification + financial rewards | [3], [5], [10], [22], [33], [42]*, [46]*, ([27]*) | 7

¹ References in bold refer to studies in which empirical results about gamification have been reported.
* as experimental condition
Table 2: Gamification affordances per CS type

Affordance | Processing (N = 7) | Rating (N = 10) | Solving (N = 8) | Creating (N = 3) | Frequency (Total = 28)
points / score | [3], [27], [33], [36], [44], [52] | [9], [10], [16], [22], [30], [41], [42], [46], [47] | [23], [35]¹, [40], [53], [55] | [45], [49] | 22
leaderboards / rankings | [3], [27], [28], [33], [36], [52] | [4], [9], [10], [16], [22], [42], [46], [47] | [23], [35], [40], [53], [55] | [2] | 20
badges / achievements | [28], [36], [52] | [41], [42], [46] | [40], [53] | [2], [49] | 10
levels | [3], [36] | [9], [47] | [43], [55] | [49] | 7
progress | [28], [36] | – | [35], [43], [53] | – | 5
feedback | [3], [27] | – | [35], [40] | – | 4
rewards | – | [42], [46] | [5] | – | 3
storytelling | [44] | – | [48] | [49] | 3
missions | – | – | [35], [48] | – | 2
virtual territories | – | – | [40] | [49] | 2

¹ References in bold refer to studies in which empirical results about gamification have been reported.
Furthermore, [27] indicates that the output quality
of paid CS can be worse. Considering how
gamification is implemented in CS (see Table 2), it
seems that monetary rewards have been used in
implementations with simple gamification designs,
mainly together with points and leaderboards.
4.2. Does gamification work?
All of the empirical studies on the effectiveness of
gamification in CS report that gamification has a
positive impact on CS work (Table 3). Most studies
that directly compared a gamified and non-gamified
approach (e.g. [10, 16, 28, 33, 36, 40, 44, 53]) report
several positive effects, such as increased (long-term [36]) engagement [10, 28, 33, 36, 44, 53], improved output quality [10, 16, 36] and reduced cheating compared to traditional paid CS [10]. However, gamification does not necessarily lead to an increase in participation: one study measured only very small differences compared to a control group without gamification ([42]). In addition to the above studies
that employed direct comparisons, five studies reported
positive results based on the users’ perception of the
gamified crowdsourcing system [2, 9, 35, 47] or based
on the measured user engagement [45]. These mostly descriptively reported results do not demonstrate effects of gamification per se, but they can be seen as positive indicators of the acceptance of gamification in the context of CS.
Nearly all of the analyzed papers measure the
effectiveness of the gamified system by measuring
behavioral outcomes such as participation or
willingness to contribute as the dependent variable. In all empirical studies the quality or quantity of the dependent variables was measured by collecting log data or conducting surveys. Several studies also analyzed psychological outcomes. Table 4 gives an overview of the literature in which results about psychological outcomes were reported. The psychological outcomes were not commonly measured using comprehensive measurement instruments; instead, they were mostly examined through simple questionnaires or qualitative observations, or participants’ observed behavior was used as a proxy for psychological aspects. Validated psychometric measurement instruments were hardly used at all: only one study ([5]) appeared to use a validated measurement construct, for the experience of fun.
Table 3: The results of gamification on crowdsourced work in different types of studies

Study type | Positive compared to non-gamified approach | Perceived as positive | Design studies | Frequency
Quantitative - inferential | [10], [36], [44] | – | [5], [27], ([36]) | 5
Quantitative - descriptive | [40] | [45] | – | 2
Qualitative | – | [47] | [46] | 2
Mixed methods - inferential | [28], [33], [53] | [2] | [42], ([28]) | 5
Mixed methods - descriptive | [16] | [9], [35] | – | 3
Total | 8 | 5 | 4 | 17
Table 4: Outcome variables in the literature

Outcome | Literature | Frequency
Psychological | [2]¹, [5], [9], [10], [28], [33], [35], [40], [42], [44], [46], [49] | 12
- motivation | [2], [10], [28], [33], [40], [42], [44], [46] |
- attitude | [2], [28], [46] |
- fun/enjoyment | [2], [5], [9], [35], [49] |
Behavioral | [2], [5], [9], [10], [16], [22], [23], [27], [28], [30], [33], [35], [36], [40], [42], [44], [45], [46], [47], [49], [52], [53], [55] | 23

¹ References in bold refer to studies in which empirical results about gamification have been reported.
5. Recommendations for gamifying
crowdsourcing systems
This review points to several recommendations for CS developers on using gamification. In the review we
analyzed several types of studies that investigated the
use of gamification in CS: 1) studies in which
controlled experiments were conducted, thus providing detailed gamification design results (see Table 3, col. “Design studies”), as well as 2) studies reporting
concrete implementations of gamified CS systems. In
this section we describe what kinds of
recommendations can be derived from the synthesis of
literature on gamification in CS.
Points / scores: Nearly all of the examined systems use a metric (e.g. points or scores) as a core element to reward measurable events in the human-system interaction. For this reason, we further analyzed the scoring mechanisms used in the papers. Table 5 summarizes the findings, clustered by crowdsourcing type.
Rankings / leaderboards: The empirical findings indicate that rankings can be very effective in motivating certain users of a crowdsourcing community to contribute heavily [36]. However, several studies show that the concrete design of a leaderboard affects participation (in the context of crowdprocessing [27, 36] and crowdrating [42, 46]). Based on these findings, [27] recommend short-term leaderboards, because “all-time” leaderboards can demotivate low-ranked participants and novices, for whom the top seems impossible to reach. Studies by [42, 46] showed that long-term leaderboards can lead to demotivation and possibly to negative effects on the overall outcome of the crowdsourcing. The design of a leaderboard implementation therefore seems highly context dependent. [36] notes that many crowdsourcing approaches follow the “90-9-1” participation rule, implying that only 1% of the users perform almost all of the actions; consequently, long-term leaderboards might still be suitable for many CS implementations.
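To illustrate the difference between the two leaderboard designs discussed above, the following sketch (a hypothetical data model with invented names and values, not taken from any reviewed study) computes an “all-time” ranking and a short-term ranking restricted to a recent time window:

```python
# Illustrative sketch: "all-time" vs. short-term leaderboards.
from datetime import datetime, timedelta

# Each event is (worker_id, points, timestamp); the data is invented.
events = [
    ("alice", 10, datetime(2015, 3, 1)),
    ("bob",    5, datetime(2015, 3, 20)),
    ("alice",  5, datetime(2015, 3, 21)),
]

def leaderboard(events, since=None):
    """Aggregate points per worker, optionally counting only recent events."""
    totals = {}
    for worker, points, ts in events:
        if since is None or ts >= since:
            totals[worker] = totals.get(worker, 0) + points
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

now = datetime(2015, 3, 22)
all_time  = leaderboard(events)                                 # veterans lead
this_week = leaderboard(events, since=now - timedelta(days=7))  # novices can catch up
```

A short window resets the competition regularly, which is in line with the argument of [27] that novices should remain within reach of the top.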
Table 5: Score design patterns for CS types

Crowdprocessing: Homogeneous, non-emergent tasks are easily enumerable. Therefore, most crowdprocessing approaches reward the quantitative number of fulfilled tasks. This simple mechanism is usually combined with further affordances, such as level systems and/or public leaderboards, to achieve a (self- or other-) competitive engagement [27, 36]. However, leaderboards should be used carefully: empirical studies on the use of leaderboards in crowdprocessing systems have shown both positive and negative results [27, 36].

Crowdrating: In crowdrating, each individual contribution represents a vote on a given topic, and a collective value emerges from the aggregation of these votes [15]. Therefore, not only the quantity but also the quality is rewarded in most gamified crowdrating cases. Scoring mechanisms that set the quality of a contribution in the context of the emergent outcome (e.g. the degree of agreement with the contributions of other crowdsourcees) are used to motivate users to emulate others and to “think and act like the community”. As in crowdprocessing, this mechanism is often combined with leaderboards (see Table 2) or time pressure [10, 22, 30] to create a competition-based setting.

Crowdsolving: Crowdsolving tasks strive for heterogeneous, non-emergent contributions, which can be very diverse and therefore hard to evaluate by technical means. Depending on the concrete problem, task and implementation, scoring mechanisms that reward either the quantity of participation or the quality of the output can be suitable. This is highly contextual and depends on the possibilities to measure task fulfillment and/or task quality in a concrete use case. However, [5] provide first empirical results on reward design in crowdsolving approaches. They showed in an experiment that gamification rewards explicitly announced before a crowdsolving task phase can increase the quality of the CS work and the engagement of crowdsourcees. Furthermore, engagement can be increased by offering an open chance to achieve greater rewards depending on the quality of the work.

Crowdcreating: On a general level, crowdcreating systems aim at producing collaborative value through diverse and creative contributions. In such systems, gamification can be used to motivate users towards, for example, cooperation and creativity. Since only few studies could be found on gamification in crowdcreation systems, no actual design patterns based on data can yet be described. However, as the approach aims at gathering diverse contributions, implementing gamification in various forms (instead of, for example, merely points and badges) and promoting cooperation rather than competition could potentially be beneficial for reaching the common goal.
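As a rough illustration of the quality-dependent reward pattern that [5] found effective in crowdsolving (the base reward, bonus cap and quality scale below are invented placeholders, not values from the study), a scoring rule might grant a fixed reward for completing a task plus an open-ended bonus tied to rated quality:

```python
def solving_reward(quality_rating: float, base: int = 10, bonus_cap: int = 40) -> int:
    """Base reward for participation plus an open chance at a larger,
    quality-dependent bonus; all numbers are hypothetical."""
    assert 0.0 <= quality_rating <= 1.0, "quality assumed to be rated on [0, 1]"
    return base + round(quality_rating * bonus_cap)

print(solving_reward(0.9))  # 46: high-quality work earns most of the bonus
```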
Level systems: The empirical findings of [36] indicate that differences might exist between gamification designs with level systems, which motivate by visualizing individual achievements, and public participation rankings, which encourage workers to compare their effort with others. The results indicate that social achievements seem to be slightly more effective than personal level systems.
Manifold gamification approaches: Several examples use rich gamification designs with a diverse set of affordances (see Table 2). [42, 46] propose mixing several motivational affordances for different target groups to increase the overall outcome, which could be particularly important in crowdcreating and crowdsolving systems that benefit from the diversity of the participants. However, the experiment of [36] indicates that adding more motivational affordances does not always increase motivation, especially in homogeneous scenarios such as crowdprocessing. Little knowledge is available so far to explain the effectiveness of affordances for specific user groups. Only [28] examined different target groups, showing that gamification works for both young and senior crowdsourcees, whereby competition-based gamification might be more effective for younger rather than older participants. Several studies [27, 28, 36, 42] argue for the importance of intrinsic motivations such as altruism or curiosity. Sustainable gamification designs should therefore be geared to user needs, suggesting more diversity than just points and leaderboards.
6. Conclusions
There has been a large variety of literature
examining a wide array of different gamification
implementations in all of the four types of CS
initiatives. The literature seems to be unanimous: gamification does indeed seem to work with the majority of configurations and pairings across the different CS types (crowdprocessing, -rating, -solving, and -creating).
The empirical studies comparing gamified with non-
gamified approaches report an increase in engagement,
output quality or other positive effects.
The literature, however, at this early stage is still
quite scattered and not enough research has been
conducted to draw clear conclusions as to which
specific implementation would work better or worse in
certain situations. It is clear that contextual factors and
factors related to crowdsourcees play a role, but as to
what extent and how is still unclear. Nevertheless, designing gamification is not an easy task, as also witnessed by the studies in this review. When designing
an information system that attempts to affect human
motivations and behavior, developers will inevitably
end up with a complex design challenge.
What our study does show is that there are
differences as to how gamification has been employed
across different CS archetypes. Crowdsourcing
initiatives that provide more monotonous tasks most
commonly used mere points and other simpler
gamification implementations, whereas CS initiatives that seek more diverse and creative contributions have employed gamification in more manifold ways, with a richer set of affordances (see Table 5). Regardless, points and leaderboards
were clearly the most popular motivational affordances
used in all four forms of crowdsourcing systems to
create competition between the participants.
Several limitations should be noted both in the
scope of this review as well as in the reviewed body of
literature: 1) only a few studies measured psychological aspects with rigorous instruments, 2) only a few studies
had carried out full experiments with control groups, 3)
many studies lump multiple gamification mechanics together, making it difficult to determine where an observed effect stems from, 4) gamification designs promoting
cooperative behavior have been studied in only a few
cases, 5) due to the novelty of the phenomena the body
of literature is limited and the topic has not yet been
frequently addressed in high-quality journals, and 6)
the scope of this review was focused on studies
investigating gamification particularly. However, it is
possible that related research has been conducted also
under other conceptual developments such as serious
games, games-with-a-purpose, human-based
computation games or persuasive technology.
Conscious of these limitations, further studies on
gamification (and crowdsourcing) should attempt to
avoid them.
7. References
[1] L. von Ahn, "Human computation", In Proceedings
of the 46th Annual Design Automation Conference -
DAC ’09, 2009, San Francisco, IEEE, pp. 418–419.
[2] A. Bowser, D. Hansen, Y. He, et al., "Using
gamification to inspire new citizen science volunteers",
In Proceedings of the First International Conference on
Gameful Design, Research, and Applications -
Gamification ’13, 2013, Stratford, ACM, pp. 18–25.
[3] M. Brenner, N. Mirza, and E. Izquierdo, "People
recognition using gamified ambiguous feedback", In
Proceedings of the First International Workshop on
Gamification for Information Retrieval - GamifIR ’14,
2014, Amsterdam, ACM, pp. 22–26.
[4] J. Chamberlain, "The annotation-validation (AV)
model: rewarding contribution using retrospective
agreement", In Proceedings of the First International
Workshop on Gamification for Information Retrieval -
GamifIR ’14, 2014, Amsterdam, ACM, pp. 12–16.
[5] J. Choi, H. Choi, W. So, J. Lee, and J. You, "A
Study about Designing Reward for Gamified
Crowdsourcing System", In Proceedings of the 3rd
International Conference, DUXU 2014, Held as Part of
HCI International 2014, 2014, Heraklion, Springer
International Publishing, pp. 678–687.
[6] S. Cooper, F. Khatib, A. Treuille, et al., "Predicting
protein structures with a multiplayer online game.",
Nature, 466(7307), 2010, pp. 756–760.
[7] E.L. Deci, "Effects of externally mediated rewards
on intrinsic motivation", Journal of Personality and
Social Psychology, 18(1), 1971, pp. 105–115.
[8] E.L. Deci, R. Koestner, and R.M. Ryan, "A meta-
analytic review of experiments examining the effects
of extrinsic rewards on intrinsic motivation",
Psychological Bulletin, 125(6), 1999, pp. 627–668.
[9] A. Dumitrache, L. Aroyo, C. Welty, R.-J. Sips, and
A. Levas, "Dr. Detective: combining gamification
techniques and crowdsourcing to create a gold standard
for the medical domain", In Proceedings of the 1st
International Workshop on Crowdsourcing the
Semantic Web (CrowdSem 2013), 2013, Sydney, pp. 16–31.
[10] C. Eickhoff, C.G. Harris, A.P. de Vries, and P.
Srinivasan, "Quality through flow and immersion", In
Proceedings of the 35th international ACM SIGIR
conference - SIGIR ’12, 2012, Portland, ACM Press,
pp. 871–880.
[11] P.D. Ellis, The essential guide to effect sizes:
Statistical power, meta-analysis, and the interpretation
of research results, Cambridge University Press,
Cambridge, 2010.
[12] Elsevier B.V., "Scopus", http://www.scopus.com/,
June 15, 2015.
[13] E. Estellés-Arolas and F. Gonzalez-Ladron-de-
Guevara, "Towards an integrated crowdsourcing
definition", Journal of Information Science, 38(2),
2012, pp. 189–200.
[14] Gartner, "Gartner says by 2015, more than 50
percent of organizations that manage innovation
processes will gamify those processes.", http://www.
gartner. com/it/page.jsp?id=1629214, April 11, 2011.
[15] D. Geiger and M. Schader, "Personalized task
recommendation in crowdsourcing information
systems - Current state of the art", Decision Support
Systems, 65, 2014, pp. 3–16.
[16] J. Goncalves, S. Hosio, D. Ferreira, and V.
Kostakos, "Game of words: tagging places through
crowdsourcing on public displays", In Proceedings of
the 2014 conference on Designing interactive systems -
DIS ’14, 2014, Vancouver, ACM, pp. 705–714.
[17] J. Hamari, "Transforming homo economicus into
homo ludens: A field experiment on gamification in a
utilitarian peer-to-peer trading service", Electronic
Commerce Research and Applications, 12(4), 2013,
pp. 236–245.
[18] J. Hamari, K. Huotari, and J. Tolvanen,
"Gamification and economics", In Walz, S.P. and S.
Deterding, eds., The Gameful World: Approaches,
Issues, Applications, MIT Press, Cambridge, 2015, pp.
139–161.
[19] J. Hamari and J. Koivisto, "Why do people use
gamification services?", International Journal of
Information Management, 35(4), 2015, pp. 419–431.
[20] J. Hamari, J. Koivisto, and H. Sarsa, "Does
Gamification Work? -- A Literature Review of
Empirical Studies on Gamification", In Proceedings of
the 47th Hawaii International Conference on System
Sciences - HICSS, 2014, Waikoloa, IEEE, pp. 3025–3034.
[21] J. Hamari, M. Sjöklint, and A. Ukkonen, "The
sharing economy: Why people participate in
collaborative consumption", Journal of the Association
for Information Science and Technology, forthcoming,
2015.
[22] C.G. Harris, "The Beauty Contest Revisited!:
Measuring Consensus Rankings of Relevance using a
Game", In Proceedings of the First International
Workshop on Gamification for Information Retrieval -
GamifIR ’14, 2014, Amsterdam, ACM, pp. 17–21.
[23] J. He, M. Bron, and L. Azzopardi, "Studying User
Browsing Behavior Through Gamified Search Tasks",
In Proceedings of the First International Workshop on
Gamification for Information Retrieval - GamifIR ’14,
2014, Amsterdam, ACM, pp. 49–52.
[24] J. Howe, "The Rise of Crowdsourcing", Wired,
14(6), 2006.
[25] K. Huotari and J. Hamari, "Defining
gamification", In Proceeding of the 16th International
Academic MindTrek Conference on - MindTrek ’12,
2012, Tampere, ACM Press, pp. 17–22.
[26] IEEE, "Everyone’s a Gamer IEEE Experts
Predict Gaming Will Be Integrated Into More than 85
Percent of Daily Tasks by 2020", http://www.ieee.org/
about/news/2014/25_feb_2014.html, April 14, 2014.
[27] P.G. Ipeirotis and E. Gabrilovich, "Quizz:
Targeted Crowdsourcing with a Billion (Potential)
Users", In Proceedings of the 23rd international
conference on World wide web - WWW ’14, 2014,
Seoul, ACM, pp. 143–154.
[28] T. Itoko, S. Arita, M. Kobayashi, and H. Takagi,
"Involving senior workers in crowdsourced
proofreading", In Proceedings of the 8th International
Conference, UAHCI 2014, Held as Part of HCI
International 2014, 2014, Heraklion, Springer
International Publishing, pp. 106–117.
[29] J.H. Jung, C. Schneider, and J. Valacich,
"Enhancing the Motivational Affordance of
Information Systems: The Effects of Real-Time
Performance Feedback and Goal Setting in Group
Collaboration Environments", Management Science,
56(4), 2010, pp. 724–742.
[30] H. Kacorri, K. Shinkawa, and S. Saito,
"Introducing game elements in crowdsourced video
captioning by non-experts", In Proceedings of the 11th
Web for All Conference on - W4A ’14, 2014, Seoul,
ACM, pp. 1–4.
[31] B. Kanefsky, N.G. Barlow, and V.C. Gulick, "Can
Distributed Volunteers Accomplish Massive Data
Analysis Tasks?", In Proceedings of the 32th Annual
Lunar and Planetary Science Conference, 2001,
Houston.
[32] N. Kaufmann, T. Schulze, and D. Veit, "More
than fun and money. Worker Motivation in
Crowdsourcing - A Study on Mechanical Turk.", In
Proceedings of the 17th Americas Conference on
Information Systems - AMCIS, 2011, Detroit, pp. 1–11.
[33] R. Kawajiri, M. Shimosaka, and H. Kahima,
"Steered crowdsensing: Incentive Design towards
Quality-Oriented Place-Centric Crowdsensing", In
Proceedings of the 2014 ACM International Joint
Conference on Pervasive and Ubiquitous Computing -
UbiComp ’14, 2014, Seattle, ACM, pp. 691–701.
[34] J. Koivisto and J. Hamari, "Demographic
differences in perceived benefits from gamification",
Computers in Human Behavior, 35, 2014, pp. 179–188.
[35] J.J. Lee, P. Ceyhan, W. Jordan-Cooley, and W.
Sung, "GREENIFY: A Real-World Action Game for
Climate Change Education", Simulation & Gaming,
44(2-3), 2013, pp. 349–365.
[36] T.Y. Lee, C. Dugan, W. Geyer, et al.,
"Experiments on motivational feedback for
crowdsourced workers", In Proceedings of the 7th
International Conference on Weblogs and Social
Media - ICWSM 2013, 2013, AAAI Press, pp. 341–350.
[37] J.M. Leimeister, "Collective Intelligence",
Business & Information Systems Engineering, 2(4),
2010, pp. 245–248.
[38] C.J. Lintott, K. Schawinski, A. Slosar, et al.,
"Galaxy Zoo: Morphologies derived from visual
inspection of galaxies from the Sloan Digital Sky
Survey", Monthly Notices of the Royal Astronomical
Society, 389(3), 2008, pp. 1179–1189.
[39] Y. Liu, T. Alexandrova, and T. Nakajima,
"Gamifying intelligent environments", In Proceedings
of the 2011 international ACM workshop on
Ubiquitous meta user interfaces - Ubi-MUI ’11, 2011,
Scottsdale, ACM, pp. 7–12.
[40] Y. Liu, T. Alexandrova, T. Nakajima, and V.
Lehdonvirta, "Mobile Image Search via Local Crowd:
a User Study", In Proceedings of the 17th International
Conference on Embedded and Real-Time Computing
Systems and Applications (RTCSA 2011), 2011, Los
Alamitos, IEEE Computer Society, pp. 109–112.
[41] A.D. Mason, G. Michalakidis, and P.J. Krause,
"Tiger Nation: Empowering citizen scientists", In
Proceeding of the 6th IEEE International Conference
on Digital Ecosystems and Technologies (DEST),
2012, IEEE, pp. 1–5.
[42] E. Massung, D. Coyle, K.F. Cater, M. Jay, and C.
Preist, "Using crowdsourcing to support pro-
environmental community activism", In Proceedings of
the SIGCHI Conference on Human Factors in
Computing Systems - CHI ’13, 2013, Paris, pp. 371–380.
[43] Y. Nagai, A. Hiyama, T. Miura, and M. Hirose,
"T-echo: Promoting intergenerational communication
through gamified social mentoring", In Proceedings of
the 8th International Conference - UAHCI 2014, Held
as Part of HCI International 2014, 2014, Heraklion,
Springer, pp. 582–589.
[44] T. Nose and R. Hishiyama, "Analysis of self-
tagging during conversational chat in multilingual
gaming simulation", In 2nd International Conference
on Future Generation Communication Technologies,
FGCT 2013, 2013, London, IEEE, pp. 81–86.
[45] D. Pothineni, P. Mishra, A. Rasheed, and D.
Sundararajan, "Incentive Design to Mould Online
Behavior: A Game Mechanics Perspective", In
Proceedings of the First International Workshop on
Gamification for Information Retrieval - GamifIR ’14,
2014, Amsterdam, ACM, pp. 27–32.
[46] C. Preist, E. Massung, and D. Coyle, "Competing
or aiming to be average?: Normification as a means of
engaging digital volunteers", In Proceedings 17th
ACM Conference on Computer Supported Cooperative
Work and Social Computing, 2014, Baltimore, ACM,
pp. 1222–1233.
[47] S. Saito, T. Watanabe, M. Kobayashi, and H.
Takagi, "Skill development framework for micro-
tasking", In Proceedings of the 8th International
Conference, UAHCI 2014, Held as Part of HCI
International 2014, 2014, Heraklion, Springer, pp. 400–409.
[48] M. Sakamoto and T. Nakajima, "Gamifying social
media to encourage social activities with digital-
physical hybrid role-playing", In Proceeding of the 6th
International Conference, SCSM 2014, Held as Part of
HCI International 2014, 2014, Heraklion, Springer, pp. 581–591.
[49] L.Y. Sheng, "Modelling learning from Ingress
(Google’s augmented reality social game)", In
Proceedings of the 2013 IEEE 63rd Annual Conference
International Council for Education Media, ICEM,
2013, Singapore, IEEE, pp. 1–8.
[50] T. Straub, H. Gimpel, F. Teschner, and C.
Weinhardt, "How (not) to Incent Crowd Workers",
Business & Information Systems Engineering, 57(3),
2015, pp. 167–179.
[51] J. Surowiecki, The wisdom of crowds, Anchor
Books, New York, 2005.
[52] A. Uzun, L. Lehmann, T. Geismar, and A.
Küpper, "Turning the OpenMobileNetwork into a live
crowdsourcing platform for semantic context-aware
services", In Proceedings of the 9th International
Conference on Semantic Systems - I-SEMANTICS
’13, 2013, Graz, ACM, pp. 89–96.
[53] B. Vasilescu, A. Serebrenik, P. Devanbu, and V.
Filkov, "How social Q&A sites are changing
knowledge sharing in open source software
communities", In Proceedings of the 17th ACM
conference on Computer supported cooperative work
& social computing - CSCW ’14, 2014, Baltimore,
ACM, pp. 342–354.
[54] J. Webster and R.T. Watson, "Analyzing the Past
to Prepare for the Future: Writing a Literature
Review", MIS Quarterly, 26(2), 2002, pp. xiiixxiii.
[55] D. Yakushin and J. Lee, "Cooperative robot
software development through the internet", In
Proceedings of the 2014 IEEE/SICE International
Symposium on System Integration (SII), 2014, Tokyo,
IEEE, pp. 577–582.
[56] P. Zhang, "Motivational Affordances: Reasons for
ICT Design and Use", Communications of the ACM,
51(11), 2008, pp. 145–147.
[57] Y. Zhao and Q. Zhu, "Evaluation on
crowdsourcing research: Current status and future
direction", Information Systems Frontiers, 16(3), 2014,
pp. 417–434.
[58] Y. Zhao and Q. Zhu, "Effects of extrinsic and
intrinsic motivation on participation in crowdsourcing
contest", Online Information Review, 38(7), 2014, pp.
896–917.
[59] H. Zheng, D. Li, and W. Hou, "Task Design,
Motivation, and Participation in Crowdsourcing
Contests", International Journal of Electronic
Commerce, 15(4), 2011, pp. 57–88.