Gamified Crowdsourcing: Conceptualization, Literature Review, and Future Agenda
Benedikt Morschheuser
Institute of Information Systems and Marketing, Karlsruhe Institute of Technology, Germany
Corporate Research, Robert Bosch GmbH, Germany
benedikt.morschheuser@kit.edu
Juho Hamari
Gamification Group, Tampere University of Technology, Finland
Gamification Group, University of Turku, Finland
juho.hamari@tut.fi
Jonna Koivisto
Gamification Group, Tampere University of Technology, Finland
jonna.koivisto@tut.fi
Alexander Maedche
Institute of Information Systems and Marketing, Karlsruhe Institute of Technology, Germany
alexander.maedche@kit.edu
Correspondence:
Benedikt Morschheuser
Karlsruhe Institute of Technology (KIT)
Institute of Information Systems and Marketing (IISM)
Fritz-Erler-Straße 23
76131 Karlsruhe
benedikt.morschheuser@kit.edu
T: +49 177 347 4435
Cite: Morschheuser, B., Hamari, J., Koivisto, J., & Maedche, A. (2017). Gamified crowdsourcing: Conceptualization, literature review, and future agenda. International Journal of Human-Computer Studies, 106, 26-43. https://doi.org/10.1016/j.ijhcs.2017.04.005
ABSTRACT

Two parallel phenomena are gaining attention in human-computer interaction research: gamification and crowdsourcing. Because crowdsourcing's success depends on a mass of motivated crowdsourcees, crowdsourcing platforms have increasingly been imbued with motivational design features borrowed from games, a practice often called gamification. While the body of literature and knowledge of the phenomenon have begun to accumulate, we still lack a comprehensive and systematic understanding of its conceptual foundations, of how gamification is used in crowdsourcing, and of whether it is effective. We first provide a conceptual framework for gamified crowdsourcing systems in order to understand and conceptualize the key aspects of the phenomenon. The paper's main contributions are derived through a systematic literature review that investigates how gamification has been examined in different types of crowdsourcing in a variety of domains. This meticulous mapping, which covers all aspects of our framework, enables us to infer what kinds of gamification efforts are effective in different crowdsourcing approaches, as well as to point to a number of research gaps and lay out future research directions for gamified crowdsourcing systems. Overall, the results indicate that gamification has been an effective approach for increasing crowdsourcing participation and the quality of the crowdsourced work; however, differences exist between types of crowdsourcing: research conducted in the context of crowdsourcing of homogeneous tasks has most commonly used simple gamification implementations, such as points and leaderboards, whereas crowdsourcing implementations that seek diverse and creative contributions employ gamification with a richer set of mechanics.

Keywords: gamification, crowdsourcing, literature review, research agenda, human computation, persuasive technology
1 INTRODUCTION
During recent years, modern ICT technologies have spawned two parallel phenomena: gamifica-
tion and crowdsourcing. Today, many different organizations employ crowdsourcing as a way to out-
source various tasks to be carried out by ‘the crowd’: a mass of people reachable through the Internet
(Howe, 2006). The rapid diffusion of these technologies can be seen both in practice and in academia
(Estellés-Arolas and González-Ladrón-de-Guevara, 2012; Hamari et al., 2014; IEEE, 2014; Seaborn
and Fels, 2015). As of December 2015, almost 3,000 crowdsourcing-related examples are listed at
crowdsourcing.org, a leading crowdsourcing industry portal. In parallel, business analysts have esti-
mated that at least 50% of all organizations that manage innovation processes have gamified some of
their processes by 2015 (Gartner, 2011). The primary general goals of crowdsourcing are either cost
savings or the possibility to handle tasks that would be difficult to perform without human support.
However, crowdsourcing relies on the existence of a reserve of people willing to take on tasks for free
or for little monetary compensation. Along this reasoning, crowdsourcing systems are increasingly
gamified (Hamari et al., 2014; Seaborn and Fels, 2015), that is, organizations seek to make the
crowdsourced work activity more like playing a game in order to provide other motives for working
than just monetary compensation. Such gamified crowdsourcing systems are increasing, and are a major
application area of gamification (Hamari et al., 2014).
However, while this new phenomenon seems intuitively appealing, there is little coherent understanding of the characteristic features of gamified crowdsourcing systems. Although there are scattered individual empirical studies on the topic, no efforts have yet been made to collate and synthesize this body of knowledge. Further, both crowdsourcing and gamification can take a variety of forms, and it would be myopic to assume that differing gamification implementations would function similarly across different crowdsourcing approaches. This lack of comprehensive understanding of the phenomenon inhibits us from designing effective incentive systems for crowdsourcing and, therefore, from optimally harnessing the potential of the crowd and deriving the most successful solutions and innovations.
In this paper, we provide a comprehensive review, overview, and future outlook on the usage and study of gamification in crowdsourcing systems. We first provide an integrated conceptual framework for gamified crowdsourcing systems (Figure 3), based on the extant literature on crowdsourcing (Geiger and Schader, 2014; Prpić et al., 2015) and gamification (Hamari et al., 2014; Seaborn and Fels, 2015). This framework remedies existing conceptual hurdles and scantness in how gamification, crowdsourcing, and their combinations are generally perceived, and acts both as a framework to direct this review and as an anchor point for further studies. The primary contribution of the paper is a systematic literature review of 110 papers that investigates how gamification is being studied and implemented in crowdsourcing research. Specifically, we review the use of different forms of gamification in different types of crowdsourcing, as well as the interplay of gamification and monetary rewards, the types of work being crowdsourced, the types of crowdsourcees, the domains where gamification in crowdsourcing has been applied, and the empirical results of studies on the effectiveness of gamification in crowdsourcing. This meticulous mapping enables us to 1) infer what kinds of gamification efforts are effective in different kinds of crowdsourcing approaches, 2) derive recommendations for designers of gamified crowdsourcing systems, and 3) outline an agenda for future research.
2 CONCEPTUAL FOUNDATIONS
2.1 Crowdsourcing
Generally, crowdsourcing can be seen as an online, distributed problem-solving approach that transforms problems and tasks into solutions by harnessing the potential of large groups of crowdsourcees via the Web rather than traditional employees or suppliers (Brabham, 2008a; Doan et al., 2011; Estellés-Arolas and González-Ladrón-de-Guevara, 2012; Howe, 2006; Nakatsu et al., 2014; Pedersen et al., 2013; Prpić et al., 2015; Zuchowski et al., 2016). With the rise of online collaboration technologies and Web 2.0, it has become fairly easy to reach large groups of people. Thus, the concept of crowdsourcing has become increasingly popular (Gatautis and Vitkauskaite, 2014; Geiger and Schader, 2014; Rouse, 2010; Zuchowski et al., 2016). There has been an increase in the number of startups with crowdsourcing-based business models (Brabham, 2010, 2008b), and many companies have begun to invest in internal and external crowdsourcing (Leimeister et al., 2009; Schlagwein and Bjørn-Andersen, 2014; Zuchowski et al., 2016). Crowdsourcing is considered a particularly useful way to coordinate work for tasks that can benefit from collective intelligence (Leimeister, 2010) or that are hard for computers to process and are therefore outsourced to people (Von Ahn, 2009).
Figure 1. Four Archetypes of Crowdsourcing Systems (based on Geiger and Schader, 2014)
Following the conceptual works of Geiger and Schader (2014) and Prpić et al. (2015)1, crowdsourcing systems can be categorized into four categories, depending on the characteristics of the crowdsourced work (see Figure 1). First, crowdprocessing approaches rely on the crowd to perform large quantities of homogeneous tasks. Identical contributions serve as a quality attribute of the work's validity, and the value is derived directly from each isolated contribution (non-emergent) (e.g. Mechanical Turk or Galaxy Zoo) (Lintott et al., 2008). Second, crowdsolving approaches use the diversity of the crowd to find a large number of heterogeneous solutions to a given problem. The value of this approach likewise results directly from each isolated contribution (non-emergent). Crowdsolving is often used for very complex problems (e.g. Foldit, a game-based approach to optimize protein folding) (Cooper et al., 2010) or when no pre-definable solution exists (e.g. ideation contests). Third, crowdrating systems commonly seek to harness the so-called wisdom of crowds (Surowiecki, 2005) to perform collective assessments or predictions. In this case, the emergent value arises from a large number of homogeneous 'votes' (e.g. NASA Clickworkers, in which the clicks/votes of a crowd were used to identify craters on Mars) (Kanefsky et al., 2001). Fourth, crowdcreating solutions seek to create comprehensive (emergent) artifacts based on a variety of heterogeneous contributions. Typical examples include all kinds of user-generated content (e.g. YouTube) or knowledge derived from collaborative aggregation (e.g. Wikipedia).

1 The frameworks of Geiger and Schader (2014) as well as Prpić et al. (2015) classify crowdsourcing into four categories that are comparable at their core. For clarity, we employed Geiger and Schader's (2014) terminology.
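The two dimensions above, contribution homogeneity and value emergence, jointly determine the archetype. As a minimal illustration (our own sketch; the enum and dictionary names are hypothetical, not from Geiger and Schader), the mapping can be made explicit in a few lines of Python:

```python
from enum import Enum

class Contributions(Enum):
    HOMOGENEOUS = "homogeneous"
    HETEROGENEOUS = "heterogeneous"

class Value(Enum):
    NON_EMERGENT = "non-emergent"  # value derived from each isolated contribution
    EMERGENT = "emergent"          # value arises from the totality of contributions

# The four archetypes of Geiger and Schader (2014)
ARCHETYPES = {
    (Contributions.HOMOGENEOUS, Value.NON_EMERGENT): "crowdprocessing",  # e.g. Galaxy Zoo
    (Contributions.HETEROGENEOUS, Value.NON_EMERGENT): "crowdsolving",   # e.g. Foldit
    (Contributions.HOMOGENEOUS, Value.EMERGENT): "crowdrating",          # e.g. NASA Clickworkers
    (Contributions.HETEROGENEOUS, Value.EMERGENT): "crowdcreating",      # e.g. Wikipedia
}

print(ARCHETYPES[(Contributions.HOMOGENEOUS, Value.EMERGENT)])  # -> crowdrating
```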
2.2 Gamification
Since an active crowd of participants is crucial for successful crowdsourcing, the motivation of crowdsourcees is a central concern (Zhao and Zhu, 2014a). Although much research has been done in the area of crowdsourcing, only a few studies have comprehensively investigated participants' motivations (e.g. Brabham, 2010, 2008b; Kaufmann et al., 2011; Zhao and Zhu, 2014b; Zheng et al., 2011) and incentive design (e.g. Harris et al., 2015; Leimeister et al., 2009; Straub et al., 2015). Studies have shown that a wide variety of reasons and motivations, ranging from intrinsic to extrinsic, lead people to participate in crowdsourcing and related online work and economic coordination (Hamari et al., 2016; Kaufmann et al., 2011; Straub et al., 2015; Zhao and Zhu, 2014b; Zheng et al., 2011). For instance, intrinsic motivation (caused by tasks that allow a participant to be creative and experience autonomy, to develop their own skills and feel competent, to enjoy a pastime, or to achieve social recognition) can in some cases be dominated by extrinsic motivation evoked by financial payoffs or external social reasons (Kaufmann et al., 2011). Moreover, task characteristics (Kaufmann et al., 2011; Zheng et al., 2011), task granularity (Nakatsu et al., 2014; Zhao and Zhu, 2014b), and perceived motivational affordances (Zhao and Zhu, 2014b) can further influence an individual's motivation.
Thus, one major challenge in motivating people to participate is to design a crowdsourcing system
that promotes and enables the formation of positive motivations towards crowdsourcing work and fits
the type of the activity. For instance, while some crowdsourcing approaches aim for systematically
derived contributions, others may call for incentive structures that promote creativity. In other words,
since crowdsourcing activities can differ dramatically, so can the means to motivate crowdsourcees in
a crowdsourcing initiative.
In incentive design, an important part of human-computer interaction research, one of the most popular developments of recent years is commonly called gamification (Hamari et al., 2014; Hamari et al., 2015; Seaborn and Fels, 2015). Gamification refers to design that seeks, first, to increase the motivation of users or participants to engage in an activity or behavior and, second, to increase or otherwise change a given behavior. The concept of gamification stems from the notion that games are a pinnacle form of hedonic, self-purposeful systems (Hamari and Koivisto, 2015a). Most gamification applications borrow design patterns from (video) games and, consequently, aim to give rise to experiences similar to those games commonly provide, for instance, feelings of mastery, autonomy, flow, or suspense (see e.g. Huotari and Hamari, 2016; Seaborn and Fels, 2015). In the context of crowdsourcing, gamification can be seen as an attempt to redirect crowdsourcees' motivations from purely rational gain-seeking to self-purposeful, intrinsically motivated activity: "Transforming Homo Economicus into Homo Ludens" (Hamari, 2013). Through this redirection of motivations, the goal is to influence crowdsourcees' behaviors (e.g. participation, concentration, work duration, engagement, or work quality) in the execution of the crowdsourced work. In other words, elements known from games act as motivational affordances (Huotari and Hamari, 2016; Jung et al., 2010; Zhang, 2008) for intrinsic motivations. Points, badges, leaderboards, avatars, and stories are frequently used motivational affordances in gamification (Hamari et al., 2014). The extant literature has conceptualized gamification into a few key aspects: 1) the design (gamification affordances), 2) the psychological outcomes of gamification, and 3) the behavioral outcomes of gamification (Huotari and Hamari, 2016) (Figure 2). As in classical, non-gamified crowdsourcing systems, gamification can be combined with additional incentives, typically monetary rewards, for instance, piece-rate payments or a tournament prize, which might have additional effects on crowdsourcees' motivations (Straub et al., 2015; Zhao and Zhu, 2014a). Existing empirical works also suggest that contextual factors, such as the domain (Hamari, 2013), and aspects relating to the user have an effect (Koivisto and Hamari, 2014).
Gamification has thus far been researched in a variety of areas, such as health (Jones et al., 2014), exercise (Hamari and Koivisto, 2014, 2015a, 2015b; Chen and Pu, 2014; Koivisto and Hamari, 2014), education (Bonde et al., 2014; Christy and Fox, 2014; Domínguez et al., 2013; De-Marcos et al., 2014; Denny, 2013; Morschheuser et al., 2014), commerce (Hamari, 2013, 2015), intra-organizational communication and activities (Morschheuser et al., 2017, 2015), government services (Bista et al., 2014), public engagement (Tolmie et al., 2013), environmental behavior (J. J. Lee et al., 2013; Lounis et al., 2014), and marketing and advertising (Terlutter and Capella, 2013; Cechanowicz et al., 2013). A review of empirical studies on gamification (Hamari et al., 2014) indicated that most gamification studies reported positive effects from the gamification implementations. However, there is still a sizeable gap in our knowledge of the effectiveness of gamification in crowdsourcing, of how the results pertaining to gamification differ across domains, and of which gamification strategies have been used in which environments and towards which kinds of goals. Even though crowdsourcing systems are one of the most researched application areas of gamification (Hamari et al., 2014), the literature is currently fragmented, and no comprehensive conceptualization of gamified crowdsourcing systems exists.
Figure 2. Abstract conceptualization of gamification according to Hamari et al. (2014) and Huotari and Hamari (2016)
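To make this three-part conceptualization concrete, the chain from affordances to outcomes can be sketched as a simple record type; this is purely illustrative, and all field names are our own rather than part of the framework papers:

```python
from dataclasses import dataclass, field

@dataclass
class GamifiedSystem:
    """Illustrative model of the gamification chain: affordances are expected to
    trigger psychological outcomes, which in turn drive behavioral outcomes."""
    affordances: list[str] = field(default_factory=list)              # e.g. points, badges
    psychological_outcomes: list[str] = field(default_factory=list)   # e.g. motivation, enjoyment
    behavioral_outcomes: list[str] = field(default_factory=list)      # e.g. participation, quality

example = GamifiedSystem(
    affordances=["points", "leaderboard"],
    psychological_outcomes=["competition-induced motivation"],
    behavioral_outcomes=["increased number of completed tasks"],
)
```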
2.3 An integrated conceptual framework for gamified crowdsourcing systems
To map the existing literature on gamified crowdsourcing, conceptualizations are needed to guide the mapping so that all the key aspects can be accounted for. Thus, building on the existing work on crowdsourcing (Geiger and Schader, 2014; Pedersen et al., 2013; Zuchowski et al., 2016) and gamification (Hamari et al., 2014) above, we suggest an integrated conceptual framework (as depicted in Figure 3). The framework represents all core aspects of gamified crowdsourcing systems outlined above and provides structure to investigate the phenomenon holistically, along its key components. Our literature review is guided by this framework and investigates both the empirical results on the effectiveness of gamification in crowdsourcing and the variety of concrete manifestations of gamified crowdsourcing systems in the current literature, with a focus on incentive orchestrations of gamification affordances and additional (i.e. monetary) rewards that could lead to various motivational and behavioral outcomes.
Figure 3. Conceptual Framework of Gamified Crowdsourcing Systems
3 RESEARCH METHODOLOGY
Following the guidelines of Webster and Watson (2002), Boell and Cecez-Kecmanovic (2015), and Ellis (2010), we began the literature review with a literature search. We used the Scopus database as our source of data, since it indexes all other potentially relevant databases, for instance, ACM, IEEE, Springer, and the DBLP Computer Science Bibliography. Since these individual databases differ in their search functions and algorithms, focusing the search on only one database ensured that the procedure is replicable, rigorous, and transparent (Boell and Cecez-Kecmanovic, 2015).

The literature search in the Scopus database was conducted in October 2016 using the search query TITLE-ABS-KEY(GAMIF* AND CROWD*). The results included any permutation of the terms gamification and crowdsourcing in the entry metadata (title, abstract, or keywords). We intentionally limited the search to the metadata, since searching for the terms in the full text would have returned a relatively large number of false positives, as many papers refer to gamification and/or crowdsourcing only in passing. We did not restrict the search to specific outlets or disciplines, for two reasons. First, crowdsourcing is a socio-technical approach and is therefore applied in various contexts. Second, due to the novelty of the gamification phenomenon, most of the studies have not yet found their way into high-quality journals and are published in peer-reviewed conferences instead.
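As an illustration of the query's semantics (a sketch of our own, not the tooling used for the review), the same TITLE-ABS-KEY(GAMIF* AND CROWD*) logic can be applied to locally stored metadata records:

```python
import re

def matches_query(record: dict) -> bool:
    """Return True if both wildcard terms appear in the title, abstract, or keywords,
    mirroring the Scopus query TITLE-ABS-KEY(GAMIF* AND CROWD*)."""
    metadata = " ".join([
        record.get("title", ""),
        record.get("abstract", ""),
        " ".join(record.get("keywords", [])),
    ]).lower()
    return bool(re.search(r"\bgamif\w*", metadata)) and bool(re.search(r"\bcrowd\w*", metadata))

# Example: a record mentioning 'gamified' and 'crowdsourcing' in its abstract matches.
paper = {"title": "Motivating volunteers",
         "abstract": "A gamified crowdsourcing platform for citizen science.",
         "keywords": []}
print(matches_query(paper))  # True
```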
The Scopus search query resulted in 145 hits. These hits contained 16 conference reviews and summaries, which were excluded since they provide no self-contained research contribution. Further, a preliminary conference paper version of the present study was excluded, resulting in a set of 128 hits (for a full list, see the Appendix). We then screened these papers for inclusion and relevance, using the following criteria: 1) the full paper can be acquired; 2) the paper is in English (and has been published by an international venue); 3) gamification and crowdsourcing must have a significant/relevant role in the paper instead of just being mentioned in the metadata; 4) the paper is not a duplicate that reports the same study in several papers. This screening was performed by all of the authors as a team. As a result, one paper was excluded because the full paper was not available, and another for not being in English. Further, we excluded 14 papers from the review, since gamified crowdsourcing was not actually relevant to their content. Moreover, in two cases, duplicates were found. For instance, Y. Liu, Alexandrova, and Nakajima (2011) and Y. Liu, Alexandrova, Nakajima et al. (2011) describe the same experiment and report similar results. Thus, we merged the information of the two papers and handled them in the analyses as one entity. Finally, 110 papers were chosen for inclusion in the literature review.
In the next step of the literature analysis, we coded the included papers (Webster and Watson, 2002). First, we gathered information on all the papers pertaining to 1) bibliometric information (authors, years, publication venues, publication types, disciplines), 2) the type of study (conceptual, empirical, research-in-progress), and 3) the domain. Using our framework presented in Figure 3, we collected 4) the different characteristics of gamified crowdsourcing systems, including the work type, the crowdsourcing type, the gamification affordances and mechanisms used, the incentive orchestration, and the type of crowdsourcees. Finally, 5) we accumulated the results of empirical studies on the psychological and behavioral outcomes of gamified crowdsourcing systems and on gamification's overall effectiveness in crowdsourcing. Based on the coded literature data, we analyzed the results in accordance with Webster and Watson (2002) and compiled the data into frequency tables.
4 RESULTS
4.1 Bibliometric information
As a first step in the analysis, we examined the bibliometric data of the 110 included papers. The first study to combine gamification and crowdsourcing was published as early as 2011. While three papers were published in 2012, research on the concepts began to increase in 2013 (15 papers). Up to October 2016, when the search was conducted, the number of papers grew constantly (2014: 29 papers; 2015: 41 papers; first half of 2016: 21 papers). The vast majority of these publications are conference papers and workshop papers (Table 1), which is in line with the novelty of the perspective; the reviewed studies were largely exploratory and preliminary works on the topic. However, an increasing number of high-quality journal publications and book chapters can be observed (2014: 1 paper; 2015: 21 papers; first half of 2016: 11 papers).
Table 1. Publication Types of the Reviewed Papers

Publication type                      Frequency      %
Full conference paper                        59   53.6
Workshop paper / poster                      22   20.0
Journal article / article in press           21   19.1
Short conference paper                        5    4.5
Book chapter                                  3    2.7
Total                                       110    100
Concerning the disciplines under which research on the topic was conducted, 84 of the studies had been published in venues and journals related to HCI and computer science. In addition, 9 papers were published in information retrieval-related forums. The rest were published in venues relating to economics (2), engineering (2), cartography (2), IT education (2), communication (1), innovation management (1), electronics (1), librarianship (1), musicology (1), physics (1), media production (1), bioinformatics (1), and social science (1).
4.2 Descriptive information
Beyond the bibliometric information, we analyzed the frequency of study types in the body of literature. As reported in Table 2, of the 110 reviewed studies, 63 were empirical. Of these, 37 papers studied the effects of gamification in crowdsourcing, while 26 studies empirically investigated other aspects relating to crowdsourcing and gamification. Beyond the empirical studies, 29 papers merely included preliminary descriptions of a future study or a description of a gamified crowdsourcing system. The body of literature contained 18 conceptual papers.
Table 2. Study Types

Type of study                                                                   Frequency      %
Empirical studies with results on how gamification works in crowdsourcing             37   33.6
Empirical studies with no results on how gamification works in crowdsourcing          26   23.6
(Preliminary) description of a study or a system; no empirical results                29   26.4
Conceptual, frameworks                                                                18   16.4
Total                                                                                 110    100
Regardless of the wide spectrum of domains in which research on crowdsourcing is being conducted, the entire body of literature indicates that crowdsourcing is always information-intensive and relates to some form of information processing or retrieval: solving, creating, processing, and rating. Gamified crowdsourcing is often applied to elicit information about an environment. Such studies commonly involve gathering, recognizing, and classifying biological (Ansari et al., 2013; Bowser et al., 2013; Prestopnik and Tang, 2015) and environment-related data (Mason et al., 2012), as well as promoting environmental behavior (J. J. Lee et al., 2013; Massung et al., 2013; Preist et al., 2014). We also identified that gamified crowdsourcing is popular in the context of digital cartography and navigation. The latter type of studies featured, for instance, the creation of digital maps based on user-reported data, the gathering of location-based sensory data (Kawajiri et al., 2014; Wang et al., 2015), location measurements (Uzun et al., 2013), geospatial information (Goncalves et al., 2014), and (indoor) navigation information (Bockes et al., 2015; Reinsch et al., 2013). Furthermore, as reported in Table 3, the domains of language-related information (e.g. proofreading, translation, etc.), innovation, and software development (e.g. the development of code fragments or requirements elicitation) were also among the most common contexts for gamified crowdsourcing. A rising trend during the past few years in gamified crowdsourcing has been the gathering of datasets for machine learning approaches. Overall, the application of gamified crowdsourcing is far-reaching and involves a variety of contexts, from information retrieval for entertainment purposes (Bainbridge, 2015; Pinto and Viana, 2015) to the solving of physical problems (Sørensen et al., 2016).
Table 3. Domains (domain — papers — frequency)

General crowdsourcing (no specific domain) (28): Ahmed and Mueller, 2014; Brenner et al., 2014; Carlier et al., 2016; Choi et al., 2014; Dai et al., 2016; Dergousoff and Mandryk, 2015; Eickhoff et al., 2012; Feyisetan et al., 2015; Hantke et al., 2015; Harris, 2014; He et al., 2014; Ipeirotis and Gabrilovich, 2014; Kacorri et al., 2014; Kacorri et al., 2015; Katmada et al., 2016; Kurita et al., 2016; T. Y. Lee et al., 2013; Panchariya et al., 2015; Nakatsu and Iacovou, 2014; Nose and Hishiyama, 2013; Roengsamut et al., 2015; Runge et al., 2015; Saito et al., 2014; Sakamoto et al., 2016; Simperl, 2015; Stannett et al., 2013; Vasilescu et al., 2014; Yu et al., 2015

Environment, nature, ecological behavior (12): Ansari et al., 2013; Bowser et al., 2013; Fedorov et al., 2016; J. J. Lee et al., 2013; Lessel et al., 2015; Mason et al., 2012; Massung et al., 2013; Netek and Panek, 2016; Preist et al., 2014; Prestopnik and Tang, 2015; Supendi and Prihatmanto, 2015; Supriadi and Prihatmanto, 2015

Cartography, navigation (12): Bockes et al., 2015; Goncalves et al., 2014; Kawajiri et al., 2014; Martella et al., 2015; McCartney et al., 2015; Moreno et al., 2015; Reinsch et al., 2013; Simões and De Amicis, 2016; Talasila et al., 2016; Uzun et al., 2013; Wang et al., 2015; Wu and Luo, 2014

Language (7): AlRouqi and Al-Khalifa, 2014; Benjamin, 2016; Chamberlain, 2014; Itoko et al., 2014; Kobayashi et al., 2015; Packham and Suleman, 2015; Ustalov, 2015

Machine learning (6): Deng et al., 2016; Fava et al., 2015; Inaba et al., 2015; Nunzio et al., 2016; Riegler et al., 2015; Rosani et al., 2015

Software development (6): Biegel et al., 2014; LaToza et al., 2013; Snijders et al., 2014, 2015; Yakushin and Lee, 2014; Xie et al., 2015

Innovation (5): Armisen and Majchrzak, 2015; Brandtner et al., 2014; Cherinka et al., 2013; Lauto and Valentin, 2016; Roth et al., 2015

Health, medical, neuroscience (5): Bentzien et al., 2013; Dumitrache et al., 2013; Silva and Lopes, 2016; Susumpow et al., 2014; Tinati et al., 2016

Education (3): Roa-Valverde, 2014; Marasco et al., 2015; Sheng, 2013

Politics (3): Dos Santos et al., 2015; Mahnič, 2014; Reid, 2013

Work (3): Machnik et al., 2015; Pothineni et al., 2014; Smith and Kilty, 2014

Entertainment (3): Bainbridge, 2015; Burnett et al., 2012; Pinto and Viana, 2015

Finance, funding (2): Altmeyer et al., 2016; Sakamoto and Nakajima, 2014

Tourism (3): Y. Liu, Alexandrova, Nakajima et al., 2011; Sigala, 2015; Simões et al., 2015

Energy (2): Cao et al., 2015; Hammais et al., 2014

Mobility, transportation (2): Brito et al., 2015; De Franga et al., 2015

Accessibility, disability (2): Prandi et al., 2016, 2015

Fashion (1): Melenhorst et al., 2015

Marketing (1): Mizuyama and Miyashita, 2016

Physics (1): Sørensen et al., 2016

Astronomy (1): Greenhill et al., 2016

Mentoring (1): Nagai et al., 2014

Behavioral research (1): Cucari et al., 2016

Total: 110
4.3 Empirical research papers
Of the 110 papers included in the review, 63 were identified as empirical research papers (Table 2). In the next sections, we report findings from these 63 empirical studies. For clarity on the two types of empirical results, in the following tables we mark citations to studies with empirical results about the effectiveness of gamification in crowdsourcing in bold, while studies that did not directly investigate the effectiveness of gamification are not bolded. Nearly all these papers contained detailed information about the implementation of gamification in a concrete crowdsourcing system. Thus, we were able to investigate not only the empirical results that allowed us to draw conclusions about the effectiveness of gamified crowdsourcing, but also the characteristics of the systems considered in the literature, along the components described in Figure 3.
4.4 Characteristics of gamified crowdsourcing systems in the literature
The core of every crowdsourcing system is the work that is outsourced to the crowd. A wide variety of activities could be found in the analyzed papers. We therefore clustered the crowdsourced work, based on the participants' core activities, into the categories shown in Table 4. Most of the analyzed approaches with detailed information about the crowdsourced work try to encourage people to do computational work that would otherwise pose challenges for computers without human guidance (Von Ahn, 2009). Examples include the recognition of objects in images, such as animals, plant species, or waste (Carlier et al., 2016; Deng et al., 2016; Lessel et al., 2015), proofreading of text scanned with OCR technology (Kobayashi et al., 2015), relevance assessment of different images (Harris, 2014), video transcription (Saito et al., 2014), and the annotation of medical texts (Dumitrache et al., 2013). Furthermore, we found that many of the identified approaches sought to encourage people to report different kinds of location-based information; usually, these cases are mobile apps or distributed stationary installations. Work that can easily be disseminated virtually in digital communities, such as the answering of user-generated questions or the provision of feedback, is also a popular use case of gamified crowdsourcing. Only a few studies considered creative creation work, such as ideation or complex optimization tasks that draw on the collective intelligence of a crowd.
Table 4. Types of Crowdsourced Work (work type — papers — frequency)

Recognizing, identifying, and tagging work (image recognition, object recognition, feature recognition, character recognition, information recognition) (15): Altmeyer et al., 2016; Brenner et al., 2014; Carlier et al., 2016; Deng et al., 2016; Dergousoff and Mandryk, 2015; Feyisetan et al., 2015; Itoko et al., 2014; Kobayashi et al., 2015; Kurita et al., 2016; Lessel et al., 2015; Mason et al., 2012; Riegler et al., 2015; Roengsamut et al., 2015; Rosani et al., 2015; Runge et al., 2015

Reporting location-based information (location tagging, reporting of location-based information, on-location experience, taking location-based photos) (14): Bowser et al., 2013; Brito et al., 2015; De Franga et al., 2015; Goncalves et al., 2014; Kawajiri et al., 2014; Y. Liu, Alexandrova, Nakajima et al., 2011*; Martella et al., 2015; Massung et al., 2013; Prandi et al., 2016; Preist et al., 2014; Sheng, 2013; Simões and De Amicis, 2016; Talasila et al., 2016; Uzun et al., 2013

Answering questions/sharing knowledge (answering user-generated questions, providing feedback, knowledge-sharing in communities) (6): Ipeirotis and Gabrilovich, 2014; Inaba et al., 2015; Y. Liu, Alexandrova, Nakajima et al., 2011*; Machnik et al., 2015; Pothineni et al., 2014; Vasilescu et al., 2014

Creative creation work (idea creation, algorithm development, requirements elicitation) (6): Bentzien et al., 2013; Choi et al., 2014; Dos Santos et al., 2015; Lauto and Valentin, 2016; Snijders et al., 2015; Yakushin and Lee, 2014

Text annotation work (text annotation, medical text annotation, biological data annotation) (5): Cao et al., 2015; Chamberlain, 2014; Dumitrache et al., 2013; Nose and Hishiyama, 2013; Ustalov, 2015

Assessment work (relationship building, relevance assessment, classification work, decision-making) (5): Eickhoff et al., 2012; Harris, 2014; Melenhorst et al., 2015; Prestopnik and Tang, 2015; Yu et al., 2015

Searching for and/or optimization of tasks (document searching, searching for digital profiles, finding optimal solutions) (5): He et al., 2014; T. Y. Lee et al., 2013; Nunzio et al., 2016; Sørensen et al., 2016; Tinati et al., 2016

Transcription work (video captioning) (3): Kacorri et al., 2014, 2015; Saito et al., 2014

Translation work (translating sentences) (1): Packham and Suleman, 2015

N/A (no clear work description provided, user-generated tasks, social activities) (4): Cucari et al., 2016; J. J. Lee et al., 2013; Nagai et al., 2014; Sakamoto and Nakajima, 2014

References in bold refer to studies in which empirical results about gamification have been reported.
* Mentioned twice, because the core task of that crowdsourcing system is the answering of location-based questions.
By analyzing the value creation (emergent or non-emergent solution) and the contribution type (homogeneous or heterogeneous contribution) according to our framework (Figure 3) and Geiger and Schader (2014), we found that most cases in the reviewed literature can be classified as gamified crowdprocessing systems (homogeneous tasks, non-emergent outcome). Cases of gamified crowdsolving and crowdrating were also present. However, very few cases described gamified crowdcreating systems (see Table 5).
We identified 12 categories of gamification affordances (design elements known from video games) in the reviewed body of literature (see Table 5). Points (in 53 cases) were clearly the most commonly reported gamification components and usually provided the basis for other affordances. Commonly, points were combined with leaderboards (in 45 cases) to create competition between participants. Points were also combined with further elements in diverse ways across implementations: they were used, for instance, in combination with time limits (e.g. Harris, 2014; Kacorri et al., 2014), as a basis for calculating the level of crowdsourcees in a level system (e.g. T. Y. Lee et al., 2013; Saito et al., 2014), with the ability to compare them between team members and peers (e.g. T. Y. Lee et al., 2013; Saito et al., 2014), and with badges and missions to visualize specific goals (e.g. Bowser et al., 2013; J. J. Lee et al., 2013; Massung et al., 2013; Preist et al., 2014; Vasilescu et al., 2014).
Looking at the relative shares of affordances reported across all the reviewed papers, we found the largest variety of affordances in studies that investigated solving-related crowdsourcing work, while papers on crowdprocessing and crowdrating reported simpler forms of gamification, such as simple combinations of points and leaderboards. Crowdcreating and crowdsolving differ from crowdrating and crowdprocessing in that the crowdsourced work depends on a variety of heterogeneous contributions. Our review showed that studies in the areas of crowdcreating and crowdsolving reported the use of more diverse sets of gamification affordances. These approaches employed not only points and leaderboards, but also, for instance, storytelling, missions, and avatars. Especially crowdsourcing approaches that sought heterogeneous location-based information or that aimed to solve complex problems based on creative and diverse contributions often applied rich gamification designs. For instance, Tinati et al. (2016) applied points, badges, progress statistics, virtual teams, and leaderboards to engage users in finding patterns in 3-D maps of neuro-scans, while Prandi et al. (2016) created an augmented reality with zombies and virtual weapons as a playground for creating a user-generated map of heterogeneous accessibility barriers.
Since most studies provided comprehensive information on the applied game mechanics and rules, we also analyzed and classified the gamification approaches along their applied goal structures (Morschheuser et al., 2017) into competitive, cooperative, and individualistic gamification designs (Table 6). The crowdsourcing types of creating and rating differ from solving and processing in that the end goal of the crowdsourced work is the emergent value of all the contributions. Therefore, it could be assumed that designers of gamified crowdsourcing systems with emergent outcomes would favor cooperative gamification designs over those used in non-emergent approaches. However, when analyzing the goal structures used in these types, no notable differences could be found. Competition-based designs with points and leaderboards that encourage individual rather than cooperative work were used very often in all four crowdsourcing types. However, the scoring approaches differed in how points were awarded and from which actions they could be earned. In crowdprocessing approaches, where the sheer number of contributions is often more important than quality (Geiger and Schader, 2014), users were commonly rewarded for general participation (e.g. the number of completed tasks (Itoko et al., 2014), the number of correct answers (Ipeirotis and Gabrilovich, 2014), or the number of visited locations (Uzun et al., 2013)). In crowdrating approaches, by contrast, where the output is more emergent, users were also rewarded for the quality of their contributions (e.g. the quality of contributions rated by others (Dumitrache et al., 2013), or similarity/agreement with other crowdsourcees' contributions (Eickhoff et al., 2012; Goncalves et al., 2014; Harris, 2014; Saito et al., 2014)). Such scoring mechanisms, which depend on the extent of agreement with other crowdsourcees' contributions, seem to be suitable for motivating users to emulate others and to "think and act like the community". In crowdsolving approaches, both forms occurred equally (e.g. the number of completed tasks (Y. Liu, Alexandrova, Nakajima et al., 2011; Yakushin and Lee, 2014), and the quality of contributions rated by others (J. J. Lee et al., 2013; Vasilescu et al., 2014)). Unfortunately, the small number of studies investigating gamification in crowdcreating approaches limits the identification of a clear pattern in their gamification implementations.
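To illustrate the two scoring families described above (our own sketch under simplified assumptions, not a design taken from any single reviewed system): participation-based scoring rewards the sheer number of contributions, while agreement-based scoring rewards similarity with other crowdsourcees' contributions.

```python
from collections import Counter

def participation_score(completed_tasks: list) -> int:
    """Participation-based scoring, as common in crowdprocessing:
    one point per completed task, regardless of content."""
    return len(completed_tasks)

def agreement_score(answers: dict, all_answers: dict) -> int:
    """Agreement-based scoring, as common in crowdrating: a point whenever a
    worker's answer matches the majority answer of the other crowdsourcees."""
    score = 0
    for task_id, answer in answers.items():
        others = all_answers.get(task_id, [])
        if others and answer == Counter(others).most_common(1)[0][0]:
            score += 1
    return score

# Example: the worker agrees with the majority on task "t1" but not on "t2".
print(agreement_score({"t1": "cat", "t2": "dog"},
                      {"t1": ["cat", "cat", "bird"], "t2": ["cat", "cat"]}))  # -> 1
```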
Table 5. Gamification Affordances per Crowdsourcing Type
Crowdsourcing types: Processing (N = 27), Rating (N = 12), Solving (N = 17), Creating (N = 7); 63 empirical papers in total.

Points/Scores (54)
  Processing: Brenner et al., 2014; Carlier et al., 2016; Cao et al., 2015; Cucari et al., 2016; Deng et al., 2016; Dergousoff and Mandryk, 2015; Feyisetan et al., 2015; Inaba et al., 2015; Ipeirotis and Gabrilovich, 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; Kurita et al., 2016; T. Y. Lee et al., 2013; Melenhorst et al., 2015; Nose and Hishiyama, 2013; Packham and Suleman, 2015; Prestopnik and Tang, 2015; Riegler et al., 2015; Roengsamut et al., 2015; Rosani et al., 2015; Runge et al., 2015; Talasila et al., 2016; Uzun et al., 2013
  Rating: Altmeyer et al., 2016; Dumitrache et al., 2013; Eickhoff et al., 2012; Goncalves et al., 2014; Harris, 2014; Kacorri et al., 2014; Kacorri et al., 2015; Lessel et al., 2015; Mason et al., 2012; Massung et al., 2013; Preist et al., 2014; Saito et al., 2014
  Solving: Choi et al., 2014; Dos Santos et al., 2015; De Franga et al., 2015; He et al., 2014; Lauto and Valentin, 2016; J. J. Lee et al., 2013; Y. Liu, Alexandrova, Nakajima et al., 2011; Nunzio et al., 2016; Simões and De Amicis, 2016; Sørensen et al., 2016; Tinati et al., 2016; Vasilescu et al., 2014; Yakushin and Lee, 2014
  Creating: Brito et al., 2015; Martella et al., 2015; Pothineni et al., 2014; Prandi et al., 2016; Sheng, 2013; Snijders et al., 2015

Leaderboards/Rankings (45)
  Processing: Brenner et al., 2014; Cao et al., 2015; Cucari et al., 2016; Dergousoff and Mandryk, 2015; Feyisetan et al., 2015*; Inaba et al., 2015; Ipeirotis and Gabrilovich, 2014*; Itoko et al., 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; T. Y. Lee et al., 2013*; Machnik et al., 2015; Melenhorst et al., 2015; Packham and Suleman, 2015; Riegler et al., 2015; Roengsamut et al., 2015; Rosani et al., 2015; Talasila et al., 2016; Uzun et al., 2013
  Rating: Altmeyer et al., 2016; Chamberlain, 2014; Dumitrache et al., 2013; Eickhoff et al., 2012; Goncalves et al., 2014; Harris, 2014; Kacorri et al., 2015; Lessel et al., 2015; Massung et al., 2013; Preist et al., 2014; Saito et al., 2014
  Solving: Bentzien et al., 2013; De Franga et al., 2015; Dos Santos et al., 2015; He et al., 2014; Lauto and Valentin, 2016; J. J. Lee et al., 2013; Y. Liu, Alexandrova, Nakajima et al., 2011; Nunzio et al., 2016; Tinati et al., 2016; Ustalov, 2015; Vasilescu et al., 2014; Yakushin and Lee, 2014
  Creating: Bowser et al., 2013; Martella et al., 2015; Snijders et al., 2015

Badges/Achievements (19)
  Processing: Cao et al., 2015; Feyisetan et al., 2015*; Itoko et al., 2014; Kobayashi et al., 2015; T. Y. Lee et al., 2013*; Melenhorst et al., 2015; Talasila et al., 2016; Uzun et al., 2013
  Rating: Altmeyer et al., 2016; Mason et al., 2012; Massung et al., 2013; Preist et al., 2014
  Solving: De Franga et al., 2015; Y. Liu, Alexandrova, Nakajima et al., 2011; Tinati et al., 2016; Vasilescu et al., 2014
  Creating: Bowser et al., 2013; Martella et al., 2015; Sheng, 2013

Levels (15)
  Processing: Brenner et al., 2014; Feyisetan et al., 2015*; T. Y. Lee et al., 2013*; Riegler et al., 2015; Roengsamut et al., 2015; Talasila et al., 2016; Yu et al., 2015
  Rating: Dumitrache et al., 2013; Saito et al., 2014
  Solving: De Franga et al., 2015; Nagai et al., 2014; Nunzio et al., 2016; Yakushin and Lee, 2014
  Creating: Martella et al., 2015; Sheng, 2013

Progress (9)
  Processing: Cao et al., 2015; Feyisetan et al., 2015*; Itoko et al., 2014; T. Y. Lee et al., 2013*
  Solving: J. J. Lee et al., 2013; Nagai et al., 2014; Tinati et al., 2016; Vasilescu et al., 2014
  Creating: Brito et al., 2015

Feedback (8)
  Processing: Brenner et al., 2014; Deng et al., 2016; Feyisetan et al., 2015*; Ipeirotis and Gabrilovich, 2014*; Melenhorst et al., 2015
  Rating: Kacorri et al., 2015
  Solving: J. J. Lee et al., 2013; Y. Liu, Alexandrova, Nakajima et al., 2011

Virtual objects/resources (e.g. weapons, materials) (8)
  Processing: Dergousoff and Mandryk, 2015; Prestopnik and Tang, 2015*; Talasila et al., 2016
  Solving: Lauto and Valentin, 2016; Nunzio et al., 2016; Simões and De Amicis, 2016
  Creating: Prandi et al., 2016*; Snijders et al., 2015

Storytelling (7)
  Processing: Nose and Hishiyama, 2013; Prestopnik and Tang, 2015*
  Solving: Sakamoto and Nakajima, 2014; Simões and De Amicis, 2016
  Creating: Brito et al., 2015; Prandi et al., 2016*; Sheng, 2013

Virtual territories (7)
  Processing: Talasila et al., 2016
  Solving: Y. Liu, Alexandrova, Nakajima et al., 2011; Simões and De Amicis, 2016
  Creating: Brito et al., 2015; Martella et al., 2015; Prandi et al., 2016*; Sheng, 2013

Teams (6)
  Rating: Saito et al., 2014; Kacorri et al., 2014; Kacorri et al., 2015
  Solving: Bentzien et al., 2013; Tinati et al., 2016; Ustalov, 2015

Missions (3)
  Processing: Cucari et al., 2016
  Solving: J. J. Lee et al., 2013; Sakamoto and Nakajima, 2014

Avatars/Virtual characters (4)
  Processing: Dergousoff and Mandryk, 2015; Talasila et al., 2016
  Solving: De Franga et al., 2015; Nagai et al., 2014

References in bold refer to studies in which empirical results about gamification have been reported.
* In this paper, the affordance is used as an experimental condition in a comparison of different gamification affordances.
Table 6. Gamification Design Approaches per Crowdsourcing Type

Design approach                         Processing   Rating   Solving   Creating   Frequency
Competitive                             16 (+2)*     9        10        3          38 (+2)
Cooperative / intergroup competition    2            2        5         3          12
Individualistic                         4 (+2)*      1        -         1          6 (+2)
Not clear (due to missing details)      3            -        2         -          5

* Two papers compared an individualistic approach with a competitive one and found that competitions seem to be more effective.
In most of the studies, the incentives were solely based on gamification (Table 7). Some studies
additionally employed financial rewards, for instance, a small monetary task-based compensation or a
prize for the leaders on a high-score list, to motivate participants.
Table 7. Incentive Orchestration (incentive — literature — frequency)

Gamification (43): Altmeyer et al., 2016; Bentzien et al., 2013; Bowser et al., 2013; Cao et al., 2015; Chamberlain, 2014; Cucari et al., 2016; De Franga et al., 2015; Dergousoff and Mandryk, 2015; Dumitrache et al., 2013; Goncalves et al., 2014; He et al., 2014; Itoko et al., 2014; Kacorri et al., 2014, 2015; Kobayashi et al., 2015; Kurita et al., 2016; Lauto and Valentin, 2016; J. J. Lee et al., 2013; T. Y. Lee et al., 2013; Lessel et al., 2015; Y. Liu, Alexandrova, Nakajima et al., 2011; Martella et al., 2015; Mason et al., 2012; Nagai et al., 2014; Nose and Hishiyama, 2013; Nunzio et al., 2016; Pothineni et al., 2014; Prestopnik and Tang, 2015; Roengsamut et al., 2015; Rosani et al., 2015; Runge et al., 2015; Saito et al., 2014; Sakamoto and Nakajima, 2014; Sheng, 2013; Simões and De Amicis, 2016; Snijders et al., 2015; Sørensen et al., 2016; Tinati et al., 2016; Ustalov, 2015; Uzun et al., 2013; Vasilescu et al., 2014; Yakushin and Lee, 2014; Yu et al., 2015

Gamification + monetary rewards (10): Brenner et al., 2014; Brito et al., 2015; Choi et al., 2014; Deng et al., 2016; Dos Santos et al., 2015; Harris, 2014; Inaba et al., 2015; Kawajiri et al., 2014; Melenhorst et al., 2015; Riegler et al., 2015

Gamification + other rewards (1): Machnik et al., 2015 (reward: access to specific information)

Both as an experimental condition (9): Carlier et al., 2016; Eickhoff et al., 2012; Feyisetan et al., 2015; Ipeirotis and Gabrilovich, 2014; Massung et al., 2013; Packham and Suleman, 2015; Prandi et al., 2016; Preist et al., 2014; Talasila et al., 2016

References in bold refer to studies in which empirical results about gamification have been reported.
As seen in Table 8, most studies combining crowdsourcing and gamification were not targeted at any specific type of crowd but rather described implementations that are agnostic as to who the crowdsourcees should be. Interestingly, however, a few implementations were designed with a specific crowdsourcee segment in mind. For instance, Yakushin and Lee (2014) crowdsourced the development of algorithms for humanoid robots to a network of specialists in a competitive way, while T. Y. Lee et al. (2013) motivated employees to search for and identify Twitter accounts. These examples demonstrate that gamification is usable in a variety of use cases with different target groups. To date, however, we have seen little research into whether there are differences between user groups or into which affordances should be used to support the different motivations of crowdworkers. Yet first empirical studies suggest that the effectiveness of gamification may differ according to crowdsourcees' personal characteristics, such as the contributors' ages (Itoko et al., 2014; Kobayashi et al., 2015). Based on Eickhoff et al. (2012) and Itoko et al. (2014), gamification has great potential for young and senior crowdsourcees alike, although competition-based gamification might be more effective with young participants.
Table 8. Crowdsourcees (participants — papers — frequency)

Unspecified crowd (44): (all other empirical papers)

Students (5): Bowser et al., 2013; Kawajiri et al., 2014; J. J. Lee et al., 2013; Nunzio et al., 2016; Talasila et al., 2016

Experts (5): Cao et al., 2015; Dumitrache et al., 2013; Mason et al., 2012; Melenhorst et al., 2015; Ustalov, 2015

Researchers (1): Yakushin and Lee, 2014

Employees (5): Lauto and Valentin, 2016; T. Y. Lee et al., 2013; Machnik et al., 2015; Pothineni et al., 2014; Snijders et al., 2015

The elderly (1): Nagai et al., 2014

Citizens (2): Dos Santos et al., 2015; Goncalves et al., 2014

References in bold refer to studies in which empirical results about gamification have been reported.
4.5 Psychological and behavioral outcomes
Finally, we examined the psychological and behavioral outcomes described in the empirical papers and associated with the use of gamification affordances. The psychological outcomes were not commonly measured using comprehensive measurement instruments; they were mostly examined via simple questionnaires or qualitative observations, or observations of how participants behaved were used as a proxy for psychological aspects. Currently, only four studies have used validated psychometric measurement instruments (Kobayashi et al., 2015; Melenhorst et al., 2015; Prestopnik and Tang, 2015; Runge et al., 2015). Table 9 provides an overview of the literature in which results about psychological outcomes were reported.
In most studies, the behavioral outcomes of gamification relate to the participation of crowdsourcees in a specific task (Figure 3). Several studies that directly compared a gamified and a non-gamified approach (Table 10) reported positive outcomes, such as increases in (long-term) participation (e.g. Eickhoff et al., 2012; Kawajiri et al., 2014; T. Y. Lee et al., 2013), output quality (Eickhoff et al., 2012; Goncalves et al., 2014; T. Y. Lee et al., 2013), and a reduction in cheating compared to traditional paid crowdsourcing (Eickhoff et al., 2012). However, gamification does not necessarily lead to an increase in participation. Massung et al. (2013) measured very small differences compared to a control group without gamification, while Packham and Suleman (2015) found that simple gamification approaches (points and leaderboards) cannot replace financial incentives in crowdprocessing. Overall, three studies reported more negative effects than positive ones (Table 10). In addition to the above studies that employed direct comparisons, 10 studies reported positive results based on users' perceptions of the gamified crowdsourcing system (Bowser et al., 2013; Dumitrache et al., 2013; J. J. Lee et al., 2013; Saito et al., 2014) or based on measured user engagement (Pothineni et al., 2014). These (mostly descriptively reported) results showed no effects of gamification per se, but can be seen as positive indicators of the acceptance of gamification in the context of crowdsourcing (Table 10).
Some studies even compared different gamification designs and provided first empirical results for designing gamified crowdsourcing approaches in order to achieve positive psychological and behavioral outcomes (Table 10). For instance, Choi et al. (2014) showed in an experiment that gamification rewards explicitly expressed before the task phase can increase the quality of crowdsourcing work and crowdsourcees' engagement levels. The empirical findings of T. Y. Lee et al. (2013) indicate that social achievements seem to be somewhat more effective than individual ones (see also Feyisetan et al., 2015; Runge et al., 2015). The authors examined this by comparing the effects of public participation rankings, which encourage workers to compare their efforts with others, with level systems, which motivate via the visualization of individual achievements. Ipeirotis and Gabrilovich (2014) showed that the concrete design of a leaderboard or ranking can have significant effects on participation. Based on their findings, the authors recommend using 'all-time' leaderboards prudently, since they may demotivate low-ranked participants and newcomers. Massung et al. (2013) and Preist et al. (2014) showed demotivating effects of leaderboards and possible negative effects on the overall outcome; they propose a set of design principles for designers of gamified crowdsourcing systems and suggest mixing several motivational affordances for different target groups to increase the overall outcome. However, T. Y. Lee et al. (2013) and Dumitrache et al. (2013) indicate that adding more motivational affordances does not always increase motivation and that, to date, we have too little knowledge to be able to explain the effectiveness of affordances for a specific user group (Itoko et al., 2014). Prestopnik and Tang (2015) highlighted the effects of storytelling in gamified crowdsourcing. By comparing two gamified crowdprocessing approaches, they identified that storytelling can transform perceptions of a crowdsourcing task from work-related to play-related.
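As an illustration of the leaderboard design choice discussed above (a minimal sketch of one possible implementation, not the design of any reviewed system), a time-windowed leaderboard can keep newcomers competitive where an all-time leaderboard might demotivate them:

```python
from datetime import datetime, timedelta

# Each event records that a crowdsourcee earned points at a given time.
events = [
    {"user": "veteran", "points": 5, "at": datetime(2016, 1, 10)},
    {"user": "veteran", "points": 5, "at": datetime(2016, 9, 30)},
    {"user": "newcomer", "points": 8, "at": datetime(2016, 10, 1)},
]

def leaderboard(events, since=None):
    """Aggregate points per user; passing 'since' yields a recent-activity
    board on which newcomers can realistically compete."""
    totals = {}
    for e in events:
        if since is None or e["at"] >= since:
            totals[e["user"]] = totals.get(e["user"], 0) + e["points"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(leaderboard(events))  # all-time board: the veteran leads
print(leaderboard(events, since=datetime(2016, 10, 1) - timedelta(days=7)))  # weekly board: the newcomer leads
```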
Taking these three categories of empirical studies on the effectiveness of gamification in crowdsourcing together, more than 90% of the analyzed studies reported positive or predominantly positive outcomes of gamification in crowdsourcing (Table 10). Most cases reported positive effects on quantitative contributions (Table 11). However, qualitative and long-term effects could also be achieved, although these strongly depend on the context and the concrete implementation of the gamification affordances.
Table 9. Psychological Outcomes Reported in the Literature (outcome — literature — frequency)

Motivation (15): Altmeyer et al., 2016; Bowser et al., 2013; Eickhoff et al., 2012; Itoko et al., 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; Y. Liu, Alexandrova, Nakajima et al., 2011; Machnik et al., 2015; Massung et al., 2013; Nose and Hishiyama, 2013; Preist et al., 2014; Prestopnik and Tang, 2015; Roengsamut et al., 2015; Runge et al., 2015; Tinati et al., 2016

Attitudes (10): Bowser et al., 2013; Dergousoff and Mandryk, 2015; Itoko et al., 2014; Kobayashi et al., 2015; Martella et al., 2015; Preist et al., 2014; Prestopnik and Tang, 2015; Roengsamut et al., 2015; Runge et al., 2015; Tinati et al., 2016

Fun/Enjoyment (13): Altmeyer et al., 2016; Bowser et al., 2013; Choi et al., 2014; Dumitrache et al., 2013; Kobayashi et al., 2015; J. J. Lee et al., 2013; Melenhorst et al., 2015; Prandi et al., 2016; Prestopnik and Tang, 2015; Roengsamut et al., 2015; Runge et al., 2015; Sheng, 2013; Tinati et al., 2016

Engagement (4): Altmeyer et al., 2016; Bowser et al., 2013; Y. Liu, Alexandrova, Nakajima et al., 2011; Snijders et al., 2015

Other (e.g. appeal, interest, immersion) (4): Cucari et al., 2016; Kobayashi et al., 2015; Melenhorst et al., 2015; Prestopnik and Tang, 2015
References in bold refer to studies in which empirical results about gamification have been reported.
Table 10. Results on Gamified Crowdsourcing (result type — studies per comparison approach — frequency)

Quantitative, inferential (8)
  Compared a gamified approach with a non-gamified one: Eickhoff et al., 2012; Nose and Hishiyama, 2013; Dergousoff and Mandryk, 2015
  No comparison (interviews, user feedback, perceptions, time series analysis, influence of context factors): Melenhorst et al., 2015
  Comparisons between different gamification designs: Choi et al., 2014; Ipeirotis and Gabrilovich, 2014; T. Y. Lee et al., 2013; Runge et al., 2015

Quantitative, descriptive (12)
  Compared a gamified approach with a non-gamified one: Carlier et al., 2016*; De Franga et al., 2015; Dumitrache et al., 2013*; Kobayashi et al., 2015; Y. Liu, Alexandrova, Nakajima et al., 2011; Simões and De Amicis, 2016; Sørensen et al., 2016; Talasila et al., 2016
  No comparison: Pothineni et al., 2014; Roengsamut et al., 2015
  Comparisons between different gamification designs: Feyisetan et al., 2015; Packham and Suleman, 2015*

Qualitative (7)
  Compared a gamified approach with a non-gamified one: Kacorri et al., 2015; Martella et al., 2015
  No comparison: Machnik et al., 2015; Saito et al., 2014; Tinati et al., 2016
  Comparisons between different gamification designs: Preist et al., 2014; Prestopnik and Tang, 2015

Mixed, inferential (7)
  Compared a gamified approach with a non-gamified one: Altmeyer et al., 2016; Vasilescu et al., 2014
  No comparison: Bowser et al., 2013; Itoko et al., 2014
  Comparisons between different gamification designs: Kawajiri et al., 2014; Massung et al., 2013; Prandi et al., 2016

Mixed, descriptive (3)
  Compared a gamified approach with a non-gamified one: Goncalves et al., 2014
  No comparison: J. J. Lee et al., 2013; Snijders et al., 2015

Total (37): gamified vs. non-gamified comparisons: more positive (14) / negative (2); no comparison: more positive (10); comparisons between gamification designs: more positive (10) / negative (1)

* Studies that reported negative effects of gamification, for instance compared to paid crowdsourcing or non-gamified approaches.
Table 11. Positive Effects of Gamification in Crowdsourcing Reported in the Literature

Positive effects on the quantitative contribution / willingness to contribute (26): Altmeyer et al., 2016; Bowser et al., 2013; De França et al., 2015; Dergousoff and Mandryk, 2015; Eickhoff et al., 2012; Feyisetan et al., 2015; Ipeirotis and Gabrilovich, 2014; Itoko et al., 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; J. J. Lee et al., 2013; T. Y. Lee et al., 2013; Y. Liu, Alexandrova, Nakajima et al., 2011; Martella et al., 2015; Massung et al., 2013; Nose and Hishiyama, 2013; Pothineni et al., 2014; Prandi et al., 2016; Preist et al., 2014; Prestopnik and Tang, 2015; Roengsamut et al., 2015; Simões and De Amicis, 2016; Snijders et al., 2015; Talasila et al., 2016; Tinati et al., 2016; Vasilescu et al., 2014

Positive effects on the qualitative contribution (13): Dergousoff and Mandryk, 2015; Eickhoff et al., 2012; Feyisetan et al., 2015; Goncalves et al., 2014; Ipeirotis and Gabrilovich, 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; T. Y. Lee et al., 2013; Massung et al., 2013; Prestopnik and Tang, 2015; Runge et al., 2015; Simões and De Amicis, 2016; Sørensen et al., 2016

Positive effects on continued work / long-term engagement (6): Itoko et al., 2014; Kawajiri et al., 2014; Kobayashi et al., 2015; T. Y. Lee et al., 2013; Massung et al., 2013; Prestopnik and Tang, 2015
5 DISCUSSION
In this study, we have provided a comprehensive review and overview of the use of gamification
in crowdsourcing in the current body of literature. Following an integrated conceptual framework (Fig-
ure 3), we analyzed characteristic features of gamified crowdsourcing systems. In particular, we reviewed
the use of different forms of gamification in different types of crowdsourcing (crowdprocessing,
crowdsolving, crowdrating, and crowdcreating), as well as the interplay between gamification and ad-
ditional monetary rewards, the types of work that have been crowdsourced, the types of crowdsourcees,
and the domains in which gamification in crowdsourcing has been applied. Furthermore, we investi-
gated the results of empirical studies on the psychological and behavioral outcomes of gamification in
crowdsourcing systems. This meticulous mapping enabled us to discuss recommendations for designing
gamified crowdsourcing systems as well as limitations, emerging issues, and future research directions.
5.1 Recommendations for designing gamified crowdsourcing systems
We form recommendations by triangulating the findings reported in the reviewed literature with the results of our analysis. One of the primary overall findings of our review is that gamification
positively affects crowdsourcing work, either in the form of increased crowdsourcee motivations or
contributions. Thus, it is less important to investigate whether gamification works as a whole; instead,
we need to delve deeper to explore which specific design choices are successful in the various
crowdsourcing types.
The reviewed literature indicates that gamified crowdsourcing systems that process homogeneous,
easily enumerable tasks, such as in crowdrating or crowdprocessing, most commonly implement simple
points-based and leaderboard-based game designs (Table 5). Generally, these homogeneous tasks are
simple, repetitive, and are quick to complete. Therefore, using rich game designs, such as full-fledged
games, could be redundant and excessive (T. Y. Lee et al., 2013; Dumitrache et al., 2013). Empirical
studies (Table 10) found that the use of simple gamification approaches is efficient and therefore cost-
effective for crowdrating or crowdprocessing tasks (e.g. Eickhoff et al., 2012; Feyisetan et al., 2015).
On the other hand, our review indicates that studies in the contexts of crowdsolving and crowdcreating
made more manifold uses of affordances. Since such heterogeneous tasks commonly vary in complexity
and require a wide spectrum of skill sets, manifold gamification designs that provide the opportunity
to engage broad target groups in the short and/or long term might be helpful. Therefore, we recommend
considering task characteristics and especially the task complexity when designing gamification ap-
proaches for crowdsourcing systems.
Our overview indicates that points and leaderboards are the most used gamification affordances
(Table 5). However, the differences are in the details. Points, which are the core of most gamification
designs, have been implemented in different forms across all four crowdsourcing types. In crowdprocessing approaches, points are commonly given as a reward for the quantity of fulfilled tasks. Crowdsolv-
ing and crowdrating approaches use scoring mechanisms that reward the quality or quantity of a con-
tribution or a combination of both. Points are simple, flexible, and very malleable, and can often be extended via the introduction of further gamification affordances on top of them. For instance, studies
of crowdprocessing and crowdrating often apply time pressure (Eickhoff et al., 2012; Harris, 2014;
Kacorri et al., 2014) or leaderboards (Ipeirotis and Gabrilovich, 2014; Runge et al., 2015) to create
(self- or other-)competitive engagement. On the other hand, crowdcreating may benefit from mecha-
nisms that reward cooperative and collaborative behavior. Several examples use rich gamification de-
signs with a diverse set of affordances (see Table 5). Massung et al. (2013) and Preist et al. (2014)
propose mixing several motivational affordances for different target groups to increase the overall out-
come. On the other hand, the experiment by T. Y. Lee et al. (2013) indicates that adding more motiva-
tional affordances in a crowdprocessing case does not always increase motivation. These examples
show that many different facets, such as context-specific and task-specific constraints, target group
characteristics, or a specific goal behavior and outcome may influence the gamification design. Alt-
hough points (especially points that reward quantitative participation) and leaderboards are the most
commonly implemented gamification affordances, we recommend not implementing these elements
too hastily. Rather, we recommend considering the results of extant empirical studies, which have been presented in this review (Table 10), and theoretical frameworks on the design of game mechanics for crowdsourcing work (von Ahn, 2008), in order to incentivize the right activities in the right form.
Since there have only been a few studies on gamification in crowdcreating systems, reliable rec-
ommendations are more difficult to provide directly based on results alone. However, as designers of
crowdcreating systems are typically seeking to gather comprehensive artifacts based on heterogeneous
contributions, implementing gamification in various forms that are able to engage broad and heterogeneous target groups should be considered, instead of, for instance, merely points and badges. The
crowdcreating approach requires crowdsourcees to undertake creative tasks. Therefore, too narrowly
defined goals may reduce creativity and thus the output of the work. Further, promoting cooperation or
a combination of cooperation and competition, rather than competition alone, could potentially be ben-
eficial for reaching a shared output or goal (Tauer and Harackiewicz, 2004). Studies on similar areas
have for instance found that crowdsourcing systems with emergent outcomes can benefit from collab-
orative features (Blohm et al., 2010) and that strong cooperation can positively affect the outcome of
crowdsourced ideation (Bullinger et al., 2010). Thus, we recommend implementing cooperative gami-
fication approaches (Morschheuser et al., 2017) and affordances such as virtual teams and shared goals
that might promote cooperative behaviors. We also encourage practitioners who seek to employ crowd-
creating to experiment with a variety of gamification designs in order to identify an effective fit of
design choices.
Empirical findings indicate that leaderboards/rankings seem to be very effective in motivating cer-
tain crowdsourcing community users to increase their level of contribution (T. Y. Lee et al., 2013).
However, several studies show that the concrete design of a leaderboard affects participation (cf. in the
context of crowdprocessing (Ipeirotis and Gabrilovich, 2014; T. Y. Lee et al., 2013) and crowdrating
(Massung et al., 2013; Preist et al., 2014)). Based on these findings, short-term leaderboards are recommended (Ipeirotis and Gabrilovich, 2014), because ‘all-time’ leaderboards can demotivate low-ranked
participants and novices, for whom reaching the top will seem impossible. Studies by Massung et al.
(2013) and Preist et al. (2014) showed that long-term leaderboards can lead to demotivation and can
have possible negative effects on the overall outcome of the crowdsourcing (Straub et al., 2015). The
design of a leaderboard implementation seems, therefore, highly context-dependent. However, Koba-
yashi et al. (2015), T. Y. Lee et al. (2013), and Tinati et al. (2016) note that many crowdsourcing ap-
proaches follow the ‘90-9-1’ participation rule, implying that only 1% of the users perform almost all of the actions; consequently, long-term leaderboards that motivate this 1% might also be suitable for some crowdsourcing implementations. In contrast to rankings, which generally encourage
workers to compare their efforts with others, level systems could be used that motivate by visualizing
individual achievements. Empirical findings of T. Y. Lee et al. (2013) indicate that differences might
exist between these two types of gamification. The results highlight that social achievements seem to
be slightly more effective than individual level systems. Thus, affordances with social factors, such as rankings or public visualizations of individual achievements, should be preferred if the context allows the use of such motivational affordances.
Very few studies have considered the moderating effects of personal factors of crowdsourcees.
Itoko et al. (2014) showed that while gamification generally does work for a wide spectrum of age
groups, competition-based gamification might be more effective for young rather than older partici-
pants. Further, Koivisto and Hamari (2014) find that social factors and cooperation are generally more
important aspects for females in gamification. Several studies (Ipeirotis and Gabrilovich, 2014; Itoko
et al., 2014; T. Y. Lee et al., 2013; Massung et al., 2013) indicate that, for instance, altruism may explain personal differences in cooperative behavior, while curiosity may make users more interested in the novel nature of gamification. Moreover, gamification-related literature suggests that users
can have very different approaches towards games and how they interact with them. For instance, some
users may be more motivated by seeking to reach achievements, and others by immersion-related de-
signs (Ermi and Mäyrä, 2005; Hamari and Tuunanen, 2014; Yee, 2006). Thus, sustainable gamification
designs should also consider personal factors as well as orientation to work and games.
Finally, Table 7 shows that, in some cases, gamified crowdsourcing systems use a combination of
gamification and financial incentives. Considering how gamification is implemented in crowdsourcing
(see Table 5), it appears that monetary rewards have been used in implementations that employ simpler
gamification designs, mainly in combination with points and leaderboards. Although studies suggest
that extrinsic rewards (such as money) can potentially decrease intrinsic motivation (Deci, 1971; Deci
et al., 1999), Massung et al. (2013) and Preist et al. (2014) found in their experiments that gamification
in combination with financial rewards can in fact increase participation when compared to gamification
alone. However, the authors investigated this phenomenon only in a short-term scenario and indicated
that financial rewards, in comparison to gamification, may reduce participation in the long term. Fur-
ther, Ipeirotis and Gabrilovich (2014) indicate that the output quality of paid crowdsourcing can be
worse, since payments might wipe out intrinsic motivation to accomplish tasks with high quality. There-
fore, monetary incentives should be implemented cautiously in combination with gamification.
5.2 Limitations, emerging issues, and future research directions
Our results provide a structured overview that helps to identify current issues and gaps for future research. We address these below in a research agenda that covers methodological, theoretical, and thematic directions for future research, as well as by pinpointing empirical and design research gaps.
5.2.1 Methodological agenda
Although 37 of the reviewed studies contained empirical findings on the effects of gamification in crowdsourcing, and although our analyses show that gamification is a viable and beneficial approach for motivating crowdsourcees, our understanding of how different affordances affect motivational and behavioral outcomes in crowdsourcing is still in its infancy. A common methodological issue in the current body of literature is that very few studies have used properly validated psychometric measurement instruments when gauging changes in crowdsourcees’ motivations. Due to this methodological shortcoming, the individual effects of gamification affordances on psychological and behavioral outcomes are comparable only on an abstract level. Moreover, many empirical studies reported only descriptive
statistics (Table 10), while several studies did not isolate and measure separately the effects of different
gamification mechanics. Consequently, current research provides only scattered insights regarding the complex interaction of the factors that affect crowdsourcees’ motivations in gamified
crowdsourcing systems. Thus, we call for careful and systematic empirical mapping of the effects of
affordances, psychological outcomes, and behavioral outcomes, as well as the differences between var-
ious gamification designs.
Agenda point 1: Further studies should isolate gamification effects by using separate experimental groups for different gamification affordances, survey psychological outcomes with validated measurement instruments, and apply statistical methods that go beyond the description of data.
Most of the reviewed empirical literature only examined the effects of gamification in crowdsourc-
ing in a short timeframe (< 4 weeks). Likewise, many empirical findings relied on a small sample size
(N < 40). The reasons might lie in the novelty of the phenomenon and the fact that many studies inves-
tigated the effectiveness of prototypes or concepts (e.g. Nagai et al., 2014; Preist et al., 2014; Massung
et al., 2013; Saito et al., 2014). Very few researchers applied experimental designs that were able to
control the influences of novelty effects (e.g. Kawajiri et al., 2014), which are deemed a characteristic of many gamification approaches (Koivisto and Hamari, 2014). While small studies can provide quick
insights into the phenomenon, additional large longitudinal studies are needed to ensure the reliability
and generalizability of the results. Furthermore, long-term studies could identify and control for the
influences of novelty or saturation effects (cf. T. Y. Lee et al., 2013), which have seen little attention in
the current literature.
Agenda point 2: Future research should include larger sample sizes and should conduct longitudinal
studies to provide rigorous and generalizable results that extend the current literature.
Most of the reviewed literature with empirical results reported quantitative results (Table 10).
Since gamification is deeply rooted in psychology, we need qualitative research that goes beyond the
measurement of simple perceptions if we are to understand mechanisms and triggers that evoke engage-
ment and motivation in gamified crowdsourcing (e.g. Massung et al., 2013; Preist et al., 2014; Pres-
topnik and Tang, 2015). Qualitative findings may also be able to inform quantitative research into the
antecedents of participation intentions. Currently, however, most of the interview-based studies are superficial and provide few deep insights into the manifold ways in which crowdsourcees perceive gamification and its effects on their work. Furthermore, most existing qualitative studies mainly provide findings from people who participated in gamified crowdsourcing and therefore have positive feelings towards the overall topic. However, knowledge of the reasons why people stop participating in gamified
crowdsourcing and the perceptions of users who are critical towards participating could help one to
design more successful gamified crowdsourcing systems. As the current literature has mainly reported
positive results, some publication bias may loom in the body of literature.
Agenda point 3: Future qualitative research in gamified crowdsourcing should seek to capture all
different facets of the phenomenon. Qualitative research should provide in-depth results that cover not
only the positive perceptions, but also the reasons why people stop participating.
Our review identified only very few studies considering the influence of user characteristics (Eick-
hoff et al., 2012; Itoko et al., 2014). However, previous research suggests that the perceptions towards and effectiveness of a gamification approach strongly depend on users, their characteristics, and their individual goals (Hamari, 2013; Kobayashi et al., 2015; Koivisto and Hamari, 2014). The impacts of
personal characteristics and player types (Hamari and Tuunanen, 2014) as moderators of psychological
and behavioral effects as well as the differences between various types of crowdsourcees (e.g. students,
employees, or citizens) (Table 8) require further scrutiny. In this context, differences between the so-
called power contributors and free-riders could also provide new insights into the design of effective
gamified crowdsourcing systems for different target groups (T. Y. Lee et al., 2013; Levina and Arriaga,
2014; Zhao and Zhu, 2014a).
Agenda point 4: Future research should systematically investigate differences between different types
of crowdsourcees, and should consider including the potential influences of user characteristics as a
moderator in research models on the effectiveness of gamified crowdsourcing.
5.2.2 Theoretical agenda
Most of the reviewed studies with empirical results on gamification in crowdsourcing focused on
the effectiveness of gamification. Most of these studies lacked theory to ground the research, or their theoretical grounding was rudimentary or disconnected from the applied work. By paying attention to these theoretical lim-
itations, future research could provide valuable contributions to better understand and explain gamifi-
cation in crowdsourcing. We recommend borrowing theoretical perspectives (Whetten, 1989) from psy-
chology, philosophy, or marketing to serve as a basis for study design and to explain psychological
effects and behavioral outcomes. In particular, we recommend drawing on Csíkszentmihályi’s (1990)
theory of flow and self-determination theory (Ryan and Deci, 2000), when investigating the motiva-
tional effects of gamification affordances. These two theoretical perspectives are frequently used to
investigate motivational effects in crowdsourcing (Zhao and Zhu, 2014b; Zheng et al., 2011) and gam-
ification (Hamari and Koivisto, 2014; Hamari et al., 2016), since they provide insights into inducing
and achieving intrinsic motivation. Considering gamification elements as motivational affordances (Huotari and Hamari, 2016) that are designed to stimulate motivational needs and to help people achieve their personal goals, goal-setting theory and the affordance concept provide essential foundations (cf. Huotari and Hamari, 2016; Jung et al., 2010; Morschheuser et al., 2017).
Finally, to understand the effects of gamification and gamification rewards on attitudes and behavioral
outcomes, we recommend that researchers draw on the theory of planned behavior (Ajzen, 1991) and
self-efficacy theory (Bandura, 1977), which are often applied in general gamification research (Hamari
and Koivisto, 2015a, 2015b).
Agenda point 5: Future research should increasingly employ theory from (motivational) psychology
to justify research activities, operationalize research, and interpret results.
5.2.3 Thematic agenda
Previous research on the motivation of crowdsourcees has primarily analyzed motivations in non-
gamified crowdsourcing platforms with financial incentives. Commonly, the findings have indicated
that users are driven by a mixture of intrinsic motivation and monetary rewards (Brabham, 2010, 2008b;
Kaufmann et al., 2011; Leimeister et al., 2009; Zhao and Zhu, 2014a; Zheng et al., 2011). Our overview
demonstrated that 68% of the analyzed gamified crowdsourcing cases used only gamification to incen-
tivize crowdworkers (Table 7). This indicates that gamification can not only be used in addition to financial rewards to increase positive experiences (e.g. engagement or enjoyment); it can also provide a cost-effective opportunity to entirely replace financial incentives. Some studies demonstrated the com-
plex interplays between financial and gamified incentive structures (Massung et al., 2013; Preist et al.,
2014). To date, it is unclear for which crowdsourcing system type, crowdsourcee type, and task type
the use of gamification is more beneficial compared to financial incentives, or when the combination
of the two is the best approach. Future research should compare different incentive mechanisms (see
Straub et al., 2015; Harris et al., 2015) and should consider contextual factors and user characteristics.
Furthermore, the economic value of gamification also requires further research. Future research could
examine the development costs in relation to the effects of gamification, to evaluate the value and to
provide insights into gamification-based business models.
Agenda point 6: Research into gamified crowdsourcing should explore optimal incentive orchestra-
tions for different crowdsourcing contexts and should provide insights into the overall cost efficiency
of gamified crowdsourcing.
The findings summarized in Table 6 demonstrate that cooperative approaches, such as gamified
crowdcreating systems, are currently receiving less attention from scholars compared to the other sys-
tem types. This is surprising, since several popular crowdcreating examples, such as Google Ingress,
Dell’s Ideastorm, or Threadless (Kavaliova et al., 2016) have implemented various gamification ap-
proaches. Notably, all reviewed empirical studies that measured the effects of gamification on participation analyzed the effects on the intention of an individual to participate, but neglected that crowdsourcees can form groups with collective intentions (Tsai and Bagozzi, 2014). Stud-
ies have shown that collective intentions play a key role in cooperative crowdsourcing (A. X. L. Shen
et al., 2009; X.-L. Shen et al., 2014). Finally, we found that social factors, which have been identified as an essential aspect of gamification (Hamari and Koivisto, 2015a) and which could foster cooperation (such as trust, reciprocity, and a sense of community), have been neglected in the literature.
Future research that continues ideas from previous studies about virtual teams (Jarvenpaa and
Leidner, 1998; Powell et al., 2004), collective intentions in virtual communities (Tsai and Bagozzi,
2014), cooperative games design (Morschheuser et al., 2017), and social factors of gamification (Ha-
mari and Koivisto, 2015b) could provide new insights into the effects of gamification on collective
intentions, relationships between crowdworkers, social identities, or collaborative behavior. Future re-
search could utilize established social psychological theories that have evaluated the effects of compe-
tition, cooperation, and the combination of the two on enjoyment or performance as a basis for exam-
ining the motivational effects of different goal structures in gamification approaches (Tauer and
Harackiewicz, 2004; Morschheuser et al., 2017). In this context, the use of cooperative gamification approaches such as virtual teams, cooperative missions, or shared goals that empower the formation of groups and collective intentions could be analyzed to expand the mainly competition-focused gamification conceptions and to help design effective gamified crowdsourcing communities.
Agenda point 7: Future research should seek to investigate the design and effects of cooperative gam-
ification and consider social factors in crowd communities.
Crowdsourcing as a problem-solving concept is a multifaceted phenomenon and can be applied in
various contexts. Marginal differences can be found in the reviewed studies regarding the domain in
which the systems are applied (Table 3), the crowd characteristics (Table 4, Table 8), and the media
(e.g. mobile apps (Bowser et al., 2013; Uzun et al., 2013), websites (Choi et al., 2014; T. Y. Lee et al.,
2013; Y. Liu, Alexandrova, Nakajima et al., 2011), or local installations (Goncalves et al., 2014)). Fu-
ture research is needed to understand how contextual factors affect gamified crowdsourcing systems.
Optimally, studies could apply one gamified crowdsourcing system in a variety of contexts. Since this
would be a rather sizeable undertaking, we might have to wait for the accumulating literature to cover
more ground.
Agenda point 8: Research is needed to understand how contextual factors, such as the domain, the
media, and crowd characteristics affect gamified crowdsourcing systems.
Our overview indicated that gamification implementations differ in the context of crowdsolving,
crowdrating, and crowdprocessing approaches (Table 5). Accordingly, we identified different recommendations for designers of gamified crowdsourcing systems. Further work is needed to evaluate and extend these recommendations and to study the potential of different design approaches. Especially manifold designs with, for instance, avatars, storytelling, or virtual teams provide opportunities for future research.
Furthermore, advanced gamification approaches that automatically consider user characteristics
and context characteristics should be examined. Building on the results of Itoko et al. (2014) and Koi-
visto and Hamari (2014), individual adaptive incentive orchestrations might increase effectiveness, ac-
ceptance, and long-term motivations. Such adaptive gamification design that goes beyond the current
rewards mechanisms used in gamification could utilize recent developments of individualization in
crowdsourcing (Geiger and Schader, 2014) and games design (Prakash et al., 2009). Finally, recent
technology trends such as virtual realities (Prandi et al., 2016), connected everything, artificial intelli-
gence, and sharing economies are influencing current developments in game design and crowdsourcing.
These trends also provide new spaces for gamified crowdsourcing systems that should be studied.
Agenda point 9: Future research should expand the design space used in current gamified crowdsourc-
ing systems and should consider novel trends in games design and crowdsourcing.
5.2.4 Future research
In this review of applied research and theoretical papers, we were particularly interested in the use
of gamification in crowdsourcing systems. However, it is possible that related research has also been conducted under other conceptual developments, such as serious games, games-with-a-purpose, pervasive games, human-based computation, or persuasive technology. Some of these related research areas might be investigating similar phenomena, but were not included in this study. Therefore, future efforts could compare these approaches and their contributions to gamified crowdsourcing. Relatedly, we intentionally conducted the literature searches with a set of keywords designed to find specifically studies on gamification and crowdsourcing. In our view, our selection of search keywords and data sources was
successful for the review’s intended breadth. The choice of a systematic literature study is the reason
for some of these limitations (Boell and Cecez-Kecmanovic, 2015). However, in our view, the benefits
of a structured summary and a clear aggregation of previous findings outweighed the disadvantages in
our case. Future efforts could go beyond these limitations and could extend our findings.
6 CONCLUSIONS
Along with the emergence of the interwoven phenomena of gamification and crowdsourcing, gam-
ified crowdsourcing systems have drawn scientific attention and have led to a continuously rising num-
ber of research publications. In this review, we sought to provide a comprehensive conceptualization
and a structured overview that compared the different characteristics of gamified crowdsourcing sys-
tems, examined the results on the effectiveness of gamification in crowdsourcing, and highlighted start-
ing points for future research. We found a wide array of different gamification implementations in dif-
ferent types of crowdsourcing in the literature. However, the literature seems to be unanimous: gamification does seem to work with a majority of configurations and can positively affect the motivations of
crowdsourcees, their participation, and output quality. Depending on the type of crowdsourcing (crowd-
creating, crowdsolving, crowdprocessing, and crowdrating), we identified patterns in the use of gami-
fication affordances. In the context of crowdsourcing initiatives that provide homogenous and often
more monotonous tasks such as crowdprocessing and crowdrating, authors commonly report the use of
simple forms of gamification such as points and leaderboards (Table 5). Conversely, crowdsourcing
studies with crowdcreating and crowdsolving work that seek diverse and creative contributions employ
gamification in more manifold ways with a richer set of mechanics. Generally, gamification is used to
promote a kind of competition between the participants rather than a collaborative experience. Mone-
tary rewards could be used as an addition in gamified crowdsourcing systems, but most of the analyzed
cases did not apply supplementary financial incentives. However, at this early stage, the literature is
still fairly fragmented, and too little research has been conducted to draw clear conclusions on which
specific implementations would work better or worse in certain situations. It is clear that contextual
factors and factors related to crowdsourcees play a role, but to what extent and how is still unclear.
These and further aspects that would help us to understand and design successful gamified crowdsourc-
ing systems provide much room for future research.
7 ACKNOWLEDGMENTS
This work was supported by the Robert Bosch GmbH, the Finnish Funding Agency for Technology
and Innovation (TEKES - project numbers 40111/14, 40107/14 and 40009/16) and participating part-
ners, as well as Satakunnan korkeakoulusäätiö and its collaborators. The authors also wish to thank
the editors, reviewers and proofreaders for their time and effort.
8 REFERENCES
Ahmed, N., Mueller, K., 2014. Gamification as a paradigm for the evaluation of visual analytics systems, in: Proceedings of the 5th Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization. ACM, Paris, France, pp. 78–86. doi:10.1145/2669557.2669574
Ajzen, I., 1991. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50, 179–211. doi:10.1016/0749-5978(91)90020-T
AlRouqi, H., Al-Khalifa, H.S., 2014. Making Arabic PDF books accessible using gamification, in: Proceedings of the 11th Web for All Conference on - W4A ’14. ACM Press, Seoul, Republic of Korea, pp. 1–4. doi:10.1145/2596695.2596712
Altmeyer, M., Lessel, P., Krüger, A., 2016. Expense control: A gamified, semi-automated, crowd-based approach for receipt capturing, in: Proceedings of the 21st International Conference on Intelligent User Interfaces - IUI ’16. ACM Press, New York, USA, pp. 31–42. doi:10.1145/2856767.2856790
Ansari, S., Kleiman, R., Binder, J., Hayes, W., Hoeng, J., Iskandar, A., Rhrissorrakrai, K., Norel, R., O’Neel, B., Peitsch, M., Poussin, C., Talikka, M., Schlage, W., Stolovitzky, G., DiFabio, A., Pratt, D., Boue, S., 2013. On crowd-verification of biological networks. Bioinform. Biol. Insights 7, 307–325. doi:10.4137/BBI.S12932
Armisen, A., Majchrzak, A., 2015. Tapping the innovative business potential of innovation contests. Bus. Horiz. 58, 389–399. doi:10.1016/j.bushor.2015.03.004
Bainbridge, D., 2015. And we did it our way: A case for crowdsourcing in a digital library for musicology, in: Proceedings of the 2nd International Workshop on Digital Libraries for Musicology - DLfM ’15. ACM Press, Knoxville, TN, USA, pp. 1–8. doi:10.1145/2785527.2785529
Bandura, A., 1977. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 84, 191–215. doi:10.1037/0033-295X.84.2.191
Benjamin, M., 2016. Problems and procedures to make wordnet data (retro)fit for a multilingual dictionary, in: Proceedings of the 8th Global WordNet Conference (GWC). Bucharest, pp. 27–33.
Bentzien, J., Muegge, I., Hamner, B., Thompson, D.C., 2013. Crowd computing: Using competitive dynamics to develop and refine highly predictive models. Drug Dis. Tod. 18, 472–478. doi:10.1177/1354856507084420
Biegel, B., Beck, F., Lesch, B., Diehl, S., 2014. Code tagging as a social game, in: Proceedings of the 30th International Conference on Software Maintenance and Evolution (ICSME’14). IEEE, Victoria, Canada, pp. 411–415. doi:10.1109/ICSME.2014.64
Bista, S.K., Nepal, S., Paris, C., Colineau, N., 2014. Gamification for online communities: A case study for delivering government services. Int. J. Coop. Inf. Syst. 23.
Blohm, I., Leimeister, J.M., Bretschneider, U., 2010. Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation, in: Proceedings of the 43rd Hawaii International Conference on System Sciences - HICSS. IEEE, Honolulu, HI, USA, pp. 1–10.
Bockes, F., Edel, L., Ferstl, M., Schmid, A., 2015. Collaborative landmark mining with a gamification approach, in: Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia - MUM ’15. ACM Press, Linz, Austria, pp. 364–367. doi:10.1145/2836041.2841209
Boell, S.K., Cecez-Kecmanovic, D., 2015. On being “systematic” in literature reviews in IS. J. Inf. Technol. 30, 161–173.
Bonde, M.T., Makransky, G., Wandall, J., Larsen, M.V., Morsing, M., Jarmer, H., Sommer, M.O.A., 2014. Improving biotech education through gamified laboratory simulations. Nat. Biotechnol. 32, 694–7. doi:10.1038/nbt.2955
Bowser, A., Hansen, D., He, Y., Boston, C., Reid, M., Gunnell, L., Preece, J., 2013. Using gamification to inspire new citizen science volunteers, in: Proceedings of the 1st International Conference on Gameful Design, Research, and Applications - Gamification ’13. ACM, Stratford, Ontario, Canada, pp. 18–25. doi:10.1145/2583008.2583011
Brabham, D.C., 2010. Moving the crowd at Threadless. Information, Commun. Soc. 13, 1122–1145.
Brabham, D.C., 2008a. Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application. First Monday 13.
Brabham, D.C., 2008b. Crowdsourcing as a model for problem solving: An introduction and cases. Converg. Int. J. Res. into New Media Technol. 14, 75–90. doi:10.1177/1354856507084420
Brandtner, P., Auinger, A., Helfert, M., 2014. Principles of human computer interaction in crowdsourcing to foster motivation in the context of open innovation, in: Proceedings of HCIB 2014. Springer, Heraklion, Crete, Greece, pp. 585–596. doi:10.1007/978-3-319-07293-7_57
Brenner, M., Mirza, N., Izquierdo, E., 2014. People recognition using gamified ambiguous feedback, in: Proceedings of the First International Workshop on Gamification for Information Retrieval - GamifIR ’14. ACM, Amsterdam, Netherlands, pp. 22–26. doi:10.1145/2594776.2594781
Brito, J., Vieira, V., Duran, A., 2015. Towards a framework for gamification design on crowdsourcing systems: The G.A.M.E. approach, in: Proceedings of the 12th International Conference on Information Technology - New Generations. IEEE, Las Vegas, Nevada, USA, pp. 445–450. doi:10.1109/ITNG.2015.78
Bullinger, A.C., Neyer, A.-K., Rass, M., Moeslein, K.M., 2010. Community-based innovation contests: Where competition meets cooperation. Creat. Innov. Manag. 19, 290–303. doi:10.1111/j.1467-8691.2010.00565.x
Burnett, D., Lochrie, M., Coulton, P., 2012. “CheckinDJ” using check-ins to crowdsource music preferences, in: Proceeding of the 16th International Academic MindTrek Conference on - MindTrek ’12. ACM Press, Tampere, Finland, pp. 51–54. doi:10.1145/2393132.2393143
Cao, H.-A., Wijaya, T.K., Aberer, K., Nunes, N., 2015. A collaborative framework for annotating energy datasets, in: Proceedings of the International Conference on Big Data. IEEE, Santa Clara, CA, USA, pp. 2716–2725. doi:10.1109/BigData.2015.7364072
Carlier, A., Salvador, A., Cabezas, F., Giro-i-Nieto, X., Charvillat, V., Marques, O., 2016. Assessment of crowdsourcing and gamification loss in user-assisted object segmentation. Multimed. Tools Appl. 23. doi:10.1007/s11042-015-2897-6
Cechanowicz, J., Gutwin, C., Brownell, B., Goodfellow, L., 2013. Effects of gamification on participation and data quality in a real-world market research domain, in: Proceedings of the First International Conference on Gameful Design, Research, and Applications - Gamification ’13. ACM Press, New York, New York, USA, pp. 58–65. doi:10.1145/2583008.2583016
Chamberlain, J., 2014. The annotation-validation (AV) model: rewarding contribution using retrospective agreement, in: Proceedings of the First International Workshop on Gamification for Information Retrieval - GamifIR ’14. ACM, Amsterdam, Netherlands, pp. 12–16. doi:10.1145/2594776.2594779
Chen, Y., Pu, P., 2014. HealthyTogether: exploring social incentives for mobile fitness applications, in: Proceedings of the Second International Symposium of Chinese CHI on - Chinese CHI ’14. pp. 25–34. doi:10.1145/2592235.2592240
Cherinka, R., Miller, R., Prezzama, J., 2013. Emerging trends, technologies and approaches impacting innovation, in: Proceedings of the 6th International Multi-Conference on Engineering and Technological Innovation - IMETI 2013. Orlando, Florida, USA, pp. 92–97.
Choi, J., Choi, H., So, W., Lee, J., You, J., 2014. A study about designing reward for gamified crowdsourcing system, in: Proceedings of the 3rd International Conference, DUXU 2014, Held as Part of HCI International 2014. Springer International Publishing, Heraklion, Crete, Greece, pp. 678–687. doi:10.1007/978-3-319-07626-3-64
Christy, K.R., Fox, J., 2014. Leaderboards in a virtual classroom: A test of stereotype threat and social comparison explanations for women’s math performance. Comput. Educ. 78, 66–77. doi:10.1016/j.compedu.2014.05.005
Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., Popović, Z., 2010. Predicting protein structures with a multiplayer online game. Nature 466, 756–760. doi:10.1038/nature09304
Csikszentmihalyi, M., 1990. Flow: The psychology of optimal experience. Harper and Row, New York, NY, USA.
Cucari, G., Leotta, F., Mecella, M., Vassos, S., 2016. Collecting human habit datasets for smart spaces through gamification and crowdsourcing, in: De Gloria, A., Veltkamp, R. (Eds.), Proceedings of the 4th Games and Learning Alliance Conference (GALA), Lecture Notes in Computer Science. Springer International Publishing, Rome, Italy, pp. 208–217. doi:10.1007/978-3-319-40216-1_22
Dai, W., Wang, Y., Jin, Q., Ma, J., 2016. An integrated incentive framework for mobile crowdsourced sensing. Tsinghua Sci. Technol. 21, 146–156. doi:10.1109/TST.2016.7442498
De França, F.A., Vivacqua, A.S., Campos, M.L.M., 2015. Designing a gamification mechanism to encourage contributions in a crowdsourcing system, in: Proceedings of the 19th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, Calabria, Italy, pp. 462–466. doi:10.1109/CSCWD.2015.7231003
De-Marcos, L., Domínguez, A., Saenz-de-Navarrete, J., Pagés, C., 2014. An empirical study comparing gamification and social networking on e-learning. Comput. Educ. 75, 82–91. doi:10.1016/j.compedu.2014.01.012
Deci, E.L., 1971. Effects of externally mediated rewards on intrinsic motivation. J. Pers. Soc. Psychol. 18, 105–115.
Deci, E.L., Koestner, R., Ryan, R.M., 1999. A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychol. Bull. 125, 627–668. doi:10.1037/0033-2909.125.6.627
Deng, J., Krause, J., Stark, M., Fei-Fei, L., 2016. Leveraging the wisdom of the crowd for fine-grained recognition. IEEE Trans. Pattern Anal. Mach. Intell. 38, 666–676. doi:10.1109/TPAMI.2015.2439285
Denny, P., 2013. The effect of virtual achievements on student engagement, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13. ACM Press, New York, New York, USA, p. 763. doi:10.1145/2470654.2470763
Dergousoff, K., Mandryk, R.L., 2015. Mobile gamification for experiment data collection: Leveraging the freemium model, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI 2015). ACM, Seoul, Republic of Korea, pp. 1065–1074.
Doan, A., Ramakrishnan, R., Halevy, A.Y., 2011. Crowdsourcing systems on the World-Wide Web. Commun. ACM 54, 86–96. doi:10.1145/1924421.1924442
Domínguez, A., Saenz-de-Navarrete, J., De-Marcos, L., Fernández-Sanz, L., Pagés, C., Martínez-Herráiz, J.-J., 2013. Gamifying learning experiences: Practical implications and outcomes. Comput. Educ. 63, 380–392. doi:10.1016/j.compedu.2012.12.020
Dos Santos, A.C., Zambalde, A.L., Veroneze, R.B., Botelho, G.A., de Souza Bermejo, P.H., 2015. Open innovation and social participation: A case study in public security in Brazil, in: Proceedings of the 4th International Conference on Electronic Government and the Information Systems Perspective (EGOVIS 2015). Springer International Publishing, Valencia, Spain, pp. 163–176. doi:10.1007/978-3-319-22389-6_12
Dumitrache, A., Aroyo, L., Welty, C., Sips, R.-J., Levas, A., 2013. Dr. Detective: Combining gamification techniques and crowdsourcing to create a gold standard for the medical domain, in: Proceedings of the 1st International Workshop on Crowdsourcing the Semantic Web (CrowdSem2013). Sydney, Australia, pp. 16–31.
Eickhoff, C., Harris, C.G., de Vries, A.P., Srinivasan, P., 2012. Quality through flow and immersion, in: Proceedings of the 35th International ACM SIGIR Conference - SIGIR ’12. ACM Press, Portland, Oregon, USA, pp. 871–880. doi:10.1145/2348283.2348400
Ellis, P.D., 2010. The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge University Press, Cambridge, UK.
Ermi, L., Mäyrä, F., 2005. Fundamental components of the gameplay experience: Analysing immersion, in: Proceedings of DiGRA 2005 Conference: Changing Views – Worlds in Play. Vancouver, British Columbia, Canada.
Estellés-Arolas, E., González-Ladrón-de-Guevara, F., 2012. Towards an integrated crowdsourcing definition. J. Inf. Sci. 38, 189–200. doi:10.1177/0165551512437638
Fava, D., Signoles, J., Lemerre, M., Schäf, M., Tiwari, A., 2015. Gamifying program analysis, in: Proceeding of the 20th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR). Springer International Publishing, Suva, Fiji, pp. 591–605. doi:10.1007/978-3-662-48899-7_41
Fedorov, R., Fraternali, P., Pasini, C., 2016. SnowWatch: A multi-modal citizen science application, in: Proceedings of the 16th International Conference on Web Engineering (ICWE). Springer International Publishing, Lugano, Switzerland, pp. 538–541. doi:10.1007/978-3-319-38791-8_43
Feyisetan, O., Simperl, E., Van Kleek, M., Shadbolt, N., 2015. Improving paid microtasks through gamification and adaptive furtherance incentives, in: Proceedings of the 24th International Conference on World Wide Web - WWW ’15. ACM Press, Florence, Italy, pp. 333–343. doi:10.1145/2736277.2741639
Gartner, 2011. Gartner says by 2015, more than 50 percent of organizations that manage innovation processes will gamify those processes. http://www.gartner.com/it/page.jsp?id=1629214 (accessed 7.6.12).
Gatautis, R., Vitkauskaite, E., 2014. Crowdsourcing application in marketing activities. Procedia - Soc. Behav. Sci. 110, 1243–1250. doi:10.1016/j.sbspro.2013.12.971
Geiger, D., Schader, M., 2014. Personalized task recommendation in crowdsourcing information systems - Current state of the art. Decis. Support Syst. 65, 3–16. doi:10.1016/j.dss.2014.05.007
Goncalves, J., Hosio, S., Ferreira, D., Kostakos, V., 2014. Game of words: tagging places through crowdsourcing on public displays, in: Proceedings of the 2014 Conference on Designing Interactive Systems - DIS ’14. ACM, Vancouver, BC, Canada, pp. 705–714. doi:10.1145/2598510.2598514
Greenhill, A., Holmes, K., Woodcock, J., Lintott, C., Simmons, B.D., Graham, G., Cox, J., Ohlsson, E., Masters, K., 2016. Playing with science: Exploring how game activity motivates users’ participation on an online citizen science platform. Aslib J. Inf. Manag. 68, 306–325. doi:10.1108/AJIM-11-2015-0182
Hamari, J., 2015. Why do people buy virtual goods? Attitude toward virtual good purchases versus game enjoyment. Int. J. Inf. Manage. 35, 299–308.
Hamari, J., 2013. Transforming homo economicus into homo ludens: A field experiment on gamification in a utilitarian peer-to-peer trading service. Electron. Commer. Res. Appl. 12, 236–245. doi:10.1016/j.elerap.2013.01.004
Hamari, J., Koivisto, J., 2015a. Why do people use gamification services? Int. J. Inf. Manage. 35, 419–431. doi:10.1016/j.ijinfomgt.2015.04.006
Hamari, J., Koivisto, J., 2015b. “Working out for likes”: An empirical study on social influence in exercise gamification. Comput. Human Behav. 50, 333–347. doi:10.1016/j.chb.2015.04.018
Hamari, J., Koivisto, J., 2014. Measuring flow in gamification: Dispositional Flow Scale-2. Comput. Human Behav. 40, 133–143. doi:10.1016/j.chb.2014.07.048
Hamari, J., Koivisto, J., Sarsa, H., 2014. Does gamification work? A literature review of empirical studies on gamification, in: Proceedings of the 47th Hawaii International Conference on System Sciences - HICSS. IEEE, Waikoloa, HI, pp. 3025–3034. doi:10.1109/HICSS.2014.377
Hamari, J., Shernoff, D.J., Rowe, E., Coller, B., Asbell-Clarke, J., Edwards, T., 2016. Challenging games help students learn: An empirical study on engagement, flow and immersion in game-based learning. Comput. Human Behav. 54, 170–179. doi:10.1016/j.chb.2015.07.045
Hamari, J., Sjöklint, M., Ukkonen, A., 2016. The sharing economy: Why people participate in collaborative consumption. J. Assoc. Inf. Sci. Technol. 67, 2047–2059. doi:10.1002/asi.23552
Hamari, J., Tuunanen, J., 2014. Player types: A meta-synthesis. Trans. Digit. Games Res. Assoc. 1, 29–53.
Hammais, E., Ketamo, H., Koivisto, A., 2014. Mapping the energy: A gamified online course, in: Proceedings of the 8th European Conference on Games-Based Learning. ACPI, Berlin, Germany, pp. 176–181.
Hantke, S., Eyben, F., Appel, T., Schuller, B., 2015. iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing, in: Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, Xi’an, China, pp. 891–897. doi:10.1109/ACII.2015.7344680
Harris, C.G., 2015. The effects of pay-to-quit incentives on crowdworker task quality, in: Proceedings of the 18th ACM Conference on Computer-Supported Cooperative Work and Social Computing - CSCW ’15. Vancouver, Canada. doi:10.1145/2675133.2675185
Harris, C.G., 2014. The beauty contest revisited: Measuring consensus rankings of relevance using a game, in: Proceedings of the First International Workshop on Gamification for Information Retrieval - GamifIR ’14. ACM, Amsterdam, Netherlands, pp. 17–21. doi:10.1145/2594776.2594780
He, J., Bron, M., Azzopardi, L., 2014. Studying user browsing behavior through gamified search tasks, in: Proceedings of the First International Workshop on Gamification for Information Retrieval - GamifIR ’14. ACM, Amsterdam, Netherlands, pp. 49–52.
Howe, J., 2006. The rise of crowdsourcing. Wired 14.
Huotari, K., Hamari, J., 2016. A definition for gamification: anchoring gamification in the service marketing literature. Electron. Mark. 26. doi:10.1007/s12525-015-0212-z
Inaba, M., Iwata, N., Toriumi, F., Hirayama, T., Enokibori, Y., Takahashi, K., Mase, K., 2015. Statistical response method and learning data acquisition using gamified crowdsourcing for a non-task-oriented dialogue agent, in: Proceedings of the 7th International Conference on Agents and Artificial Intelligence (ICAART), Lecture Notes in Computer Science. Springer International Publishing, Lisbon, Portugal, pp. 119–136. doi:10.1007/978-3-319-25210-0_8
IEEE, 2014. Everyone’s a gamer - IEEE experts predict gaming will be integrated into more than 85 percent of daily tasks by 2020. http://www.ieee.org/about/news/2014/25_feb_2014.html (accessed 4.1.14).
Ipeirotis, P.G., Gabrilovich, E., 2014. Quizz: Targeted crowdsourcing with a billion (potential) users, in: Proceedings of the 23rd International Conference on World Wide Web - WWW ’14. ACM, Seoul, Korea, pp. 143–154. doi:10.1145/2566486.2567988
Itoko, T., Arita, S., Kobayashi, M., Takagi, H., 2014. Involving senior workers in crowdsourced proofreading, in: Proceedings of the 8th International Conference, UAHCI 2014, Held as Part of HCI International 2014. Springer International Publishing, Heraklion, Crete, Greece, pp. 106–117. doi:10.1007/978-3-319-07446-7_11
Jarvenpaa, S.L., Leidner, D.E., 1998. Communication and trust in global virtual teams. J. Comput. Commun. 3. doi:10.1111/j.1083-6101.1998.tb00080.x
Jones, B.A., Madden, G.J., Wengreen, H.J., 2014. The FIT game: Preliminary evaluation of a gamification approach to increasing fruit and vegetable consumption in school. Prev. Med. (Baltim). doi:10.1016/j.ypmed.2014.04.015
43
Jung, J.H., Schneider, C., Valacich, J., 2010. Enhancing the motivational affordance of information
systems: The effects of real-time performance feedback and goal setting in group collaboration
environments. Manage. Sci. 56, 724742. doi:10.1287/mnsc.1090.1129
Kacorri, H., Shinkawa, K., Saito, S., 2015. CapCap: An output-agreement game for video captioning,
in: Proceedings of the Annual Conference of the International Speech Communication Associa-
tion, INTERSPEECH. ISCA, Dresden, Germany, pp. 28142818.
Kacorri, H., Shinkawa, K., Saito, S., 2014. Introducing game elements in crowdsourced video cap-
tioning by non-experts, in: Proceedings of the 11th Web for All Conference on - W4A ’14.
ACM, Seoul, Korea, pp. 14. doi:10.1145/2596695.2596713
Katmada, A., Satsiou, A., Kompatsiaris, I., 2016. Incentive mechanisms for crowdsourcing platforms,
in: Proceedings of the 3rd International Conference on Internet Science (INSCI). Springer,
Florence, Italy, pp. 318. doi:10.1007/978-3-319-45982-0_1
Kavaliova, M., Virjee, F., Maehle, N., Kleppe, I.A., Nisar, T., 2016. Crowdsourcing innovation and
product development: Gamification as a motivational driver. Cogent Bus. Manag. 3, 1128132.
doi:10.1080/23311975.2015.1128132
Kanefsky, B., Barlow, N.G., Gulick, V.C., 2001. Can distributed volunteers accomplish massive data
analysis tasks?, in: Proceedings of the 32th Annual Lunar and Planetary Science Conference.
Houston, Texas.
Kaufmann, N., Schulze, T., Veit, D., 2011. More than fun and money. Worker motivation in
crowdsourcing - a study on mechanical turk, in: Proceedings of the 17th Americas Conference
on Information Systems - Amcis. Detroit, Michigan, USA, pp. 111.
Kawajiri, R., Shimosaka, M., Kahima, H., 2014. Steered crowdsensing: Incentive design towards
quality-oriented place-centric crowdsensing, in: Proceedings of the 2014 ACM International
Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’14 Adjunct. ACM, Se-
attle, Washington, USA, pp. 691701. doi:10.1145/2632048.2636064
Kobayashi, M., Arita, S., Itoko, T., Saito, S., Takagi, H., 2015. Motivating multi-generational crowd
workers in social-purpose work, in: Proceedings of the 18th ACM Conference on Computer
Supported Cooperative Work - CSCW ’15. ACM Press, Vancouver, BC, Canada, pp. 1813
1824. doi:10.1145/2675133.2675255
Koivisto, J., Hamari, J., 2014. Demographic differences in perceived benefits from gamification.
Comput. Human Behav. 35, 179188. doi:10.1016/j.chb.2014.03.007
Kurita, D., Roengsamut, B., Kuwabara, K., Huang, H.-H., 2016. Knowledge base refinement with
gamified crowdsourcing, in: Proceedings of the 8th Asian Conference on Intelligent Infor-
mation and Database Systems (ACIIDS). Springer, Da Nang, Vietnam, pp. 3342.
doi:10.1007/978-3-662-49381-6_4
LaToza, T.D., Ben Towne, W., van der Hoek, A., Herbsleb, J.D., 2013. Crowd development, in: Pro-
ceedings of the 6th International Workshop on Cooperative and Human Aspects of Software
Engineering (CHASE). IEEE, San Francisco, CA, USA, pp. 8588.
doi:10.1109/CHASE.2013.6614737
Lauto, G., Valentin, F., 2016. How preference markets assist new product idea screening. Ind. Manag.
Data Syst. 116, 603619. doi:10.1108/IMDS-07-2015-0320
44
Lee, J.J., Ceyhan, P., Jordan-Cooley, W., Sung, W., 2013. GREENIFY: A real-world action game for
climate change education. Simul. Gaming 44, 349365. doi:10.1177/1046878112470539
Lee, T.Y., Dugan, C., Geyer, W., Ratchford, T., Rasmussen, J., Shami, N.S., Lupushor, S., 2013. Ex-
periments on motivational feedback for crowdsourced workers, in: Proceedings of the 7th Inter-
national Conference on Weblogs and Social Media - ICWSM 2013. AAAI Press, pp. 341350.
Leimeister, J.M., 2010. Collective intelligence. Bus. Inf. Syst. Eng. 2, 245248. doi:10.1007/s12599-
010-0114-8
Leimeister, J.M., Huber, M., Bretschneider, U., Krcmar, H., 2009. Leveraging crowdsourcing: Acti-
vation-supporting components for IT-based ideas competition. J. Manag. Inf. Syst. 26, 197
224. doi:10.2753/MIS0742-1222260108
Lessel, P., Altmeyer, M., Krüger, A., 2015. Analysis of recycling capabilities of individuals and
crowds to encourage and educate people to separate their garbage playfully, in: Proceedings of
the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15. ACM
Press, Seoul, Korea, pp. 10951104. doi:10.1145/2702123.2702309
Levina, N., Arriaga, M., 2014. Distinction and status production on user-generated content platforms:
Using Bourdieu’s theory of cultural production to understand social dynamics in online fields.
Inf. Syst. Res. 25, 468488. doi:10.1287/isre.2014.0535
Lintott, C.J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M.J., Nichol,
R.C., Szalay, A., Andreescu, D., Murray, P., Vandenberg, J., 2008. Galaxy Zoo: Morphologies
derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Mon. Not. R. As-
tron. Soc. 389, 11791189. doi:10.1111/j.1365-2966.2008.13689.x
Liu, Y., Alexandrova, T., Nakajima, T., 2011a. Gamifying intelligent environments, in: Proceedings
of the 2011 International ACM Workshop on Ubiquitous Meta User Interfaces - Ubi-MUI ’11.
ACM, Scottsdale, Arizona, USA, pp. 712. doi:10.1145/2072652.2072655
Liu, Y., Alexandrova, T., Nakajima, T., Lehdonvirta, V., 2011b. Mobile image search via local
crowd: A user study, in: Proceedings of the 17th IEEE International Conference on Embedded
and Real-Time Computing Systems and Applications (RTCSA 2011). IEEE Computer Society,
Los Alamitos, CA, USA, pp. 109112. doi:10.1109/RTCSA.2011.10
Lounis, S., Pramatari, K., Theotokis, A., 2014. Gamification is all about fun: The role of incentive
type and community collaboration, in: ECIS 2014 Proceedings. pp. 114.
Machnik, M., Riegler, M., Sen, S., 2015. Crowdpinion: Motivating people to share their momentary
opinion, in: Proceedings of the 2nd GamifIR’15 Workshop. CEUR WS, Vienna, Austria, pp. 1
8.
Mahnič, N., 2014. Gamification of politics: Start a new game! Teor. Praksa 51, 143161.
Marasco, E., Behjat, L., Rosehart, W., 2015. Enhancing EDA education through gamification, in: Pro-
ceedings of the International Conference on Microelectronics Systems Education (MSE). IEEE,
Pittsburgh, PA, USA, pp. 2527. doi:10.1109/MSE.2015.7160009
Martella, R., Kray, C., Clementini, E., 2015. A gamification framework for volunteered geographic
information, in: Bacao, F., Santos, M.Y., Painho, M. (Eds.), Agile 2015. Springer International
Publishing, pp. 7389. doi:10.1007/978-3-319-16787-9_5
Mason, A.D., Michalakidis, G., Krause, P.J., 2012. Tiger Nation: Empowering citizen scientists, in: Proceedings of the 6th IEEE International Conference on Digital Ecosystems and Technologies (DEST). IEEE, pp. 1–5. doi:10.1109/DEST.2012.6227943
Massung, E., Coyle, D., Cater, K.F., Jay, M., Preist, C., 2013. Using crowdsourcing to support pro-environmental community activism, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13. Paris, France, pp. 371–380. doi:10.1145/2470654.2470708
McCartney, E.A., Craun, K.J., Korris, E., Brostuen, D.A., Moore, L.R., 2015. Crowdsourcing the national map. Cartogr. Geogr. Inf. Sci. 42, 54–57. doi:10.1080/15230406.2015.1059187
Melenhorst, M., Novak, J., Micheel, I., Larson, M., Boeckle, M., 2015. Bridging the utilitarian-hedonic divide in crowdsourcing applications, in: Proceedings of the 4th International Workshop on Crowdsourcing for Multimedia - CrowdMM ’15. ACM Press, Brisbane, Australia, pp. 9–14. doi:10.1145/2810188.2810191
Mizuyama, H., Miyashita, E.E., 2016. Product X: An output-agreement game for product perceptual mapping, in: Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work - CSCW ’16. ACM Press, San Francisco, CA, USA, pp. 353–356. doi:10.1145/2818052.2869123
Moreno, N., Savage, S., Leal, A., Cornick, J., Turk, M., Höllerer, T., 2015. Motivating crowds to volunteer neighborhood data, in: Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work - CSCW ’15. ACM Press, Vancouver, BC, Canada, pp. 235–238. doi:10.1145/2685553.2699015
Morschheuser, B., Maedche, A., Walter, D., 2017. Designing cooperative gamification: Conceptualization and prototypical implementation, in: Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing - CSCW ’17. Portland, USA, pp. 2410–2421. doi:10.1145/2998181.2998272
Morschheuser, B., Henzi, C., Alt, R., 2015. Increasing intranet usage through gamification - insights from an experiment in the banking industry, in: Proceedings of the 48th Hawaii International Conference on System Sciences - HICSS. IEEE, Kauai, HI, pp. 635–642. doi:10.1109/HICSS.2015.83
Morschheuser, B., Rivera-Pelayo, V., Mazarakis, A., Zacharias, V., 2014. Interaction and reflection with quantified self and gamification: An experimental study. J. Lit. Technol. 15, 136–156.
Nagai, Y., Hiyama, A., Miura, T., Hirose, M., 2014. T-echo: Promoting intergenerational communication through gamified social mentoring, in: Proceedings of the 8th International Conference, UAHCI 2014, Held as Part of HCI International 2014. Springer, Heraklion, Crete, Greece, pp. 582–589. doi:10.1007/978-3-319-07509-9_55
Nakatsu, R., Grossman, E., Iacovou, C., 2014. A taxonomy of crowdsourcing based on task complexity. J. Inf. Sci. 40, 823–834. doi:10.1177/0165551514550140
Nakatsu, R., Iacovou, C., 2014. An investigation of user interface features of crowdsourcing applications, in: Proceedings of the 1st International Conference, HCIB 2014, Held as Part of HCI International 2014. Springer, Heraklion, Crete, Greece, pp. 410–418. doi:10.1007/978-3-319-07293-7_40
Netek, R., Panek, J., 2016. Framework See-Think-Do as a tool for crowdsourcing support - case study on crisis management. ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLI-B6, 13–16. doi:10.5194/isprsarchives-XLI-B6-13-2016
Nose, T., Hishiyama, R., 2013. Analysis of self-tagging during conversational chat in multilingual gaming simulation, in: Proceedings of the 2nd International Conference on Future Generation Communication Technologies (FGCT 2013). IEEE, London, UK, pp. 81–86. doi:10.1109/FGCT.2013.6767188
Nunzio, G.M.D., Maistro, M., Zilio, D., 2016. Gamification for machine learning: The classification game, in: Hopfgartner, F., Kazai, G., Kruschwitz, U., Meder, M. (Eds.), Proceedings of the 3rd GamifIR’16 Workshop. CEUR WS, Pisa, Italy, pp. 45–52.
Packham, S., Suleman, H., 2015. Crowdsourcing a text corpus is not a game, in: Proceedings of the 17th International Conference on Asia-Pacific Digital Libraries, ICADL 2015. Springer International Publishing, Seoul, Korea, pp. 225–234. doi:10.1007/978-3-319-27974-9_23
Panchariya, N.S., DeStefano, A.J., Nimbagal, V., Ragupathy, R., Yavuz, S., Herbert, K.G., Hill, E., Fails, J.A., 2015. Current developments in Big Data and sustainability sciences in mobile citizen science applications, in: Proceedings of the 1st International Conference on Big Data Computing Service and Applications. IEEE, San Francisco, CA, USA, pp. 202–212. doi:10.1109/BigDataService.2015.64
Pedersen, J., Kocsis, D., Tripathi, A., Tarrell, A., Weerakoon, A., Tahmasbi, N., Xiong, J., Deng, W., Oh, O., De Vreede, G.J., 2013. Conceptual foundations of crowdsourcing: A review of IS research, in: Proceedings of the 46th Hawaii International Conference on System Sciences - HICSS. IEEE, pp. 579–588. doi:10.1109/HICSS.2013.143
Pinto, J.P., Viana, P., 2015. Using the crowd to boost video annotation processes, in: Proceedings of the 12th European Conference on Visual Media Production - CVMP ’15. ACM Press, London, UK. doi:10.1145/2824840.2824853
Pothineni, D., Mishra, P., Rasheed, A., Sundararajan, D., 2014. Incentive design to mould online behavior: A game mechanics perspective, in: Proceedings of the First International Workshop on Gamification for Information Retrieval - GamifIR ’14. ACM, Amsterdam, Netherlands, pp. 27–32. doi:10.1145/2594776.2594782
Powell, A., Piccoli, G., Ives, B., 2004. Virtual teams: A review of current literature and directions for future research. DATA BASE Adv. Inf. Syst. 35, 6–36. doi:10.1145/968464.968467
Prakash, E., Brindle, G., Jones, K., Zhou, S., Chaudhari, N.S., Wong, K.-W., 2009. Advances in games technology: Software, models, and intelligence. Simul. Gaming 40, 752–801. doi:10.1177/1046878109335120
Prandi, C., Nisi, V., Salomoni, P., Nunes, N.J., 2015. From gamification to pervasive game in mapping urban accessibility, in: Proceedings of the 11th CHItaly. ACM Press, Rome, Italy, pp. 126–129. doi:10.1145/2808435.2808449
Prandi, C., Roccetti, M., Salomoni, P., Nisi, V., Nunes, N.J., 2016. Fighting exclusion: A multimedia mobile app with zombies and maps as a medium for civic engagement and design. Multimed. Tools Appl. 1–29. doi:10.1007/s11042-016-3780-9
Preist, C., Massung, E., Coyle, D., 2014. Competing or aiming to be average? Normification as a means of engaging digital volunteers, in: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, Baltimore, MD, USA, pp. 1222–1233. doi:10.1145/2531602.2531615
Prestopnik, N.R., Tang, J., 2015. Points, stories, worlds, and diegesis: Comparing player experiences in two citizen science games. Comput. Human Behav. 52, 492–506. doi:10.1016/j.chb.2015.05.051
Prpić, J., Shukla, P.P., Kietzmann, J.H., McCarthy, I.P., 2015. How to work a crowd: Developing crowd capital through crowdsourcing. Bus. Horiz. 58, 77–85. doi:10.1016/j.bushor.2014.09.005
Reid, E.F., 2013. Crowdsourcing and gamification techniques in Inspire (AQAP online magazine), in: Proceedings of the International Conference on Intelligence and Security Informatics. IEEE, Seattle, Washington, USA, pp. 215–220. doi:10.1109/ISI.2013.6578822
Reinsch, T., Wang, Y., Knechtel, M., Ameling, M., Herzig, P., 2013. CINA - A crowdsourced indoor navigation assistant, in: Proceedings of the 6th International Conference on Utility and Cloud Computing. IEEE, Dresden, Germany, pp. 500–505. doi:10.1109/UCC.2013.97
Riegler, M., Eg, R., Calvet, L., Lux, M., Halvorsen, P., Griwodz, C., 2015. Playing around the eye tracker: A serious game based dataset, in: Proceedings of the 2nd GamifIR’15 Workshop. CEUR WS, Vienna, Austria, pp. 1–7.
Roa-Valverde, A.J., 2014. Combining gamification, crowdsourcing and semantics for leveraging linguistic open data, in: Proceedings of CEUR Workshop. Riva del Garda, Italy.
Roengsamut, B., Kuwabara, K., Huang, H.-H., 2015. Toward gamification of knowledge base construction, in: Proceedings of the International Symposium on Innovations in Intelligent Systems and Applications (INISTA). IEEE, Madrid, Spain, pp. 1–7. doi:10.1109/INISTA.2015.7276721
Rosani, A., Boato, G., De Natale, F.G.B., 2015. EventMask: A game-based framework for event-saliency identification in images. IEEE Trans. Multimed. 17, 1359–1371. doi:10.1109/TMM.2015.2441003
Roth, S., Schneckenberg, D., Tsai, C.-W., 2015. The ludic drive as innovation driver: Introduction to the gamification of innovation. Creat. Innov. Manag. 24, 300–306. doi:10.1111/caim.12124
Rouse, A.C., 2010. A preliminary taxonomy of crowdsourcing, in: Proceedings of the 21st Australasian Conference on Information Systems - ACIS 2010. Brisbane, Qld.
Runge, N., Wenig, D., Zitzmann, D., Malaka, R., 2015. Tags you don’t forget: Gamified tagging of personal images, in: Proceedings of the International Conference on Entertainment Computing (ICEC). Springer International Publishing, Trondheim, Norway, pp. 301–314. doi:10.1007/978-3-319-24589-8_23
Ryan, R.M., Deci, E.L., 2000. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55, 68–78. doi:10.1037/0003-066X.55.1.68
Saito, S., Watanabe, T., Kobayashi, M., Takagi, H., 2014. Skill development framework for microtasking, in: Proceedings of the 8th International Conference, UAHCI 2014, Held as Part of HCI International 2014. Springer, Heraklion, Crete, Greece, pp. 400–409. doi:10.1007/978-3-319-07440-5_37
Sakamoto, M., Nakajima, T., Akioka, S., 2016. Gamifying collective human behavior with gameful digital rhetoric. Multimed. Tools Appl. 1–43. doi:10.1007/s11042-016-3665-y
Sakamoto, M., Nakajima, T., 2014. Gamifying social media to encourage social activities with digital-physical hybrid role-playing, in: Proceedings of the 6th International Conference, SCSM 2014, Held as Part of HCI International 2014. Springer, Heraklion, Crete, Greece, pp. 581–591. doi:10.1007/978-3-319-07632-4_55
Schlagwein, D., Bjørn-Andersen, N., 2014. Organizational learning with crowdsourcing: The revelatory case of LEGO. J. Assoc. Inf. Syst. 15, 754–778.
Seaborn, K., Fels, D.I., 2015. Gamification in theory and action: A survey. Int. J. Hum. Comput. Stud. 74, 14–31. doi:10.1016/j.ijhcs.2014.09.006
Shen, A.X.L., Lee, M.K.O., Cheung, C.M.K., Chen, H., 2009. An investigation into contribution I-Intention and We-Intention in open web-based encyclopedia: Roles of joint commitment and mutual agreement, in: Proceedings of the 13th International Conference on Information Systems. AIS, Phoenix, Arizona, USA, pp. 1–7.
Shen, X.-L., Lee, M.K.O., Cheung, C.M.K., 2014. Exploring online social behavior in crowdsourcing communities: A relationship management perspective. Comput. Human Behav. 40, 144–151. doi:10.1016/j.chb.2014.08.006
Sheng, L.Y., 2013. Modelling learning from Ingress (Google’s augmented reality social game), in: Proceedings of the 2013 IEEE 63rd Annual Conference of the International Council for Educational Media (ICEM). IEEE, Singapore, pp. 1–8. doi:10.1109/CICEM.2013.6820152
Sigala, M., 2015. Gamification for crowdsourcing marketing practices: Applications and benefits in tourism, in: Garrigos-Simon, F.J., Gil-Pechuán, I., Estelles-Miguel, S. (Eds.), Advances in Crowdsourcing. Springer International Publishing, pp. 129–145. doi:10.1007/978-3-319-18341-1_11
Silva, A.C., Lopes, C.T., 2016. Health translations: A crowdsourced, gamified approach to translate large vocabulary databases, in: Proceedings of the 11th Iberian Conference on Information Systems and Technologies (CISTI). IEEE, Gran Canaria, Spain, pp. 1–4. doi:10.1109/CISTI.2016.7521479
Simões, B., Aksenov, P., Santos, P., Arentze, T., De Amicis, R., 2015. C-space: Fostering new creative paradigms based on recording and sharing “casual” videos through the internet, in: Proceedings of the International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, Torino, Italy, pp. 1–4.
Simões, B., De Amicis, R., 2016. Gamification as a key enabling technology for image sensing and content tagging, in: De Pietro, G., Gallo, L., Howlett, R.J., Jain, L.C. (Eds.), Intelligent Interactive Multimedia Systems and Services 2016. Springer International Publishing, pp. 503–513. doi:10.1007/978-3-319-39345-2_44
Simperl, E., 2015. How to use crowdsourcing effectively: Guidelines and examples. Lib. Q. 25, 18–39.
Smith, R., Kilty, L.A., 2014. Crowdsourcing and gamification of enterprise meeting software quality, in: Proceedings of the 7th International Conference on Utility and Cloud Computing. IEEE, London, UK, pp. 611–613. doi:10.1109/UCC.2014.95
Snijders, R., Dalpiaz, F., Brinkkemper, S., Hosseini, M., Ali, R., Ozum, A., 2015. REfine: A gamified platform for participatory requirements engineering, in: Proceedings of the 1st International Workshop on Crowd-Based Requirements Engineering (CrowdRE). IEEE, pp. 1–6. doi:10.1109/CrowdRE.2015.7367581
Snijders, R., Dalpiaz, F., Hosseini, M., Shahri, A., Ali, R., 2014. Crowd-centric requirements engineering, in: Proceedings of the 7th International Conference on Utility and Cloud Computing. IEEE, London, UK, pp. 614–615. doi:10.1109/UCC.2014.96
Sørensen, J.J.W.H., Pedersen, M.K., Munch, M., Haikka, P., Jensen, J.H., Planke, T., Andreasen, M.G., Gajdacz, M., Mølmer, K., Lieberoth, A., Sherson, J.F., 2016. Exploring the quantum speed limit with computer games. Nature 532, 210–213. doi:10.1038/nature17620
Stannett, M., Legg, C., Sarjant, S., 2013. Massive ontology interface, in: Proceedings of the 14th Annual Conference on Computer-Human Interaction (SIGCHI). ACM, Christchurch, New Zealand. doi:10.1145/2542242.2542251
Straub, T., Gimpel, H., Teschner, F., Weinhardt, C., 2015. How (not) to incent crowd workers. Bus. Inf. Syst. Eng. 57, 167–179. doi:10.1007/s12599-015-0384-2
Supendi, K., Prihatmanto, A.S., 2015. Design and implementation of the assessment of public officers web base with gamification method, in: Proceedings of the 4th International Conference on Interactive Digital Media (ICIDM). IEEE, Bandung, Indonesia. doi:10.1109/IDM.2015.7516353
Supriadi, I., Prihatmanto, A.S., 2015. Design and implementation of Indonesia united portal using crowdsourcing approach for supporting conservation and monitoring of endangered species, in: Proceedings of the 4th International Conference on Interactive Digital Media (ICIDM). IEEE, Bandung, Indonesia. doi:10.1109/IDM.2015.7516354
Surowiecki, J., 2005. The wisdom of crowds. Anchor Books, New York.
Susumpow, P., Pansuwan, P., Sajda, N., Crawley, A.W., 2014. Participatory disease detection through digital volunteerism: How the DoctorMe application aims to capture data for faster disease detection in Thailand, in: Proceedings of the 23rd International Conference on World Wide Web (WWW ’14). ACM, Seoul, Korea. doi:10.1145/2567948.2579273
Talasila, M., Curtmola, R., Borcea, C., 2016. Crowdsensing in the wild with aliens and micropayments. IEEE Pervasive Comput. 15, 68–77. doi:10.1109/MPRV.2016.18
Tauer, J.M., Harackiewicz, J.M., 2004. The effects of cooperation and competition on intrinsic motivation and performance. J. Pers. Soc. Psychol. 86, 849–861. doi:10.1037/0022-3514.86.6.849
Terlutter, R., Capella, M.L., 2013. The gamification of advertising: Analysis and research directions of in-game advertising, advergames, and advertising in social network games. J. Advert. 42, 95–112. doi:10.1080/00913367.2013.774610
Tinati, R., Luczak-Roesch, M., Simperl, E., Hall, W., 2016. Because science is awesome, in: Proceedings of the 8th ACM Conference on Web Science - WebSci ’16. ACM Press, Hannover, Germany, pp. 45–54. doi:10.1145/2908131.2908151
Tolmie, P., Chamberlain, A., Benford, S., 2013. Designing for reportability: Sustainable gamification, public engagement, and promoting environmental debate. Pers. Ubiquitous Comput. 1–12. doi:10.1007/s00779-013-0755-y
Tsai, H., Bagozzi, R.P., 2014. Contribution behavior in virtual communities: Cognitive, emotional, and social influences. Manag. Inf. Syst. Q. 38, 143–163.
Ustalov, D., 2015. Towards crowdsourcing and cooperation in linguistic resources, in: Proceedings of the 9th Russian Summer School in Information Retrieval (RuSSIR 2015). Springer International Publishing, Saint Petersburg, Russia, pp. 348–358. doi:10.1007/978-3-319-25485-2_14
Uzun, A., Lehmann, L., Geismar, T., Küpper, A., 2013. Turning the OpenMobileNetwork into a live crowdsourcing platform for semantic context-aware services, in: Proceedings of the 9th International Conference on Semantic Systems - I-SEMANTICS ’13. ACM, Graz, Austria, pp. 89–96. doi:10.1145/2506182.2506194
Vasilescu, B., Serebrenik, A., Devanbu, P., Filkov, V., 2014. How social Q&A sites are changing knowledge sharing in open source software communities, in: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing - CSCW ’14. ACM, Baltimore, MD, USA, pp. 342–354. doi:10.1145/2531602.2531659
Von Ahn, L., 2009. Human computation, in: Proceedings of the 46th Annual Design Automation Conference - DAC ’09. IEEE, San Francisco, CA, USA, pp. 418–419. doi:10.1145/1629911.1630023
Von Ahn, L., 2008. Designing games with a purpose. Commun. ACM 51, 58–67. doi:10.1145/1378704.1378719
Wang, Y., Jia, X., Jin, Q., Ma, J., 2015. QuaCentive: A quality-aware incentive mechanism in mobile crowdsourced sensing (MCS). J. Supercomput. 1–18. doi:10.1007/s11227-015-1395-y
Webster, J., Watson, R.T., 2002. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 26, xiii–xxiii.
Whetten, D.A., 1989. What constitutes a theoretical contribution? Acad. Manag. Rev. 14, 490–495. doi:10.5465/AMR.1989.4308371
Wu, F.-J., Luo, T., 2014. WiFiScout: A crowdsensing WiFi advisory system with gamification-based incentive, in: Proceedings of the 11th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). IEEE, Philadelphia, USA, pp. 533–534. doi:10.1109/MASS.2014.32
Xie, T., Bishop, J., Horspool, R.N., Tillmann, N., de Halleux, J., 2015. Crowdsourcing code and process via Code Hunt, in: Proceedings of the 2nd International Workshop on CrowdSourcing in Software Engineering. IEEE, Florence, Italy, pp. 15–16. doi:10.1109/CSI-SE.2015.10
Yakushin, D., Lee, J., 2014. Cooperative robot software development through the internet, in: Proceedings of the 2014 IEEE/SICE International Symposium on System Integration (SII). IEEE, Tokyo, Japan, pp. 577–582. doi:10.1109/SII.2014.7028103
Yee, N., 2006. Motivations for play in online games. Cyberpsychol. Behav. 9, 772–775. doi:10.1089/cpb.2006.9.772
Yu, H., Lin, H., Lim, S.F., Lin, J., Shen, Z., Miao, C., 2015. Empirical analysis of reputation-aware task delegation by humans from a multi-agent game, in: Bordini, E., Weiss, Y. (Eds.), Proceedings of the 14th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). IFAAMAS, Istanbul, Turkey, pp. 1687–1688.
Zhang, P., 2008. Motivational affordances: Reasons for ICT design and use. Commun. ACM 51, 145–147. doi:10.1145/1400214.1400244
Zhao, Y., Zhu, Q., 2014a. Evaluation on crowdsourcing research: Current status and future direction. Inf. Syst. Front. 16, 417–434. doi:10.1007/s10796-012-9350-4
Zhao, Y., Zhu, Q., 2014b. Effects of extrinsic and intrinsic motivation on participation in crowdsourcing contest. Online Inf. Rev. 38, 896–917. doi:10.1108/OIR-08-2014-0188
Zheng, H., Li, D., Hou, W., 2011. Task design, motivation, and participation in crowdsourcing contests. Int. J. Electron. Commer. 15, 57–88. doi:10.2753/JEC1086-4415150402
Zuchowski, O., Posegga, O., Schlagwein, D., Fischbach, K., 2016. Internal crowdsourcing: Conceptual framework, structured review and research agenda. J. Inf. Technol. 31, 166–184. doi:10.1057/jit.2016.14
Author biographies
Benedikt Morschheuser is a researcher at Robert Bosch GmbH and the Karlsruhe Institute of Technology (KIT). His work focuses on the use of gamification in collaborative environments, especially crowdsourcing systems and online communities. Most of his scientific contributions are empirical papers on the effects of gamification in different contexts and on the design of gamified systems. https://issd.iism.kit.edu/21_126.php
Juho Hamari is a Professor of Gamification (Associate & tenure-track) and leads the Gamification Group across Tampere University of Technology, the University of Turku, and the University of Tampere. Dr. Hamari has authored several seminal scholarly articles on games and gamification from the perspectives of consumer behavior, human-computer interaction, and information systems science. His research has been published in a variety of prestigious venues such as Organization Studies, JASIST, IJIM, Computers in Human Behavior, Internet Research, Electronic Commerce Research and Applications, and Simulation & Gaming, as well as in books published by, e.g., MIT Press. http://juhohamari.com
Jonna Koivisto is a researcher at the Gamification Group / Tampere University of Technology. Her
research focuses especially on motivations and behavior online. She has authored several seminal em-
pirical works on gamification as well as spearheaded efforts to synthesize existing academic literature
on the topic. In addition to gamification, her research interests include consumer behavior in games as
well as new online business models. Koivisto’s research has been published in internationally respected
scholarly journals and conferences, as well as in the popular media. http://jonnakoivisto.com
Alexander Maedche is Full Professor of Information Systems at the Karlsruhe Institute of Technology
(KIT) and Managing Director of the Institute of Enterprise Systems at the University of Mannheim,
Germany. His research focuses on designing user-centered and intelligent digital service systems. He
has published more than 100 papers in journals and conferences, such as the Journal of the AIS, IEEE
Internet Computing, and Information and Software Technology. https://issd.iism.kit.edu