To Sign Up, or not to Sign Up? Maximizing Citizen Science
Contribution Rates through Optional Registration
Caroline Jay1, Robert Dunne2, David Gelsthorpe3, Markel Vigo1
1School of Computer Science, 2Research IT, 3Manchester Museum
University of Manchester
Manchester, UK
[caroline.jay, rob.dunne, david.gelsthorpe, markel.vigo]@manchester.ac.uk
ABSTRACT
Many citizen science projects ask people to create an account
before they participate – some require it. What effect does the
registration process have on the number and quality of contri-
butions? We present a controlled study comparing the effects
of mandatory registration with an interface that enables peo-
ple to participate without registering, but allows them to sign
up to ‘claim’ contributions. We demonstrate that removing
the requirement to register increases the number of visitors
to the site contributing to the project by 62%, without reduc-
ing data quality. We also discover that contribution rates are
the same for people who choose to register, and those who
remain anonymous, indicating that the interface should cater
for differences in participant motivation. The study provides
evidence that to maximize contribution rates, projects should
offer the option to create an account, but the process should
not be a barrier to immediate contribution, nor should it be
required.
Author Keywords
Citizen Science; Gamification; Productivity; Crowdsourcing
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI):
Miscellaneous
INTRODUCTION
Citizen Science – the participation of ‘lay’ volunteers in sci-
entific endeavour – is now an important means of collecting,
curating and analysing data [31]. Notable successes in this
domain include Foldit, a game to illuminate protein structure
[4], and Galaxy Zoo [19], a web app for classifying galaxies.
Projects cover a wide range of topics and activities, and whilst
some require interaction with the external environment [35, 5,
33, 18, 23], many are conducted entirely online, on platforms
such as Zooniverse [29]. This brings huge opportunities –
in many cases people need only an Internet connection and
the motivation to participate. It also brings challenges: how
can we ensure data quality, retain participants and maximize
contributions in this domain?
In this note we examine the role of online registration in de-
termining contribution patterns and participation rates in cit-
izen science. Whilst many projects allow people to partici-
pate without signing up in advance, it is also common to en-
courage – or require – users to create an account. This has
obvious advantages for the platform, as it makes it easier to
keep out automated traffic, monitor contribution quality and
prompt contributors to return to the project after a period of
absence [20, 3, 28, 27, 9]. It is also beneficial for citizen sci-
entists, who are able to keep track of their work, and obtain
information about how their contributions are being used [20,
24, 25, 13]. Registration is also necessary for the functioning
of certain ‘gamified’ UI components used in many citizen sci-
ence projects, such as badges and leaderboards, which reward
those who make significant contributions [2].
While gamification in citizen science has been demonstrated
to be effective and motivating [1, 14, 8], concerns have also
been raised that encouraging competitive behaviour may re-
duce altruism [7], and that game interfaces may have a neg-
ative effect on intrinsic motivation, and alienate traditional
citizen science volunteers [10, 22, 34, 2, 21]. Competition is
now an established part of citizen science, but it is only one
aspect of the complex set of factors motivating participation,
which include not only extrinsic, reward- or reputation-based
factors, but also inherent interest in the task or subject matter,
and the satisfaction of contributing to a collective goal [20].
Previous research has shown that increasing the ‘work’ done
during registration for an online community decreases the
number of people prepared to go through it [17, 6]. Here we
examine the effects of making registration, and participation
in the ‘game’ of contribution, completely optional. In a study
conducted on a palaeontology-focused data cataloguing ap-
plication, visitors are presented, at random, with a mandatory
registration page that they must complete before entering the
project, or allowed to contribute straight away, with the op-
tion of signing-up to ‘claim’ their contributions if they wish.
We hypothesize that removing the barrier of account creation
will increase the number of people who make at least one con-
tribution, but that to resolve cognitive dissonance [11] people
who go to the trouble of signing up, either through neces-
sity or choice, will make more contributions, on average, than
those who do not. We discover that whilst people are, indeed,
more likely to contribute when they do not have to register,
registration status (whether someone has to sign up, chooses
to sign up, or chooses not to sign up) does not appear to af-
fect the number of contributions they make. We therefore
recommend that, where possible, registration be kept optional, as this is likely to increase the number of contributors: it removes a barrier to entry for those who are motivated by personal interest, while still offering the possibility of recognition and competition for participants who are more extrinsically motivated.
Digitization of a museum fossil collection
Manchester Museum has a world class fossil collection com-
prising around 100,000 fossils. Half of these objects are not
recorded in the museum’s database and as such accessing,
cataloguing, and generating knowledge from the collection are problematic and time-consuming. Staff and volunteers are
photographing the fossils with their corresponding labels as a
stepping-stone to making the artefacts more widely available.
To make these images accessible and useful to the public and
scientists, they must first be catalogued in digital format.
To achieve a fully digitised record of the fossil collection, a
web application was created to crowdsource the entry of fossil
information from the photographs. Its goal was to engage cit-
izen scientists interested in palaeontology, or otherwise keen
to help with curation, in contributing to the scientific goals of
the museum. The application was built using an agile partici-
patory design process. It was constructed initially with feed-
back from curation staff at Manchester Museum to ensure it
functioned correctly from a scientific and technical perspec-
tive, and then refined iteratively during a two-week beta test-
ing period with a convenience sample of people who had not
used the app before.
The application has two stages for serving images to users.
Firstly it shows images in the image queue that have not yet
been completed (see Figure 2). A contributor checks the in-
formation on the label, and then enters it into a form under-
neath the image. Secondly it shows images that have been
completed and moved to the review queue. These images can
be checked by other contributors, who can assess and edit the
data (see Figure 1). This feature was included to allow users to self-regulate the quality of the data supplied, ultimately resulting in fewer inaccuracies [12], and was used in addition to presenting images multiple times for cross-checking results [32].
Contributors received a point for each task completed (either a submission or a review), and a leaderboard displayed the names of the 10 people with the highest number of points. An activity feed was added to give a sense of immediacy to the user experience, enabling users to see that others were currently completing tasks. Where contributors were registered,
their name was displayed on the feed or leaderboard; if a con-
tributor was not registered, ‘Secret Scientist’ was displayed
instead.
Figure 1. Data in a labelled item available for review.
Figure 2. A sample image.
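To make the workflow concrete, the following sketch models the two-stage task lifecycle, the one-point-per-task scoring, and the anonymous display name described above. It is an illustrative Python re-statement, not the project's actual implementation (which was written in PHP), and all class and function names are our own.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Contributor:
    id: str                      # tracking-cookie ID
    name: Optional[str] = None   # set only after registration
    points: int = 0

    @property
    def display_name(self) -> str:
        # Unregistered contributors appear as 'Secret Scientist'
        return self.name if self.name else "Secret Scientist"

@dataclass
class ImageTask:
    image_id: str
    label_data: dict = field(default_factory=dict)
    reviewed: bool = False

def submit_label(task: ImageTask, contributor: Contributor, data: dict) -> None:
    """Stage 1: enter label data; the task moves from the image queue to the review queue."""
    task.label_data = data
    contributor.points += 1          # one point per completed submission

def review_label(task: ImageTask, contributor: Contributor, corrections: dict) -> None:
    """Stage 2: another contributor checks the completed item and can edit the data."""
    task.label_data.update(corrections)
    task.reviewed = True
    contributor.points += 1          # one point per completed review

def leaderboard(contributors: list[Contributor], top_n: int = 10) -> list[tuple[str, int]]:
    """Top contributors by points; anonymous entries are shown as 'Secret Scientist'."""
    ranked = sorted(contributors, key=lambda c: c.points, reverse=True)
    return [(c.display_name, c.points) for c in ranked[:top_n]]
```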
STUDY
The study compares two landing pages: Interface A requires
people to create an account or log in before they can see the
rest of the app; Interface B takes people straight to the app,
and allows them to interact with it and contribute straight
away, with the option of signing up if they wish. Data were collected for the study for six weeks directly following the project launch on August 12th 2015. The application was promoted via SciStarter¹, Manchester Museum's Twitter and Facebook accounts, the citizen science and palaeontology forums on Reddit, and internal mailing lists and email bulletins
within the University of Manchester. We tested two core hy-
potheses:
H1: A participant is more likely to make at least one contribution if he/she does not have to register.
H2: A participant with an account will make a greater number of contributions, as he/she is able to get credit for them.
Method
We used A/B testing (split testing), and the pseudo-random
number generator function built into PHP, to assign visitors
at random to the following groups:
• Group A: directed to the registration/login pages to access the web application to complete or review image label data.²
• Group B: directed straight to the web application to complete or review image label data.³

¹ http://scistarter.com
² https://natureslibrary.co.uk
A further category, Group C, consists of those users initially
allocated to Group B who decided to register. The time of the switch was logged in the database.
For each visitor, a unique ID was set as a tracking cookie, to
allow us to monitor any participant who was not logged in
(this applies to all participants in Group B, but also to mem-
bers of Group A or C who were not logged in). This method
has some limitations: if the same individual accessed the ap-
plication through different browsers or computers this would
result in different entries. On the other hand, it allows us to
unequivocally identify individuals using several IP addresses
(e.g., different Wi-Fi networks) or sharing IPs with other in-
dividuals (e.g., corporate IPs).
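The allocation and tracking logic amounts to drawing a random group on a visitor's first request and persisting a unique ID in a cookie. The application itself used PHP's built-in pseudo-random number generator; the sketch below re-expresses the same logic in Python, with illustrative function and cookie names that are not taken from the project code.

```python
import random
import uuid

GROUPS = ("A", "B")   # A: mandatory registration; B: straight to the app

def assign_group() -> str:
    """Assign a first-time visitor to a condition at random (50/50 split)."""
    return random.choice(GROUPS)

def handle_visit(cookies: dict) -> dict:
    """Return the cookie values to keep or set for this visitor.

    A unique ID allows anonymous participants to be tracked across requests;
    if the cookie is refused, the ID only persists for the current session,
    so a returning visitor would be counted as new (as noted above).
    """
    if "visitor_id" in cookies:
        return cookies                      # returning visitor: keep the existing assignment
    return {
        "visitor_id": uuid.uuid4().hex,     # tracking cookie
        "group": assign_group(),            # experimental condition
    }
```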
Additionally, controlling the traffic generated by robots, which can account for 40% of the traffic on some websites [16], is a well-known challenge in A/B testing experiments [15]. These entries may introduce noise into the data, threatening the reliability of the results. We therefore followed a systematic approach to distinguish robots from humans:
1. Robots were removed by comparing their user agent string
against the entries in a dictionary⁴, which included common
robot identifiers such as bot, proxy, spider, slurp, etc.
2. People who accepted a cookie were classified as members
of ‘A’ or ‘B’.
3. People who did not accept the cookie were still allocated
to a group on their first visit, and had a unique ID logged,
which remained the same until they left the application. It is important to include these individuals, as the application allowed people to enter data without accepting the cookie.
A caveat is that if the user did not accept a cookie and re-
turned, he/she would be classified as a new user.
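Step 1 of this procedure reduces to a substring match of each visitor's user-agent string against a dictionary of known robot identifiers. A minimal Python sketch, assuming the dictionary is available as a list of lowercase markers (the study used a fuller listing from useragentstring.com):

```python
# Common robot markers; the study used a fuller dictionary from useragentstring.com
ROBOT_MARKERS = ["bot", "proxy", "spider", "slurp", "crawler"]

def is_robot(user_agent: str) -> bool:
    """Flag a request as automated if its user-agent contains a known robot marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in ROBOT_MARKERS)

def filter_visitors(log_entries: list[dict]) -> list[dict]:
    """Keep only human visitors before allocating them to Groups A and B."""
    return [entry for entry in log_entries if not is_robot(entry.get("user_agent", ""))]
```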
Results
A total of 383 individuals were allocated to Group A (registration was required), while 445 were free to enter data without registration (Group B). Thirty-two individuals (8.36%) allocated to Group A made at least one contribution, compared with 57 (13%) of those allocated to Group B – a 62% increase in contribution rate. A Mann-Whitney test indicates that there is an effect of having to register on the likelihood of contributing (W = 3.8, p = 0.05), which supports H1.
Figure 3. Number of contributions per individual in each group.
³ https://natureslibrary.co.uk/share/index/0
⁴ http://useragentstring.com
As shown in Figure 3, participation rates followed a typical pattern for a citizen science application. Most people tended to be ‘dabblers’, contributing in small numbers overall (see the mode in Table 1), with a small group of highly engaged participants who made large numbers of contributions [9].
A Kruskal-Wallis test suggests that, in terms of the number of contributions per individual, there is no difference between the groups (χ² = 4.77, p = 0.09). In line with this, if we consider individuals from C as members of B, a Mann-Whitney test does not indicate any dissimilarity between the groups (W = 972.5, p = 0.6). Consequently, H2 is not supported, indicating that having registered does not have an effect on the number of contributions made.
Group         N     % total   M       Mdn   Mo   Max
A (32/383)    269   30.22     8.41    5     1    30
B (57/445)    267   30        5.56    3     1    56
C             354   39.78     25.29   6.5   1    137
B ∪ C         621   69.78     10.89   3     1    137
Table 1. Descriptive statistics of contributions per group. Column 1 shows the group (number contributing / number allocated to the group initially); N is the total number of contributions, and M, Mdn, Mo and Max are the mean, median, mode and maximum number of contributions per individual.
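The non-parametric comparisons above can be reproduced with standard library routines. The Python sketch below (using SciPy) shows the form of the computation: a Mann-Whitney test on a binary contributed/did-not-contribute indicator for H1, and a Kruskal-Wallis test on per-individual contribution counts for H2. The indicator vectors use the group sizes reported above; the per-individual count lists are toy placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu, kruskal

# H1: did allocation (A vs B) affect whether a visitor contributed at all?
# 1 = made at least one contribution, 0 = did not (built from the reported group sizes).
contributed_A = [1] * 32 + [0] * (383 - 32)
contributed_B = [1] * 57 + [0] * (445 - 57)
u_h1, p_h1 = mannwhitneyu(contributed_A, contributed_B)
print(f"H1 Mann-Whitney: U={u_h1:.1f}, p={p_h1:.3f}")

# H2: do the registration groups differ in contributions per individual?
# counts_A / counts_B / counts_C would hold the per-individual contribution
# counts for each group; toy values stand in for the real data here.
counts_A = [5, 1, 8, 12, 3]
counts_B = [3, 1, 2, 7, 1]
counts_C = [6, 20, 1, 55, 9]
h_h2, p_h2 = kruskal(counts_A, counts_B, counts_C)
print(f"H2 Kruskal-Wallis: H={h_h2:.2f}, p={p_h2:.3f}")

# Treating C as part of B (registration was optional for both):
u_bc, p_bc = mannwhitneyu(counts_A, counts_B + counts_C)
print(f"A vs B∪C Mann-Whitney: U={u_bc:.1f}, p={p_bc:.3f}")
```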
Analysis of Group C
Several individuals (N = 19) who fell into Group B registered, despite the fact that this was not mandatory for participation. Fourteen of them made at least one contribution, accounting for 29% of the participants in Group B who contributed at least once. Interestingly, nine of them registered before making any contribution, whereas the remaining five made at least one contribution before registering; Figure 4 (left) shows the distribution of these individuals. Figure 4 (right) shows the number of contributions before and after registration per individual: on average, contributions made before registering account for 18% of an individual's total contributions, within a broad range extending from 1.4% to 44%.
It is worth mentioning that out of the 890 labelled pictures,
269 were submitted by individuals from Group A, 267 by
Group B and 354 by Group C. Table 1 shows that the contribution of those who registered even though they did not have to clearly stands out: these 14 participants account for almost 40% of all contributions.
Figure 4. Group C behaviour: on the left, the distribution of individuals
based on the number of contributions they made before they registered;
on the right, the total number of contributions of those who contributed
before and after they registered.
Data Quality
Data needed to be entered in up to seven fields of the form.
A scoring system was developed to determine whether there
was a difference between groups in terms of data quality: 0 =
nothing/garbage; 1 = one field correct; 2 = two or more fields
correct; 3 = completely correct. A random sample of 5% of the total contributions for each group was retrieved from the database using an SQL query. The mean quality scores out of three were 2.87 for Group A and 2.71 for Group B (excluding Group C). On the whole the data entered for the image labels were of high quality, with no garbage entries. A Mann-Whitney test indicates there was no significant difference between Groups A and B in terms of data quality (W = 543.5, p = 0.62).
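The quality check is a small scoring exercise over a random 5% sample of each group's contributions. A minimal sketch of the scoring scheme follows; the study drew its sample with an SQL query, so the sampling function here is only an illustrative stand-in, and the field-by-field comparison against a reference record is our assumption about how "correct" was judged.

```python
import random

def quality_score(entry: dict, reference: dict) -> int:
    """Score a contribution: 0 = nothing/garbage, 1 = one field correct,
    2 = two or more fields correct, 3 = completely correct (up to seven fields)."""
    fields = reference.keys()
    correct = sum(1 for f in fields if entry.get(f) == reference[f])
    if correct == len(fields):
        return 3
    if correct >= 2:
        return 2
    if correct == 1:
        return 1
    return 0

def sample_for_scoring(contributions: list[dict], fraction: float = 0.05) -> list[dict]:
    """Draw the random 5% sample of a group's contributions to be scored."""
    k = max(1, round(len(contributions) * fraction))
    return random.sample(contributions, k)
```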
DISCUSSION
The results support H1: A participant is more likely to make
at least one contribution if he/she does not have to register.
In our study, 8% of people assigned to Group A made at
least one contribution; for those assigned to Group B, this
was 13%, equating to an additional 5% of site visitors partic-
ipating – a 62% increase in the contribution rate.
A concern around trying to increase this potentially more ca-
sual participation is that it might result in lower quality data.
Would participants take the task seriously if they were not accountable for their results? Previous work has linked online
reputation to provision of higher quality data [30], but there
is also concern that competition based on the number of con-
tributions can lead to less care taken in entering data [10]. In
this study we find that data quality is high, and does not vary
as a function of registration status.
We do not have strong support for H2, as the difference in
contribution rates between the groups is not statistically sig-
nificant. The contribution rates show the long-tail distribution typical of citizen science projects, and this holds across all three groups, providing further evidence that such a pattern of activity is the norm. It is, however, interesting to consider some of the
descriptive data from the perspective of motivation and topic
interest. Of the people who moved to Group C (optional reg-
istration), two-thirds did so before making any contributions,
which potentially indicates that they wished to ensure they
were able to get ‘credit’ before starting any work. It therefore
appears important to cater to a group of people who may be
keen to keep track of their contributions, or wish to publicly
participate in the project via the leaderboard and feed.
Additionally, we see contributors in Group C making more than 70 submissions, and another high contributor (48 submissions) in Group B, whilst everyone in Group A made fewer than 40 submissions. It is difficult to draw any
conclusions based on a small number of participants, but it
is reasonable to assume that these four contributors were
interested enough in the task to complete it in high volumes.
The contributions of the 14 individuals from Group C account for almost 40% of the 890 pictures that were labelled. If they had been allocated to Group A, where they would not have had the chance even to see the task before signing up, the project might have lost a substantial number of entries.
DESIGN RECOMMENDATIONS
The results point to two clear recommendations:
Allow people to start contributing as soon as possible. Vis-
itors should be able to see the task, and contribute, before be-
ing required to register. We saw a 62% rise in the number
of site visitors contributing when we removed the registration
barrier, and allowed people to get straight to the task.
Allow people to register. There are many reasons that it is
helpful to sign up to a project. Citizen scientists get to stay in
touch with the project and become part of a community; plat-
forms get to understand more about their contributors, and are
better able to communicate with them [26]. There is also ev-
idence that more extrinsically-motivated people may want to
register, so they can get explicit credit for their contributions,
and participate in game-aspects of an app [1]. It is possible
this explains the behaviour of at least some of Group C, most
of whom signed up before making any contribution.
Methodological Considerations
This study is relatively small in scale, although the number
of participants is similar to other controlled studies in this
area [21]. The long-tail distribution of contributions also
matches that of other projects, indicating that the sample
could be viewed as representative of ‘typical’ citizen science.
The task was, however, a relatively straightforward classification exercise, and it is therefore necessary to be cautious about applying the results to more complex or involved tasks.
The study was controlled and conducted in the wild, lending
it both internal and external validity, but it was purely quan-
titative, and did not collect any data regarding participants’
thoughts and motivations, so it is not possible to be certain
why participants made particular decisions. It should also be
noted that the results apply to purely online studies; where
field work is involved, or there is some other reason that it is
important to identify contributors, optional registration would
not be recommended.
CONCLUSION
This work demonstrates that it is possible to increase con-
tributions to online citizen science by more than 60%, by
allowing people to participate in a project without obliging
them to officially sign up. It also provides evidence that be-
ing able to record contributions, and potentially gain some
form of recognition for them, is important for some people,
and therefore registration should be offered. Many citizen
science projects follow this model, but the way in which reg-
istration is handled by projects varies considerably, and has
not previously been investigated systematically. We report
an empirical study demonstrating that the way in which ac-
count creation is handled really does make a difference, and
propose an evidence-based model for the registration process that new projects can adopt by default.
PROJECT DATA
We practise open science, and have made project code
and data available at https://github.com/refractiveco/
natureslibrary and http://iam-data.cs.manchester.
ac.uk/investigations/13.
REFERENCES
1. Anne Bowser, Derek Hansen, Yurong He, Carol Boston,
Matthew Reid, Logan Gunnell, and Jennifer Preece.
2013. Using Gamification to Inspire New Citizen
Science Volunteers. In Proceedings of the First
International Conference on Gameful Design, Research,
and Applications (Gamification ’13). ACM, New York,
NY, USA, 18–25.
2. Anne Bowser, Derek Hansen, Jennifer Preece, Yurong
He, Carol Boston, and Jen Hammock. 2014. Gamifying
Citizen Science: A Study of Two User Groups. In
Proceedings of the Companion Publication of the 17th
ACM Conference on Computer Supported Cooperative
Work & Social Computing (CSCW Companion ’14).
ACM, New York, NY, USA, 137–140.
3. Justin Cheng, Jaime Teevan, Shamsi T. Iqbal, and
Michael S. Bernstein. 2015. Break It Down: A
Comparison of Macro- and Microtasks. In Proceedings
of the 33rd Annual ACM Conference on Human Factors
in Computing Systems (CHI ’15). ACM, New York, NY,
USA, 4061–4064.
4. Seth Cooper, Firas Khatib, Adrien Treuille, Janos
Barbero, Jeehyung Lee, Michael Beenen, Andrew
Leaver-Fay, David Baker, Zoran Popović, and others.
2010. Predicting protein structures with a multiplayer
online game. Nature 466, 7307 (2010), 756–760.
5. Mark Cottman-Fields, Margot Brereton, and Paul Roe.
2013. Virtual Birding: Extending an Environmental
Pastime into the Virtual World for Citizen Science. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’13). ACM, New
York, NY, USA, 2029–2032.
6. Sara Drenner, Shilad Sen, and Loren Terveen. 2008.
Crafting the Initial User Experience to Achieve
Community Goals. In Proceedings of the 2008 ACM
Conference on Recommender Systems (RecSys ’08).
ACM, New York, NY, USA, 187–194.
7. John Duffy and Tatiana Kornienko. 2010. Does
competition affect giving? Journal of Economic
Behavior & Organization 74, 1-2 (2010), 82–103.
8. David Easley and Arpita Ghosh. 2013. Incentives,
Gamification, and Game Theory: An Economic
Approach to Badge Design. In Proceedings of the
Fourteenth ACM Conference on Electronic Commerce
(EC ’13). ACM, New York, NY, USA, 359–376.
9. Alexandra Eveleigh, Charlene Jennett, Ann Blandford,
Philip Brohan, and Anna L. Cox. 2014. Designing for
Dabblers and Deterring Drop-outs in Citizen Science. In
Proceedings of CHI ’14. 2985–2994.
10. Alexandra Eveleigh, Charlene Jennett, Stuart Lynn, and
Anna L. Cox. 2013. “I Want to Be a Captain! I Want to
Be a Captain!”: Gamification in the Old Weather Citizen
Science Project. In Proceedings of the First
International Conference on Gameful Design, Research,
and Applications (Gamification ’13). ACM, New York,
NY, USA, 79–82.
11. Leon Festinger. 1957. A Theory of Cognitive
Dissonance. Stanford University Press.
12. Derek L. Hansen, Patrick J. Schone, Douglas Corey,
Matthew Reid, and Jake Gehring. 2013. Quality Control
Mechanisms for Crowdsourcing: Peer Review,
Arbitration, & Expertise at Familysearch Indexing.
In Proceedings of the 2013 Conference on Computer
Supported Cooperative Work (CSCW ’13). ACM, New
York, NY, USA, 649–660.
13. Ioanna Iacovides, Charlene Jennett, Cassandra
Cornish-Trestrail, and Anna L. Cox. 2013. Do Games
Attract or Sustain Engagement in Citizen Science?: A
Study of Volunteer Motivations. In CHI ’13 Extended
Abstracts on Human Factors in Computing Systems
(CHI EA ’13). ACM, New York, NY, USA, 1101–1106.
14. Nicole Immorlica, Greg Stoddard, and Vasilis
Syrgkanis. 2015. Social Status and Badge Design. In
Proceedings of the 24th International Conference on
World Wide Web (WWW ’15). International World Wide
Web Conferences Steering Committee, Republic and
Canton of Geneva, Switzerland, 473–483.
15. Ron Kohavi, Roger Longbotham, Dan Sommerfield, and
Randal Henne. 2009. Controlled experiments on the
web: survey and practical guide. Data Mining and
Knowledge Discovery 18, 1 (2009), 140–181.
16. Ron Kohavi and Rajesh Parekh. 2003. Ten
supplementary analyses to Improve E-commerce Web
Sites. In In Proceedings of the Fifth WEBKDD
Workshop.
17. Robert Kraut and Paul Resnick. 2012. Building
Successful Online Communities: Evidence-Based Social
Design. MIT Press.
18. Stacey Kuznetsov, Carrie Doonan, Nathan Wilson,
Swarna Mohan, Scott E. Hudson, and Eric Paulos. 2015.
DIYbio Things: Open Source Biology Tools As
Platforms for Hybrid Knowledge Production and
Scientific Participation. In Proceedings of the 33rd
Annual ACM Conference on Human Factors in
Computing Systems (CHI ’15). ACM, New York, NY,
USA, 4065–4068.
19. Chris J Lintott, Kevin Schawinski, Anže Slosar, Kate
Land, Steven Bamford, Daniel Thomas, M Jordan
Raddick, Robert C Nichol, Alex Szalay, Dan Andreescu,
and others. 2008. Galaxy Zoo: morphologies derived
from visual inspection of galaxies from the Sloan Digital
Sky Survey. Monthly Notices of the Royal Astronomical
Society 389, 3 (2008), 1179–1189.
20. Oded Nov, Ofer Arazy, and David Anderson. 2014.
Scientists@Home: What Drives the Quantity and
Quality of Online Citizen Science Participation? PLoS
ONE 9, 4 (2014), e90375.
21. Chris Preist, Elaine Massung, and David Coyle. 2014.
Competing or Aiming to Be Average?: Normification
As a Means of Engaging Digital Volunteers. In
Proceedings of the 17th ACM Conference on Computer
Supported Cooperative Work & Social Computing
(CSCW ’14). ACM, New York, NY, USA, 1222–1233.
22. Nathan Prestopnik and Kevin Crowston. 2012.
Purposeful Gaming & Socio-computational Systems: A
Citizen Science Design Case. In Proceedings of the 17th
ACM International Conference on Supporting Group
Work (GROUP ’12). ACM, New York, NY, USA, 75–84.
23. Christine Robson, Marti Hearst, Chris Kau, and Jeffrey
Pierce. 2013. Comparing the Use of Social Networking
and Traditional Media Channels for Promoting Citizen
Science. In Proceedings of the 2013 Conference on
Computer Supported Cooperative Work (CSCW ’13).
ACM, New York, NY, USA, 1463–1468.
24. Dana Rotman, Jen Hammock, Jenny J. Preece, Carol L.
Boston, Derek L. Hansen, Anne Bowser, and Yurong
He. 2014. Does Motivation in Citizen Science Change
with Time and Culture?. In Proceedings of the
Companion Publication of the 17th ACM Conference on
Computer Supported Cooperative Work & Social
Computing (CSCW Companion ’14). ACM, New York,
NY, USA, 229–232.
25. Dana Rotman, Jenny Preece, Jen Hammock, Kezee
Procita, Derek Hansen, Cynthia Parr, Darcy Lewis, and
David Jacobs. 2012. Dynamic Changes in Motivation in
Collaborative Citizen-science Projects. In Proceedings
of CSCW ’12. 217–226.
26. Avi Segal, Ya’akov (Kobi) Gal, Robert J. Simpson,
Victoria Victoria Homsy, Mark Hartswood, Kevin R.
Page, and Marina Jirotka. 2015. Improving Productivity
in Citizen Science Through Controlled Intervention. In
Proceedings of the 24th International Conference on
World Wide Web (WWW ’15 Companion). International
World Wide Web Conferences Steering Committee,
Republic and Canton of Geneva, Switzerland, 331–337.
27. S. Andrew Sheppard and Loren Terveen. 2011. Quality
is a Verb: The Operationalization of Data Quality in a
Citizen Science Community. In Proceedings of the 7th
International Symposium on Wikis and Open
Collaboration (WikiSym ’11). ACM, New York, NY,
USA, 29–38.
28. S. Andrew Sheppard, Andrea Wiggins, and Loren
Terveen. 2014. Capturing Quality: Retaining
Provenance for Curated Volunteer Monitoring Data. In
Proceedings of the 17th ACM Conference on Computer
Supported Cooperative Work & Social Computing
(CSCW ’14). ACM, New York, NY, USA, 1234–1245.
29. Robert Simpson, Kevin R. Page, and David De Roure.
2014. Zooniverse: Observing the World’s Largest
Citizen Science Platform. In Proceedings of the 23rd
International Conference on World Wide Web (WWW
’14 Companion). International World Wide Web
Conferences Steering Committee, Republic and Canton
of Geneva, Switzerland, 1049–1054.
30. Yla R. Tausczik and James W. Pennebaker. 2011.
Predicting the Perceived Quality of Online Mathematics
Contributions from Users’ Reputations. In Proceedings
of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’11). ACM, New York, NY,
USA, 1885–1888.
31. Ramine Tinati, Max Van Kleek, Elena Simperl, Markus
Luczak-Rösch, Robert Simpson, and Nigel Shadbolt.
2015. Designing for Citizen Data Analysis: A
Cross-Sectional Case Study of a Multi-Domain Citizen
Science Platform. In Proceedings of CHI ’15.
4069–4078.
32. Luis von Ahn and Laura Dabbish. 2004. Labeling
Images with a Computer Game. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’04). ACM, New York, NY, USA,
319–326.
33. Jon Whittle. 2014. How Much Participation is Enough?:
A Comparison of Six Participatory Design Projects in
Terms of Outcomes. In Proceedings of the 13th
Participatory Design Conference: Research Papers -
Volume 1 (PDC ’14). ACM, New York, NY, USA,
121–130.
34. Andrea Wiggins. 2013. Free As in Puppies:
Compensating for ICT Constraints in Citizen Science. In
Proceedings of the 2013 Conference on Computer
Supported Cooperative Work (CSCW ’13). ACM, New
York, NY, USA, 1469–1480.
35. Andrea Wiggins and Kevin Crowston. 2011. From
Conservation to Crowdsourcing: A Typology of Citizen
Science. In Proceedings of HICSS ’11. 1–10.