Abstract

This paper draws upon the experience of several years of running a multi-application crowdsourcing platform, as well as a longitudinal evaluation of participant profiles, motivations and behaviour, to argue that heritage crowdsourcing cannot straightforwardly be considered a democratising form of cultural participation. While we agree that crowdsourcing helps expand public engagement with state-funded activities such as those of Galleries, Libraries, Archives, and Museums, we also note that, both in our own experience and in other projects, the involved public cohort is not radically different in socio-demographic make-up from the one that physically visits such institutions, being, for example, financially better-off and highly formally educated. In shedding light on issues of participation and cultural citizenship through a theoretically and empirically rich discussion, this paper assesses the current impact of heritage crowdsourcing, in terms of both its strengths and weaknesses. The study will also be useful for cultural heritage policy and practice, museum management and curatorship, potentially guiding the choices and strategies of funders and organisations alike.
Participation in heritage crowdsourcing
Accepted version; pre-print manuscript for Museum Management and Curatorship.
Chiara Bonacchi, Andrew Bevan, Adi Keinan-Schoonbart, Daniel Pett, Jennifer Wexler
1. Introduction
1.1. Research problem
Studies of cultural participation in the UK today show that certain social demographics
remain very detached from museums and galleries (Bennett et al., 2009). Given that such
places have long been powerful but problematic symbols of western culture, ever since their
first emergence during the Renaissance, it remains striking that both museums and galleries
have, on the whole, failed to engage ethnic minorities and people in lower socio-economic
groups (Sandell, 2002). For example, despite policies to ensure free entrance to state-
funded museums, variables such as high socio-economic background, a university degree
and/or a professional occupation have, together with ethnicity, all remained good predictors
of high levels of participation in cultural activities (Neelands et al., 2015). Similar trends have
been noted in the United States, where the attendance of financially disadvantaged groups
and minorities at public art museums has been declining, despite the fact that US society
overall is increasingly multicultural and ethnically diverse (Blackwood and Purcell, 2014).
These UK and US findings are also consistent with results obtained by previous surveys
worldwide (DiMaggio et al., 1979; Merriman, 1989; National Endowment for the Arts, 1997;
Selwood, 2002; La Regina, 2009; Taylor, 2016), with cultural factors proving stronger than
structural ones in determining disengagement (Merriman, 1989). Indeed, commentators
have stressed that part of the problem of representation in cultural participation relates to the
fact that what is being measured is typically footfall at state-funded venues (Stevenson et al.,
2015:100-1). In contrast, some argue that, if policies were developed to better support and
monitor everyday cultural activities (e.g. woodworking, DIY, gardening, etc.), these would
directly benefit more people beyond well-off, white individuals with tertiary education.
Furthermore, re-thinking the ways in which policies are designed, implemented and
evaluated would help in actualising cultural democracy in a more convincing way (Taylor,
2016; Hadley and Belfiore, 2018).
Bennett et al. (2009) explain the under-representation of certain groups at museums, galleries
and stately homes as also linked to the fact that these are forms of active public
participation, which studies of social capital have consistently shown to be less common
amongst lower socio-economic groups (e.g. Hall, 1999; Li et al., 2003; Warde et al., 2003).
In this article, we ask what happens when engagement with museum, gallery, and archive
content and materials is instead conducted in the private space of the home or the office,
thanks to the application of crowdsourcing. Does this make participation more diverse or
does the important social problem of selective under-representation persist?
1.2. Research context: heritage crowdsourcing
The term 'crowdsourcing' was first coined by Jeff Howe in a seminal article for Wired magazine (Howe, 2006), combining the words 'crowd' and 'outsourcing' to refer to the commercial practice of distributing paid labour amongst individuals located in geographically disparate regions of the globe. Outside of the commercial world, however, the meaning and
mission of this practice have changed to adapt to the remit of not-for-profit activities. In most
cases, this has entailed not offering monetary rewards in exchange for the completion of
crowdsourced tasks (Ridge, 2013), with user motivations instead often being primarily
intrinsic. For the purpose of this article, we will define heritage crowdsourcing as the
creation, digitisation, enhancement, analysis and interpretation of cultural heritage objects
(including data) and places by relatively large groups of people, through the completion of
individually small tasks over the Internet.
In the last decade, crowdsourcing has been increasingly explored by higher education
institutions, Galleries, Libraries, Museums and Archives (GLAMs) and heritage organisations
(Dunn and Hedges, 2012; Ridge, 2013, 2014; Terras, 2016). Some of these crowdsourcing
undertakings have been set up by a specific institution as standalone endeavours or as a
series of projects relating to the collections held by that institution. This is the case for
several initiatives led by GLAMs such as the British Library, the New York Public Library, the
National Library of Australia, or the Smithsonian Transcription Centre, and by a few single-
purpose research endeavours such as Field Expedition: Mongolia and GlobalXplorer
(Holley, 2010; Lascarides and Vershbow, 2014; Lin et al., 2014; Parilla and Ferriter, 2016;
Ridge, 2018; Yates, 2018). A second group of projects has instead been hosted by
thematically focused or multi-topic crowdsourcing websites, as in the case of Zooniverse or
MicroPasts (Lintott et al., 2008; Bevan et al., 2014). These platforms have created an array
of crowdsourcing templates to perform tasks of different kinds, ranging from the transcription
of archival documents to the public indexing of photographs and videos through social
tagging. They have also become spaces that bring together different institutions, cultural
heritage objects and task types. It is with this second category that our article is concerned,
via the case study of MicroPasts (see 2.).
In the next section, we introduce MicroPasts, which is to our knowledge the first website
hosting multiple types of heritage crowdsourcing, and we explain how this project has helped
to create a distributed and primarily ‘private’ kind of cultural participation. MicroPasts placed
a considerable, publicly-acknowledged emphasis on evaluating participant profiles,
motivations and behaviour and a following section examines how the socio-demographic
dimensions of MicroPasts participants compare to those of people who take part in state-
funded heritage activities via physical visitation; it explores what the democratising power of
heritage crowdsourcing could be, and how this method can help museums, galleries and
other state-funded institutions to diversify participation.
2. Methodology
2.1. MicroPasts: a case study of heritage crowdsourcing
MicroPasts was established in 2013 by a team of researchers at the UCL Institute of
Archaeology and the British Museum (with coordination also now provided by researchers at
the University of Cambridge Fitzwilliam Museum and the University of Stirling), thanks to
funding from the UK Arts and Humanities Research Council (Bevan et al., 2014; Bonacchi et
al., 2014, 2015a, 2015b; Wexler et al., 2015). Its aim has been to leverage crowd- and other
web-based methods to create and study collaborations between citizens inside and outside
heritage organisations, in order to co-produce data and knowledge about the human past
and its contemporary reception. The MicroPasts crowdsourcing website
(crowdsourced.micropasts.org) has been developed using Free and Open Source Software
and particularly an existing citizen science framework known as Pybossa (Scifabric, 2018),
which supports sophisticated task scheduling, a registered user base and a wide variety of
different applications. Sometimes building on existing Pybossa templates, sometimes
creating new ones, MicroPasts has developed a series of modular applications each tuned
to enable collective capture of a specific kind of open data (e.g. 3D models, transcriptions,
photo tags), and this modularity enables easy adaptation to different archaeological and
heritage collections. Task redundancy and a mix of automatic and expert quality assessment
have been the main mechanisms used to ensure that the resulting crowdsourced data is of
good research quality (Allahbakhsh et al., 2013; Daniel et al., 2018). This means that each
crowdsourcing task (e.g. transcription, tagging, etc.) is completed by two or more
participants, depending on the application template, and is subsequently consolidated by an
experienced participant and/or a ‘traditional’ researcher. Raw, crowdsourced and verified
data is released (immediately, as it is produced) via the website under a Creative Commons
license (primarily as CC-BY) and the source code under a General Public License.
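The redundancy and consolidation mechanism described above can be illustrated with a minimal sketch. This is not MicroPasts' actual code: the exact-match majority vote and the `min_agreement` parameter are simplifying assumptions standing in for the platform's mix of automatic and expert quality assessment.

```python
from collections import Counter

def consolidate(contributions, min_agreement=2):
    """Merge redundant submissions for a single crowdsourcing task.

    contributions: answer strings submitted by different participants
    for the same task (e.g. transcriptions of one object card).
    Returns (answer, needs_review): the majority answer, plus a flag
    that routes the task to an experienced participant or researcher
    when agreement among contributors is too low.
    """
    # Normalise trivially different answers before comparing them.
    normalised = [c.strip().lower() for c in contributions]
    answer, votes = Counter(normalised).most_common(1)[0]
    return answer, votes < min_agreement

# Three participants transcribe the same (invented) object card:
result = consolidate(["Bronze axe, Wiltshire",
                      "bronze axe, wiltshire",
                      "Bronze adze"])
print(result)  # ('bronze axe, wiltshire', False)
```

In practice transcriptions rarely match exactly, so a real pipeline would use fuzzier string comparison before any vote; the flagged tasks in this sketch correspond to the entries that, on MicroPasts, were resolved by experienced volunteers and researchers.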
To date, the majority of the crowdsourcing applications powered by MicroPasts have consisted
of transcription and 3D photo-masking and, in the first two years, they focused especially on
Bronze Age museum collections. During this initial phase, the project transcribed and
georeferenced the National Bronze Age Index, an archive of about 30,000 cards
documenting prehistoric metal artefacts found in Britain during the 19th and early 20th centuries
(Pett et al., 2014; Wexler et al., 2015; Figure 1). Participants online were also invited to help
with drawing outlines of objects on sets of photos of those objects to mask out the
background (Lombraña González et al., 2014; Figure 2). This task was designed to facilitate
and speed up the 3D modelling of artefacts via photogrammetric approaches that otherwise
proceeded offline (Bevan et al., 2014). Over time, MicroPasts has also enabled quite
different applications often on a more one-off experimental basis, such as those for the
public indexing of historical photographs, the testing of object description vocabularies via
tagging (e.g. Keinan-Schoonbaert et al., 2014), content analysis through video-tagging or
sound track transcription, while also adapting and using existing transcription templates for
the digitisation of structured and unstructured textual data. This varied crowdsourcing
landscape has been nurtured through collaborations with 20 other museums, archives,
libraries, heritage groups and university-led research projects in the UK, US, Italy and
Germany (Table 1).
Figure 1. Screenshot showing the functioning of the transcription and geo-referencing
application used to crowdsource the digitisation of the National Bronze Age Index.
Figure 2. Screenshot showing the functioning of the 3D photo-masking application.
Table 1. Organisations involved with MicroPasts beyond the founding institutions (UCL and
the British Museum).
Country where the institution/project lead is based, and the organisations and projects involved:

UK: Petrie Museum; Ancient Identities in Modern Britain project; Society of Antiquaries of London; New Forest National Park Authority; Portable Antiquities Scheme; Egypt Exploration Society; Postcard to Palmyra project; Archaeology Data Service; Sainsbury Institute for the Study of Japanese Arts and Cultures; The Impact of Evolving Rice Systems from China to Southeast Asia; Mary Rose Trust; Palestine Exploration Fund
US: Montpellier Foundation; Project Andvari; American Numismatic Society; Minnesota Historical Society; Denver Museum of Nature and Science
Italy: Museo Egizio; Museo Multimediale del Regno di Arborea
Germany: University of Munich
Although MicroPasts started by enabling a strongly scaffolded and contributory kind of
participation, where people were invited to complete rather mechanical crowdsourcing tasks
(e.g. for transcription, geo-referencing, photo-masking and tagging, etc.), it thereafter
evolved to encompass collaborative and co-creative levels of participation as well (Simon,
2010). So far, MicroPasts has mainly enabled projects designed by museums, archives and
teams of academic researchers, but, over the years, citizen collaborators have been
encouraged to suggest ways of taking forward the MicroPasts project and platforms and of
improving workflows and procedures. As a result, some of the volunteers also started to
undertake new activities they had proposed and which required additional skills and deeper
engagement at the interface between the generation and interpretation of cultural heritage
data. In particular, a small group of eleven participants learnt the full online and offline
pipeline to build 3D models, while half a dozen people (in some cases coinciding with the
first group of 3D modellers) also helped validate transcribed entries before these were finally
approved by a researcher based in the relevant partner organisation. In this paper, however,
we will examine only the contributory level of participation, consisting of the submission of
crowdsourcing tasks, given the small number of those involved in collaborative and
co-creative efforts.
2.2. Evaluation and research methods
As heritage crowdsourcing initiatives have emerged, researchers have tried to better
understand their users and the interests and values behind their participation. This body of
research, primarily grounded in individual project evaluations, has, however, been
conducted mainly via end-of-project surveys or cross-sectional formative investigations
(Holley, 2010; Causer and Wallace, 2012; Causer and Terras, 2014; Ridge, 2014). To our
knowledge, heritage crowdsourcing has never been evaluated longitudinally with a view to
understanding participant profiles, motivations and behaviour, as well as the scope, potential
and limitations of participant representation.
With the latter goals in mind, and adopting a longitudinal, mixed quantitative and qualitative
approach, MicroPasts collected behavioural data from the records of completed tasks, as
well as socio-demographic and attitudinal information from pop-up surveys that were coded
to appear, respectively, after the submission of the 1st and of the 25th crowdsourcing task.
The results from the short survey undertaken after the first task cover: (1) how the
contributor found out about MicroPasts; and (2) whether they were working with
archaeology or history as part of their main job (i.e. the job done for a living).
survey, appearing after the submission of the 25th crowdsourcing task aimed to question
more heavily-engaged participants who had been involved with MicroPasts far beyond a
single, first encounter. The threshold of twenty-five was decided after evaluating the possible
cut-off points in plots that displayed the ranking of participants based on their number of
completed tasks, as recorded two months after the public launch of the crowdsourcing
platform (see Bonacchi et al., 2014).1 The results of this second survey cover: (1) the
contributor’s reasons for participating in the specific crowdsourcing application to which the
survey was linked, and (2) any other crowdsourcing applications the participant might have
been involved in; (3) the highest level of formal education attained; (4-5) the participant’s
occupation and, if in employment, job title; and (6) the country where the contributor was
living (see Appendix 1 for the full questionnaires). Obviously, the above choices about the
number of surveys and the depth of their questioning reflect a balancing act in terms of
asking yet more of people already volunteering their time online, and in particular we judged
that it might be potentially off-putting and disruptive to inquire about ethnicity or income, and
decided not to include questions on these measures.
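The kind of cut-off inspection described above, i.e. ranking participants by number of completed tasks and looking for a natural break in the curve, can be sketched numerically as follows. The 'largest relative drop' heuristic and the sample counts are illustrative assumptions, not the authors' actual procedure, which relied on visual evaluation of the plots.

```python
def knee_cutoff(task_counts):
    """Suggest a task-count threshold from a long-tailed distribution
    by locating the largest relative drop between adjacent ranks: a
    crude numerical stand-in for eyeballing a rank-size plot.
    Assumes all counts are positive."""
    ranked = sorted(task_counts, reverse=True)
    drops = [(ranked[i] - ranked[i + 1]) / ranked[i]
             for i in range(len(ranked) - 1)]
    i = max(range(len(drops)), key=drops.__getitem__)
    # Counts at or below this value belong to the casual tail.
    return ranked[i + 1]

# Hypothetical participation: a few heavy contributors, many fleeting ones.
counts = [900, 400, 120, 30, 6, 5, 3, 2, 2, 1, 1, 1]
print(knee_cutoff(counts))  # 6
```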
Social survey literature (Bryman, 2012) suggested to us that questioning participants while
they were still connected to the platform, rather than at the end of the whole project, would
increase the reliability of answers to questions about their interaction with the applications,
the overall response rate, the response rate to open-ended questions and the richness of
the responses to those questions. As previously mentioned, prior evaluations of heritage
crowdsourcing based on social surveys were cross-sectional and, as such, allowed for the
gathering of only relatively small numbers of responses. Over the period from 16 April 2014 to
27 April 2017, during which the evaluation took place, it was possible to collect 853 task-1
survey responses and 56 task-25 survey responses.
1 At the time when the decision was made, the median number of crowdsourcing tasks submitted oscillated
between two and five across the applications available at the time (Bonacchi et al., 2015b).
3. Results
3.1. Participant representation
MicroPasts’ outreach efforts were aimed at two main groups of people, consisting of
communities of interest in the past that were already established offline (e.g. archaeological
and historical societies in Britain, the metal detectorists working with the Portable Antiquities
Scheme, existing British Museum audiences, students and academics at UCL and at other
universities), and a wider and unknown online ‘crowd’ of geographically dispersed
collaborators. The outreach campaign was undertaken through social media, blogs, online
magazines and newspapers, as well as more targeted emails and a few face-to-face
presentations to selected groups. As already noted by other crowdsourcing projects, online
publicity was more effective in attracting a higher number of contributors than any other kind
of tailored communication (Causer and Wallace, 2012).
Table 2. How participants had heard of MicroPasts.
How they heard of MicroPasts | Count
Via British Museum people/websites/social media and/or Portable Antiquities Scheme | 166
Via UCL people/websites/social media and/or Portable Antiquities Scheme | 49
Via people/websites of another university (NOT University College London) | 39
I was told by someone who does NOT belong to any of the categories listed above | 55
Casually browsing the web | 115
From an online newspaper/magazine | 145
Via an archaeological/historical society | 45
As a result of this outreach, 1623 people had registered with MicroPasts by 30 April 2017,
though not all were necessarily active on it, in addition to anonymous collaborators whose
numbers we cannot estimate with complete accuracy but which are likely, based on IP addresses,
to be considerably larger than the registered group. Visits to the MicroPasts website have been
widespread internationally, but the number of those from the UK and US has been the
highest (Figure 3). Only a minority of individuals had heard of MicroPasts directly from the
founding institutions (the British Museum or the UCL Institute of Archaeology) and because
they were already engaged with them; the majority instead came to know of MicroPasts by
casually browsing the web and/or via online newspapers such as the Guardian, magazines
such as British Archaeology and Discovery, heritage or technology blogs and news sites
(e.g. hyperallergic.com, popsci.com, etc., Table 2). More engaged participants had attained
relatively high levels of formal education, with almost all of them having either a university
degree or a post-graduate degree (21 in each category, out of 49 respondents to this
question in the task-25 survey). Furthermore, the large majority of more engaged
contributors were either in employment or retired: these two groups together made up
88% of all task-25 survey respondents, with the remaining 12% being composed of
students, unemployed or stay-at-home participants. These findings are in line with the
results of cross-sectional evaluations of other heritage crowdsourcing projects (Causer and
Wallace, 2012; Eccles and Greg, 2014).
Not only did a large proportion of contributors consist of professionals, but the latter frequently
held jobs that related in some way to the activities and tasks proposed. Almost three quarters
of respondents to the task-1 survey (72%) were not working with history or archaeology as
part of the main job they did for a living (Figure 4). However, based on the job titles
mentioned by task-25 survey respondents, we know that many of the more engaged
participants were active in fields that spanned administration, database management,
accountancy, sales, transport, communication, publishing, IT, and sometimes the arts and
humanities.
Figure 3. (a) Screenshot from Google Analytics showing the geographic spread of visits to
the MicroPasts website; (b) top ten countries from which sessions originate.
Figure 4. Relevance of MicroPasts participants' main job to the fields of archaeology or
history. Breakdown of task-1 survey respondents.
3.2. The centrality of process and activity type
As of 30 April 2017, 1009 registered participants had been actively involved at a contributory
level on the MicroPasts crowdsourcing platform, as had a further group of anonymous users
whose number we estimate (from IP address uniqueness) to be considerably higher than that of
registered ones. In line with the findings presented in other published literature, the
distribution of this kind of participation exhibited a long tail, with a very small group of people
submitting the vast majority of the tasks and a high number of more fleetingly involved
participants (Figure 5, which confirms the trend already discussed in Bonacchi et al., 2014
and in Holley, 2010; Oomen and Aroyo, 2011; Corneli and Mikroyannidis, 2012: 284; Dunn
and Hedges, 2012: 12; Causer and Terras, 2014; Eccles and Greg, 2014). The trend can be
observed for participation in both transcription and photo-masking applications without marked
differences, as shown by the similar Gini coefficients for these two crowdsourcing task
types (Figure 6). The Gini coefficient and the Lorenz curve are measures of inequality frequently
applied in the social sciences and especially in economics (e.g. Fidan, 2016; Brown, 2017).
Here we use the Gini coefficient, i.e. the ratio of the area that lies between the line of
equality (the 45° line in Figure 6) and the Lorenz curve (the red curve in Figure 6) to the total
area under the line of equality: a metric where 1 expresses maximum inequality and 0 perfect equality.
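For readers who wish to reproduce this measure, the Gini coefficient of a set of per-participant task counts can be computed with the standard closed formula, which is equivalent to the area-ratio definition above (a generic sketch; the counts in the examples are invented, not MicroPasts data):

```python
def gini(task_counts):
    """Gini coefficient of per-participant task counts: 0 expresses
    perfect equality and 1 maximum inequality. Uses the closed form
        G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    over counts sorted in ascending order, which equals the ratio of
    the area between the line of equality and the Lorenz curve to the
    total area under the line of equality."""
    xs = sorted(task_counts)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Everyone did the same number of tasks vs. one dominant contributor:
print(round(gini([5, 5, 5, 5]), 3))   # 0.0
print(round(gini([1, 1, 1, 97]), 3))  # 0.72
```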
The numbers of participants engaged in transcription and photo-masking, the two
crowdsourcing application types that have been deployed since the beginning of MicroPasts, are
not significantly different (Table 3). However, participants mostly tended to engage with
either one or the other kind of task, and only 18% of them engaged with both types of
crowdsourcing activities (Table 4). This is particularly significant if we consider that, for the
first two years of the project, both transcription and 3D photo-masking tasks focused mostly
on the same collections, consisting of Bronze Age metal artefacts housed at the British
Museum and their object cards. The type of material that people engaged with, instead, had
less of an impact on the choice of the applications to participate in (Table 5).
Figure 5. Rank size plot for both photo-masking and transcription tasks as of 30 April 2017.
Figure 6. Lorenz curve (red) and Gini coefficient for photo-masking tasks (left) and
transcription tasks (right), calculated on tasks submitted as of 30 April 2017.
Table 3. Participants engaged with each crowdsourcing task type.2
Task type | Count | %
Participants engaged in photo-masking | 511 | 50.6
Participants engaged in transcription | 642 | 63.6
Participants engaged in tagging/classification | 115 | 11.4
Table 4. Participants engaged with 1-3 different task types.
Task-type breadth | Count | %
Participants who did only one type of task | 795 | 79
Participants who did 2 types of task | 182 | 18
Participants who did all 3 types of task | 32 | 3
Table 5. Participants engaged with different types of heritage assets: text-based or visual
and audio-visual assets.
2 It should be noted that, as underlined before, photo-masking and transcription are the types of
crowdsourcing application that were deployed most consistently and substantially since the
public launch of the MicroPasts project.
Asset-type engagement | Count | %
Participants engaged with text-based applications only | 415 | 41
Participants engaged with visuals-based applications only | 339 | 34
Participants engaged with both text-based and visuals-based applications | 248 | 25
The insights provided by behavioural data regarding the centrality of activity type to the
appeal of crowdsourcing are reinforced by the qualitative analysis of participants’
motivations. Responses to the task-25 survey were examined with the aim of identifying the
smallest possible number of different motivational categories, of which eight stood out (Table
6). Certain very intrinsic kinds of motivation are prevalent (a general sense of helping out,
the sheer enjoyment of the experience, the act of giving back to an institution, etc.), proving
that process underlies several of the main reasons why people participate in heritage
crowdsourcing. Enjoying the proposed activity, contributing to knowledge production, and
‘helping out’ are all process-focused motivations. By comparison, a desire to engage with a
particular kind of heritage object or subject does not feature prominently, if at all. This result
is interesting in that it corroborates findings from the previous section, regarding a possible
link between activity types and contributorsprofessions.3
Table 6. Participant motivations, analysed from the responses given to the task-25 survey.
Motivational category, with example responses:

Learning about history and archaeology
- "An interesting way to learn a bit more about history & archaeology."
- "[…] I am learning a little bit about archaeology in the process."

Giving back to / connecting with an institution
- "Assisting the British museum as a thank you for visiting out metal detecting club (Trowbridge)"
- "Volunteering in return for help/support received from the FLO for Birmingham and Staffs."

Interest and curiosity
- "A basic fascination with history"
- "curiosity"
- "I am a Celtic Artist with a degree in Anthropology, and I find the work interesting."

Skill building or career development
- "Experience for a future career in Ancient History and Archaeology, as I am currently studying a part-time BA (Hons) degree in Classical Studies. I am also between modules at the moment, so I have the time to dedicate to this project."

Enjoyment
- "Find this fascinating, addicted and worthwhile"
- "Fun"
- "I enjoy it. I am learning a little bit about archaeology in the process."
- "It's oddly relaxing"

Helping out
- "Helping a project"
- "[…] I am happy to help with the projects"
- "I just want to help."

Contributing to knowledge production
- "I have been interested in archaeology since I was a child. This is a chance for me to participate in increasing the availability of knowledge for scholars. I was also a fan of 'Time Team'!"
- "To help contribute towards greater scientific knowledge"
- "[…] help out with the historical research"

Identity and self-definition
- "Ancestors were English and Scottish (and American Indian)."

3 In future, and pending the availability of a sufficient number of responses to support the assessment of statistically significant relationships, it would also be helpful to investigate possible links between motivations for participating, on the one hand, and contributors' occupation, profession and location on the other.
4. Discussion
The analysis has, we would argue, demonstrated that crowdsourcing is often a highly
process-focused activity. It is primarily the kind of activity (transcription, photo-masking, or
tagging and classification, for example) that affects which crowdsourcing applications people
decide to engage with. Participants tend to choose a specific activity type consistently, and
there is some limited but thought-provoking evidence to suggest a link between a
contributor’s profession and their preferred type of crowdsourcing, even if more research
and analysis is clearly needed to confirm or falsify this latter hypothesis. Conversely, a
desire to connect with a certain institution or a personal interest in a particular collection or
asset type (text, photo, video, sound) are less decisive in determining the kind of
crowdsourcing people engage with. In contrast, in his survey-based study of museum
visitation, Merriman (1989) showed that a specific or general interest in museum content
was the most frequently mentioned motivation for people’s physical visitation of museums.
As a process-focused activity that is mostly undertaken on one’s own and often at home, we
might expect heritage crowdsourcing to appeal to a cohort of participants with similar socio-
demographics to individuals engaged in everyday cultural activities such as DIY or crafts.
We could then imagine wider representation across the dimensions of education and
occupation, but this is not the case. A possible reason is that, in crowdsourcing, social
motivations expressed through volunteering or helping out behaviour are at the basis of the
engagement, and these recur more amongst those who have higher income and formal
education levels (Gil-Lacruz et al., 2017 for a synthesis of studies on the positive correlation
between income and education, and volunteerism). In fact, a study by Ponciano and
Brasileiro characterises the contributors at the head of the long-tailed distribution of
crowdsourcing participation as volunteers (those submitting the vast majority of the tasks, see section 3.1
above), and the long tail itself of more fleetingly involved contributors as engaged in 'helping out'
behaviour (Ponciano and Brasileiro, 2014). The first of these, volunteerism, characterises
people “usually actively seeking out opportunities to help others”, whereas the second,
‘helping out behaviour’, is “sporadic participation in which the individual is faced with an
unexpected request to help someone to do something” (Ponciano and Brasileiro, 2014:248).
Additionally, using a crowdsourcing web platform requires digital skills, adequate Internet
access, software and hardware to participate, and these are also less widespread amongst
the more economically vulnerable (Eynon, 2009; Simon, 2010). Whilst all these factors may
have a role, how they play out together is something that needs further empirical
investigation to be fully understood.
In the light of this study and supporting reflections by others, heritage crowdsourcing cannot
necessarily or always be defined as a democratising form of cultural engagement, but it can
help GLAMs and heritage places to reach some of those with whom they have not already
engaged. It can be of use, for instance, to initiate and sustain participation among people
who have neither a general nor a strong interest in the human past, heritage objects or
places, but who are very much interested in performing certain types of activities. Thus,
more dedicated efforts to publicise heritage crowdsourcing amongst activity-focused interest
groups (such as photographers, contributors to other crowdsourcing projects revolving
around similar task types, digital DIY communities, 3D modellers, etc.) would be a promising
means of effectively expanding participation in museum collections and content
beyond existing audiences, even though this may still be within a rather homogeneous, well-
educated and well-off group of participants. This 'expansion' is also unlikely to be
quantitatively large, since active participants (those who do most of the tasks and/or
register with named accounts, for example) in heritage crowdsourcing are not ‘crowds’ (see
section 3.1., also Ridge, 2013; Causer and Terras, 2014). Additionally, most of them are
involved only occasionally and lightly, leaving just a handful to complete most of the
proposed tasks. These numbers could be slightly more significant for smaller organisations,
also helping them to acquire visibility as a result of joining an existing, multi-project and
multi-institution platform that display stronger brands such as that of the British Museum.
The choice to join an existing multi-project platform, rather than setting up an additional and
standalone one, is strategic also because such platform offers more of the same types of
applications, albeit focussing on different collections.
In conclusion, this research shows the potential of crowdsourcing as a method for participatory heritage creation, enhancement and interpretation that museums, galleries, archives and libraries can adopt to involve people whose primary interests do not necessarily relate to GLAM collections or themes, but are instead strongly linked to the activities that crowdsourcing projects enable. In doing so, the article is the first to focus on the actual ‘civic’, albeit not always socially democratising, role of this web-based collaborative method: it can break the ‘fourth wall’ of existing audiences and include some of those who rarely interact with heritage via GLAM institutions. Our research thus highlights the utility and importance of embedding digitally-enabled cultural participation in museums. Proceeding in this direction will make it possible to create spaces that more convincingly integrate private and public, state-funded and everyday forms of participation. Over time, this is likely to help construct systems of heritage curation whose principles, priorities and operations are co-designed and more widely shared amongst the population.
References
Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., &
Dustdar, S. (2013). Quality Control in Crowdsourcing Systems: Issues and Directions. IEEE
Internet Computing, 17(2), 76-81. DOI: 10.1109/MIC.2013.20.
Bennett, T., Savage, M., Silva, E., Warde, A., Gayo-Cal, M., & Wright, D. (2009). Culture, Class, Distinction. Abingdon and New York: Routledge.
Bevan, A., Pett, D., Bonacchi, C., Keinan-Schoonbaert, A., Lombraña González, D., Sparks,
R., Wexler, J., & Wilkin, N. (2014). Citizen Archaeologists. Online Collaborative Research
about the Human Past. Human Computation, 1(2), 185-199. DOI: 10.15346/hc.v1i2.9.
Blackwood, A. & Purcell, D. (2014). Curating Inequality: The Link between Cultural
Reproduction and Race in the Visual Arts. Sociological Inquiry, 84(2), 238-263. DOI:
10.1111/soin.12030.
Bonacchi, C., Bevan, A., Pett, D., Keinan-Schoonbaert, A., Sparks, R., Wexler, J. & Wilkin, N. (2014). Crowd-sourced Archaeological Research: The MicroPasts Project. Archaeology International, 17, 61-68. DOI: 10.5334/ai.1705.
Bonacchi, C., Pett, D., Bevan, A., & Keinan-Schoonbaert, A. (2015a). Experiments in Crowd-funding Community Archaeology. Journal of Community Archaeology & Heritage, 2(3), 184-198. DOI: 10.1179/2051819615Z.00000000041.
Bonacchi, C., Bevan, A., Pett, D., & Keinan-Schoonbaert, A. (2015b). Crowd- and Community-fuelled Archaeology. Early Results from the MicroPasts Project. In: Giligny, F., Djindjian, F., Costa, L., Moscati, P. & Robert, S. (Eds.) CAA2014, 21st Century Archaeology.
Concepts, Methods and Tools. Proceedings of the 42nd Annual Conference on Computer
Applications and Quantitative Methods in Archaeology, Paris. Oxford: Archaeopress, pp.
279-288.
Brown, R. (2017). Inequality crisis. Bristol: Policy Press.
Bryman, A. (2012). Social Research Methods. Oxford: Oxford University Press.
Causer, T. & Terras, M. (2014). ‘Many Hands Make Light Work’: Transcribe Bentham and Crowdsourcing Manuscript Collections. In: M. Ridge (Ed.) Crowdsourcing Our Cultural Heritage. Farnham: Ashgate Publishing Limited, pp. 57-88.
Causer, T. & Wallace, V. (2012). Building A Volunteer Community: Results and Findings
from Transcribe Bentham. DHQ: Digital Humanities Quarterly, 6(2).
http://www.digitalhumanities.org/dhq/vol/6/2/000125/000125.html (accessed 23 August
2018).
Corneli, J. & Mikroyannidis, A. (2012). Crowdsourcing Education on the Web: A Role-Based
Analysis of Online Learning Communities. In: Okada, A., Connolly, T., Scott, P.J. (Eds.)
Collaborative Learning 2.0: Open Educational Resources. IGI Global. DOI: 10.4018/978-1-
4666-0300-4.
Daniel, F., Kucherbaev, P., Cappiello, C., Benatallah, B. & Allahbakhsh, M. (2018). Quality
Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and
Assurance Actions. ACM Computing Surveys, 51, 1-40. DOI: 10.1145/3148148.
DiMaggio, P., Useem, M. & Brown, P. (1979). Audience Studies of the Performing Arts and
Museums: A Critical Review. Washington, DC: National Endowment for the Arts, Research
Division, Report 9. https://www.arts.gov/publications/audience-studies-performing-arts-and-
museums-critical-review (accessed 23 August 2018).
Dunn, S. & Hedges, M. (2012). Crowd-Sourcing Scoping Study. Engaging the Crowd with Humanities Research. London: Centre for e-Research, Department of Digital Humanities,
King’s College London. https://www.nottingham.ac.uk/digital-humanities-
centre/documents/dunn-and-hedges-crowdsourcing.pdf (accessed 17 September 2017).
Eccles, K. & Greg, A. (2014). Your Paintings Tagger: Crowdsourcing Descriptive Metadata
for a National Virtual Collection. In: Ridge, M. (Ed.) Crowdsourcing our Cultural Heritage.
Farnham: Ashgate, pp. 185-208.
Eynon, R. (2009). Mapping the digital divide in Britain: implications for learning and
education. Learning, Media and Technology, 34(4), 277-
290, DOI: 10.1080/17439880903345874.
Eveleigh, A., Jennett, C., Blandford, A., Brohan, P. & Cox, A. L. (2014). Designing for dabblers and deterring drop-outs in citizen science. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). New York: ACM Press, pp. 2985-2994. DOI: 10.1145/2556288.2557262.
Fidan, H. (2016). Measurement of the Intersectoral Digital Divide with the Gini Coefficients: Case Study Turkey and Lithuania. Engineering Economics, 27(4). DOI: 10.5755/j01.ee.27.4.13882.
Gil-Lacruz, A.L., Marcuello, C. & Saz-Gil, I. (2017). Individual and Social Factors in Volunteering Participation Rates in Europe. Cross-Cultural Research, 51(5), 464-490. DOI: 10.1177/1069397117694135.
Hadley, S. & Belfiore, E. (2018). Cultural democracy and cultural policy. Cultural Trends,
27(3), 218-223. DOI: 10.1080/09548963.2018.1474009.
Hall, P. (1999). Social capital in Britain. British Journal of Political Science, 29, 417-461.
Haythornthwaite, C. (2009). Crowds and communities: light and heavy work of peer
production. In: C. Haythornthwaite and A. Gruzd (Eds), Proceedings of the 42nd Hawaii
International Conference on System Sciences. Los Alamitos, CA: IEEE Computer Society.
https://www.ideals.illinois.edu/handle/2142/9457.
Holley, R. (2010). Crowdsourcing: How and Why Should Libraries Do It? D-Lib Magazine,
16(3/4). https://doi.org/10.1045/march2010-holley.
Howe, J. (2006). The Rise of Crowdsourcing. Wired, 16 November 2006.
http://www.wired.com/2006/06/crowds/ (accessed 18 September 2017).
La Regina, A. (2009). L’archeologia e il suo pubblico. Firenze: Giunti.
Lascarides, M. & Vershbow, B. (2014). What’s on the Menu? Crowdsourcing at the New York Public Library. In: M. Ridge (Ed.) Crowdsourcing Our Cultural Heritage. Farnham: Ashgate Publishing Limited, pp. 113-137.
Li, Y., Savage, M. & Pickles, A. (2003). Social capital and social exclusion in England and
Wales 1972-1999. British Journal of Sociology, 54(4), 497-526. DOI:
10.1080/0007131032000143564.
Lintott, C.J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M.J.,
Nichol, R.C., Szalay, A., Andreescu, D., Murray, P., & Vandenberg, J. (2008). Galaxy Zoo:
morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey.
Monthly Notices of the Royal Astronomical Society, 389(3), 1179-1189. DOI: 10.1111/j.1365-
2966.2008.13689.x.
Keinan-Schoonbaert, A., Pett, D., Lombraña González, D. & Bonacchi, C. (2014). MicroPasts/MicroPasts-Horsfield: The George Horsfield Archive crowdsourcing code base first release (Version 1.0). Zenodo. DOI: 10.5281/zenodo.1405468.
Lombraña González, D., Pett, D. & Bevan, A. (2014). MicroPasts/photomasking: Source code for creation of photomasking in Pybossa (Version 1.0). Zenodo. DOI: 10.5281/zenodo.1405475.
Merriman, N. (1989). Museum visiting as a cultural phenomenon. In: P. Vergo (Ed.) The New Museology. London: Reaktion Books, pp. 149-171.
National Endowment for the Arts (1997). Demographic Characteristics of Arts Attendance.
Washington, DC: National Endowment for the Arts, Research Division, Report 71.
https://www.arts.gov/publications/demographic-characteristics-arts-attendance-2002
(accessed 23 August 2018).
Neelands, J., Belfiore, E., Firth, C., Hart, N., Perrin, L., Brock, S., & Woddis, J. (2015). Enriching Britain: culture, creativity and growth. Warwick: University of Warwick. https://warwick.ac.uk/research/warwickcommission/futureculture/finalreport/warwick_commission_report_2015.pdf (accessed 23 August 2018).
Oomen, J. & Aroyo, L. (2011). Crowdsourcing in the cultural heritage domain: opportunities and challenges. In: Proceedings of the 5th International Conference on Communities and Technologies (C&T ’11). New York: ACM Press, pp. 138-149. DOI: 10.1145/2103354.2103373.
Parilla, L. & Ferriter, M. (2016). Social Media and Crowdsourced Transcription of Historical
Materials at the Smithsonian Institution: Methods for Strengthening Community Engagement
and Its Tie to Transcription Output. The American Archivist, 79(2), 438-460. DOI:
10.17723/0360-9081-79.2.438.
Pett, D., Lombraña González, D. & Bevan, A. (2014). MicroPasts/bronzeAgeIndex: Source code for transcription of the NBAI via Pybossa (Version 1.0). Zenodo. DOI: 10.5281/zenodo.1405477.
Ponciano, L. & Brasileiro, F. (2014). Finding Volunteers’ Engagement Profiles in Human Computation for Citizen Science Projects. Human Computation, 1(2), 247-266. DOI: 10.15346/hc.v1i2.12.
Ridge, M. (2013). From Tagging to Theorizing: Deepening Engagement with Cultural
Heritage through Crowdsourcing. Curator: The Museum Journal, 56(4), 435-450. DOI:
10.1111/cura.12046.
Ridge, M. (2014). Crowdsourcing Our Cultural Heritage: Introduction. In: M. Ridge (Ed.)
Crowdsourcing Our Cultural Heritage. Farnham: Ashgate Publishing Limited, pp. 1-16.
Ridge, M. (2018). Crowdsourcing at the British Library: lessons learnt and future directions.
Conference paper, Digital Humanities Congress. https://www.dhi.ac.uk/dhc/2018/paper/148
(accessed 18 August 2018).
Sandell, R. (2002). Museums, Society, Inequality. London and New York: Routledge.
Scifabric (2018). Pybossa. Available from: https://github.com/Scifabric/pybossa (accessed
28 August 2018).
Selwood, S. (2002). Audiences for contemporary art: assertion vs evidence. In: C. Painter (Ed.) Contemporary Art and the Home. Oxford: Berg.
Simon, N. (2010). The participatory museum. Santa Cruz, California: Museum 2.0.
Stevenson, D., Balling, G., & Kann-Rasmussen, N. (2015). Cultural participation in Europe:
shared problem or shared problematisation? International Journal of Cultural Policy, 23(1),
89-106. DOI: 10.1080/10286632.2015.1043290.
Taylor, M. (2016). Nonparticipation or different styles of participation? Alternative
interpretations from Taking Part. Cultural Trends, 25(3), 169-181. DOI:
10.1080/09548963.2016.1204051.
Terras, M. (2016). Crowdsourcing in the Digital Humanities. In: S. Schreibman, R. Siemens, & J. Unsworth (Eds.) A new companion to digital humanities. Chichester: John Wiley & Sons Inc., pp. 420-439.
Warde, A., Tampubolon, G., Longhurst, B., Ray, K., Savage, M., & Tomlinson, M. (2003).
Trends in social capital: membership of associations in Great Britain, 1991-98. British
Journal of Political Science, 33, 515-534. DOI: 10.1017/S000712340321022X.
Wexler, J., Bevan, A., Bonacchi, C., Keinan-Schoonbaert, A., Pett, D., & Wilkin, N. (2015).
Collective Re-Excavation and Lost Media from the Last Century of British Prehistoric
Studies. Journal of Contemporary Archaeology, 2(1), 126-142. DOI: 10.1558/jca.v2i1.27124.
Yates, D. (2018). Crowdsourcing Antiquities Crime Fighting: A Review of GlobalXplorer. Advances in Archaeological Practice, 6(2), 173-178. DOI: 10.1017/aap.2018.8.