Conference Paper

Abstract

The goal of the manuscript is to look into different interdisciplinary incentive mechanisms for participation processes, with a focus on immersive technologies like AR, aspects of personality types, and location-based access to relevant information.
Assistive Spatial Civic Participation “Take Part”
Greta Hoffmann*)
Tim Straub*)
Claudia Niemeyer*)
Simon Kloker*)
Tom Zentek**)
Jella Pfeiffer*)
*) Institute of Information & Market Engineering (IISM), Karlsruhe Institute of Technology
(KIT), Germany
**) FZI Research Center for Information Technology, Germany
Keywords: Assistance Systems, Gamification, Participation, Crowdsourcing
Digital participation is more present than ever. With the rise of Web 2.0 in the early 2000s, users turned from mere content consumers into active prosumers (Knorr, 2003). Since then, many concepts have been developed that involve users in different ways. For the purposes of this paper, relevant literature can especially be found on crowdsourcing via the Amazon Mechanical Turk marketplace (Mason and Suri, 2012; Wang, Ipeirotis and Provost, 2013; Straub et al., 2014a; Straub et al., 2014b; Straub et al., 2015), crowdfunding and participatory budgeting (Hammon and Hippner, 2012; Niemeyer et al., 2016), and political participation (Hall et al., 2013). However, while digital civic participation has been hailed as a savior for deliberative democracies, it has recently been criticized for only marginally fulfilling its many promises (Davis, 2010). In the end, online participation is subject to the so-called attention economy, i.e., it competes for users' attention with many other offerings (Albrecht, 2006). One of the biggest challenges in online collaboration remains finding incentives that attract large crowds comprising all facets of society (Shaw et al., 2011; Wang et al., 2013; Straub et al., 2015). Thus, research should focus on the new opportunities enabled by technology that facilitate new actions, such as the promotion, investigation and discussion of political material (Lilleker and Koc-Michalska, 2013). This would make for non-traditional forms of participation, creating a more vibrant and non-hierarchical, yet also more chaotic, political communication environment (Chaffey et al., 2009).
In recent years, several big-budget architectural projects, not only in Germany (Stuttgart 21, the Berlin airport, the Elbphilharmonie) but worldwide (the World Cup-related building measures in Brazil, the Olympic Games in Rio), were in the news for the huge opposing demonstrations they provoked (SPIEGEL ONLINE, n.d.-a, n.d.-b, n.d.-c; Berliner Morgenpost, n.d.; heute-Nachrichten, n.d.). And while these projects managed to overcome the obstacle of attracting large crowds, most people in these examples only raised their voices when it was too late, even though information about these projects was publicly available beforehand. This
shows that digital technology, while powerful in terms of communication and reach, can harm public participation processes instead of benefiting them if it is not channeled into projects that encourage early public debate, provide information material at relevant locations, and incentivize public decision-making processes (e.g., voting). Looking at other fields of online collaboration, e.g., crowdsourcing (Shaw et al., 2011; Wang et al., 2013), we can already find examples of successful implementations.
The goal of this paper is to look into different interdisciplinary incentive mechanisms, with a focus on aspects of personality types, enhancement possibilities through technological advancement, and quicker access to relevant information. Thus, the following research questions aim to determine possible directions for combined future research.
How should participatory incentives be designed to encourage different personality types in public decision processes?
What kinds of visualization and presentation types can assist an in-depth understanding of the problem as well as of potential outcomes?
What mechanisms of e-participation can help to generate consensus-based decisions in civic participation processes?
One recent trend that has proved to motivate and attract large crowds of people has emerged through mobile augmented reality applications like Pokémon Go and Ingress (more than 50 million downloads on Android alone in the first 19 days; TechCrunch, n.d.). Augmented reality (AR) supplements the real-world environment with sound, video, and graphics based on sensor data, such as GPS and gyroscope readings, and context-sensitive user profiles. Due to the widespread distribution of mobile
devices like smartphones or tablets most AR applications are still developed for these platforms,
however, development is now leaning towards wearables, focusing on smart glasses (e.g., Google Glass, Vuzix, and Microsoft HoloLens). We believe that AR technology can be used
as a powerful tool towards winning a user base for participatory contexts where other
approaches have struggled. Not only does its simulation capability allow for deeper immersion
through enhanced visualization possibilities, it can also illustrate specific consequences of
decisions on the spot, thus shortening the timespan between question and answer, making it an
ideal tool for enhancing voter participation. And while AR and VR technology could very successfully foster the promotion and investigation of political material, the further implementation of gamification elements could additionally incentivize actual participation, encouraging players by different means to engage in the voting process (Mekler et al., 2013; Hamari et al., 2014).
By integrating new mechanisms and approaches such as gamification (Deterding, 2012), public
decision making processes like voting (Teubner et al., 2015, Niemeyer et al., 2016),
participatory budgeting (Niemeyer et al., 2016) and AR assistance systems (Pfeiffer et al.,
2015), public participation processes could overcome large parts of the challenges they are currently facing. Thus, we propose “Take Part”, a spatial mobile civic participation assistance system
that is intelligent and interactive (Maedche et al., 2016), to be one promising approach to do
this. In examples like the aforementioned monumental architectural project “Stuttgart 21” an
assistance system like the one we introduce in the next section of the paper could have prevented
big parts of the escalation by distributing the different aspects and benefits of this endeavor to
a much broader audience. It also would have contributed to building a communicational bridge
between the opinions and wishes of the citizens and the plans and rationales of the political
decision makers, thus leading towards a healthy participatory decision making process.
Take Part
In the process of implementing the introduced assistive system, the following design decisions
have been made to tackle certain problems mentioned earlier. To convey potentially complex topics to a broader audience, the project is designed to appeal to different layers of understanding (visual and auditory as well as textual and factual), thus allowing for less biased attitudes. By presenting the information at relevant decision-making moments at the specific location, the interest of passing citizens can be sparked and channeled into an immediate voting process, thus lowering the perceived participation load. A lack of attachment towards less relatable topics can be overcome by inducing feelings of relation and empathy towards the problem at hand (e.g., by adding a storyline or comprehensive, visually tangible consequences), while a lack of information as well as interest can be tackled by on-demand, on-location information, including the visualization of different possible outcomes (e.g., through a location-based notification system). Especially recurring and long-term participation can be fostered
with the help of gamified design elements. Thanks to the wide spread of the necessary technology (in general, any web-enabled technology; for the spatial features, any mobile device), a large group of citizens can be included in the decision-making process without fear of exclusion
or favoritism. Building on design guides from the field of user experience design (Garrett,
2010), the application will be designed with a strong focus on easy access as well as high
usability. Thus, by overcoming the entry barriers to participating in the decision making of public projects, such a spatial civic participation assistance system could strongly contribute towards more civic involvement, leading to deeper consensus and wiser decisions. Furthermore, awareness of a much broader range of public projects (like new bike lanes, parks, parking spots, …) can be spread and raised by using the location-based notification system.
Lastly, one of the most crucial benefits of using a mobile assistance system is possibly a significant shortening of voting processes. By allowing for immediate, on-location voting, participation impediments should be considerably lower, while participative incentives could be fostered immensely, as described above.
(For exemplary mock-ups, please see the appendix.)
The process of assisting participatory decisions could be as follows (see Figure 1):
Figure 1- Process "Take Part" Platform
(i) A public institution (e.g., city, state, country, agency) decides on several projects which could potentially be realized or on which it wants a public opinion.
(ii) One or several public servants enter the project data by filling out a digitally provided form including the project name, a two-sentence description, the realization period, spatial data (1. where is the project going to happen; 2. where should users be notified), and potentially regional voting limitations (to prevent social influence biases and herding effects, i.e., “Boaty McBoatface” situations; Lorenz et al., 2011; Muchnik et al., 2013; T. Wang et al., 2014), as well as further data that can be depicted on demand, like short descriptive videos, 3D models, data visualizations, facts, and related literature as well as source material. Optionally, a data designer could be hired to present the data to the relevant target audience in an appropriate way.
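As a minimal sketch, the form data described in step (ii) could map onto a record like the following. All field names, types, and validation rules here are our own illustration, not part of any existing platform:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeoPoint:
    lat: float
    lon: float

@dataclass
class Project:
    name: str
    description: str            # the two-sentence summary shown first
    realization_period: str     # e.g., "2025-2027"
    site: GeoPoint              # 1. where the project is going to happen
    notify_radius_m: float      # 2. where users should be notified
    voting_region: Optional[str] = None             # regional voting limitation, if any
    media: List[str] = field(default_factory=list)  # videos, 3D models, facts, sources

def validate(p: Project) -> List[str]:
    """Return a list of problems; an empty list means the form is complete."""
    problems = []
    if not p.name:
        problems.append("project name missing")
    if len(p.description.split(".")) > 3:
        problems.append("description should be at most two sentences")
    if not (-90 <= p.site.lat <= 90 and -180 <= p.site.lon <= 180):
        problems.append("invalid coordinates")
    return problems
```

Such a validation step would let the platform reject incomplete entries before a project ever reaches the public-facing display.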
(iii) The entered data is depicted on an online platform. Users can access and consult the platform via any browser at home and participate in the voting processes via their login. With the help of mobile AR systems (Google Glass, Daydream), extensive information in the form of comprehensive visual representations and overlays is accessible. Registration should be handled with consideration of identity management. This could be done with tools like passport verification, other identity verification services such as CIP or KYC, or digital-chip verification to prevent voter fraud.
(iv) Apart from the online platform users can download a cross-platform app for mobile devices
that allows for additional transmission of spatial data. As soon as the users enter the vicinity of
the specified location, they receive a notification.
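The location trigger in step (iv) boils down to a geofence check against the project's specified notification area. A minimal sketch, assuming a great-circle distance on WGS84 coordinates and a notify-once policy (both our assumptions, not a specification of the platform):

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_notify(user_pos, project_pos, radius_m, already_notified):
    """Trigger a notification once when the user enters the project's vicinity."""
    inside = haversine_m(*user_pos, *project_pos) <= radius_m
    return inside and not already_notified
```

In practice, a mobile app would delegate this to the operating system's geofencing API to save battery; the check above only illustrates the underlying logic.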
(v) They can now access the data of the potential project, including visual data like AR
representations (e.g., 3D-projections), graphics and architectural models; informational
material like numbers, charts, and articles; or social material like discussion forums and FAQs. The first page will only give a very short overview, including the voting deadline, to keep the cognitive load as low as possible. Depending on their preferred mode of information reception, users can either choose the type of data in the moment or select a preference in the settings beforehand. (The mentioned “Boaty” situations refer to a crowd-voting initiative of the Natural Environment Research Council that resulted in their new boat being named “Boaty McBoatface”; this example shows that crowds sometimes need restrictions in order to secure quality.)
(vi) If they are registered on the platform, they can immediately continue to the voting process, voting either for or against the realization of the project, or declining to vote, in the sense of crowd-voting (Prpić et al., 2015) (on refusal, they have the opportunity to select a reason like “not interested”, “not well enough informed”, …).
In a more complex but also more informative version, the process could be threefold: 1. a short importance validation survey (“How much do you care about the outcome of this vote?”, on a scale of 1–5); 2. the actual vote (“Do you want this project to happen?”: “yes” / “no” / “don’t vote”); 3. a vote confirmation (“You voted to make this project happen” / “You voted to prevent this project from happening”, with “no, this was wrong, go back” written on top).
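The threefold process above can be sketched as a small flow; every name and validation rule here is an illustrative assumption rather than a defined interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ballot:
    project_id: str
    user_id: str
    importance: int            # step 1: "How much do you care?" on a 1-5 scale
    choice: str                # step 2: "yes", "no", or "abstain"
    abstain_reason: Optional[str] = None  # e.g., "not interested"
    confirmed: bool = False    # step 3: explicit confirmation

def cast_vote(project_id, user_id, importance, choice, abstain_reason=None):
    """Steps 1 and 2: record the importance rating and the actual vote."""
    if not 1 <= importance <= 5:
        raise ValueError("importance must be on a 1-5 scale")
    if choice not in ("yes", "no", "abstain"):
        raise ValueError("choice must be yes, no or abstain")
    if choice == "abstain" and abstain_reason is None:
        abstain_reason = "no reason given"
    return Ballot(project_id, user_id, importance, choice, abstain_reason)

def confirm(ballot: Ballot, accept: bool) -> Optional[Ballot]:
    """Step 3: the voter either confirms or goes back ('no, this was wrong')."""
    if accept:
        ballot.confirmed = True
        return ballot
    return None  # discarded; the voter restarts the process
```

Keeping the importance rating separate from the vote itself would also make the research questions on self-stated importance (see the section on potential problems) directly answerable from the stored data.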
(vii) Those who decided to vote one way or the other will be notified about the results of the
voting process as soon as the process is finished. Furthermore, the participants have the
opportunity to follow up on the next steps of the project if it is happening.
(viii) After the voting processes finish, the gathered data will be displayed publicly (visible for
everyone with or without registration) on the online platform as well as via notification in the
app and at the affected location. Through this, people will get notifications everywhere, which incentivizes them to visit the location. In addition, registered users visiting the location will be notified about the planned project, and hence incentivized to take part. The institutions/municipalities should state which projects will be realized and why, based on the gathered data, so as not to lose the trust of the users. On that note, possible implementations of self-set penalties for ignoring or disobeying the public will could be considered to raise trust and confidence among the user base and to ensure future participation.
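The public result display in step (viii) only needs aggregate counts, never voter identities. A minimal sketch of such a tally (the function name and the approval metric are our own illustration):

```python
from collections import Counter

def tally(votes):
    """Aggregate raw votes ("yes"/"no"/"abstain") into the published result.

    Only aggregate counts leave this function; voter identities stay private,
    as required for the public result display.
    """
    counts = Counter(votes)
    decided = counts["yes"] + counts["no"]
    return {
        "yes": counts["yes"],
        "no": counts["no"],
        "abstained": counts["abstain"],
        "approval": counts["yes"] / decided if decided else None,
    }

# Example: tally(["yes", "yes", "no", "abstain"]) gives an approval of 2/3
```

Publishing abstention counts alongside the yes/no split would also let institutions see whether low turnout stems from disinterest or from a lack of information, matching the refusal reasons collected in step (vi).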
Potential problems
While the presented assistance system could significantly raise public awareness as well as participation, there are specific problems within the outlined process that would have to be overcome for the project to really make a difference. First, public institutions often lack technical expertise and knowledge compared to corporations (e.g., small local-authority town halls vs. international billion-dollar companies) (Hellsmanns et al., 2016; Kubicek et al., 2011). Even if public servants are assigned to take care of the data entry into the platform, the designers have to make sure that the interface is designed with great care towards ease of use. There is also a high risk of losing the users' interest due to data entries in potentially raw, dry forms. Employing a dedicated information designer would be strongly advised to ensure high-quality presentation and target-audience analysis, as well as to provide the digital competence to create and implement data for AR technologies.
Voter fraud will be an issue, even with passport confirmation. For this as well as for data security reasons, IT security specialists should be included in the process of building the platform. While the voting results will be public, the user data should stay private under all circumstances to ensure a truly democratic process. Regarding the actual voting, heavy field testing should go into determining the right extensiveness of the voting process. Especially the inclusion of the self-stated importance validation could provide ground for some interesting research (e.g., is there a correlation between the time spent on the informational part of the project and the self-stated importance? Is there a pattern to be found between long-term participation and high or low self-stated interest?). Some problems could occur in deciding where to display the specific voting notification (in the mobile version). While places with strong human activity (like train stations or market places) might be preferable to reach a bigger audience, people might easily be overwhelmed or distracted and thus ignore the opportunity. On the other hand, some of the places where projects might be planned might be debatable even though few people are directly affected and therefore incentivized to vote. Research into timing as well as relevance will have to be conducted to address this conundrum.
Last of all, there is the issue of trust. As recent political events have shown, not trusting the conducting institution or, worse, the government can result in voter decline as well as protest voting. Healthy participation in voting processes can only be established with trust, which means that the respective institutions will be responsible for continually ensuring the trust of their users. More than anything else, this means trust in the meaningfulness of their vote, established by visibly implementing projects according to the results. Once this trust is lost, the whole project becomes pointless. In order to ensure the meaningfulness of the voting process, the institutions might agree to penalty conditions for not following through on the public opinion. Furthermore, documented follow-ups to the projects should be presented to give everyone the sense of having participated in something tangible. The aforementioned information designer could be a big help on this side of the project, too.
We think that, with some time, research and design effort, AR technology can and will be used as a strong asset in the assistance of participatory processes, with “Take Part” being only one exemplary model of implementation. With respect to the fields of user interaction and user experience design, as well as enhanced presentational design processes, the integration of these new assistive technologies can greatly benefit different kinds of participatory projects, not only in terms of civic participation but also in the fields of crowd-based information gathering (e.g., of biological or geographical data), crowdsourcing (e.g., Patreon, rental or sharing platforms) and civic engagement.
References
Albrecht, S. (2006). “Whose voice is heard in online deliberation?: A study of participation and representation in political debates on the internet.” Information, Communication & Society, 9(1), 62–82.
Berliner Morgenpost. (n.d.). “Veranstalter sprechen von 320 000 bei Anti-TTIP-Demos.”
Retrieved from
Chaffey, D., F. Ellis-Chadwick, R. Mayer and K. Johnston. (2009). Internet Marketing:
Strategy, Implementation and Practice. Practice (Vol. 3).
Davis, A. (2010). “New media and fat democracy: the paradox of online participation.” New Media & Society, 12(5), 745–761.
Deterding, S. (2012). “Gamification: designing for motivation.” Interactions, 19(4), 14–17.
Garrett, J. J. (2010). The Elements of User Experience: User-Centered Design for the Web
and Beyond. Elements.
Hall, M., S. Kimbrough, W. Michalk, J. Schneider and C. Weinhardt. (2013). “Making Solution Pluralism in Policy Making Accessible: Optimization of Design and Services for Constituent Well-Being.” Interdisciplinary Informatics Faculty Proceedings & Presentations.
Hamari, J., J. Koivisto and H. Sarsa. (2014). “Does gamification work? - A literature review of empirical studies on gamification.” In: Proceedings of the Annual Hawaii International Conference on System Sciences (pp. 3025–3034). IEEE Computer Society.
Hammon, L. and H. Hippner. (2012). “Crowdsourcing.” BISE, 4(3), 163–166.
Hellsmanns, A., C. Niemeyer, M. Hall, T. Zentek and C. Weinhardt. (2016). “Towards a
Requirement Framework for Online Participation Platforms.” Interdisciplinary
Informatics Faculty Proceedings & Presentations.
heute-Nachrichten. (n.d.). “Olympia in Rio: Proteste und Tränengas bei Fackellauf.”
Retrieved from
Knorr, E. (2003). “2004: The Year of Web Services.” Retrieved from
Kubicek, H., B. Lippa and A. Koop. (2011). „Erfolgreich beteiligt.“ Nutzen und Erfolgsfaktoren internetgestützter Bürgerbeteiligung – Eine empirische Analyse von zwölf Fallbeispielen. Gütersloh: Bertelsmann Stiftung.
Lilleker, D. and K. Koc-Michalska. (2013). “MEPs Online: Understanding Communication Strategies for Remote Representatives.” In: P. Nixon, R. Rawal and D. Mercea (Eds.), Politics and the Internet in Comparative Context: Views from the Cloud. London: Routledge.
Lorenz, J., H. Rauhut, F. Schweitzer and D. Helbing. (2011). “How social influence can undermine the wisdom of crowd effect.” Proceedings of the National Academy of Sciences, 108(22), 9020–9025.
Maedche, A., S. Morana, S. Schacht, D. Werth and J. Krumeich. (2016). “Advanced User Assistance Systems.” Business & Information Systems Engineering, 58(5), 367–370.
Mason, W. and S. Suri. (2012). “Conducting behavioral research on Amazon’s Mechanical Turk.” Behavior Research Methods, 44(1), 1–23.
Mekler, E. D., F. Brühlmann, K. Opwis and A. N. Tuch. (2013). “Do points, levels and leaderboards harm intrinsic motivation? An empirical analysis of common gamification elements.” In: Proceedings of the First International Conference on Gameful Design, Research, and Applications (Gamification ’13) (pp. 66–72).
Muchnik, L., S. Aral and S. J. Taylor. (2013). “Social influence bias: a randomized experiment.” Science, 341(6146), 647–651.
Niemeyer, C., T. Wagenknecht and C. Weinhardt. (2016). “Emotional Arousal Effects in Participatory Budgeting Decisions.” Proceedings of the Second Karlsruhe Service Summit Research Workshop, Karlsruhe, February 25–26, 2016.
Pfeiffer, J., T. Pfeiffer and M. Meißner. (2015). “Towards Attentive In-Store Recommender
Systems” (pp. 161–173). Springer International Publishing.
Prpić, J., P. P. Shukla, J. H. Kietzmann and I. P. McCarthy. (2015). “How to work a crowd: Developing crowd capital through crowdsourcing.” Business Horizons, 58(1), 77–85.
Shaw, A. D., J. J. Horton and D. L. Chen. (2011). “Designing incentives for inexpert human
raters.” In: Proceedings of the ACM 2011 conference on Computer supported
cooperative work - CSCW ’11 (p. 275). New York, New York, USA: ACM Press.
SPIEGEL ONLINE. (n.d.-a). “200.000 Brasilianer protestieren gegen teure Fußball-WM.”
Retrieved from
SPIEGEL ONLINE. (n.d.-b). “Problem-Baustelle Elbphilharmonie: Frust hoch sieben in
Hamburg.” Retrieved from
SPIEGEL ONLINE. (n.d.-c). “Stuttgart-21-Proteste: Kritiker wollen erst Baustopp, dann Bahn-Gespräch.” Retrieved from
Straub, T., H. Gimpel and F. Teschner. (2014a). “The Negative Effect of Feedback on Performance in Crowd Labor Tournaments.” Collective Intelligence 2014: Proceedings, Cambridge, Massachusetts, USA, June 10–12, 2014.
Straub, T., H. Gimpel, F. Teschner and C. Weinhardt. (2014b). “Feedback and Performance in Crowd Work: a Real Effort Experiment.” ECIS.
Straub, T., H. Gimpel, F. Teschner and C. Weinhardt. (2015). “How (not) to Incent Crowd Workers: Payment Schemes and Feedback in Crowdsourcing.” Business and Information Systems Engineering, 57(3), 167–179.
TechCrunch. (n.d.). “Pokémon Go estimated at over 75M downloads worldwide.” Retrieved from
Teubner, T., M. T. P. Adam and C. Niemeyer. (2015). “Measuring risk preferences in field
experiments - Proposition of a simplified task.” Economics Bulletin, 35(3).
Wang, J., P. G. Ipeirotis and F. Provost. (2013). “Quality-Based Pricing for Crowdsourced Workers.”
Wang, T., D. Wang and F. Wang. (2014). “Quantifying herding effects in crowd wisdom.” In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’14 (pp. 1087–1096). New York, NY, USA: ACM Press.
Figure 2 - VR Assisted Visualizations: Display of an art work that is planned to be installed in
front of this building.
Figure 3 - Mock-Up Online Platform
Figure 4 - Mock-Up Mobile Application