Article

Ethics and tactics of professional crowdwork


Abstract

Paid crowd workers are not just an API call, but all too often they are treated like one.


... message boards) when engaging in their MTurk work as a source of primary or secondary income. Workers who use online communities gain insights about requesters, microtask requirements, and advice on how to become more productive in their microtasking endeavors (Irani, 2015b;Martin et al., 2016;Savage et al., 2020;Schmidt, 2015;Silberman and Irani, 2016;Silberman et al., 2010;Wang et al., 2017;Wood et al., 2018;Yin et al., 2016). Such microtasking resources have been termed a public good within the Open Source Software (OSS) literature, where knowledge is freely shared and accessible to all members (Wasko and Faraj, 2000;Wasko et al., 2009). ...
... Thus, outcomes can include the amount of work that is contracted (i.e. tasks approved to complete), the payment received for completing the tasks, as well as any bonus earnings received for high performance or subsequent opportunities because of the quality of individuals' performance (Martin et al., 2016;Silberman et al., 2010;Yin et al., 2016). Opportunity recognition should positively affect the relationship between community members and these outcomes. ...
... Figure 2 shows the MTurk platform, where tasks are listed for workers to complete, with one task opened to show the details provided by the platform. Due to MTurk's longevity and popularity, several online communities have emerged externally from the MTurk platform and are not managed or hosted by Amazon (Irani, 2015b;Martin et al., 2016;Savage et al., 2020;Silberman and Irani, 2016;Silberman et al., 2010;Wood et al., 2018;Yin et al., 2016). Figure 3 shows the main page of MTurk Forum, a popular MTurk-related online community where microtask workers socialize, share information about HITs available on MTurk, and exchange advice and tips about requesters and about how to be more productive while microtasking. ...
Article
Purpose: Political skill has emerged as a concept of interest within the information systems literature to explain individual performance outcomes. The purpose of this paper is to adapt political skill to technology-mediated contexts. Specifically, the authors seek to understand political skill's role in shaping microtask workers' opportunity recognition when utilizing online communities in microtask work environments.

Design/methodology/approach: The authors tested their research model using a survey of 348 Amazon Mechanical Turk (MTurk) workers who participate in microtask-related online communities. MTurk is a large, popular microtasking platform used by thousands of microtask workers daily, with several online communities supporting microtask workers.

Findings: Technology-based political skill plays a critical role in shaping the resources microtask workers rely upon from online communities, including opportunity recognition and knowledge sharing. The ability to develop opportunity recognition positively impacts a microtask worker's ability to leverage online communities for performance. Tenure in the community acts as a moderator within the model.

Originality/value: The present study makes several contributions. First, the authors adapt political skill to an online community to account for how microtask workers understand a community's socio-technical environment. Second, the authors demonstrate the antecedent role of political skill for opportunity recognition and knowledge sharing. Third, the authors provide empirical validation of the link between online communities and microtask worker performance.
... The workers become frustrated by (2) malfunctioning environments, where technical errors in the task interfaces created by the requesters cause unsuccessful submissions (Silberman et al., 2010b;Bederson and Quinn, 2011;Silberman, 2015;Brawley and Pury, 2016;McInnis et al., 2016;Berg et al., 2018). They also complain about (3) workload misestimation, where inequality of required effort and time leads to fruitless attempts to finish the tasks (Silberman et al., 2010a;Silberman, 2015). This inequality may consequently also be reflected in (4) low payment (i.e., a low hourly payment ratio), which affects the workers' motivation and satisfaction (Ross et al., 2010;Silberman, 2015;Gadiraju et al., 2017;Berg et al., 2018). ...
... Task Operation Problems (6-8) Several studies indicate that there are often (6) missing responses from requesters on inquiries from workers related to tasks or desired solutions (Silberman, 2010;Silberman et al., 2010a;Silberman et al., 2010b;Bederson and Quinn, 2011;Dow et al., 2012;Chandler et al., 2013;Alagarai Sampath et al., 2014;Silberman, 2015;Brawley and Pury, 2016;Deng and Joshi, 2016;Schwartz, 2018;Berg et al., 2018). Requesters often give only (7) minor feedback to submitted results (Dow et al., 2012;Gaikwad et al., 2017;Schwartz, 2018;Berg et al., 2018). ...
... Task Evaluation Problems (9-11) Many articles report on (9) unfair rejections of results in two ways. On one hand, requesters or automatic algorithms make (a) harsh evaluations of submissions rejecting results that have been created to the best of the workers' abilities (Porter, 2017;Silberman, 2010;Silberman et al., 2010a;Silberman et al., 2010b;Bederson and Quinn, 2011;Irani and Silberman, 2013;Peng et al., 2014;Brawley and Pury, 2016;Guth and Brabham, 2017;Berg et al., 2018). On the other hand, there are often (b) missing explanations for rejections where requesters provide no or unclear reasons to the workers, making workers eventually become disappointed (Porter, 2017;Silberman et al., 2010b;Irani and Silberman, 2013;Peng et al., 2014;Silberman, 2015;Brawley and Pury, 2016;Guth and Brabham, 2017;Berg et al., 2018). ...
... Many businesses with large amounts of data use crowd employment to create metadata and remove duplicate entries from their databases. Moderation of user-generated content on collaborative websites is another popular application of crowd employment (Silberman et al. 2010). This is confirmed by the evidence collected within Eurofound's study. ...
... In conclusion, the motivation of crowd workers includes the fun in doing this type of work, learning opportunities, social exchange, recognition by other crowd workers and clients, the opportunity for selfmarketing and a better combination of work and private life (Klebe and Neugebauer 2014). Furthermore, people get involved in a crowd employment platform as a source of (additional) income (Klebe and Neugebauer 2014;Silberman et al. 2010). Nevertheless, workers may be reluctant to engage in crowd employment due to concerns about data protection and fair pay (Klebe and Neugebauer 2014). ...
... For example, 90 percent of the tasks offered at Amazon Mechanical Turk are paid less than $0.10 (€0.07), equaling an hourly rate of around $2 (€1.44) (Irani and Silberman 2013). According to Silberman et al. (2010), the yearly income of a 'Turker' amounts to less than $10,000 (€7,000). A similar low income is found in the analyzed case studies. ...
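The hourly figures quoted above follow from simple arithmetic on per-task rewards and completion times. A minimal sketch, using hypothetical task prices and durations rather than figures from the cited studies:

```python
def hourly_rate(reward_usd, minutes_per_task):
    """Implied hourly wage from a per-task reward and completion time."""
    tasks_per_hour = 60 / minutes_per_task
    return reward_usd * tasks_per_hour

# A $0.10 task that takes 3 minutes implies roughly $2/hour.
print(round(hourly_rate(0.10, 3), 2))
```

The same arithmetic explains why small per-task prices dominate the effective wage: halving the completion time doubles the implied hourly rate, while the posted reward stays fixed.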
Chapter
This book chapter explores the effects of digitalisation on the music industries, and in particular on artists, looking at how revenue streams from recorded music have changed due to streaming and how artists take on a broader range of tasks and functions in music-industry value chains.
... Furthermore, it has analyzed how organizations must decompose and re-aggregate tasks in order to achieve the best possible results [1,8,19,79]. However, with crowd work increasing, we also need a better understanding of crowd workers, as well as their working conditions, behaviors, attitudes, and outcomes [10,17,18,41,71]. Initial research has already investigated the motivational structures of crowd workers [11,39,44,46,69]; it shows that crowd workers are not only motivated extrinsically by financial rewards but also by intrinsic motivation such as the task itself [8,51]. ...
... There are three streams of research on crowd work that are important for our paper. One major stream concerns crowd working platforms as online labor markets that balance demand (i.e., crowdsourcers broadcasting tasks) and supply (i.e., crowd workers contributing solutions) of labor [10,11,29,37,41,61,69,71]. In this paper, we follow this conceptualization of crowd working platforms. ...
... Similarly, researchers have investigated how financial compensation affects working behaviors (e.g., effort and participation [11,49,61]) and outcomes (e.g., quality of work [11,69]). While this research is important for leveraging crowd work in organizational contexts, the perspective of individual crowd workers has not been sufficiently explored [17,28,41,71]. We must understand how crowd workers perceive their working conditions, of which financial compensation is a pivotal aspect, and how these conditions influence psychological work outcomes such as work satisfaction and identification [17,18,71]. ...
Article
Full-text available
Crowd work reflects a new form of gainful employment on the Internet. We study how the nature of the tasks being performed and financial compensation jointly shape work perceptions of crowd workers in order to better understand the changing modes and patterns of digital work. Surveying individuals on 23 German crowd working platforms, this work is the first to add a multi-platform perspective on perceived working conditions in crowd work. We show that crowd workers need rather high levels of financial compensation before task characteristics become relevant for shaping favorable perceptions of working conditions. We explain these results by considering financial compensation as an informational cue indicating the appreciation of working effort which is internalized by well-paid crowd workers. Resulting boundary conditions for task design are discussed. These results help us understand when and under what conditions crowd work can be regarded as a fulfilling type of employment in highly developed countries.
... However, guidelines for the ethical treatment of participants in the lab have rarely been applied to the online setting. Unlike in-lab studies where participants contribute data and learn how their data will be used to achieve a research aim, participants in online experiments often receive little information about the research they just participated in [49,50]. Many IRBs even exempt online studies conducted on Amazon's Mechanical Turk (MTurk) from informed consent because they are considered low risk [43]. ...
... (1) Our work focuses on researchers' challenges for closing the information loop with research participants and extends prior work that focuses on participant needs [30,42,49,50]. We found that the main obstacles preventing researchers from including science communication information are a lack of awareness of their participants' interests, limited time, issues pertaining to their own privacy, concerns about experiment bias, and a dearth of tools to support creating these pages. ...
... In this work, we identified the barriers researchers face in providing participants information and developed design strategies to address these challenges. Our work extends prior work on the ethics of crowd work [30,49,50] by focusing on the challenges that researchers face. We urge platforms and systems to incorporate these design strategies. ...
Article
Online experiments allow researchers to collect data from large, demographically diverse global populations. Unlike in-lab studies, however, online experiments often fail to inform participants about the research to which they contribute. This paper is the first to investigate barriers that prevent researchers from providing such science communication in online experiments. We found that the main obstacles preventing researchers from including such information are assumptions about participant disinterest, limited time, concerns about losing anonymity, and concerns about experimental bias. Researchers also noted the dearth of tools to help them close the information loop with their study participants. Based on these findings, we formulated design requirements and implemented Digestif, a new web-based tool that supports researchers in providing their participants with science communication pages. Our evaluation shows that Digestif's scaffolding, examples, and nudges to focus on participants make researchers more aware of their participants' curiosity about research and more likely to disclose pertinent research information.
... In crowdsourcing, work is allocated based on requesters' open calls placed on information technology (IT) platforms that enable them to access widely distributed "crowds" of workers, their skills and their ideas (e.g., Afuah). Crowdsourcing is not without its critics and controversies. In particular, crowdsourcing practices have been questioned repeatedly on ethical grounds (e.g., Bergvall-Kåreborn and Howcroft 2014;Felstiner 2011;Kleemann et al. 2008;Silberman et al. 2010). Two opposing views on the ethics of crowdsourcing are apparent (Fish and Srinivasan 2012). ...
... Researchers, to date, have been primarily concerned with managerial, behavioural and technological questions (for reviews, Aguinis and Lawal 2013;Doan et al. 2011;Hossain and Kauranen 2015;Pedersen et al. 2013;Saxton et al. 2013;Tavakoli et al. 2017;Zuchowski et al. 2016). Even though many have raised ethical concerns and the need to research ethical aspects of crowdsourcing has been repeatedly suggested (e.g., Bergvall-Kåreborn and Howcroft 2014;Felstiner 2011;Kleemann et al. 2008;Silberman et al. 2010), the existing crowdsourcing literature provides little in terms of relevant empirical studies and is very limited in debating ethical issues theoretically. ...
... However, some exceptions exist in which several broader ethical concerns about crowdsourcing have been discussed. Notably, these include studies based on a view of (microwork) crowdsourcing as ethically problematic (Irani and Silberman 2014;Silberman et al. 2010), with Irani and colleagues providing tools such as Turkopticon (Irani and Silberman 2013) and Dynamo (Salehi et al. 2015) to support crowd workers. Some studies have raised issues from an external, political-economic view and are concerned with the neo-liberal conditions that underlie crowdsourcing (Bergvall-Kåreborn and Howcroft 2014;Kleemann et al. 2008;Scholz 2017). ...
Article
Full-text available
Crowdsourcing practices have generated much discussion on their ethics and fairness, yet these topics have received little scholarly investigation. Some have criticized crowdsourcing for worker exploitation and for undermining workplace regulations. Others have lauded crowdsourcing for enabling workers' autonomy and allowing disadvantaged people to access previously unreachable job markets. In this paper, we examine the ethics in crowdsourcing practices by focusing on three questions: a) what ethical issues exist in crowdsourcing practices? b) are ethical norms emerging or are issues emerging that require ethical norms? and, more generally, c) how can the ethics of crowdsourcing practices be established? We answer these questions by engaging with Jürgen Habermas' (Habermas 1990; Habermas 1993) discourse ethics theory to interpret findings from a longitudinal field study (from 2013-2016) involving key crowdsourcing participants (workers, platform organizers and requesters) of three crowdsourcing communities. Grounded in this empirical study, we identify ethical concerns and discuss the ones for which ethical norms have emerged as well as others which remain unresolved and problematic in crowdsourcing practices. Furthermore, we provide normative considerations of how ethical concerns can be identified, discussed and resolved based on the principles of discourse ethics.
... Similarly, participants of Aloisi's (2016) study indicated that ratings were 'arbitrary, unfair or biased', with few options for recourse. Silberman et al. (2010) reported frequent problems for AMT workers, including arbitrary rejections, fraudulent tasks, prohibitively short deadlines, long pay delays and uncommunicative requesters. Bederson and Quinn (2011) noted that one of the underlying problems was the relative lack of consequences for cheating behaviour. ...
... On AMT, workers are unable to effectively filter and search for assignments in line with their interests (Chilton et al. 2010). According to Silberman et al. (2010), the interface only allows assignments to be sorted by creation date or reward amount and does not provide more advanced filters such as wage rate, task type or level of difficulty. Therefore, workers agree to a rate before even knowing what the assignment involves (Graham et al. 2017a;2017b). ...
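The missing filter described above (sorting by wage rate rather than by creation date or reward alone) is easy to picture. A client-side sketch over hypothetical task listings; the platform exposes no completion-time field, so `est_minutes` here stands in for a worker's own estimate:

```python
# Hypothetical task listings; est_minutes is the worker's own estimate,
# since the platform interface exposes no such field.
tasks = [
    {"title": "Tag images", "reward": 0.05, "est_minutes": 1},
    {"title": "Transcribe audio", "reward": 0.50, "est_minutes": 20},
    {"title": "Survey", "reward": 1.00, "est_minutes": 10},
]

def est_hourly(task):
    """Estimated wage rate in $/hour for one task listing."""
    return task["reward"] * 60 / task["est_minutes"]

# Sort by estimated wage rate, best-paying first.
for t in sorted(tasks, key=est_hourly, reverse=True):
    print(f'{t["title"]}: ${est_hourly(t):.2f}/hour')
```

Note how the ordering by wage rate differs from ordering by reward amount: the highest-reward task is not necessarily the best paid per hour, which is exactly the information the stock interface withholds.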
Book
Full-text available
The ‘gig economy’, also referred to as the platform economy, is a market system in which companies or individual requesters hire workers to perform short assignments. These transactions are mediated through online labour platforms, either outsourcing work to a geographically dispersed crowd or allocating work to individuals in a specific area. Over the last decade, the diversity of activities mediated through online labour platforms has increased dramatically. In addition to the specific hazards associated with these different types of activities, there are also psychosocial risks related to the way gig work is organised, designed and managed. The aim of this review is to provide a comprehensive overview of these risks, identifying research gaps and strategies to address them.
... Access to work is neither continuous nor regular, and the work is not always paid. Employers pay only if they are satisfied with the results, which leaves workers vulnerable to the whims of employers (Felstiner, 2011;Silberman et al, 2010;Klebe and Neugebauer, 2014). ...
... However, this is highly subjective, and for some workers these very elements cause stress, owing to the need for self-organization and the lack of separation between work and private life. Silberman et al (2010) point out that crowd work has created work for many at a time of uncertainty. Kittur et al (2013) state that this type of employment creates new opportunities for income and social mobility in regions of the world with stagnant local economies, while mitigating the shortage of experts in specific geographic areas. ...
Technical Report
Full-text available
Accelerated technological change has set a new pace for the emergence of new occupations, frequently tied to digital platforms. Growth in storage capacity and information-processing speed has boosted digitalization processes that translate into applications created to serve a range of needs. A better understanding of the characteristics of these occupations is essential to respond to the demand for new skills in the workforce and to the changes required in the institutional framework to guarantee decent work. This study surveys the information available in Costa Rica on the skills demanded of the labor force and detects rising demand for advanced digital skills, as well as for design and problem-solving competencies linked to traditional engineering occupations. To complement the available information, the characteristics of emerging occupations are analyzed on the basis of consultations with experts and platform administrators and a survey of people who work through these platforms, providing a first account of the characteristics of this type of work in Costa Rica.
... Extrinsic incentives are defined as incentives "to do work or refer to behaviour that pertains to something apart from and external to the work itself to attain some separable outcome such as monetary reward or recognition from other people" (Ryan and Deci, 2000). The importance of payment compared to other rewards reveals that people are not always interested in performing tasks for fun and pastime (Silberman et al., 2010). Extrinsic incentives positively moderate the relationship between task effort and engagement (Liang et al., 2018). ...
... A worker can improve her reputation by contributing additional material and pieces of evidence during task execution. If a task is performed wrongly, as determined in the validation phase, the worker receives a lower reputation and, ultimately, fewer tasks and little to no money (Silberman et al., 2010). Similarly, a requester can impose a social sanction, i.e., banning or blacklisting the worker for submitting lousy work. ...
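The reputation mechanics described above (lower standing after a rejected task, then fewer tasks) can be sketched as a simple approval-rate counter. The class and method names below are illustrative, not any platform's actual API:

```python
class WorkerReputation:
    """Minimal approval-rate reputation of the kind platforms track."""

    def __init__(self):
        self.approved = 0
        self.submitted = 0

    def record(self, approved: bool):
        """Record one submission outcome (approved or rejected)."""
        self.submitted += 1
        if approved:
            self.approved += 1

    @property
    def approval_rate(self):
        # New workers start with a clean slate.
        return self.approved / self.submitted if self.submitted else 1.0

rep = WorkerReputation()
for outcome in [True, True, True, False]:  # three approvals, one rejection
    rep.record(outcome)
print(rep.approval_rate)  # 0.75
```

Because the rate is a lifetime ratio, a single unfair rejection weighs permanently on a worker's record, which is part of why the rejection practices criticized above matter so much to workers.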
Article
Crowdsourcing, a distributed human problem-solving paradigm, is an active research area that has attracted significant attention in computer science, business, and information systems. Crowdsourcing offers advantages such as open innovation, scalability, and cost-efficiency. Although considerable research has been performed, no survey of the crowdsourcing process and its technology has yet appeared. In this paper, we present a systematic survey of crowdsourcing, focusing on emerging techniques and approaches for improving conventional systems and developing future crowdsourcing systems. We first present a simplified definition of crowdsourcing. Then, we propose a framework based on three major components and synthesize a wide spectrum of existing studies across the framework's dimensions. According to the framework, we first introduce the initialization step, including task design, task settings, and incentive mechanisms. Next, in the implementation step, we look into task decomposition, crowd and platform selection, and task assignment. In the last step, we discuss answer aggregation techniques, validation methods, reward tactics, and reputation management. Finally, we identify open issues and suggest possible research directions for the future.
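Of the answer-aggregation techniques the survey names, the simplest is majority (plurality) voting over redundant submissions. A minimal sketch with hypothetical labels:

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant worker answers by plurality."""
    counts = Counter(answers)
    return counts.most_common(1)[0][0]

# Three workers labeled the same item; the plurality label wins.
print(majority_vote(["cat", "cat", "dog"]))  # cat
```

Real systems typically go further, weighting votes by worker reputation or estimated accuracy, but plurality over redundant labels is the baseline most validation schemes build on.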
... These motivations can be divided into 2 dimensions: extrinsic motivations [25][26][27] and intrinsic motivations [28][29][30]. For the extrinsic motivations, researchers have shown that financial incentives such as monetary stimulus play an important role in the users' participating behaviors [31]. Some studies have shown that the reward is the primary source of income on the crowdsourcing platforms and this reward drives users to participate in tasks [25][26][27]. ...
... The reward is monetary numbers, which accord with the peripheral cue. Previous studies have explored the role of reward in the crowdsourcing field [46][47][48] and have indicated that financial reward is the most critical motivation, as most respondents reported that they do not perform tasks for fun or to kill time [31]. Some studies have shown that money or points have a positive effect on the user's participation in online health communities [28,49]. ...
Article
Full-text available
Background: Web-based crowdsourcing promotes goals achieved effectively by gaining solutions from public groups via the internet, and it has gained extensive attention in both business and academia. As a new mode of sourcing, crowdsourcing has been proven to improve the efficiency, quality, and diversity of tasks. However, little attention has been given to crowdsourcing in the health sector.

Objective: Crowdsourced health care information websites enable patients to post their questions in a question pool that is accessible to all doctors, and the patients wait for doctors to respond. Since the sustainable development of these websites depends on the participation of doctors, we aimed to investigate the factors influencing doctors' participation in providing health care information on such websites from the perspective of the elaboration-likelihood model.

Methods: We collected 1524 questions with complete patient-doctor interaction processes from an online health community in China to test all the hypotheses. We divided the doctors into 2 groups based on the sequence of the answers: (1) the doctor who answered the patient's question first and (2) the doctors who answered that question afterward. All analyses were conducted using the ordinary least squares method.

Results: First, the ability of the doctor who answered first was found to positively influence the participation of the doctors who answered afterward (βoffline1=.177, P<.001; βoffline2=.063, P=.048; βonline=.418, P<.001). Second, the reward that the patient offered for the best answer showed a positive effect on doctors' participation (β=.019, P<.001). Third, the question's complexity was found to positively moderate the relationship between the first doctor's ability and the participation of the following doctors (β=.186, P=.05) and to mitigate the effect between the reward and the participation of the following doctors (β=–.003, P=.10).

Conclusions: This study makes both theoretical and practical contributions. Online health community managers can build effective incentive mechanisms to encourage highly competent doctors to participate in providing medical services on crowdsourced health care information websites, and they can increase the reward for each question to increase doctors' participation.
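The moderation effects reported above come from interaction terms in an ordinary least squares model. A minimal synthetic sketch of that setup, with simulated data and coefficients chosen for illustration (not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ability = rng.normal(size=n)     # first answerer's ability (standardized)
reward = rng.normal(size=n)      # reward offered for the best answer
complexity = rng.normal(size=n)  # question complexity (the moderator)

# Simulate participation with a positive ability-by-complexity interaction.
y = (0.4 * ability + 0.2 * reward
     + 0.15 * ability * complexity
     + rng.normal(scale=0.5, size=n))

# OLS with an interaction (moderation) term: intercept, main effects,
# moderator, and the ability x complexity product.
X = np.column_stack([np.ones(n), ability, reward, complexity,
                     ability * complexity])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))
```

A positive coefficient on the product term is what "complexity positively moderates the ability-participation relationship" means operationally: the effect of ability on participation grows as complexity rises.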
... Even traditional organizations have been shown to use platforms such as these to source on-demand work directly from freelancers, creating the threat of immediate replacement for existing workers (Corporaal & Lehdonvirta, 2017;Howe, 2006;Schenk & Guittard, 2011). Workers have limited options for dissent because the global supply of workers is high and because there are currently three times as many contractors as clients on many labor market platforms (Bergvall-Kåreborn & Howcroft, 2014;Graham, Hjorth, & Lehdonvirta, 2017;Silberman, Irani, & Ross, 2010). Many platforms treat workers interchangeably, and platforms can often sustain losing those who do not accept the system's terms (Kleemann, Voß, & Rieder, 2008;Postigo, 2016). ...
... Algorithms may also prevent contact with human managers. When an algorithm, instead of a person, is on the other side of a managerial relationship, it can create an additional obstacle for workers to question or challenge the directions they are given or have a say in the labor process (Graham et al., 2017;Silberman, Irani, & Ross, 2010). ...
Article
The widespread implementation of algorithmic technologies in organizations prompts questions about how algorithms may reshape organizational control. We use Edwards’ (1979) perspective of “contested terrain,” wherein managers implement production technologies to maximize the value of labor and workers resist, to synthesize the interdisciplinary research on algorithms at work. We find that algorithmic control in the workplace operates through six main mechanisms, which we call the “6 Rs”—employers can use algorithms to direct workers by restricting and recommending, evaluate workers by recording and rating, and discipline workers by replacing and rewarding. We also discuss several key insights regarding algorithmic control. First, labor process theory helps to highlight potential problems with the largely positive view of algorithms at work. Second, the technical capabilities of algorithmic systems facilitate a form of rational control that is distinct from the technical and bureaucratic control used by employers for the past century. Third, employers’ use of algorithms is sparking the development of new algorithmic occupations. Finally, workers are individually and collectively resisting algorithmic control through a set of emerging tactics we call algoactivism. These insights sketch the contested terrain of algorithmic control and map critical areas for future research.
... The working conditions of crowdworkers have come under significant scrutiny. Silberman (2010) reported in detail on the issues that crowdworkers face. A lot has been written about the low rates of pay that crowdworkers often suffer, with workers earning a median of US$2 per hour (Hara et al., 2017). ...
... The result is not surprising. Workers make a living on Amazon Mechanical Turk, so anything that negatively affects the metrics that represent them on the platform, for instance the number of tasks they have completed, or their rejection rates, materially affects their chances of getting work (see, e.g., Silberman et al., 2010). ...
Chapter
Crowdsourcing psychometric data is common in areas of Human-Computer Interaction (HCI) such as information visualization, text entry, and interface design. In some of the social sciences, crowdsourcing data is now considered routine, and even standard. In this chapter, we explore the collection of data in this manner, beginning by describing the variety of approaches that can be used to crowdsource data. Then, we evaluate past literature that has compared the results of these approaches to more traditional data-collection techniques. From this literature, we synthesize a set of design and implementation guidelines for crowdsourcing studies. Finally, we describe how particular analytic techniques can be applied to aid the analysis of large-scale crowdsourced data. The goal of this chapter is to clearly enumerate the difficulties of crowdsourcing psychometric data and to explore how, with careful planning and execution, these limitations can be overcome.
... In the modern world, the incentive of payment increases worker engagement more than entertainment and recreation in most cases, as has also been illustrated in [63]. Motivating workers with monetary rewards significantly increases workers' contributions while providing better task completion rates. ...
Preprint
Crowdsourcing, in which human intelligence and productivity are dynamically mobilized to tackle tasks too complex for automation alone to handle, has grown into an important research topic and inspired new businesses (e.g., Uber, Airbnb). Over the years, crowdsourcing has morphed from providing a platform where workers and tasks can be matched up manually into one that leverages data-driven algorithmic management approaches powered by artificial intelligence (AI) to achieve increasingly sophisticated optimization objectives. In this paper, we provide a survey presenting a unique systematic overview of how AI can empower crowdsourcing, which we refer to as AI-Empowered Crowdsourcing (AIEC). We propose a taxonomy that divides algorithmic crowdsourcing into three major areas: 1) task delegation, 2) motivating workers, and 3) quality control, focusing on the major objectives that need to be accomplished. We discuss the limitations and insights, and curate the challenges of doing research in each of these areas to highlight promising future research directions.
... The crowdsourcing work environment must be considered when implementing human computation on a crowdsourcing platform. It has been noted that, while the crowdsourcing marketplace allows workers to work without geographic or time constraints, labor exploitation can occur due to differences in position and economic disparity [153]. One of the concerns raised on low compensation is that crowdsourcing of translation tasks may upset the balance of the market by competing with professional translators [52]. ...
Preprint
Human computation is an approach to solving problems that prove difficult using AI only, and involves the cooperation of many humans. Because human computation requires close engagement with both "human populations as users" and "human populations as driving forces," establishing mutual trust between AI and humans is an important issue to further the development of human computation. This survey lays the groundwork for the realization of trustworthy human computation. First, the trustworthiness of human computation as computing systems, that is, trust offered by humans to AI, is examined using the RAS (Reliability, Availability, and Serviceability) analogy, which defines measures of trustworthiness in conventional computer systems. Next, the social trustworthiness provided by human computation systems to users or participants is discussed from the perspective of AI ethics, including fairness, privacy, and transparency. Then, we consider human-AI collaboration based on two-way trust, in which humans and AI build mutual trust and accomplish difficult tasks through reciprocal collaboration. Finally, future challenges and research directions for realizing trustworthy human computation are discussed.
... They actively attempt to produce satisfactory work for requestors with post-task and compensatory strategies: to expand their reputation, we found that about half of the crowdfarms rely on external resources, such as requestor referrals and advertisements. These measures exceed the strategies of typical solo crowdworkers who mostly rely on preventative tactics such as sticking to familiar tasks and returning a task as soon as it is found to be difficult [88,222]. With these caveats, we find that the study confirmed and extended prior research and presents a coherent, logical picture of a rapidly evolving new organizational form. ...
... A range of investigative news stories have also uncovered poor conditions, limited psychological support, tenuous employment, and even evidence that moderators themselves can be radicalized as a result of the content they are judging (Chen 2014;Newton 2019). These workers are too often treated like "just an API call" by both technology firms and machine learning researchers who use them to build labeled datasets (Silberman et al. 2010). This sits closely alongside many other forms of labor and material extraction used in production of machine learning systems (Crawford & Joler 2018;Ensmenger 2018). ...
Article
Full-text available
Concerns around machine learning’s societal impacts have led to proposals to certify some systems. While prominent governance efforts to date center around networking standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE), we argue that machine learning certification should build on structures from the sustainability domain. Policy challenges of machine learning and sustainability share significant structural similarities, including difficult to observe credence properties, such as data collection characteristics or carbon emissions from model training, and value chain concerns, including core-periphery inequalities, networks of labor, and fragmented and modular value creation. While networking-style standards typically draw their adoption and enforcement from functional needs to conform to enable network participation, machine learning, despite its digital nature, does not benefit from this dynamic. We therefore apply research on certification systems in sustainability, particularly of commodities, to generate lessons across both areas, informing emerging proposals such as the EU’s AI Act.
... To expand their reputation, we found that about half of the crowdfarms rely on external resources, such as requestor referrals and advertisements. These measures exceed the strategies of typical solo crowdworkers who mostly rely on preventative tactics such as sticking to familiar tasks and returning a task as soon as it is found to be difficult [21,58]. Crowdfarms also address requestor feedback during their longer macrotask engagements, enabling them to safeguard their reputations in a remedial and more effective manner, with less likelihood of a desk rejection than solo crowdworkers face. And in comparison to typical crowdworkers working alone with limited incomes [5], crowdfarms have more resources to deploy to manage their reputations, through advertising and through employees dedicated to chasing after positive comments. ...
Conference Paper
Full-text available
Crowdsourcing is a new value creation business model. Annual revenue of the Chinese market alone is hundreds of millions of dollars, yet few studies have focused on the practices of the Chinese crowdsourcing workforce, and those that do mainly focus on solo crowdworkers. We have extended our study of solo crowdworker practices to include crowdfarms, a relatively new entry to the gig economy: small companies that carry out crowdwork as a key part of their business. We report here on interviews of people who work in 53 crowdfarms. We describe how crowdfarms procure jobs, carry out macrotasks and microtasks, manage their reputation, and employ different management practices to motivate crowdworkers and customers.
... On-demand work platforms offer some benefits, such as flexible work hours and the ability to work remotely, but are plagued with poor working conditions. Workers on platforms like AMT face issues such as low pay, lack of basic worker protections, and power imbalances [29,32,49,70]. With more and more people expected to form part of this workforce, it becomes even more important to consider how to create a "future crowd workplace in which we would want our children to participate" [38]. ...
Article
Full-text available
Career development is vital for ensuring a happy and productive workforce, and for maintaining relevance in a rapidly changing economy shaped by technological progress. Yet career development is largely ignored in crowdwork. Crowdwork platforms like Amazon Mechanical Turk (AMT) do not support crowdworkers in reskilling and changing careers. In this paper, we study the career goals of AMT workers and the challenges they face in trying to transition out of crowdwork and into high-skill jobs offline or into specialized freelance work. We performed a qualitative study in which we surveyed 20 AMT workers and interviewed 6 of them about their career goals, how they are currently pursuing them, and the challenges they have faced. We found that crowdworkers aspire to transition out of AMT but face challenges due to lack of career guidance, and limited time and financial resources. Drawing on literature in career studies and organization science, we discuss how crowdworkers' challenges are further aggravated by the environment on AMT, and provide implications for future research and design that may better support crowdworkers in making a career change.
... A second ethical concern is the "human cost" of professional content moderation, by which paid moderators suffer from exposure to distressing content (Chen, 2014). Crowdsourcing the work of these professionals may only increase the variety of negative consequences experienced by workers (Silberman, Irani & Ross, 2010). In our methods, content producers are implicitly their own moderators: in behavioral-thresholding systems, the only individuals who will ever be exposed to inappropriate content will be those who produced it, while in reverse-correlation systems, not even the individual who attempts to subvert the content stream is likely to actually observe off-prompt content. ...
Article
Full-text available
User-generated content (UGC) is fundamental to online social engagement, but eliciting and managing it come with many challenges. The special features of UGC moderation highlight many of the general challenges of human computation. They also emphasize how moderation and privacy interact: people have rights to both privacy and safety online, but it is difficult to provide one without violating the other: scanning a user's inbox for potentially malicious messages seems to imply access to all safe ones as well. Are privacy and safety opposed, or is it possible in some circumstances to guarantee the safety of anonymous content without access to that content? We demonstrate that such "blind content moderation" is possible in certain domains. Additionally, the methods we introduce offer safety guarantees, an expressive content space, and require no human moderation load: they are safe, expressive, and scalable. Though it may seem preposterous to try moderating UGC without human- or machine-level access to it, human computation makes blind moderation possible. We establish this existence claim by defining two very different human computational methods, behavioral thresholding and reverse correlation. Each leverages the statistical and behavioral properties of so-called "inappropriate content" in different decision settings to moderate UGC without access to a message's meaning or intention. The first, behavioral thresholding, is shown to generalize the well-known ESP game.
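The thresholding idea can be made concrete with a minimal sketch (the function name and exact matching rule are our own illustration, not the paper's implementation): content elicited from several independent contributors is released only when enough of their responses agree, so no moderator ever reads it.

```python
from collections import Counter

def behavioral_threshold(responses, k):
    """Release content only when at least k independently elicited
    responses to the same prompt agree, without any human or machine
    ever inspecting the content's meaning.

    responses: strings produced by independent contributors.
    k: minimum number of matching responses required for release.
    Returns the agreed-upon response, or None (nothing is published).
    """
    if not responses:
        return None
    normalized = Counter(r.strip().lower() for r in responses)
    answer, votes = normalized.most_common(1)[0]
    return answer if votes >= k else None

# Off-prompt content fails to reach agreement and is silently
# suppressed; only the contributor who produced it ever saw it.
print(behavioral_threshold(["Blue", "blue", "blue ", "xQz!!"], 3))  # -> blue
print(behavioral_threshold(["xQz!!", "blue", "cat", "dog"], 3))     # -> None
```

The safety guarantee in this toy version comes from independence: a lone contributor trying to inject off-prompt content cannot, by themselves, reach the agreement threshold.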
... The payment of the requesters and contributors is determined according to their demands [5]. After completing a task, the contributor gets the payment if the output is accepted by the requester [30]. If the contributor did not get payment for a task, it means that the result was not right or did not fulfil the requester's needs [4]. ...
... Literature on designing incentive structures for crowdsourcing initiatives is vast [39]- [43]. It is known that incentives heavily influence crowd motivation, perception, and behavior, thereby, determining the success of a crowdsourcing initiative [44]. ...
... The human computation of AMT relies on the invisibility of the workers to make it possible (Irani & Silberman, 2013). Programmers access turkers through impersonal Application Programming Interfaces (APIs), in which workers are represented as a string of characters instead of a name (Silberman et al., 2010). This dehumanized zone (Gray and Suri, 2019) makes the turkers appear in the context of "a new general industrial base in the cloud" (Finn, 2017, p. 327), thus "abstract[ing] physical and cultural infrastructure away altogether" (ibid). ...
Article
Contributing to research on digital platform labor in the Global South, this research surveyed 149 Brazilian workers in the Amazon Mechanical Turk (AMT) platform. We begin by offering a demographic overview of the Brazilian turkers and their relation with work in general. In line with previous studies of turkers in the USA and India, AMT offers poor working conditions for Brazilian turkers. Other findings we discuss include: how a large share of respondents affirmed that they had been formally unemployed for a long period of time; the relative importance of the pay they receive to their financial subsistence; and how Brazilian turkers cannot receive their pay directly into their bank accounts due to Amazon restrictions, making them resort to creative circumventions of the system. Importantly, these “ghost workers” (Gray & Suri, 2019) find ways to support each other and self-organize through the WhatsApp group, where they also mobilize to fight for changes on the platform. As this type of work is still in formation in Brazil, and potentially will grow in the coming years, we argue this is a matter of concern.
... Human intelligence tasks can be rejected for no reason or for generic reasons that do not provide workers with sufficient justification. 56 Based on Wertheimer's analysis, these actions are exploitative because they are unfair to a party in a transaction. For example, an economically impoverished worker accepts a human intelligence task from an unscrupulous requester who performs one or more of the actions described in the previous paragraph, and the worker earns a lower hourly wage than he would find acceptable. ...
Article
The use of crowd workers as research participants is fast becoming commonplace in social, behavioral, and educational research, and institutional review boards are encountering more and more research protocols concerning these workers. In what sense are crowd workers vulnerable as research participants, and what should ethics reviewers look out for in evaluating a crowdsourced research protocol? Using the popular crowd‐working platform Amazon Mechanical Turk as the key example, this article aims to provide a starting point for a heuristic for ethical evaluation. The first part considers two reputed threats to crowd workers’ autonomy—undue inducements and dependent relationships—and finds that autonomy‐focused arguments about these factors are inconclusive or inapplicable. The second part proposes applying Alan Wertheimer's analysis of exploitation instead to frame the ethics of crowdsourced research. The article then provides some concrete suggestions for ethical reviewers based on the exploitation framework.
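The wage concern at the center of the exploitation analysis is easy to operationalize. A minimal sketch (the function name, fee rate, and wage figures are illustrative assumptions, not drawn from the article) of pricing a task so that its reward meets a target hourly wage for an estimated completion time:

```python
def price_per_task(target_hourly_wage, est_minutes, platform_fee_rate=0.0):
    """Minimum per-task reward (USD) so that a worker completing the
    task in est_minutes earns at least target_hourly_wage per hour.
    On platforms where fees are charged to requesters on top of the
    reward, platform_fee_rate is added to report the requester's
    total cost; the worker still receives the full reward."""
    reward = target_hourly_wage * est_minutes / 60.0
    requester_cost = reward * (1.0 + platform_fee_rate)
    return round(reward, 2), round(requester_cost, 2)

# e.g. a 5-minute task at a $15/hour target with an assumed 20% fee:
reward, cost = price_per_task(15.0, 5, platform_fee_rate=0.20)
print(reward, cost)  # 1.25 1.5
```

The point of the sketch is that fair pricing requires an honest estimate of completion time; underestimating est_minutes silently pushes the effective hourly wage below the target.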
... Participants received a set amount of money for the completion of a task set (i.e., answering all 11 predictors and the completion survey). The payment of workers involved in research projects is a heavily debated topic [45]. We followed the highest state-wide minimum wage in the US ($11.50/hour at time of our study). ...
Article
Full-text available
The increased reliance on algorithmic decision-making in socially impactful processes has intensified the calls for algorithms that are unbiased and procedurally fair. Identifying fair predictors is an essential step in the construction of equitable algorithms, but the lack of ground-truth in fair predictor selection makes this a challenging task. In our study, we recruit 90 crowdworkers to judge the inclusion of various predictors for recidivism. We divide participants across three conditions with varying group composition. Our results show that participants were able to make informed decisions on predictor selection. We find that agreement with the majority vote is higher when participants are part of a more diverse group. The presented workflow, which provides a scalable and practical approach to reach a diverse audience, allows researchers to capture participants' perceptions of fairness in private while simultaneously allowing for structured participant discussion.
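The aggregation the study describes, a per-predictor majority vote and each participant's agreement with it, can be sketched as follows (the data, names, and tie-breaking rule are hypothetical illustrations, not the authors' code):

```python
from collections import Counter

def majority_votes(judgments):
    """judgments: {participant: {predictor: bool}}, True meaning
    'include this predictor'. Returns {predictor: bool} by majority
    vote; ties resolve to False (exclusion)."""
    include_count, total_count = Counter(), Counter()
    for per_participant in judgments.values():
        for predictor, include in per_participant.items():
            total_count[predictor] += 1
            include_count[predictor] += int(include)
    return {p: include_count[p] * 2 > total_count[p] for p in total_count}

def agreement_rate(judgments, majority):
    """Fraction of each participant's judgments matching the majority."""
    return {
        who: sum(vote == majority[p] for p, vote in pjs.items()) / len(pjs)
        for who, pjs in judgments.items()
    }

votes = {
    "w1": {"age": True,  "prior_arrests": True, "zip_code": False},
    "w2": {"age": True,  "prior_arrests": True, "zip_code": True},
    "w3": {"age": False, "prior_arrests": True, "zip_code": False},
}
maj = majority_votes(votes)
print(maj)  # {'age': True, 'prior_arrests': True, 'zip_code': False}
print(agreement_rate(votes, maj)["w1"])  # 1.0
```

With per-participant agreement rates in hand, comparing their averages across group compositions is a single aggregation step, which is how a diversity effect like the one reported could be measured.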
... It is intuitive to reason why such a high consideration is given to incentive design. Incentives heavily influence crowd motivation, perception, and behavior, thereby, determining the success of a crowdsourcing initiative [51]. Moreover, incentives are connected to structural and problem related decisions creating a complex dynamics that drives crowd behavior and successful outcomes. ...
Preprint
Full-text available
Crowdsourcing is an emerging paradigm in engineering design for open innovation. It offers various benefits to crucial aspects of design innovation, such as, generating diverse design ideas, and engaging consumers. However, crowdsourcing initiatives for engineering design are prone to failures if the complex nature of engineering design processes is not accounted for. For example, the initiative can fail if design solutions do not achieve the required quality, which, in turn, is influenced by factors such as domain knowledge, problem complexity, and incentive structures. Thus, there lies a need to systematically design crowdsourcing initiatives. In this paper, the authors build on an existing framework for designing crowdsourcing initiatives in an engineering design context. They do so, by conducting an interview study with industry professionals with product design experience. The authors investigate the challenges experienced by these professionals for adopting crowdsourcing initiatives for engineering design. Through the study, research opportunities are identified that expand and aid the adoption of the framework. The authors discuss relevant literature for the identified research directions and frame the research gaps that need to be pursued. The authors conclude by encouraging academic communities to pursue collaborative efforts towards enabling systematic design of crowdsourcing initiatives for engineering design. Managerial Relevance Statement: Crowdsourcing has been proven to reduce development costs, reduce development time and offer various other benefits to crucial aspects of design innovation. Despite these advantages, crowdsourcing initiatives are prone to failure which suggests a need to rigorously design crowdsourcing initiatives for an engineering design context. In this paper, a framework for the design of crowdsourcing initiatives is presented.
The authors build on this research by presenting the results of an interview study with industry professionals. The perspectives from industry professionals are supported and discussed with existing literature in the field.
... Emergent research findings suggest that crowdworkers do develop different skills through their work on the platforms, ranging from learning online interaction etiquette, new languages to interact with clients, business development, marketing, negotiating, networking, customer relations and communication, digital literacy and technical skills such as software development, problem-solving, math or writing skills (Barnes, Green and de Hoyos, 2015; Green, de Hoyos, Barnes, Baldau and Behle, 2014; Gupta, 2017). Also, empirical research shows that crowdwork requires a set of complex and non-trivial skills, for example learning how to use and navigate the often opaque and non-intuitive interfaces of the platforms or how to find stimulating and well-paid tasks (Gupta, 2017; Martin, O'Neill, Gupta and Hanrahan, 2016; Silberman, Irani and Ross, 2010). Most importantly, a recent scoping study of crowdworkers' learning practices uncovered evidence of a considerable number and range of workplace learning activities and self-regulatory learning strategies being undertaken by crowdworkers to support their work on the platforms (Margaryan, 2016; Margaryan, forthcoming). ...
Article
This paper compares the strategies used by crowdworkers and conventional knowledge workers to self-regulate their learning in the workplace. Crowdworkers are a self-employed, radically distributed workforce operating outside conventional organisational settings; they have no access to the sorts of training, professional development and incidental learning opportunities that workers in conventional workplaces typically do. The paper explores what differences there are between crowdworkers and conventional knowledge workers in terms of the self-regulated learning (SRL) strategies they undertake. Data were drawn from four datasets using the same survey instrument. Respondents included crowdworkers from CrowdFlower and Upwork platforms and conventional knowledge workers in the finance, education and healthcare sectors. The results show that the majority of crowdworkers and conventional knowledge workers used a wide range of SRL strategies. Among 20 strategies explored, a statistically significant difference was uncovered in the use of only one strategy. Specifically, crowdworkers were significantly less likely than the conventional workers to articulate plans of how to achieve their learning goals. The results suggest that, despite working outside organisational structures, crowdworkers are similar to conventional workers in terms of how they self-regulate their workplace learning. The paper concludes by discussing the implications of these findings and proposing directions for future research.
... Requesters also have insufficient knowledge or motivation for helping workers [27,43,51]. Consequently, these models are usually scarce and do not always address workers' needs [74,75]. To enable skill development in crowd workers without requiring experts, we introduce the system Crowd Coach: a Chrome plugin that provides workers with short advice from peers while working on AMT. ...
Article
Traditional employment usually provides mechanisms for workers to improve their skills to access better opportunities. However, crowd work platforms like Amazon Mechanical Turk (AMT) generally do not support skill development (i.e., becoming faster and better at work). While researchers have started to tackle this problem, most solutions are dependent on experts or requesters willing to help. However, requesters generally lack the necessary knowledge, and experts are rare and expensive. To further facilitate crowd workers' skill growth, we present Crowd Coach, a system that enables workers to receive peer coaching while on the job. We conduct a field experiment and real world deployment to study Crowd Coach in the wild. Hundreds of workers used Crowd Coach in a variety of tasks, including writing, doing surveys, and labeling images. We find that Crowd Coach enhances workers' speed without sacrificing their work quality, especially in audio transcription tasks. We posit that peer coaching systems hold potential for better supporting crowd workers' skill development while on the job. We finish with design implications from our research.
... From the other side of the crowdworking relationship, the organisation should offer the crowdsourced workers fair and ethical conditions for the work being done (Silberman, Irani and Ross, 2010). Understanding and designing systems to respond to the individual's motivations for becoming involved in a task (Crowston and Fagnot, 2008; Cuel et al, 2011), either on a voluntary or on a remuneration basis, will lead to improved satisfaction from the workforce, and improve the efficiency of the task by reducing dropouts (Eveleigh et al, 2014; Jackson et al, 2015). ...
Preprint
Full-text available
Organisations are increasingly open to scrutiny, and need to be able to prove that they operate in a fair and ethical way. Accountability should extend to the production and use of the data and knowledge assets used in AI systems, as it would for any raw material or process used in production of physical goods. This paper considers collective intelligence, comprising data and knowledge generated by crowd-sourced workforces, which can be used as core components of AI systems. A proposal is made for the development of a supply chain model for tracking the creation and use of crowdsourced collective intelligence assets, with a blockchain based decentralised architecture identified as an appropriate means of providing validation, accountability and fairness.
Article
Microtask gig workers (MGWs) rely on digital platforms to arrange work agreements with requesters to complete well-defined microtasks. Many MGWs use an electronic network of practice (ENP) to facilitate information sharing about desirable and undesirable microtasks. This study uses social capital theory to theorize how social capital’s dimensions – structural, cognitive, and relational – shape the development of uncertainty-reducing and individualized-skill benefits. Based on survey data from 436 Amazon Mechanical Turk (MTurk) workers, the findings demonstrate that unique social capital dimensions affect specific ENP benefits. Understanding the communication style of an ENP (i.e., cognitive social capital) positively influences the uncertainty-reducing benefits of microtask information quality (MIQ) related to MTurk work. Combined with expectations of reciprocity and trust in ENP members (i.e., relational social capital), MIQ shapes microtask opportunity recognition (MOR), whereby individual MGWs identify opportunities to complete financially beneficial microtasks. The present study demonstrates that contextual factors, based on the coopetitive nature of microtask ENPs, affect the interrelated structure of social capital theory and its underlying dimensions. Lastly, post hoc findings demonstrate the influence of MOR on MGWs’ financial performance, challenging previously held assumptions about the role of MIQ within the microtask literature.
Article
Microtask crowdsourcing holds great potential as an employment opportunity with the flexibility and anonymity that individuals with disability may require. Though prior research has explored the accessibility of crowd work, the lived crowd work experiences of the broader community of workers with disability are still largely under-explored, especially when it comes to how their experiences are similar to or different from the experiences of workers without disability. In this work, we aim to obtain a deeper understanding of the microtask crowdsourcing experience for people with disabilities, especially regarding their financial and social experiences of participating in crowd work, along with the benefits and challenges that they encounter through this work. Specifically, we first surveyed 1,200 crowd workers both with and without disability about their experiences using the Amazon Mechanical Turk platform, and the differences we found inspired the design of a follow-up survey to gain greater understanding of the crowd work experience for workers with disability. Our findings reveal that workers with disability receive unique benefits from performing crowd work, such as a greater sense of purpose, but also encounter many challenges, such as completing tasks on time and earning a livable wage, causing them to turn to online communities for assistance. Although many of the challenges they face are not unique to crowd workers with disability, workers with disability may be disproportionately impacted by these challenges. From our findings, we provide implications for crowd platforms, as well as the gig economy as a whole, that seek to promote greater consideration of workers with a diverse range of conditions to create a more valuable work experience for them.
Chapter
Understanding sources of learning has become a major area of research in Education Management. Building on the assumptions that crowd learning is distributed across societies and education institutions and that it creates an innovative perspective on next-generation education over time, this article examines the link between formal education and innovative crowd-created knowledge. The article concludes by examining implications of the crowd learning concept for current and future education management systems. This paper explores how the crowd learns and remembers over time in context, and how more realistic assumptions about student experience may be used in building crowd knowledge processes. The aim of the paper is to assess crowd learning, its history and concepts, and its influence on future learning processes, including the instructor's changing role.
Article
The prevalent use of digital labor platforms has transformed the nature of work globally. Such algorithm-based platforms have triggered many technological, legal, ethical, and human resource management challenges. Despite some benefits (i.e., flexibility), the precarious conditions and commodification of jobs are major concerns in these platform-based employment conditions. The remote-work paradigm shift during the COVID-19 pandemic has made the interplay between technology, digitalization, and precarious workers' well-being a critical issue to address. This paper focuses on microtask platforms by examining overall well-being associated with turking as a work experience. Using a sample of 401 Amazon Mechanical Turk workers during the early stage of the COVID-19 pandemic, data were collected on individual conditions affecting the overall quality of workers' lives. The results from two structural equation models demonstrated the direct and mediating effects of task characteristics, excessive working, and financial pressure, mirroring the bright and dark sides of turking. Greater turking task significance and meaningfulness increase personal growth opportunities, ultimately improving workers' perceived quality of life. However, excessive work and greater financial pressure decrease self-acceptance and overall quality of life. This study examines the complicated nature of work experience on algorithm-based platforms by unpacking individual factors that affect workers' well-being.
Preprint
Full-text available
This article-based doctoral thesis explores the stakeholder perspectives and experiences of crowdsourced creative work on two of the leading crowdsourcing platforms. The thesis has two parts. In the first part, we explore creative work from the perspective of the crowd worker. In the second part, we explore and study the requester's perspective in different contexts and several case studies. The research is exploratory and we contribute empirical insights using survey-based and artefact-based approaches common in the field of Human-Computer Interaction (HCI). In the former approach, we explore the key issues that may limit creative work on paid crowdsourcing platforms. In the latter approach, we create computational artefacts to elicit authentic experiences from both crowd workers and requesters of crowdsourced creative work. The thesis contributes a classification of crowd workers into five archetypal profiles, based on the crowd workers' demographics, disposition, and preferences for creative work. We propose a three-part classification of creative work on crowdsourcing platforms: creative tasks, creativity tests, and creativity judgements (also referred to as creative feedback). The thesis further investigates the emerging research topic of how requesters can be supported in interpreting and evaluating complex creative work. Last, we discuss the design implications for research and practice and contribute a vision of creative work on future crowdsourcing platforms with the aim of empowering crowd workers and fostering an ecosystem around tailored platforms for creative microwork. Keywords: creative work, creativity, creativity support tools, crowdsourcing
Article
Microtasks accomplished by humans are used in many corners of the Internet. They help to make decisions where it is not (yet) possible to rely on algorithms, such as insult detection or fake-review identification. People conducting crowdwork, crowdworkers, are often recruited via platforms where employers have more power than crowdworkers. This power is sometimes abused through poor work conditions, which can lead to poor work quality. Online feedback systems (OFS) can discipline employers to improve work conditions and subsequently work quality. Unfortunately, the majority of crowdworkers do not contribute to an OFS and remain silent. We develop and test a model based on self-determination theory with PLS-SEM to explain their silence. Perceived cost and perceived non-relevance are deterrents to contributions. However, satisfaction in helping others and the wish to belong to the community are significant motivational factors, which could be used in the design of an OFS to foster crowdworkers’ contributions.
Conference Paper
Full-text available
The use of language models in Web applications and other areas of computing and business have grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied to large training datasets crawled from pre-existing text on the Web, we extend the critique to challenge datasets custom-created by crowdworkers. We present several sets of criticisms, where ethical and scientific issues in language model research reinforce each other: labour injustices in crowdwork, dataset quality and inscrutability, inequities in the research community, and centralized corporate control of the technology. We also present a new type of tool for researchers to use in examining large datasets when evaluating them for quality.
Article
Crowdsourcing platforms are powerful tools for academic researchers. Proponents claim that crowdsourcing helps researchers quickly and affordably recruit enough human subjects with diverse backgrounds to generate significant statistical power, while critics raise concerns about unreliable data quality, labor exploitation, and unequal power dynamics between researchers and workers. We examine these concerns along three dimensions: methods, fairness, and politics. We find that researchers offer vastly different compensation rates for crowdsourced tasks, and address potential concerns about data validity by using platform-specific tools and user verification methods. Additionally, workers depend upon crowdsourcing platforms for a significant portion of their income, are motivated more by fear of losing access to work than by specific compensation rates, and are frustrated by a lack of transparency and occasional unfair treatment from job requesters. Finally, we discuss critical computing scholars’ proposals to address crowdsourcing’s problems, challenges with implementing these resolutions, and potential avenues for future research.
Article
Computational systems, including machine learning, artificial intelligence, and big data analytics, are not only inescapable parts of social life but also increasingly at issue in legal practice and processes. We propose turning more law and social science attention to new technological developments through the study of “law in computation,” that is, computational systems' integration with regulatory and administrative procedures, the sociotechnical infrastructures that support them, and their impact on how individuals and populations are interpellated through the law. We present cases for which examining law in computation illuminates how new technological processes potentially mitigate, exacerbate, or mask human biases present in legal systems, and propose future directions and methods for research. As computational systems become ever more sophisticated, understanding the law in computation is critical not only for law and social science scholarship, but also for everyday civics.
Article
Full-text available
This paper empirically investigates a Common Information Space (CIS) established by medical secretaries so they could support each other during their workplace’s transition to a new comprehensive electronic health record, called the Healthcare Platform (HP). With the new system, the secretaries were expected to become partially obsolete, as doctors were to take on a significant load of the clerical work, such as documenting and coding. To handle their changing work situation, the medical secretaries set up an online support group in parallel to, but independent from, the official implementation support organization. The paper’s contribution is a characterization of the support group as a common information space (CIS), and analysis of the specific qualities of a worker-driven CIS as a forum for 1) articulation work required for re-grounding changing tasks and responsibilities, 2) archiving discussions (posts) and guidelines to further their collective interpretation, and 3) creating a space independent of management for employees to work out their new role in an organization in a situation of transition and change.
Article
Full-text available
The computer and information technology revolution has changed the way we relate to one another and the world around us. In the drive for ever more innovation, we rarely pause to ponder the fundamental question of whether we should. Artificial intelligence research has come of age, and fresh questions are being raised about what it means to be human and what consciousness really is. Advances made toward the completion of the human genome project are leading to complicated questions in eugenics and euthenics. As rootkits become more accessible, individuals write more malicious software. Is it time for computer programming to become “controlled knowledge,” considering its potential for damage? This paper examines contemporary ethical issues surrounding computer science research and posits that a robust framework of ethics needs to be in place for such research. Certain areas of computer science research, such as autonomous intelligent agents, should be tightly regulated by government to prevent abuse.
Article
Full-text available
Crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) are popular and widely used in both academic and non-academic realms, but privacy threats and challenges in crowdsourcing have not been extensively reviewed. To help push the field forward in important new directions, this paper first reviews the privacy threats in different types of crowdsourcing based on Solove’s taxonomy of privacy and Brabham’s typology of crowdsourcing. Then, the paper explores the privacy challenges associated with the characteristics of crowdsourcing task, platform, requesters, and crowd workers. These privacy challenges are discussed and categorized into both theoretical and practical challenges. Based on the review and discussion, this paper proposes a set of strategies to better understand and address many of the privacy threats and challenges in crowdsourcing. Finally, the paper concludes by suggesting research implications for the future work.
Article
Purpose: This paper explores workplace learning practices within two types of crowdwork, microwork and online freelancing. Specifically, it scopes and compares the workplace learning activities (WLAs) and self-regulatory learning (SRL) strategies undertaken by microworkers (MWs) and online freelancers (OFs). We hypothesised that there may be quantitative differences in the use of WLAs and SRL strategies between these two types of crowdwork, owing to underlying differences in task complexity and skill requirements. Design/methodology/approach: To test this hypothesis, a questionnaire survey was carried out among crowdworkers from two crowdwork platforms, Figure Eight (microwork) and Upwork (online freelancing). A chi-square test was used to compare WLAs and SRL strategies between OFs and MWs. Findings: Both groups use many WLAs and SRL strategies, and several significant differences were identified between them. In particular, moderate and moderately strong associations were uncovered whereby OFs were more likely to report (i) undertaking free online courses/tutorials and (ii) learning by receiving feedback. In addition, significant but weak or very weak associations were identified: OFs were more likely to learn by (i) collaborating with others, (ii) self-study of literature and (iii) making notes when learning. In contrast, MWs were more likely to write reflective notes on learning after completing work tasks, although this association was very weak. Originality/value: The paper contributes empirical evidence in an under-researched area, workplace learning practices in crowdwork. Crowdwork is increasingly taken up across developed and developing countries, so it is important to understand the learning potential of this form of work and where the gaps and issues might be.
Better understanding of crowdworkers’ learning practices could help platform providers and policymakers to shape the design of crowdwork in ways that could be beneficial to all stakeholders.
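The group comparison described in this abstract can be sketched with a small, hypothetical example: a chi-square test of association on a 2x2 contingency table, with Cramér's V as an effect-size measure. The counts below are invented for illustration and are not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = worker group, columns = whether
# the worker reported a given learning activity ("learning by receiving
# feedback"). All counts are invented for illustration.
#                 reported  not reported
table = np.array([[120,      80],    # online freelancers
                  [ 60,     140]])   # microworkers

chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V gives the strength of association for a 2x2 table
# (0 = no association, 1 = perfect association).
n = table.sum()
cramers_v = np.sqrt(chi2 / n)

print(f"chi2={chi2:.2f}, p={p:.4f}, Cramer's V={cramers_v:.2f}")
```

With counts like these, the test would report a significant but moderate association, the kind of result the abstract describes verbally.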
Article
Full-text available
Financial inclusion has been defined and understood primarily in terms of access, thereby constituting ‘inclusion’/‘exclusion’ as a binary. This paper argues that such a view is myopic and risks treating financial inclusion as an end in itself rather than as a means to a larger end. ‘Access’-oriented perspectives also fail to take structural factors like power asymmetries into account and pay inadequate attention to user practices. Through the case of auto-rickshaw drivers in Bangalore, India, and their use of Ola, a peer-to-peer taxi-hailing service similar to Uber, we show that access is a necessary but not sufficient condition for achieving financial inclusion in a substantive sense. By examining in detail the financial needs and practices of rickshaw drivers, we identify the opportunities and constraints for digital technology to better support their financial practices and enhance their wellbeing. The paper proposes adding ‘autonomy’ and ‘affordances’ as two crucial factors in the discourse on financial inclusion. Finally, we outline design implications for P2P technologies to contribute towards the financial inclusion of drivers.
Chapter
Full-text available
Understanding sources of learning has become a major area of research in education management. Building on the assumptions that crowd learning is distributed across societies and educational institutions, and that it opens an innovative perspective on education for the next generation, this article examines the link between formal education and innovative crowd-created knowledge. It explores how the crowd learns and remembers over time in context, and how more realistic assumptions about student experience may be used in building crowd knowledge processes. The aim of the paper is to assess crowd learning, its history and concepts, and its influence on future learning processes, including the changing role of the instructor. The article concludes by examining the implications of the crowd learning concept for current and future education management systems.
Chapter
Web 2.0 is shifting work to online, virtual environments. At the same time, social networking technologies are accelerating the discovery of experts, increasing the effectiveness of online knowledge acquisition and collaborative efforts. It is now possible to harness potentially unknown, large groups of networked specialists to amass large-scale collections of data and to solve complex business and technical problems, in the process known as crowdsourcing. Large global enterprises and entrepreneurs are increasingly adopting crowdsourcing because of its promise of simple, low-cost access to a scalable online workforce. Enterprise crowdsourcing examples abound, taking many different shapes and forms, from mass data collection to end-user-driven customer support. This chapter identifies requirements for common protocols and reusable service components, extracted from existing crowdsourcing applications, in order to enable standardized interfaces supporting crowdsourcing capabilities.
Chapter
Globalisation and digitisation of value creation pose new challenges regarding sustainability and decent work. This chapter discusses possible contributions Human Factors and Ergonomics (HFE) can provide with regard to these challenges. It is our aim to show that although HFE already offers results of extensive research, it is also important to further its development in order to be prepared for dealing with the challenges and opportunities in this field. These range from updating the normative mindset of HFE to broadening its modelling approaches and to developing curricula and cooperation with key actors.
Article
Full-text available
In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high-frequency scrape of 36 pages of search results and analyze it by looking at the rate of disappearance of tasks across the key ways Mechanical Turk allows workers to sort tasks. Second, we present the results of a survey in which we paid workers for self-reported information about how they search for tasks. Our main findings are that, on a large scale, workers sort by which tasks are most recently posted and which have the largest number of tasks available. Furthermore, we find that workers look mostly at the first page of the most recently posted tasks and the first two pages of the tasks with the most available instances, but in both categories the position on the result page is unimportant to workers. We observe that at least some employers try to manipulate the position of their task in the search results to exploit the tendency to search for recently posted tasks. On an individual level, we observed workers searching by almost all the possible categories and looking more than 10 pages deep. For a task we posted to Mechanical Turk, we confirmed that a favorable position in the search results does matter: our task with favorable positioning was completed 30 times faster and for less money than when its position was unfavorable.
Article
Full-text available
Crowdsourcing is a form of "peer production" in which work traditionally performed by an employee is outsourced to an "undefined, generally large group of people in the form of an open call." We present a model of workers supplying labor to paid crowdsourcing projects, and introduce a novel method for estimating a worker's reservation wage, the smallest wage a worker is willing to accept for a task and the key parameter in our labor supply model. We show that the reservation wages of a sample of workers from Amazon's Mechanical Turk (AMT) are approximately log-normally distributed, with a median wage of $1.38/hour. At the median wage, the point elasticity of extensive labor supply is 0.43. We discuss how to use our calibrated model to make predictions in applied work. Two experimental tests of the model show that many workers respond rationally to offered incentives. However, a non-trivial fraction of subjects appear to set earnings targets. These "target earners" consider not just the offered wage, as the rational model predicts, but also their proximity to earnings goals. Interestingly, a number of workers clearly prefer earning total amounts evenly divisible by 5, presumably because these amounts make good targets.
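The distributional claim in this abstract can be illustrated with a minimal sketch: if log-wages are normal, the median wage is exp(mean of log-wages). The simulated sample, the dispersion parameter, and the random seed below are all assumptions chosen so that the median lands near the $1.38/hour figure reported above; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reservation wages (USD/hour): if log(wage) ~ Normal(mu, sigma),
# then the median wage is exp(mu). Parameters are illustrative, chosen so the
# median sits near the $1.38/hour figure reported in the abstract.
mu, sigma = np.log(1.38), 0.9
wages = rng.lognormal(mean=mu, sigma=sigma, size=5000)

# Fit a log-normal by estimating mu and sigma from the log of the sample,
# then read off the implied median.
mu_hat = np.log(wages).mean()
sigma_hat = np.log(wages).std(ddof=1)
median_hat = np.exp(mu_hat)

print(f"estimated median reservation wage: ${median_hat:.2f}/hour")
```

The exp-of-log-mean step is what makes the log-normal convenient here: the median is insensitive to the heavy right tail that would distort a simple sample mean.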
Article
In this paper we describe Rabj, an engine designed to simplify collecting human input. We have used Rabj to collect over 2.3 million human judgments to augment data mining, data entry, and curation tasks at Freebase over the course of a year. We illustrate several successful applications that have used Rabj to collect human judgment. We describe how the architecture and design decisions of Rabj are affected by the constraints of content agnosticity, data freshness, latency, and visibility. We present work aimed at increasing the yield and reliability of human computation efforts. Finally, we discuss empirical observations and lessons learned over a year of operating the service.
Article
"Tools for human computers" is an underexplored design space in human computation research, which has focused on techniques for buyers of human computation rather than sellers. We characterize the sellers in one human computation market, Mechanical Turk, and describe some of the challenges they face. We list several projects developed to approach these problems, and conclude with a list of open questions relevant to sellers, buyers, and researchers.
Conference Paper
This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year.
Conference Paper
Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how workers may treat Turking as a full-time job, which they rely on to make ends meet.
Article
This article confronts the thorny questions that arise in attempting to apply traditional employment and labor law to emerging online “crowdsourcing” labor markets, and offers some provisional regulatory recommendations designed to clarify the employment relationship and protect crowd workers. Crowdsourcing refers to the process of taking tasks that would normally be delegated to an employee and distributing them to a large pool of online workers, the “crowd,” in the form of an open call. The article describes how crowdsourcing works, its advantages and risks, and why particular subsections of the paid crowdsourcing industry expose employees to substandard working conditions without much recourse to the law. Taking Amazon’s “Mechanical Turk” as a case study, it investigates the legal status of the “crowd,” exploring the nature of the employment relationship and the complications that might arise in applying existing work laws. In doing so it draws on employment and labor case law, but also on other areas of internet law in order to illustrate how courts grapple with the migration of regulated activity into unregulated cyberspace. Finally, the article makes a case for regulatory intervention, based on both the vulnerability of crowd workers and the failure of the law to keep up with the technological developments that drive our information economy. To that end, it presents recommendations for legislatures seeking to expand legal protections for crowdsourced employees, suggestions for how courts and administrative agencies can pursue the same objective within our existing legal framework, voluntary “best practices” for firms and venues involved in crowdsourcing, and examples of how crowd workers might begin to effectively organize and advocate on their own behalf.
Article
The paradigm of "human computation" seeks to harness human abilities to solve computational problems or otherwise perform distributed work that is beyond the scope of current AI technologies. One aspect of human computation has become known as "games with a purpose" and seeks to elicit useful computational work through fun, typically multi-player, games. Human computation also encompasses distributed work (or "peer production") systems such as Wikipedia and question-and-answer forums. In this short paper, we survey existing game-theoretic models for various human computation designs, and outline research challenges in advancing a theory that can enable better design.
Ipeirotis, P. Mechanical Turk: the demographics. http://behind-the-enemy-lines.blogspot.com/2008/03/mechanical-turk-demographics.html
Silberman, M. S., et al. Sellers' problems in human computation markets. In Proceedings of HCOMP 2010.
Kochhar, S., et al. The anatomy of a large-scale human computation engine. In Proceedings of HCOMP 2010.
Chilton, L., et al. Task search in a human computation market. In Proceedings of HCOMP 2010.
Jain, S. and Parkes, D. The role of game theory in human computation systems. In Proceedings of HCOMP 2009.