... Importantly, the gig work context presents additional elements to factor in when considering data collection, such as a diversity of task domains (each involving its own unique set of data types), a broad range of labor issues and initiatives where aggregated data can be applied, and several involved stakeholder groups: both platforms and consumers hold power over and collect data from workers, risking worker (and possibly consumer) privacy and agency. A multi-platform social media analysis from Sannon et al. [81] found gig workers to experience intrusive data collection and surveillance not only from platforms but also from customers. Around policy development, Kahn et al. [52] showed how the privacy concerns of impacted communities deviated from what privacy and development experts expected, suggesting a need to (re-)align the preferences of higher-power groups with those of experiential experts. ...
... al. engaged with workers, advocates, regulators, and platform employees to surface each group's priorities around worker rights, but these workshops covered a broad space of policy, service, and technology solutions rather than focusing on collective data-sharing infrastructures [49]. Furthermore, the expanding diversity of gig work domains calls for closer examinations of how regulation can be improved across sectors [48], especially since risks and responsibilities vary widely across platforms [81], and the lack of governance between occupations can differentially impact how workers across sectors experience such risks [6,67,94]. This study extends these works to co-design for worker data exchange and knowledge sharing in a way that meets the policy priorities and data needs of workers (across domains) and policy-related stakeholders. ...
... The combination of pressures from higher-power actors often forces workers to accept jobs despite unsafe or unfair conditions. Corroborating prior work [65,81,94], we observe client harassment as an additional relational factor that puts female workers at higher risk (4.1.3). Unlike other discussed initiatives, policy experts indicated a direct link between Safety and the use of data towards creating worker-centered labor and safety standards. ...
The proliferating adoption of platform-based gig work increasingly raises concerns about worker conditions. Past studies documented how platforms leveraged design to exploit labor, withheld information to generate power asymmetries, and left workers alone to manage logistical overheads as well as social isolation. However, researchers have also called attention to the potential of helping workers overcome such costs via worker-led data-sharing, which can enable collective action and mutual aid among workers while offering advocates, lawmakers, and regulatory bodies insights for improving work conditions. To understand stakeholders' desiderata for a data-sharing system (i.e., its functionality and the policy initiatives it can serve), we interviewed 11 policy domain experts in the U.S. and conducted co-design workshops with 14 active gig workers across four domains. Our results outline policymakers' prioritized initiatives, information needs, and (mis)alignments with workers' concerns and desires around data collectives. We offer design recommendations for data-sharing systems that support worker needs while bringing us closer to legislation that promotes more thriving and equitable gig work futures.
... While there is an abundance of research on platform metrics, only some studies touch on gig workers' self-tracking practices outside of gig platforms. For instance, gig workers have been observed to self-track as a form of protection from platforms and customers [75,99]. Additionally, gig workers self-track income, expenses, and mileage to fulfill tax obligations given their classification as independent contractors, which requires them to calculate and file income and self-employment taxes [83,106]. ...
... Explorations into gig workers' accountabilities also consider how accountability is shifted between the worker and platform due to workers' classification as independent contractors, particularly what the platform will take accountability for and what it leaves to the worker. Researchers have observed how gig workers are accountable for maintaining separate records of their activity that are also automatically and algorithmically evaluated by the platform [75,99]. As a result, they must manage their standing with "reputation auditing," where they rectify platform records to more accurately reflect their experiences [75]. ...
... As a result, they must manage their standing with "reputation auditing" where they rectify platform records to more accurately reflect their experiences [75]. Workers do this by maintaining records of their activity which they use to support claims, thus redirecting the accountability they previously held to the platform or customer [98,99]. These reports signify the precarious relationship between contractor and employer as they serve as the focal point of accountability negotiation between the two parties. ...
Tracking is inherent in and central to the gig economy. Platforms track gig workers' performance through metrics such as acceptance rate and punctuality, while gig workers themselves engage in self-tracking. Although prior research has extensively examined how gig platforms track workers through metrics -- with some studies briefly acknowledging the phenomenon of self-tracking among workers -- there is a dearth of studies that explore how and why gig workers track themselves. To address this, we conducted 25 semi-structured interviews, revealing how gig workers self-track to manage accountabilities to themselves and external entities across three identities: the holistic self, the entrepreneurial self, and the platformized self. We connect our findings to neoliberalism, through which we contextualize gig workers' self-accountability and the invisible labor of self-tracking. We further discuss how self-tracking mitigates information and power asymmetries in gig work and offer design implications to support gig workers' multi-dimensional self-tracking.
... The gig economy promotes a prevalence of crowd work and short-term contracts, implying an urgent need for researchers to address the inequity, bias, and privacy problems caused by the absence of worker-centered deployment and the abuse of digital technologies [28,42,78,93,95]. Platform workers' acts to (re)gain autonomy can lead platform designers to redesign the algorithms [27]. ...
... Leveraging digital technologies, algorithms, platform rules, and penalties can create quality services and maximize the efficiency of the delivery system [83]. Meanwhile, more research has investigated harms, such as algorithmic absurdity [75], surveillance [64,78], and over-exploitation of workers [58]. We emphasized food couriers' weak position in voicing concerns and advocating for themselves when facing rules and penalties. ...
... Food couriers lack the support to voice their "mistakes," with platforms favoring communication with customers and product suppliers. Previous research investigated similar tension from the perspective of privacy and indicated that food couriers might employ video recordings to protect their rights [78]. In our survey, we argue that the food couriers' voice is marginalized by the huge volume of orders and the fast-changing nature of the work. ...
The gig economy and digital labor platforms, such as food delivery, have become essential while also troubling the current socioeconomic landscape. Delivery platforms promise entry-level work, flexibility, and other benefits. However, researchers remain divided on whether these platforms benefit workers and society at large. This study aims to shed light on the comprehensive challenges in food delivery work, uncovering gig worker-centered design opportunities to improve the lives of food couriers. Adopting an exploratory research process, we analyzed 19 ride-along food delivery videos and performed nine semi-structured interviews with food couriers in Portugal. Our findings illustrate the complexity and challenging nature of delivery work due to the entangled physical, digital, social, natural, and human factors. We capture and discuss gig worker-centered opportunities that surfaced from work challenges, echoing food couriers' needs around work support, justice, inclusion, and their vision for work.
... Last, workers feel they have low privacy when working under AC (e.g., Wiener et al., 2021). The widespread surveillance and continuous collection of data (e.g., GPS, customer reviews) lead to the fear that AC can seriously threaten their privacy (Sannon et al., 2022). Overall, the low compatibility between a worker's personality and the experienced AC is a trigger for workers to mitigate the tight control that platforms exert over them (Curchod et al., 2020;de la Vega et al., 2021). ...
... Therefore, drivers cannot make an accurate decision on how profitable that particular ride will be. Third, workers do not know what data is collected about them and which data is used for AC decisions, making it difficult for them to assess, for example, how their privacy may be affected by AC (Sannon et al., 2022). In sum, the highly opaque working environment leads to great uncertainty for workers regarding their working conditions, with missing information about their earnings and job security (Pregenzer, Wieser, et al., 2021). ...
... Workers also manipulate data to protect their privacy. In this regard, workers enter fake personal information or use VPNs to make it difficult for their platform provider to track their location and related information (Sannon et al., 2022). Another approach to influence data collection is to prevent data from being collected, especially those data that would negatively impact the worker. ...
Online labor platforms (OLPs) such as Uber or Upwork heavily rely on algorithms instead of human managers to control workers’ behavior. While algorithmic control (AC) allows platform providers to control their workers efficiently, it is often perceived by workers as a tighter control (compared to human-based control) which increases their motivation to resist. Especially covert resistance (i.e., workers’ hard-to-observe oppositional actions) provides essential insights into how workers deal with AC that affect platforms’ longevity. In this study, we conducted a systematic literature review to develop a theoretical framework showing how and why workers perform covert resistance against AC. Further, our analysis reveals the enabling role of sensemaking for diverse forms of covert resistance. Overall, our study expands the literature on AC by shedding light on the formation of workers’ covert resistance. Therefore, we offer platform providers and policymakers crucial insights to create fairer working environments for workers under AC.
... First, if employers do not trust their employees, they may view monitoring as a necessary tool to enforce compliance or detect misconduct. This can lead to more intrusive monitoring practices that may be perceived as threatening or invasive by employees (Sannon et al., 2022). For employees, perceptions of excessive or unnecessary monitoring can trigger increased privacy concern and reactance, which can result in poorer performance and a loss of trust in the employer (Brown et al., 2015;Jensen & Raver, 2012;Kalischko & Riedl, 2023;Kayas et al., 2019;Stanton & Weiss, 2000). ...
... 2023). Still others adopt risk mitigation strategies, including self-protective surveillance behaviors (such as video recording themselves with customers) in response to concerns associated with various forms of workplace surveillance (Sannon et al., 2022). ...
... Deactivation without recourse results in financial harm [97,119]. Our large-scale analysis of over one million Reddit comments, combined with interview data, reveals the persistence of these harms at scale, surpassing the scope of previous small-scale studies [66,91,95,118,128]. While the harms of the rideshare industry have been well-documented in qualitative work, few studies have explored what specifically should be changed within platform interfaces to mitigate harms and how policy can be used as a tool to enforce changes to platform interfaces. ...
... In the context of gig economy research, studies have delved into rideshare workers' concerns about privacy, scams, and support systems [66,91,95,118,128]. However, these studies primarily relied on manual coding of small-scale datasets (the largest we found analyzed about 2.6K posts [118]). ...
Rideshare platforms exert significant control over workers through algorithmic systems that can result in financial, emotional, and physical harm. What steps can platforms, designers, and practitioners take to mitigate these negative impacts and meet worker needs? In this paper, through a novel mixed-methods study combining an LLM-based analysis of over 1 million comments posted to online platform worker communities with semi-structured interviews of workers, we thickly characterize transparency-related harms, mitigation strategies, and worker needs while validating and contextualizing our findings within the broader worker community. Our findings expose a transparency gap between existing platform designs and the information drivers need, particularly concerning promotions, fares, routes, and task allocation. Our analysis suggests that rideshare workers need key pieces of information, which we refer to as indicators, to make informed work decisions. These indicators include details about rides, driver statistics, algorithmic implementation details, and platform policy information. We argue that instead of relying on platforms to include such information in their designs, new regulations that require platforms to publish public transparency reports may be a more effective solution to improve worker well-being. We offer recommendations for implementing such a policy.
... While surveillance videos and facial recognition technology can be framed as privacy invasion to all data subjects involved, it creates disproportionate harm to immigrants when police use such systems for targeting and deportation [192]. To move toward a safer, more accessible, and more equitable digital future, we must consider and center the needs of marginalized and otherwise vulnerable communities in developing privacy tools and advice on protective strategies [334,461]. ...
... While the question being optional could contribute to the low response rate, we believed that it was a more ethical approach, so that participants were not put in a position to unwillingly sacrifice their data for monetary gain. The different findings between our study and Huh et al. [231] could also imply shifting norms among crowdworkers as they become more attentive and protective of their privacy [435]. ...
As much as consumers express desires to safeguard their online privacy, they often fail to do so effectively in reality. In my dissertation, I combine qualitative, quantitative, and design methods to uncover the challenges consumers face in adopting online privacy behaviors, then develop and evaluate different context-specific approaches to encouraging adoption. By examining consumer reactions to data breaches, I find how consumers' assessment of risks and decisions to take action could be subject to bounded rationality and potential biases. My analysis of data breach notifications provides another lens for interpreting inaction: unclear risk communications and overwhelming presentations of recommended actions in these notifications introduce more barriers to action. I then turn to investigate a broader set of privacy, security, and identity theft protection practices; the findings further illuminate individual differences in adoption and how impractical advice could lead to practice abandonment. Leveraging these insights, I investigate how to help consumers adopt online privacy-protective behaviors in three studies: (1) a user-centered design process that identified icons to help consumers better find and exercise privacy controls, (2) a qualitative study with multiple stakeholders to reimagine computer security customer support for serving survivors of intimate partner violence, and (3) a longitudinal experiment to evaluate nudges that encourage consumers to change passwords after data breaches, taking inspiration from the Protection Motivation Theory. These three studies demonstrate how developing support solutions for consumers requires varying approaches to account for the specific context and population studied. My dissertation further suggests the importance of critically reflecting on when and how to encourage adoption. 
While inaction could be misguided sometimes, it could also result from rational cost-benefit deliberations or resignation in the face of practical constraints.
... This flexibility, coupled with the "flexible employment" (灵活就业, ling huo jiu ye) foregrounded by the Chinese government in 2023, has shaped content production for monetization on platforms as passion-driven work. This is notably distinct from traditional gig work, such as that of food delivery workers, ridesharing drivers, or online pieceworkers, which is predicated on labor outsourcing, work surveillance, bodily exploitation, and the marginalization of disadvantaged classes [41,54,83,93,97]. ...
This paper critically examines flexible content creation conducted by Key Opinion Consumers (KOCs) on a prominent social media and e-commerce platform in China, Xiaohongshu (RED). Drawing on nine-month ethnographic work conducted online, we find that the production of the KOC role on RED is predicated on the interactions and negotiations among multiple stakeholders -- content creators, marketers, consumer brands (corporations), and the platform. KOCs are instrumental in RED influencer marketing tactics and amplify the mundane and daily life content popular on the platform. They navigate the dynamics in the triangulated relations with other stakeholders in order to secure economic opportunities for producing advertorial content, and yet, the labor involved in producing such content is deliberately obscured to make it appear as spontaneous, ordinary user posts for the sake of marketing campaigns. Meanwhile, the commercial value of their work is often underestimated and overshadowed in corporate paperwork, platform technological mechanisms, and business models, resulting in and reinforcing inadequate recognition and compensation of KOCs. We propose the concept of "informal labor" to offer a new lens to understand content creation labor that is indispensable yet unrecognized by the social media industry. We advocate for a contextualized and nuanced examination of how labor is valued and compensated and urge for better protections and working conditions for informal laborers like KOCs.
... In resistance to surveillance and hegemonic data practices of platforms [3,39,42], workers increasingly engage in self-tracking through individual means [27] or third-party tools. In the absence of sufficient policy and regulations for responsible platform practices, researchers and advocates increasingly turn to data collectives and tools as a method for advancing regulation [10,30], restoring worker power [20,29,44,52], and holding platforms accountable to more ethical, fair, and community-centered data practices [35]. ...
Platform-based laborers face unprecedented challenges and working conditions that result from algorithmic opacity, insufficient data transparency, and unclear policies and regulations. The CSCW and HCI communities increasingly turn to worker data collectives as a means to advance related policy and regulation, hold platforms accountable for data transparency and disclosure, and empower the collective worker voice. However, fundamental questions remain for designing, governing, and sustaining such data infrastructures. In this workshop, we leverage frameworks such as data feminism to design sustainable and power-aware data collectives that tackle challenges present in various types of online labor platforms (e.g., ridesharing, freelancing, crowdwork, carework). While data collectives aim to support worker collectives and complement relevant policy initiatives, the goal of this workshop is to encourage their designers to consider topics of governance, privacy, trust, and transparency. In this one-day session, we convene research and advocacy community members to reflect on critical platform work issues (e.g., worker surveillance, discrimination, wage theft, insufficient platform accountability) as well as to collaborate on co-designing data collectives that ethically and equitably address these concerns by supporting worker collectivism and informing policy development.
... While the question being optional could contribute to the low response rate, we believed that it was a more ethical approach, so that participants were not put in a position to unwillingly sacrifice their data for monetary gain. The different findings between our study and Huh et al.'s study [45] could also imply shifting norms among crowdworkers as they become more attentive and protective of their privacy [87]. ...
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords. Our online experiment (n=1,386) compared the effectiveness of a threat appeal (highlighting negative consequences of breached passwords) and a coping appeal (providing instructions on how to change the breached password) in a 2x2 factorial design. Compared to the control condition, participants receiving the threat appeal were more likely to intend to change their passwords, and participants receiving both appeals were more likely to end up changing their passwords; both comparisons have a small effect size. Participants' password change behaviors are further associated with other factors such as their security attitudes (SA-6) and time passed since the breach, suggesting that PMT-based nudges are useful but insufficient to fully motivate users to change their passwords. Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
... Risk has become the basis of decision-making, communication, and evaluation of professionals [4]. Interdisciplinary research in the fields of human-computer interaction, crisis informatics, and digital civics is fundamentally changing how risk is handled by occupations tasked with managing it in various ways [8,22,20]. The results can inform the design of work practices or technologies tailored to this occupational group, as the study provides implications for domestic workers' decision-making processes while navigating risks in their line of work. ...
While many occupations turned to remote work during the COVID-19 pandemic, domestic work by definition requires workers to enter other people's households, and they often work in close proximity to their employers. With domestic workers proactively handling COVID-19 risks as part of their already precarious jobs, there is a need for a conceptual understanding of risk management to aid this occupational group during a public health crisis. Our findings emerge from a preliminary qualitative study interviewing occupational groups who adopted risk work practices during the pandemic, providing insight into their risk perceptions and practices. In this paper, we focus on paid domestic workers recruited to investigate how they engaged in situated 'risk calculations' to assess different risks present at work. This paper invites an initial discussion on risk practices, communication, and policy to support domestic workers during crises.
... Data privacy awareness among digital users has increased worldwide over recent years. As existing studies have shown (e.g., Xia et al., 2017;Sannon et al., 2022), customers and workers also have high expectations of data privacy and data security in crowdsourcing businesses. Especially among crowdworkers, privacy concerns and fear of surveillance are widespread. ...
This chapter investigates how crowdsourcing platforms handle matters of data protection and analyzes information from 416 privacy statements. We find that German platforms mostly base their data processing solely on the GDPR, while U.S. platforms refer to numerous international, European, and state-level legal sources on data protection. The Chinese crowdsourcing platforms are usually not open to foreigners and do not refer to the GDPR. The privacy statements provide evidence that some U.S. platforms are specific in the sense that they explicitly state which data are not processed. When we compare the privacy practices of crowdsourcing platforms with the German fintech sector, it is noticeable that pseudonymization and anonymization are, at least in Germany, used much more frequently on crowdsourcing platforms. Most privacy statements did not exhaustively clarify what personal data are shared, even though they mentioned the sharing of data with third parties.
... Data privacy awareness among digital users has increased worldwide over recent years. As existing studies have shown (e.g., Xia et al., 2017;Sannon et al., 2022), customers and workers also have high expectations of data privacy and data security in crowdsourcing businesses. Especially among crowdworkers, privacy concerns and fear of surveillance are widespread. ...
This chapter examines data protection laws in Germany, the United States, and China. We describe the most important legal sources and principles of data protection and emphasize the rights of data subjects, with particular attention to personal and sensitive data. The legal frameworks for data protection on crowdsourcing platforms in the three countries show significant differences, but also some similarities. In the United States no federal omnibus regulation on the protection of personal data exists so far. The state of California recently enacted a consumer protection law similar to the GDPR. China started developing its privacy legislation after Germany and the United States, in some parts again similar to the GDPR. A characteristic of the Chinese approach is the different protection regime of personal rights with respect to private actors and to the state government. While privacy rights have expanded in the private sector, threats to privacy posed by state actors have received little attention in Chinese jurisprudence.
... For instance, a system for collective actions can expose and breach the privacy of protesting workers, possibly causing losses of earning opportunities. Furthermore, while our workers called for more personalized accommodations, such arrangements inevitably trade off with privacy [46,81,104], potentially requiring platforms to access and monitor working habits and other behaviors. Implementations of personalization features should take care to not cross the line between customization and invasive surveillance. ...
... For instance, a system for collective actions can expose and breach the privacy of protesting workers, possibly causing losses of earning opportunities. Furthermore, while our workers called for more personalized accommodations, such arrangements inevitably trade off with privacy [44,77,99], potentially requiring platforms to access and monitor working habits and other behaviors. Implementations of personalization features should take care to not cross the line between customization and invasive surveillance. ...
Gig workers, and the products and services they create, play an increasingly ubiquitous role in our daily lives. But despite growing evidence suggesting that worker well-being on gig economy platforms has become a significant societal problem, few studies have investigated possible solutions. We take a stride in this direction by engaging workers, platform employees, and local regulators in a series of speed dating workshops using storyboards based on real-life situations to rapidly elicit stakeholder preferences for addressing financial, physical, and social issues related to worker well-being. Our results reveal that existing public and platformic infrastructures fail to provide workers with the resources needed to perform gigs, surfacing a need for multi-platform collaborations, technological interventions and advancements, as well as changes in regulations, labor laws, and the public's perception of gig workers, among others. Drawing from multi-stakeholder findings, we discuss these implications for technology, policy, and service as well as avenues for collaboration.
... The opportunities for more equitable worker outcomes must also be balanced with the known challenges of online freelancing, including the precarious nature of this work [18,27,28,65,76,79] and the lack of regulation for these non-standard arrangements [40,69]. In the U.S., online freelancers are classified as independent contractors, which means workers are exempt from the benefits afforded to traditional employees, such as access to health insurance, paid leave, and retirement [27,79]. ...
... A controversial feature of online platforms is how they enforce managerial control and oversight through algorithms, creating challenges for freelancers. Examples of this 'platformic management' [45] include evaluating freelancers' performance through ranking systems (e.g., reflecting aggregated client reviews) [61], constraining client-freelancer relationships to the platform environment [47], and even monitoring work processes (e.g., quantifying keystrokes and active time on the platform) [69]. Researchers have examined the challenges resulting from platformic management, for instance, working long, odd hours to earn decent wages [84,85], racial and gender disparities in price setting and algorithmic evaluations [35,41,61], and asymmetric power relationships with clients [4]. ...
Freelancing platforms, such as Upwork and Fiverr, have become a viable source of work for millions of freelancers worldwide. However, these gig economy systems are not typically designed in ways that centre workers' preferences and wellbeing. In this paper, we describe the development and evaluation of 'Freelance Grow,' a design fiction portraying a freelancing platform that prioritises freelancers' professional development and peer support. The design fiction was informed by a systematic literature assessment, using recommendations from twenty-six sources for improving online freelancers' experiences. We then used the design fiction in focus groups with 23 online freelancers to investigate their views on the ideas suggested in our design fiction. Based upon a thematic analysis of the focus group transcripts, we present three opportunities and considerations for designing systems that further enable freelancers' work autonomy, entrepreneurial development, and peer support. Ultimately, we contribute an expanded understanding of design approaches to support online freelancers in the gig economy.
People are increasingly introduced to each other offline thanks to online platforms that make algorithmically-mediated introductions between their users. Such platforms include dating apps (e.g., Tinder) and in-person gig work websites (e.g., TaskRabbit, Care.com). Protecting the users of these online-offline systems requires answering calls from prior work to consider 'post-digital' orientations of safety: shifting from traditional technological security thinking to consider algorithm-driven consequences that emerge throughout online and offline contexts rather than solely acknowledging online threats. To support post-digital safety in platforms that make algorithmically-mediated offline introductions (AMOIs), we apply a mixed-methods approach to identify the core harms that AMOI users experience, the protective safety behaviors they employ, and the prevalence of those behaviors. First, we systematically review existing work (n=93), synthesizing the harms that threaten AMOIs and the protective behaviors people employ to combat these harms. Second, we validate prior work and fill gaps left by primarily qualitative inquiry through a survey of respondents' definitions of safety in AMOI and the prevalence and implementation of their protective behaviors. We focus on two exemplar populations who engage in AMOIs: online daters (n=476) and in-person gig workers (n=451). We draw on our systematization and prevalence data to identify several directions for designers and researchers to reimagine defensive tools to support safety in AMOIs.
The entry of on-demand ridesourcing digital labour platforms (OR-DLPs) in Kolkata, India, restructured the local taxi-cab service industry's economic geography and spatial practices. Notably, they eroded the significance of the spatial fixity of taxi stands operated by traditional trade unions, enmeshed in local society's partisan political dynamics. Therefore, OR-DLPs triggered a reconfiguration of the socio-spatial and political practices around the taxi-cab industry in the city. Globally, traditional trade unions have struggled to organise workers in informal work arrangements and DLPs. However, in Kolkata, the Kolkata Ola-Uber App-Cab Operator and Drivers Union has proved to be successful. They established hybrid and networked unionism through technological affordances, placing worker-organisers rather than external organisers at the centre of their organisational structure. Furthermore, they undertook tech-mediated resistance against the OR-DLPs, local bureaucracy (e.g. the police) and the state. We explore this context to examine the impact of OR-DLPs on labour geography, worker-organising and resistance practices, along with the revitalisation strategies of traditional trade unions in response. From a non-Western context, we expand the frame for CSCW and HCI scholars' ongoing efforts to design worker-centric technologies for resistance.
Digital upskilling and remote work are frequently presented as pathways for displaced communities who face political and economic crises as well as employment barriers. The viability and stability of available energy infrastructures for these communities is critical to the long-term success of these proposed job trajectories. In this paper, we investigate how 17 Syrian refugees living in Lebanon navigate their local unstable energy infrastructures to conduct digital gig work and receive digital training. We found that digital gig work and training are workarounds to the political and social inaccessibility of local labor markets for refugees and that participants rely on a series of material energy strategies to optimize their electricity access. We argue that to support refugees conducting digital gig work and training, we must recognize and account for ecological, social, and technological limitations and frailties in the development of technological solutions and draw our attention to how infrastructures are perpetually undergoing processes of breakdown, repair, and renewal. We argue that this attention to ongoing transformation and renewal creates new opportunities for productive and creative reconfiguration as well as modes through which CSCW may intervene in unstable energy contexts. From this perspective, we emphasize the importance of better resourcing displaced communities with information regarding energy access and supporting them in establishing and strengthening their own visions for alternative energy systems and employment pathways.
Delivery riders belong to the crowd workers of the gig economy and represent a vulnerable group, yet their privacy concerns and protection have not been duly investigated. To address this gap, we conducted a survey examining crowdsourced delivery riders' privacy concerns in China. It was found that these riders had often experienced privacy breaches and expressed major privacy concerns, such as the leakage of facial recognition information and contact details. Moreover, they lacked sufficient awareness of crowdwork platforms' surveillance and knowledge of the Personal Information Protection Law (PIPL) in China. It is suggested that PIPL target vulnerable groups in addition to the general public, and that privacy research in Human‐Computer Interaction (HCI) in China should focus on vulnerable populations like crowdsourced delivery riders.
We draw on the Protection Motivation Theory (PMT) to design interventions that encourage users to change breached passwords. Our online experiment ( n =1,386) compared the effectiveness of a threat appeal (highlighting the negative consequences after passwords were breached) and a coping appeal (providing instructions on changing the breached password) in a 2×2 factorial design. Compared to the control condition, participants receiving the threat appeal were more likely to intend to change their passwords, and participants receiving both appeals were more likely to end up changing their passwords. Participants’ password change behaviors are further associated with other factors, such as their security attitudes (SA-6) and time passed since the breach, suggesting that PMT-based interventions are useful but insufficient to fully motivate users to change their passwords. Our study contributes to PMT’s application in security research and provides concrete design implications for improving compromised credential notifications.
The increasing platformization of healthcare services in India, in the wake of COVID-19, has resulted in huge demand for home phlebotomy. However, there is a limited understanding of the impact of digitization on home phlebotomists' workflows. To address this gap, we conducted 26 semi-structured interviews with home phlebotomists, riders, and patients, supplemented by observations of the entire workday of 3 phlebotomists. We found that home phlebotomists' technology-mediated workflows are organized in ways that enable them to build strong support networks of human infrastructure, helping them negotiate and optimize their daily workflows. Moreover, while the digitization of their workflows resulted in continued surveillance, it empowered them to justify their decisions and present evidence of work when needed. Based on our findings, we discuss implications for equitable platform work and the future of platformized health and conclude with design recommendations for telehealth platforms offering home phlebotomy services.
US college sports teams are increasingly adopting personal data technologies, such as wearable sensors, with a goal of improving individual and team performance as well as individual safety. These tools can also reinforce the power that coaches hold over student-athletes and compromise student-athletes' needs for privacy and agency. To investigate preferred, and anti-preferred, approaches for navigating this complex sociotechnical challenge, we used a speculative design approach in which student-athletes and technology design students developed three videos that portray tensions between student-athletes and coaches around the use of sports tracking technologies. We then shared these videos with 15 participants including student-athletes, coaches, and designers. Drawing on the perspectives of student-athletes, team staff, and designers embedded in the videos and expressed in reaction to the videos, we describe preferences for boundaries on tracking and sharing, for how tracking data represent athletes, and for data practices. We also propose design requirements and recommendations for use to better align tracking technologies with these preferences.
The gig economy and gig work have grown quickly in recent years and have drawn much attention from researchers in different fields. Because the platform-mediated gig economy is a relatively new phenomenon, studies have produced a range of interesting findings; of interest here are the socio‐technical issues that this work has surfaced. This systematic literature review (SLR) provides a snapshot of a range of socio‐technical issues raised in the last 12 years of literature focused on the platform-mediated gig economy. Based on a sample of 515 papers gathered from nine databases in multiple disciplines, 132 were coded that specifically studied the gig economy, gig work, and gig workers. Three main socio‐technical themes were identified: (1) the digital workplace, which includes information infrastructure and digital labor that are related to the nature of gig work and the user agency; (2) algorithmic management, which includes platform governance, performance management, information asymmetry, power asymmetry, and system manipulation, relying on a diverse set of technological tools including algorithms and big data analytics; (3) ethical design, as a relevant value set that gig workers expect from the platform, which includes trust, fairness, equality, privacy, and transparency. A social informatics perspective is used to rethink the relationship between gig workers and platforms, extract the socio‐technical issues noted in prior research, and discuss the underexplored aspects of the platform-mediated gig economy. The results draw attention to understudied yet critically important socio‐technical issues in the gig economy that suggest short‐ and long‐term opportunities for future research directions.
The digitalization of many domains worldwide, technological innovation, and the rapid development of the internet have radically changed how people live and work. The world's shift from a global to a digital dimension has given people the opportunity to access the information they want easily, regardless of place and time. Moreover, the 2008 European debt crisis and the Covid-19 pandemic that subsequently spread rapidly across the world brought to the surface the idea that people could carry out their work while staying at home. The idea of remote work appealed to people and compelled firms and governments to act quickly in this area. Against this background, this study examines the impact of the growing gig economy in Turkey on the labor market. As a sample, we selected "İşin Olsun", a location-based application of Kariyer.net, Turkey's largest online recruitment platform, which aims to contribute to blue-collar employment. The study suggests that İşin Olsun contributes to the gig economy by matching nearly 10 million job seekers with more than 78 thousand full-time, part-time, or flexible (piecework) job postings in Turkey. Furthermore, it is estimated that the application's distinctive features could open up more hiring for temporary jobs and increase employment.
Employees work in increasingly digital environments that enable advanced analytics. Yet, they lack oversight over the systems that process their data. That means that potential analysis errors or hidden biases are hard to uncover. Recent data protection legislation tries to tackle these issues, but it is inadequate. It does not prevent data misusage while at the same time stifling sensible use cases for data.
We think the conflict between data protection and increasingly data-driven systems should be solved differently. When access to an employee's data is given, all usages should be made transparent to them, according to the concept of inverse transparency. This allows individuals to benefit from sensible data usage while addressing the potentially harmful consequences of data misusage. To accomplish this, we propose a new design approach for workforce analytics software we refer to as inverse transparency by design.
To understand the developer and user perspectives on the proposal, we conduct two exploratory studies with students. First, we let small teams of developers implement analytics tools with inverse transparency by design to uncover how they judge the approach and how it materializes in their developed tools. We find that architectural changes are made without inhibiting core functionality. The developers consider our approach valuable and technically feasible. Second, we conduct a user study over three months to let participants experience the provided inverse transparency and reflect on their experience. The study models a software development workplace where most work processes are already digital. Participants perceive the transparency as beneficial and feel empowered by it. They unanimously agree that it would be an improvement for the workplace. We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
Online freelancing platforms, such as Upwork, hold great promise in enabling flexible work opportunities where freelancers can combine their work with other life responsibilities, hereafter work-life. However, prior research suggests that platform features and the self-managing demands of freelance work can jeopardise this apparent flexibility. In this paper, we report findings from a qualitative study combining a 14-day diary study and semi-structured interviews with 15 Upwork freelancers. We explored online freelancers' work practices, challenges, and the impact of platform features on their everyday lives. Our qualitative data suggest that platform features and individual context shape online freelancers' work-life practices. Freelancers develop strategies to mitigate platforms' constraints and balance their individual preferences and responsibilities. Further, our findings illustrate how platform features challenge freelancers' availability expectations, work autonomy, and work detachment. This paper contributes an empirical understanding of the factors influencing online freelancers' work-life practices by drawing upon Wanda J. Orlikowski's Structuration Model of Technology. This theoretical lens renders the interplay of freelancers, platforms, and instituted norms of freelance work.
Based on a comprehensive set of studies collected via five academic databases, this scoping review examines how inequality and discrimination have been studied in the context of paid online labor. We identify three approaches in the literature that aim to (1) identify participation patterns in (national) survey data, (2) examine background characteristics of online contractors based on survey or digital trace data, and (3) reveal social biases in the hiring process using experimental data. Building on Shaw and Hargittai’s pipeline of participation, we present a multi-stage model of engagement in online labor. When we map the studies across the stages, it becomes clear that the literature focuses on later stages (i.e. having been hired and received payment). Based on this analysis, future research should examine barriers to participation in earlier stages. Furthermore, we advocate for research that examines participation across multiple pipeline stages as well as for analysis of platform-level biases.
The goal of this dissertation is to create a model for evaluating the information transparency of privacy policies, starting from the assumption that effective transparency mechanisms should reduce the information asymmetry between organizations that collect and process personal data and the data subjects themselves. To this end, an analytical matrix was constructed to analyze the content of privacy policy texts and assess the fulfillment of defined requirements along two theoretically derived dimensions of information transparency, provisionally termed visibility and inferability, operationalized as units of measurement of degrees of information symmetry and used as correlated indicators of information transparency. Each dimension is further operationalized through a set of indicators and sub-indicators representing the requirements a privacy policy should fulfill to achieve information symmetry for data subjects. The requirements were assigned weights according to the theoretically assumed importance of each sub-indicator in defining its indicator, summing to a maximum of 1 as a "measure" of complete fulfillment. Applying factor analysis to data collected from a sample of 152 public- and private-sector health institutions in the Republic of Croatia yielded a valid conceptual model showing the influence of individual factors on information transparency, i.e. on reducing information asymmetry between the aforementioned stakeholders; further statistical analyses of the sample were also derived in the process.
According to the resulting evaluation model, the effectiveness of transparency mechanisms is influenced more strongly by the visibility factors, defined by the determinants of layering, updating, and informativeness, than by the inferability factors, defined by the determinants of accessibility, meaningfulness, and comprehensibility of privacy policies. Although the two sets of factors do not correlate with each other, they can be used to determine a privacy policy's degree of information asymmetry with the aim of reducing it, drawing on the validity analysis performed during model validation. By adjusting individual determinants based on their deviations from reference values derived from the average of the examined institutions, the effectiveness of information transparency mechanisms can be managed.
Crowdsourcing markets provide workers with a centralized place to find paid work. What may not be obvious at first glance is that, in addition to the work they do for pay, crowd workers also have to shoulder a variety of unpaid invisible labor in these markets, which ultimately reduces workers' hourly wages. Invisible labor includes finding good tasks, messaging requesters, or managing payments. However, we currently know little about how much time crowd workers actually spend on invisible labor or how much it costs them economically. To ensure a fair and equitable future for crowd work, we need to be certain that workers are being paid fairly for all of the work they do. In this paper, we conduct a field study to quantify the invisible labor in crowd work. We build a plugin to record the amount of time that 100 workers on Amazon Mechanical Turk dedicate to invisible labor while completing 40,903 tasks. Accounting for the time workers spent on invisible labor, workers' median hourly wage dropped to $2.83. We found that the invisible labor differentially impacts workers depending on their skill level and workers' demographics. The invisible labor category that took the most time and that was also the most common revolved around workers having to manage their payments. The second most time-consuming invisible labor category involved hyper-vigilance, where workers vigilantly watched over requesters' profiles for newly posted work or vigilantly searched for labor. We hope that through our paper, the invisible labor in crowdsourcing becomes more visible, and our results help to reveal the larger implications of the continuing invisibility of labor in crowdsourcing.
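The wage drop described in this abstract follows from including unpaid time in the denominator when computing an hourly rate. A minimal sketch of that arithmetic, with illustrative numbers rather than the study's actual data:

```python
def effective_hourly_wage(earnings_usd: float, paid_hours: float,
                          invisible_hours: float) -> float:
    """Hourly wage once unpaid 'invisible' labor time is counted."""
    return earnings_usd / (paid_hours + invisible_hours)

# Illustrative only: ignoring invisible labor overstates the rate.
nominal = effective_hourly_wage(30.0, 8.0, 0.0)   # task time alone
actual = effective_hourly_wage(30.0, 8.0, 2.6)    # invisible hours included
```

Here `earnings_usd` and the hour figures are hypothetical inputs; the point is simply that the same earnings spread over paid plus invisible hours always yield a lower effective wage.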
This article offers a systematic analysis of 727 manuscripts that used Reddit as a data source, published between 2010 and 2020. Our analysis reveals the increasing growth in use of Reddit as a data source, the range of disciplines this research is occurring in, how researchers are getting access to Reddit data, the characteristics of the datasets researchers are using, the subreddits and topics being studied, the kinds of analysis and methods researchers are engaging in, and the emerging ethical questions of research in this space. We discuss how researchers need to consider the impact of Reddit’s algorithms, affordances, and generalizability of the scientific knowledge produced using Reddit data, as well as the potential ethical dimensions of research that draws data from subreddits with potentially sensitive populations.
The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview of the gig economy, its scope and scale, we map the key ethical problems that it gives rise to, as they are discussed in the relevant literature. We map them onto three categories: the new organisation of work (what is done), the new nature of work (how it is done), and the new status of workers (who does it). We then evaluate a recent initiative from the EU that seeks to address the challenges of the gig economy. The 2019 report of the European High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets is a positive step in the right direction. However, we argue that ethical concerns relating to algorithmic systems as mechanisms of control, and the discrimination, exclusion and disconnectedness faced by gig workers require further deliberation and policy response. A brief conclusion completes the analysis. The appendix presents the methodology underpinning our literature review.
Algorithmic management is used to govern digital work platforms such as Upwork or Fiverr. However, algorithmic decision-making is often non-transparent and rapidly evolving, forcing workers to constantly adapt their behavior. Extant research focuses on how workers experience algorithmic management, while often disregarding the agency that workers exert in dealing with algorithmic management. Following a sociomateriality perspective, we investigate the practices that workers develop to comply with (assumed) mechanisms of algorithmic management on digital work platforms. Based on a systematic content analysis of 12,294 scraped comments from an online community of digital freelancers, we show how workers adopt direct and indirect "anticipatory compliance practices", such as undervaluing their own work, staying under the radar, curtailing their outreach to clients and keeping emotions in check, in order to ensure their continued participation on the platform, which takes on the role of a shadow employer. Our study contributes to research on algorithmic management by (1) showing how workers adopt practices aimed at "pacifying" the platform algorithm; (2) outlining how workers engage in extra work; (3) showing how workers co-construct the power of algorithms through their anticipatory compliance practices.
At the University of Toronto, we’re embarking on a bold new initiative to bring together these four disciplines: law, business, engineering, and medicine, through what we call “sousveillant systems”—grassroots systems of “bottom up” facilitation of cross-, trans-, inter-, meta-, and anti-disciplinarity, or, more importantly, cross-, trans-, and inter-passionary efforts. Passion is a better master than discipline (to paraphrase Albert Einstein’s “Love is a better master than duty”). Our aim is not to eliminate “big science,” “big data,” and “big watching” (surveillance), but to complement these things with a balancing force. There will still be “ladder climbers,” but we aim to balance these entities and individuals with those who embody the “integrity of authenticity” and to provide a complete picture that is otherwise a half-truth when only the “big” end is present. This generalizes the notion of “open source,” where each instance of a system (e.g., computer operating system) contains or can contain its own seeds (e.g., source code). Sousveillant systems are an alternative to the otherwise sterile world of closed-source, specialist silos that are not auditable by end-users (i.e., are only auditable by authorities from “above”).
Workplace surveillance is traditionally conceived of as a dyadic process, with an observer and an observee. In this paper, I discuss the implications of an emerging form of workplace surveillance: surveillance with an algorithmic, as opposed to human, observer. Situated within the on-demand food-delivery context, I draw upon Henri Lefebvre’s spatial triad to provide in-depth conceptual examination of how platforms rely on conceived space, namely the virtual reality generated by data capture, while neglecting perceived and lived space in the form of the material embodied reality of workers. This paper offers a two-fold contribution. First, it applies Henri Lefebvre’s spatial triad to the techno-centric digital cartography used by platform-mediated organisations, assessing spatial power dynamics and opportunities for resistance. Second, this paper advances organisational research into workplace surveillance in situations where the observer and decision-maker can be a non-human agent.
Many workers have been drawn to the gig economy by the promise of flexible, autonomous work, but scholars have highlighted how independent working arrangements also come with the drawbacks of precarity. Digital platforms appear to provide an alternative to certain aspects of precarity by helping workers find work consistently and securely. However, these platforms also introduce their own demands and constraints. Drawing on 20 interviews with online freelancers, 19 interviews with corresponding clients and first-hand walkthrough of the Upwork platform, we identify critical literacies (what we call gig literacies), which are emerging around online freelancing. We find that gig workers must adapt their skills and work strategies in order to leverage platforms creatively and productively, and as a component of their 'personal holding environment.' This involves not only using the resources provided by the platform effectively, but also negotiating or working around its imposed structures and control mechanisms.
We advance the concept of platformic management, and the ways in which platforms help to structure project-based or "gig" work. We do so knowing that the popular press and a substantial number of the scholarly publications characterize the "rise of the gig economy" as advancing worker autonomy and flexibility, focusing attention to online digital labor platforms such as Uber and Amazon's Mechanical Turk. Scholars have conceptualized the procedures of control exercised by these platforms as exerting "algorithmic management," reflecting the use of extensive data collection to feed algorithms that structure work. In this paper, we broaden the attention to algorithmic management and gig-working control in two ways. First, we characterize the managerial functions of Upwork, an online platform that facilitates knowledge-intensive freelance labor - to advance discourse beyond ride-sharing and room-renting labor. Second, we advance the concept of platformic management as a means to convey a broader and sociotechnical premise of these platforms' functions in structuring work. We draw on data collected from Upwork forum discussions, interviews with gig workers who use Upwork, and a walkthrough analysis of the Upwork platform to develop our analysis. Our findings lead us to articulate platformic management -- extending beyond algorithms -- and to present the platform as a "boundary resource" to illustrate the paradoxical affordances of Upwork and similar labor platforms. That is, the platform (1) enables the autonomy desired by gig workers, while (2) also serving as a means of control that helps maintain the viability of transactions and protects the platform from disintermediation.
The algorithm-based management exercised by digital gig platforms has created information and power asymmetries, which may undermine the stability of gig work. Although the design of these platforms may foster unbalanced relationships, in this paper, we outline how freelancers and clients on the gig platform Upwork can leverage a network of alliances with external digital platforms to repossess their displaced agency within the gig economy. Building on 39 interviews with Upwork freelancers and clients, we found a dynamic ecosystem of digital platforms that facilitate gig work through and around the Upwork platform. We use actor-network theory to: 1) delineate Upwork's strategy to establish a comprehensive and isolated platform within the gig economy, 2) track human and nonhuman alliances that run counter to Upwork's system design and control mechanisms, and 3) capture the existence of a larger ecosystem of external digital platforms that undergird online freelancing. This work explicates the tensions that Upwork users face, and also illustrates the multiplicity of actors that create alliances to work with, through, around, and against the platform's algorithmic management.
This article evaluates the job quality of work in the remote gig economy. Such work consists of the remote provision of a wide variety of digital services mediated by online labour platforms. Focusing on workers in Southeast Asia and Sub-Saharan Africa, the article draws on semi-structured interviews in six countries (N = 107) and a cross-regional survey (N = 679) to detail the manner in which remote gig work is shaped by platform-based algorithmic control. Despite varying country contexts and types of work, we show that algorithmic control is central to the operation of online labour platforms. Algorithmic management techniques tend to offer workers high levels of flexibility, autonomy, task variety and complexity. However, these mechanisms of control can also result in low pay, social isolation, working unsocial and irregular hours, overwork, sleep deprivation and exhaustion.
Uber is a ride-sharing platform that is part of the 'gig-economy,' where the platform supports and coordinates a labor market in which there are a large number of ephemeral, piecemeal jobs. Despite numerous efforts to understand the impacts of these platforms and their algorithms on Uber drivers, how to better serve and support drivers with these platforms remains an open challenge. In this paper, we frame Uber through the lens of Stakeholder Theory to highlight drivers' position in the workplace, which helps inform the design of a more ethical and effective platform. To this end, we analyzed Uber drivers' forum discussions about their lived experiences of working with the Uber platform. We identify and discuss the impact of the stakes that drivers have in relation to both the Uber corporation and their passengers, and look at how these stakes impact both the platform and drivers' practices.
Online privacy policies notify users of a website how their personal information is collected, processed and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. However, in practice, consumers seem to actually read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language websites. The results empirically confirm that on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular, to what extent practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choice-making processes and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
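As an illustration of the kind of readability measures this study compares, here is a minimal sketch of the Flesch Reading Ease Score (FRES), one of the metrics named above. The tokenizer and syllable counter are simplified assumptions for illustration, not the authors' toolset, which uses more sophisticated extraction:

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; production toolsets typically
    # use pronunciation dictionaries such as CMUdict instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores indicate easier text; long sentences and polysyllabic legal vocabulary, both common in privacy policies, drive the score down, which is the pattern the study documents at scale.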
This article suggests some basic terms for surveillance analysis. The analysis requires a map and a common language to explain and evaluate its fundamental properties, contexts, and behaviors. Surveillance is neither good nor bad but context and comportment make it so. Topics considered in this article include: a broad definition of surveillance, its strategic and nonstrategic forms, and the traditional and new surveillance. A family of related terms – privacy, publicity, confidentiality, and secrecy – is also considered. The discussion next focuses on characteristics of the social structures that organize behavior, the characteristics of the means used, and some value conflicts and social processes seen with the emergent, interactive character of much surveillance behavior.
A body of literature on self-tracking has been established in human-computer interaction studies. Contributors to this literature tend to take a cognitive or behavioural psychology approach to theorising and explaining self-tracking. Such an approach is limited to understanding individual behaviour. Yet self-tracking is a profoundly social practice, both in terms of the enculturated meanings with which it is invested and the social encounters and social institutions that are part of the self-tracking phenomenon. In this paper I contend that sociological perspectives can contribute some intriguing possibilities for human-computer interaction research, particularly in developing an understanding of the wider social, cultural and political dimensions of what I refer to as 'self-tracking cultures'. The discussion focuses on the following topics: self-optimisation and governing the self; entanglements of bodies and technologies; the valorisation of data; data doubles; and social inequalities and self-tracking. The paper ends with outlining some directions for future research on self-tracking cultures that goes beyond the individual to the social.
Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.
In this paper we define the notion of a privacy design strategy. These strategies help IT architects to support privacy by design early in the software development life cycle, during concept development and analysis. Using current data protection legislation as a point of departure, we derive the following eight privacy design strategies: minimise, hide, separate, aggregate, inform, control, enforce, and demonstrate. The strategies also provide a useful classification of privacy design patterns and the underlying privacy enhancing technologies. We therefore believe that these privacy design strategies are not only useful when designing privacy friendly systems, but also helpful when evaluating the privacy impact of existing IT systems.
This paper traces the intellectual development of the workplace privacy construct in the course of American thinking. The role of technological development in this process is examined, particularly in regard to the information gathering/dissemination dilemmas faced by employers and employees alike. The paper concludes with some preliminary considerations toward a theory of workplace privacy.
Recent advances in small inexpensive sensors, low-power processing, and activity modeling have enabled applications that use on-body sensing and machine learning to infer people's activities throughout everyday life. To address the growing rate of sedentary lifestyles, we have developed a system, UbiFit Garden, which uses these technologies and a personal, mobile display to encourage physical activity. We conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference. We discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity.
We are interested in designing systems that support communication and collaboration among large groups of people over computing networks. We begin by asking what properties of the physical world support graceful human-human communication in face-to-face situations, and argue that it is possible to design digital systems that support coherent behavior by making participants and their activities visible to one another. We call such systems "socially translucent systems" and suggest that they have three characteristics—visibility, awareness, and accountability—which enable people to draw upon their experience and expertise to structure their interactions with one another. To motivate and focus our ideas we develop a vision of knowledge communities, conversationally based systems that support the creation, management and reuse of knowledge in a social context. We describe our experience in designing and deploying one layer of functionality for knowledge communities, embodied in a working system called "Babble," and discuss research issues raised by a socially translucent approach to design.
Introduced the statistic kappa to measure nominal scale agreement between a fixed pair of raters. Kappa was generalized to the case where each of a sample of 30 patients was rated on a nominal scale by the same number of psychiatrist raters (n = 6), but where the raters rating one subject were not necessarily the same as those rating another. Large sample standard errors were derived.
App-based, ride-hail drivers are a highly visible workforce, yet previous research has generally understood their visibility primarily in terms of surveillance. Using data from an ethnographic study of the New York City (NYC) ride-hail circuit, this article explores how drivers experience and negotiate their visibility. Findings reveal that constant monitoring on ride-hail apps feels oppressive to drivers, and it requires them to engage in significant unpaid labor in the form of reputation auditing. Nevertheless, drivers also find ways to "caption" surveillance outputs and thus shape their meanings. They engage in three strategies (juxtaposing existing metrics, expanding the field of vision, and requiring others to bear witness) to clarify, contextualize, and reclaim their visibility. The ability to reconfigure meanings of visibility, and specifically to navigate between the experience of being watched and that of being seen, represents an underexplored avenue of agency within studies of work surveillance.
As online fandom continues to grow, so do the public data created by fan creations and interactions. With researchers and journalists regularly engaging with those data (and not always asking permission), many fans are concerned that their content might end up in front of the wrong audience, which could lead to privacy violations or even harassment from within or outside of fandom. To better understand fan perspectives on the collection and analysis of public data as a methodology, we conducted both an interview study and a survey to solicit responses that would help provide a broader understanding of fandom's privacy norms as they relate to the ethical use of data. We use these findings to revisit and recommend best practices for working with public data within fandom.
Using machine learning and artificial intelligence, Uber has been disrupting the world taxi industry. However, the Uber algorithmic apparatus has also perfected the scalable, decentralized tracking and surveillance of mobile living bodies. This article examines the Uber surveillance machinery and discusses the determinants of its algorithmically powered 'all-seeing power'. The latter is figured as an Algopticon that reinvents Bentham's panopticon in the era of the platform economy.
What does reliability mean for building a grounded theory? What about when writing an auto-ethnography? When is it appropriate to use measures like inter-rater reliability (IRR)? Reliability is a familiar concept in traditional scientific practice, but how, and even whether to establish reliability in qualitative research is an oft-debated question. For researchers in highly interdisciplinary fields like computer-supported cooperative work (CSCW) and human-computer interaction (HCI), the question is particularly complex as collaborators bring diverse epistemologies and training to their research. In this article, we use two approaches to understand reliability in qualitative research. We first investigate and describe local norms in the CSCW and HCI literature, then we combine examples from these findings with guidelines from methods literature to help researchers answer questions like: "should I calculate IRR?" Drawing on a meta-analysis of a representative sample of CSCW and HCI papers from 2016-2018, we find that authors use a variety of approaches to communicate reliability; notably, IRR is rare, occurring in around 1/9 of qualitative papers. We reflect on current practices and propose guidelines for reporting on reliability in qualitative research using IRR as a central example of a form of agreement. The guidelines are designed to generate discussion and orient new CSCW and HCI scholars and reviewers to reliability in qualitative research.
The safety of passengers of rideshare apps has received attention from researchers, yet there is a lack of research on safety of rideshare drivers in the context of CSCW and HCI. As drivers are also an important user in the ecosystem of the ridesharing systems, we conducted interviews with drivers in the U.S. to understand how they, individually and collaboratively, address safety related issues they face conducting their job. We identified the factors that contributed to drivers' feelings of safety and the strategies they engaged in to protect themselves. We found that drivers relied on methods that were technical, social, and physical, to ensure their safety and engaged in informal collaborative and communicative activities with other drivers inside and outside of the ridesharing system. We discuss implications for future design for ridesharing apps and other location-based computer-supported collaborative systems that have potential safety hazards.
Tasks on crowdsourcing platforms such as Amazon Mechanical Turk often request workers' personal information, raising privacy risks that may be exacerbated by requester-worker power dynamics. We interviewed 14 workers to understand how they navigate these risks. We found that Turkers' decisions to provide personal information during tasks were based on evaluations of the pay rate, the requester, the purpose, and the perceived sensitivity of the request. Participants also engaged in multiple privacy-protective behaviors, such as abandoning tasks or providing inaccurate data, though there were costs associated with these behaviors, such as wasted time and risk of rejection. Finally, their privacy concerns and practices evolved as they learned about both the platform and worker-designed tools and forums. These findings deepen our understanding of both privacy decision-making and invisible labor in paid crowdsourcing, and emphasize a general need to understand how privacy stances change over time.
We argue that modern technical and social infrastructures of surveillance have brought a novel subject position to prominence: the surveillant consumer. Surveillance has become a normalized mode of interpersonal relation that urges the person as consumer to manage others around her using surveillant products and services. We explore two configurations of this model: the consumer as observer, effectuated through products for use in the supervision of intimate relations as a component of a normalized duty of care; and the consumer as manager, effectuated through capacities for the customer to manage the labor of workers providing services to her. These models frequently intersect and hybridize as market logics overlap with intimate spheres: the surveillant consumer thus acts as an emotional manager of the experience of everyday surveillance. In turn, this managerial role reifies the equation of financial wealth with moral weight in a hierarchy of oversight, giving the wealthiest the most control and least accountability.
This article presents findings regarding collective organisation among online freelancers in middle‐income countries. Drawing on research in Southeast Asia and Sub‐Saharan Africa, we find that the specific nature of the online freelancing labour process gives rise to a distinctive form of organisation, in which social media groups play a central role in structuring communication and unions are absent. Previous research is limited to either conventional freelancers or ‘microworkers’ who do relatively low‐skilled tasks via online labour platforms. This study uses 107 interviews and a survey of 658 freelancers who obtain work via a variety of online platforms to highlight that Internet‐based communities play a vital role in their work experiences. Internet‐based communities enable workers to support each other and share information. This, in turn, increases their security and protection. However, these communities are fragmented by nationality, occupation and platform.
In this paper, we study online privacy lies: lies primarily aimed at protecting privacy. Going beyond privacy lenses that focus on privacy concerns or cost/benefit analyses, we explore how contextual factors, motivations, and individual-level characteristics affect lying behavior through a 356-person survey. We find that statistical models to predict privacy lies that include attitudes about lying, use of other privacy-protective behaviors (PPBs), and perceived control over information improve on models based solely on self-expressed privacy concerns. Based on a thematic analysis of open-ended responses, we find that the decision to tell privacy lies stems from a range of concerns, serves multiple privacy goals, and is influenced by the context of the interaction and attitudes about the morality and necessity of lying. Together, our results point to the need for conceptualizations of privacy lies (and PPBs more broadly) that account for multiple goals, perceived control over data, contextual factors, and attitudes about PPBs.
A growing number of people are working as part of on-line crowd work. Crowd work is often thought to be low wage work. However, we know little about the wage distribution in practice and what causes low/high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~$2/h, and only 4% earned more than $7.25/h. While the average requester pays more than $11/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
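The sensitivity of such wage estimates to how unpaid work is counted can be shown with a small, hypothetical calculation (the numbers below are ours, not the study's):

```python
def effective_hourly_wage(earnings_usd: float, paid_seconds: float,
                          unpaid_seconds: float = 0.0) -> float:
    """Hourly wage once unpaid time (task search, rejected or unsubmitted work)
    is included in the denominator alongside paid task time."""
    total_hours = (paid_seconds + unpaid_seconds) / 3600.0
    return earnings_usd / total_hours

# Earning $10 over one paid hour looks like $10/h, but adding 30 unpaid
# minutes of searching and rejected work drops the effective rate by a third.
```

Because unpaid time only ever enlarges the denominator, any accounting that includes it can only lower the estimated wage, which is why the paper's figures depend on this choice.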
Crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) are widely used by organizations, researchers, and individuals to outsource a broad range of tasks to crowd workers. Prior research has shown that crowdsourcing can pose privacy risks (e.g., de-anonymization) to crowd workers. However, little is known about the specific privacy issues crowd workers have experienced and how they perceive the state of privacy in crowdsourcing. In this paper, we present results from an online survey of 435 MTurk crowd workers from the US, India, and other countries and areas. Our respondents reported different types of privacy concerns (e.g., data aggregation, profiling, scams), experiences of privacy losses (e.g., phishing, malware, stalking, targeted ads), and privacy expectations on MTurk (e.g., screening tasks). Respondents from multiple countries and areas reported experiences with the same privacy issues, suggesting that these problems may be endemic to the whole MTurk platform. We discuss challenges, high-level principles and concrete suggestions in protecting crowd workers' privacy on MTurk and in crowdsourcing more broadly.
The sharing economy has quickly become a very prominent subject of research in the broader computing literature and in the human-computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e., what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e., what to do next).
From the Pinkerton private detectives of the 1850s, to the closed-circuit cameras and email monitoring of the 1990s, to new apps that quantify the productivity of workers, and to the collection of health data as part of workplace wellness programs, American employers have increasingly sought to track the activities of their employees. Starting with Taylorism and Fordism, American workers have become accustomed to heightened levels of monitoring that have only been mitigated by the legal counterweight of organized unions and labor laws. Thus, along with economic and technological limits, the law has always been presumed as a constraint on these surveillance activities. Recently, technological advancements in several fields-big data analytics, communications capture, mobile device design, DNA testing, and biometrics-have dramatically expanded capacities for worker surveillance both on and off the job. While the cost of many forms of surveillance has dropped significantly, new technologies make the surveillance of workers even more convenient and accessible, and labor unions have become much less powerful in advocating for workers. The American worker must now contend with an all-seeing Argus Panoptes built from technology that allows for the trawling of employee data from the Internet and the employer collection of productivity data and health data, with the ostensible consent of the worker. This raises the question of whether the law still remains a meaningful avenue to delineate boundaries for worker surveillance.
Since its inception, crowdsourcing has been considered a black-box approach to solicit labor from a crowd of workers. Furthermore, the "crowd" has been viewed as a group of independent workers dispersed all over the world. Recent studies based on in-person interviews have opened up the black box and shown that the crowd is not a collection of independent workers, but instead that workers communicate and collaborate with each other. Put another way, prior work has shown the existence of edges between workers. We build on and extend this discovery by mapping the entire communication network of workers on Amazon Mechanical Turk, a leading crowdsourcing platform. We execute a task in which over 10,000 workers from across the globe self-report their communication links to other workers, thereby mapping the communication network among workers. Our results suggest that while a large percentage of workers indeed appear to be independent, there is a rich network topology over the rest of the population. That is, there is a substantial communication network within the crowd. We further examine how online forum usage relates to network topology, how workers communicate with each other via this network, how workers' experience levels relate to their network positions, and how U.S. workers differ from international workers in their network characteristics. We conclude by discussing the implications of our findings for requesters, workers, and platform providers like Amazon.
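As a toy sketch of the kind of network analysis described above (our own illustration, not the authors' method), the share of apparently independent workers can be computed from a self-reported edge list:

```python
def isolated_fraction(workers, edges):
    # workers: iterable of worker ids; edges: (a, b) self-reported communication links.
    linked = set()
    for a, b in edges:
        linked.update((a, b))
    workers = list(workers)
    # Workers appearing in no edge have no reported communication partners.
    return sum(1 for w in workers if w not in linked) / len(workers)
```

Real analyses of this kind would go further (degree distributions, connected components, forum-usage correlates), but even this fraction distinguishes a crowd of independent workers from one with a substantial communication network.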
Recent research on social media use in the Middle East and North Africa (MENA) region has focused on their role in the Arab Spring uprisings, but less work has examined the more mundane uses of these technologies. Yet exploring the way populations in the MENA region use social media in everyday life provides insight into how they are adapted to cultural contexts beyond those from which they originated. To better understand this process, we interviewed eleven Qatari nationals currently living in Doha, Qatar. Our analysis identifies ways users, particularly females, practice modesty, manage their own (and by extension their family's) reputation, and use social media to monitor and protect others. These findings are placed within a framework of social, or participatory surveillance, which challenges conventional notions of surveillance as a form of control and instead shows how surveillance has the potential to be empowering.
The so-called "gig-economy" has been growing exponentially in numbers and importance in recent years but its impact on labour rights has been largely overlooked. Forms of work in the "gig-economy" include "crowd work", and "work-on-demand via apps", under which the demand and supply of working activities is matched online or via mobile apps. These forms of work can provide a good match of job opportunities and allow flexible working schedules. However, they can also pave the way to a severe commodification of work. This paper discusses the implications of this commodification and advocates the full recognition of activities in the gig-economy as "work". It shows how the gig-economy is not a separate silo of the economy and that it is part of broader phenomena such as the casualisation and informalisation of work and the spread of non-standard forms of employment. It then addresses the issue of misclassification of the employment status of workers in the gig-economy. Current relevant trends are thus examined, such as the emergence of forms of self-organisation of workers. Finally, some policy proposals are critically analysed, such as the possibility of creating an intermediate category of worker between "employee" and "independent contractor" to classify work in the gig-economy, and other tentative proposals are put forward, such as the extension of fundamental labour rights to all workers irrespective of employment status, and recognition of the role of social partners in this respect, whilst avoiding temptations of hastened deregulation.
Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. Unfair rejections can result from poorly-designed tasks, unclear instructions, technical errors, and malicious Requesters. Because the AMT policy and platform provide little recourse to Turkers, they adopt strategies to minimize risk: avoiding new and known bad Requesters, sharing information with other Turkers, and choosing low-risk tasks. Through a series of ideas inspired by these findings (including notifying Turkers and Requesters of a broken task, returning rejected work to Turkers for repair, and providing collective dispute resolution mechanisms) we argue that making risk reduction and trust building a first-class design goal can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.
The assertion that technologies have made life easier and consequently better, often lurks in how we evaluate technologies. It implies an idea of history linked to modernity with its idea of progress. With my point of departure in the concepts of space of experience and horizon of expectation, I try to develop another understanding of history and a reassessment of how we evaluate technologies. Horizons of expectation of information technologies and developmental work are described and examined in the light of its impact on women's work, and how we envision power and authority. Finally, possibilities and dilemmas in women's lives within this reconceptualization is stressed and the conditions under which women may change their lives.
Nothing says that the present reduces to presence. Why, in the transition from future to past, should the present not be the time of initiative—that is, the time when the weight of history that has already been made is deposited, suspended, and interrupted, and when the dream of history yet to be made is transposed into a responsible decision?
Therefore it is within the dimension of acting (and suffering, which is its corollary) that thought about history will bring together its perspectives, within the horizon of the idea of an imperfect mediation. (Ricoeur, 1988:208)
Crowdsourcing platforms such as Amazon Mechanical Turk and Google Consumer Surveys can profile users based on their inputs to online surveys. In this work we first demonstrate how easily user privacy can be compromised by collating information from multiple surveys. We then propose, develop, and evaluate a crowdsourcing survey platform called Loki that allows users to control their privacy loss via at-source obfuscation.
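One classic at-source obfuscation technique for survey answers is randomized response; it illustrates the design space the Loki work operates in, though this generic sketch is our own and not necessarily Loki's actual mechanism:

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75, rng=random) -> bool:
    # With probability p_truth answer honestly; otherwise answer with a fair coin flip.
    # The platform never learns which branch was taken, giving each respondent
    # plausible deniability for any individual answer.
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(reported_yes_rate: float, p_truth: float = 0.75) -> float:
    # Invert E[reported] = p_truth * pi + (1 - p_truth) * 0.5 to recover pi,
    # so aggregate statistics stay usable despite per-answer noise.
    return (reported_yes_rate - (1 - p_truth) * 0.5) / p_truth
```

The trade-off Loki lets users control, a tunable privacy loss, corresponds here to choosing `p_truth`: lower values give stronger deniability but noisier aggregate estimates.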
This paper introduces privacy and accountability techniques for crowd-powered systems. We focus on email task management: tasks are an implicit part of every inbox, but the overwhelming volume of incoming email can bury important requests. We present EmailValet, an email client that recruits remote assistants from an expert crowdsourcing marketplace. By annotating each email with its implicit tasks, EmailValet's assistants create a task list that is automatically populated from emails in the user's inbox. The system is an example of a valet approach to crowdsourcing, which aims for parsimony and transparency in access control for the crowd. To maintain privacy, users specify rules that define a sliding-window subset of their inbox that they are willing to share with assistants. To support accountability, EmailValet displays the actions that the assistant has taken on each email. In a weeklong field study, participants completed twice as many of their email-based tasks when they had access to crowdsourced assistants, and they became increasingly comfortable sharing their inbox with assistants over time.
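A sliding-window sharing rule of the sort described can be sketched as follows (the field name and 7-day default are our assumptions for illustration, not EmailValet's actual implementation):

```python
from datetime import datetime, timedelta

def shareable_emails(emails, now, window_days=7):
    # Expose to crowd assistants only emails received within the trailing window;
    # everything older falls out of view automatically as time advances.
    cutoff = now - timedelta(days=window_days)
    return [e for e in emails if e["received"] >= cutoff]
```

This captures the "parsimony" idea: access is scoped by a simple, user-legible rule rather than granting assistants the whole inbox.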
We often see the government or the corporation as the greatest threat to information privacy. But due to a nascent data practice called “self-surveillance,” the greatest threat may actually come from ourselves. Using various existing and emerging technologies, such as GPS-enabled smartphones, we are beginning voluntarily to measure ourselves in granular detail - how long we sleep, where we go, what we breathe, what we eat, how we spend our time. And we are storing these data casually in the “cloud,” and giving third-parties broad access. This practice of self-surveillance will decrease information privacy in troubling ways. To counter this trend, we recommend the creation of the Personal Data Guardian, a new professional who manages Personal Data Vaults, which are repositories for self-surveillance data.
While Amazon's Mechanical Turk (AMT) online workforce has been characterized by many people as being anonymous, we expose an aspect of AMT's system design that can be exploited to reveal a surprising amount of information about many AMT Workers, which may include personally identifying information (PII). This risk of PII exposure may surprise many Workers and Requesters today, as well as impact current institutional review board (IRB) oversight of human subjects research involving AMT Workers as participants. We assess the potential multi-faceted impact of such PII exposure for each stakeholder group: Workers, Requesters, and AMT itself. We discuss potential remedies each group may explore, as well as the responsibility of each group with regard to privacy protection. This discussion leads us to further situate issues of crowd worker privacy amidst broader ethical, economic, and regulatory issues, and we conclude by offering a set of recommendations to each stakeholder group.
This article focuses on innovative methods for protecting privacy in research of Internet-mediated social contexts. Traditional methods for protecting privacy by hiding or anonymizing data no longer suffice in situations where social researchers need to design studies, manage data, and build research reports in increasingly public, archivable, searchable, and traceable spaces. In such research environments, there are few means of adequately disguising details about the venue and the persons being studied. One practical method of data representation in contexts in which privacy protection is unstable is fabrication, involving creative, bricolage-style transfiguration of original data into composite accounts or representational interactions. This article traces some of the historical trends that have restricted such creative ethical solutions; emphasizes a researcher's obligation to protect research participants' privacy in mediated research contexts; and offers an introductory framework for reconsidering how to make case-based decisions to better protect the interests of participants in situations where vulnerability or potential harm is not easily determined.
There is growing interest in Europe in privacy impact assessment (PIA). The UK introduced the first PIA methodology in Europe in 2007, and Ireland followed in 2010. PIAs provide a way to detect potential privacy problems, take precautions and build tailored safeguards before, not after, the organisation makes heavy investments in the development of a new technology, service or product. This paper presents some findings from the Privacy Impact Assessment Framework (PIAF) project and, in particular, the project's first deliverable, which analyses the similarities and differences between PIA methodologies in Australia, Canada, Hong Kong, Ireland, New Zealand, the United Kingdom and the United States, with a view to picking out the best elements which could be used in constructing an optimised PIA methodology for Europe. The project, which began in January 2011, is being undertaken for the European Commission's Directorate General Justice. The first deliverable was completed in September. The paper provides some background on privacy impact assessment, identifies some of its benefits and discusses elements that can be used in construction of a state-of-the-art PIA methodology.
Privacy impact assessment (PIA) is a systematic process for evaluating the potential effects on privacy of a project, initiative or proposed system or scheme. Its use has become progressively more common from the mid-1990s onwards. On the one hand, privacy oversight agencies and privacy advocates see PIAs as an antidote to the serious privacy-intrusiveness of business processes in the public and private sectors and the ravages of rapidly developing information technologies. On the other, governments and business enterprises alike have struggled to encourage public acceptance and adoption of technologies that are very apparently privacy-invasive, and have been turning to PIAs as a means of understanding concerns and mitigating business risks. This paper distinguishes PIAs from other business processes, such as privacy issues analysis, privacy law compliance checking and privacy audit, and identifies key aspects of the development of PIA practice and policy from their beginnings through to the end of 2008.
Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21, and 34,980, for locations known at the granularity of a census block, census tract, and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.
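The anonymity-set computation described above can be sketched generically: a person's anonymity set is everyone who shares their (home, work) region pair at a given granularity. This is a toy illustration over hypothetical region labels, not the paper's census data:

```python
from collections import Counter
from statistics import median

def anonymity_set_sizes(home_work_pairs):
    # home_work_pairs: one (home_region, work_region) tuple per person.
    counts = Counter(home_work_pairs)
    # Each person's anonymity set size is the number of people (including
    # themselves) sharing their exact home/work pair.
    return [counts[p] for p in home_work_pairs]
```

A median set size of 1, as the paper reports at census-block granularity, means most individuals' home/work pairs are unique, i.e., the trace alone identifies them.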