Forthcoming: Big Data & Society by SAGE.
Algorithmic Management in a Work Context
Mohammad Hossein Jarrahi, University of North Carolina at Chapel Hill
Gemma Newlands, University of Amsterdam
Min Kyung Lee, University of Texas at Austin
Christine Wolf, Independent Researcher
Eliscia Kinder, University of North Carolina at Chapel Hill
Will Sutherland, University of Washington, Seattle
Abstract
The rapid development of machine-learning algorithms, which underpin contemporary artificial
intelligence (AI) systems, has created new opportunities for the automation of work processes.
While algorithmic management has been observed primarily within the platform-mediated gig
economy, its transformative reach and consequences are also spreading to more standard work
settings. Exploring algorithmic management as a sociotechnical concept, which reflects both
technological infrastructures and organizational choices, we discuss how algorithmic
management may influence existing power and social structures within organizations. We
identify three key issues. Firstly, we explore how algorithmic management shapes pre-existing
power dynamics between workers and managers. Secondly, we discuss how algorithmic
management demands new roles and competencies while also fostering oppositional attitudes
towards algorithms. Thirdly, we explain how algorithmic management impacts knowledge and
information exchange within an organization, unpacking the concept of opacity on both a
technical and organizational level. We conclude by situating this piece in broader discussions on
the future of work, accountability, and identifying future research steps.
Keywords: Algorithmic Competencies, Algorithmic Management, Artificial Intelligence, Opacity,
Power Dynamics, Future of Work
Introduction
From restaurants to try, movies to watch, or routes to take, machine-learning (ML) algorithms1
increasingly shape many aspects of everyday human experiences through the
recommendations they make and actions they suggest. Algorithms also shape organizational
activity through semi- or fully automating the management, coordination, and administration of a
workforce (Crowston and Bolici, 2019). Termed ‘algorithmic management’ or ‘management-by-
algorithm’, this trend has come to be understood as the delegation of managerial functions to
algorithms (Lee 2018; Lee et al., 2015; Noponen, 2019). A defining feature of algorithmic
management is the data which fuel the predictive modelling techniques, with many
acknowledging that the political economy of data capture is a significant driver in transforming
labor norms (Dourish, 2016; Newlands, 2020; Shestakofsky, 2017).
Prior research on algorithmic management has focused on platform-mediated gig work, where
workers on non-standard contracts usually commit only short-term to a given organization
(Harms and Han, 2019; Jarrahi and Sutherland, 2019). In what Huws (2016) describes as a
‘new paradigm of work’, algorithmic systems in the gig economy track worker performance,
perform job matching, generate employee rankings, and can even resolve disputes between
workers (Duggan et al., 2019; Wood et al., 2018). Operating as the primary mechanism of
coordination in the gig economy (Lee et al. 2015; Bucher et al., 2021), platforms can support
millions of transactions a day across disaggregated workforces (Mateescu and Nguyen, 2019).
Much of what we know about algorithmic management comes from nascent research in this
domain (e.g., Meijerink and Keegan, 2019; Sutherland and Jarrahi, 2018), where a particular
focus has been placed on how algorithmic management both substitutes and complements
traditional managerial oversight (Cappelli, 2018; Newlands, 2020).
However, algorithmic management is not isolated to platform-mediated gig work (Möhlmann and
Henfridsson, 2019). Recent years have also witnessed the parallel development of algorithmic
management in more standard work settings, referring to work arrangements that are stable, continuous, and full-time, and that embrace a direct relationship between the employee and their unitary employer (typically organizations with clearer structures and boundaries) (Schoukens and Barrio, 2017). In contrast to most gig work settings, algorithmic systems in standard
organisations emerge within pre-existing power dynamics between managers and workers. As a
sociotechnical process emerging from the continuous interaction of organizational members and
1 Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably to refer to an array
of computational techniques that underlie contemporary “smart” computational systems. AI is an umbrella
term to broadly refer to the discipline, of which ML is one branch (including techniques such as Deep
Learning and Random Decision Forests). Unless referring specifically to an ML technique, we use the
broader colloquial term ‘algorithms’ throughout for clarity.
algorithmic systems, algorithmic management in standard work settings reflects and redefines
pre-existing roles, relationships, power dynamics, and information exchanges. Deeply
embedded in pre-existing social, technical, and organisational structures of the workplace,
algorithmic management emerges at the intersection of managers, workers, and algorithms. As
von Krogh (2018) explains, both traditional and non-traditional work settings will be increasingly
shaped by the ‘interaction of human and machine authority regimes’ (406).
For instance, algorithms can assist Human Resources (HR) in filtering job applicants (Leicht-Deobald et al., 2019), firing warehouse workers deemed too slow, and improving work morale
through fine-grained people analytics (Gal et al., 2020). The growth of algorithmic management
should therefore be understood as a plurality of decisions made by human managers to alter
work processes. Each decision occurs not in a vacuum but entwined with diverse considerations
and consequences. However, the in-situ interactions between workers, managers, and algorithms in standard work contexts remain relatively uncharted
(Duggan et al., 2019; Jarrahi, 2019; Wolf and Blomberg, 2019). As algorithmic management
systems move from cutting-edge research to routine aspects of everyday organizations,
research is needed to explore the moral implications of algorithmic management and labor
conditions such systems create (Leicht-Deobald et al., 2019).
This article’s key contribution, therefore, is to look at how the emergence of algorithmic
management across both standard and non-standard work settings interfaces with pre-existing
organizational dynamics, roles, and competencies. This article is structured as follows. Firstly,
we will establish algorithmic management as a socio-technical concept, discussing how it
develops through organisational choices. Following this, we will explain how algorithmic
management impacts power dynamics at work, both increasing the power of managers over
workers, while simultaneously decreasing managerial authority. Next, we will explore how
algorithmic management shapes organisational roles through the development of both
algorithmic competencies and oppositional attitudes, such as algorithm aversion and cognitive
complacency. In this, we directly address how both the intertwined technical and organisational
opacity of algorithms shapes workers’ competencies, as well as information and knowledge
exchange. We will conclude by situating this piece in broader discussions on the future of work,
accountability, and identifying future steps.
Algorithmic management as a sociotechnical concept
Discourse around algorithmic management often translates into a simplified narrative of
algorithmic systems progressively replacing human roles (Jabagi et al., 2019). However,
examining algorithmic management in standard organizational contexts means looking beyond
the idea that algorithms will operate autonomously as technological entities (von Krogh, 2018).
Algorithmic management should rather be understood as a sociotechnical process emerging
from the continuous interaction of organizational members and the algorithms that mediate their
work (Jarrahi and Sutherland, 2019). This sociotechnical perspective underscores the mutual
constitution of technological systems and social actors, where relationships are socially
constructed and enacted (Sawyer and Jarrahi, 2014). In this, humans and algorithms form ‘an
assemblage in which the components of their differing origins and natures are put together and
relationships between them are established’ (Bader and Kaiser, 2019: 656). Mutually constituted
with organizational surroundings, algorithmic management both reflects and redefines existing
relationships between managers and workers. The boundaries between the responsibilities of
managers, workers, and algorithms are not fixed and are constantly negotiated and enacted in
management practices. In other words, understanding the emerging role of algorithms in
organizations means taking a sociotechnical perspective, and moving from questions of
replacement or substitution towards questions of balance, coordination, contestation, and
negotiation.
Standard organizations implement algorithmic management by drawing on a variety of data-
driven technological infrastructures, such as automated scheduling, people analytics, or
recruitment systems. Automated scheduling systems, for instance, have been used widely in the
retail and service industries to predict labor demand and schedule workers based on customer demand, seasonal patterns, and past sales data (Pignot, 2021). In the
case of people analytics, algorithmic systems leverage data on worker behavior to offer
actionable recommendations for managers regarding key decisions such as motivation,
performance appraisal, and promotion (Gal et al., 2020). Through systems such as Microsoft
Teams, organizations can also collect data about the minutiae of a worker’s activity,
productivity, and other granular aspects of their performance ranging from daily patterns of a
specific employee’s behaviors all the way to macro trends of how an organization manages its
human resources over time towards strategic goals (Galliers et al., 2017). Voice analysis
algorithms, for example, are used in call centers to decide whether workers express adequate
empathy. Meanwhile, algorithms in Amazon warehouses closely monitor workers'
performances, rate their speed, and terminate workers if they fall behind (Dzieza, 2020).
Algorithmic decision-making is also becoming popular for recruitment as manifested through CV
screening and algorithmic evaluations of telephone or video interviews (Köchling and Wehner,
2020; Yarger et al., 2020).
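To make these mechanisms concrete, the following is a minimal sketch of demand-driven scheduling, using invented data and an assumed staffing ratio; it illustrates the basic logic only and is not a reconstruction of any vendor’s product.

```python
# Minimal sketch of demand-driven scheduling; all data and the staffing
# ratio are hypothetical, and real workforce systems are far more elaborate.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical history: [day_of_week, hour, last_week_sales] -> customers served.
X_hist = np.column_stack([
    rng.integers(0, 7, 500),       # day of week (0=Mon ... 6=Sun)
    rng.integers(9, 21, 500),      # hour of day during opening hours
    rng.normal(200, 40, 500),      # sales in the same slot last week
])
y_hist = 20 + 0.1 * X_hist[:, 2] + 5 * (X_hist[:, 0] >= 5) + rng.normal(0, 3, 500)

model = GradientBoostingRegressor().fit(X_hist, y_hist)

# Predict Saturday's hourly demand and translate it into staff counts.
saturday = np.array([[5, h, 210.0] for h in range(9, 21)])
demand = model.predict(saturday)
CUSTOMERS_PER_WORKER = 15          # assumed staffing ratio
staff_needed = np.ceil(demand / CUSTOMERS_PER_WORKER).astype(int)
for hour, n in zip(range(9, 21), staff_needed):
    print(f"{hour:02d}:00 -> schedule {n} workers")
```

From the worker’s perspective, the salient design choice here is that shift counts follow predicted demand slot by slot, which is precisely what produces the fluctuating, short-notice schedules discussed below.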
Focusing primarily on the technology involved in algorithmic management, some perceive
algorithms as ‘technocratic and dispassionate’ decision makers that can provide more objective
and consistent decisions than humans (Kahneman et al., 2016; Kleinberg et al., 2018).
However, it is important to underline how algorithms lack any sense of individual purpose: they
must have their objectives defined and algorithmic systems must be deliberately fed and trained
on organisational data. As such, there is the significant element of organisational choice in
where algorithmic management is implemented and in terms of which processes are replaced or
augmented. Moreover, there is considerable organisational choice around whether to implement
the suggestions and recommendations made by the algorithmic systems. In this, automated
implementation of algorithmic suggestions can be regarded, itself, as a distinct organisational
choice. As Fleming (2019: 27) notes, ‘It is not technology that determines employment patterns
or organizational design but the other way around. The specific use of machinery is informed by
socio-organizational forces, with power being a particularly salient factor.’ We go one step further and, as illustrated in Figure 1, present algorithmic management as a sociotechnical phenomenon shaped by both social and technological forces. In the rest of this article, we
broach how algorithmic management may shift organizational roles and power structures while
being shaped by organizational choices and competencies.
Figure 1: Algorithmic management as a sociotechnical phenomenon representing both
social and technological forces
Considering algorithms within a broader notion of management thus makes visible the possible ramifications of algorithmic management. It requires that we think about interactions between
humans and algorithms in terms of the direction and development of organizations over the long
term, and not just in the day-to-day experiences of the worker. Understanding that algorithmic
systems are socially constructed and embedded in existing power dynamics also means these
systems are neither created nor function outside the biases rooted in the organizational cultures
within which they are implemented (Kellogg et al., 2020). For this reason, algorithmic systems
can inherit biases embedded in previous decisions, and reenact historical patterns of bias,
discrimination, and inequalities (Chan and Wang, 2018; Lepri et al., 2018). If an organisation
has demonstrated historical biases in hiring, firing, pay, or other managerial decisions, these
biases will be ‘learned’ by the algorithm (Keding, 2021). Lambrecht and Tucker (2019), for
example, discovered fewer women than men were shown ads about STEM careers even
though the ads were gender neutral by design. Amazon also scrapped its AI-powered recruiting
engine since it did not rank applicants for technical posts in a gender-neutral manner (The
Guardian, 2019). The company discovered the underlying algorithm would build on the historical
trends of resumes submitted to the company over a 10-year period, which were reflective of the
broader male-dominated IT industry.
Algorithmic management shapes power relationships
Increased power to managers
Primarily, algorithmic systems aid managers in overcoming cognitive limitations in dealing with data overload (Jarrahi, 2018). Algorithmic systems, such as those for CV filtering or time-scheduling, can streamline work processes and improve overall job quality for those involved. However, the
use of advanced algorithms creates new opportunities for managers to exercise control over the
workforce (Kellogg et al., 2020; Shapiro, 2018). Similar to the implementation of organizational
information systems, often it is the workers who are the subjects of algorithmic management,
while managers are those implementing the decisions. In tune with the long-standing spirit of
Taylorism and scientific management, algorithmic management carries the risk of treating
workers like mere ‘programmable cogs in machines’ (Frischmann and Selinger, 2018). As
Bucher et al. (2019) note in their study of Amazon Mechanical Turk workers, this
commodification and alienation of workers is a widespread issue particularly in digitally-
mediated work where power imbalances can have wide social impacts.
Platform-mediated gig work is critically dependent upon algorithmic systems for control and
there is a clear hierarchy of power between the workers, the platform, and the service recipients
(organizations in some cases). Current research highlights implicit and explicit power
asymmetries between the workers and service recipients, as well as between the workers and
platforms (Newlands, 2020; Shapiro 2018). As clients, service recipients enjoy an upper hand in
transactions. Likewise, platforms use different mechanisms of control to withhold information
from the workers or even remove them from the platform. These control mechanisms are visible
and often focus on three major outcomes: ensuring the integrity of transactions, protecting the
platform from disintermediation, and monitoring workers’ performance (Jarrahi et al., 2019;
Newlands, 2020). Since many work activities are mediated via platforms, gig work organisations
can utilize ‘soft’ forms of workforce surveillance and control to monitor how workers spend their
time and where they are located (Duggan et al., 2019; Shapiro 2018). Wood et al. (2018) also
found that algorithmic management creates symbolic power structures around features such as
reputation and ratings. Yet, while ‘analogue’ parallels to gig work exist, such as standard taxi
driving, food-delivery work, or creative freelancing, gig work platforms usually emerged as new
organisations without pre-existing relationships between management and the workforce.
Deliveroo, for instance, was founded ex nihilo rather than as the digital transformation of a pre-
existing food-delivery company. Uber, similarly, was never a taxi company with ‘standard’
workplace relationships. Since research to date on algorithmic management has primarily
focused on gig work settings, it is therefore important to highlight how key lessons and findings
on gig work are not perfectly translatable to a standard context.
In more standard work settings, where digital platforms and algorithmic systems are not the
primary means of organizing work, algorithmic control adds to pre-existing power dynamics and
regimes of control. In these contexts, most workers are connected to the organization through
more conventional coordination mechanisms such as organizational hierarchies and traditional
employment arrangements. As such, this sociotechnical emergence is a critical factor in
determining the motivation, scope, and implementation of algorithmic systems. Rather than
mass-scale algorithmic management, as observed in gig work, in standard organisations only a
small number of managerial functions may be replaced or augmented with algorithmic systems.
One key reason is the high cost and uncertain returns of algorithmic systems, which usually have to be sourced from a third-party vendor (Christin, 2017). Drawing on algorithmic decisions also demands heavy investments of time and energy, such as aligning human and algorithmic cognitive systems (Burton et al., 2020).
A primary consideration for the more gradual roll-out of algorithmic management in standard
work settings is that augmenting or replacing managerial processes has direct implications for
either increasing or decreasing the managerial prerogatives already in place. As such, we can
observe that the roll-out of algorithmic systems is swiftest when increasing managerial power.
Automated scheduling systems, widely adopted in the retail industry, shift more power from
workers to managers. The dynamic and ‘just-in-time’ approach presented by these systems provides a justification for allocating shifts at short notice and in smaller increments in response to changes in customer demand. Such fluctuating schedules imposed by the automated systems
create negative consequences for workers, such as higher stress, income instability and work-
family conflicts (Mateescu and Nguyen, 2019).
Using algorithms to nudge workers’ behaviors can be more subtle, but no less effective. For
example, organizations may use sentiment analysis algorithms in their people analytics efforts
to assess the ‘vibes’ of teams and to identify ways to increase productivity and compliance (Gal
et al., 2020). Inspired by the gig economy, where the gig workers may get pay raises based on
client reviews, firms such as JP Morgan build on algorithms to collect and analyze constant
feedback to evaluate the performance of employees and ascertain compensation (Kessler,
2017). Darr (2019) also describes the use of ‘automatons’ to control workers in a computer
chain store, where the company leverages algorithmically generated sales contests to control
workers’ behavior.
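As a minimal sketch of how such sentiment-based ‘vibe’ scoring might be assembled, the example below applies NLTK’s off-the-shelf VADER model to invented team messages; it is illustrative only and does not represent any firm’s actual pipeline.

```python
# Sketch: scoring team 'vibes' from messages with an off-the-shelf sentiment
# model (VADER via NLTK); the teams and messages are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

team_messages = {
    "team_a": ["Great sprint everyone!", "Happy to help with the release."],
    "team_b": ["This deadline is impossible.", "Nobody listens to our concerns."],
}

for team, messages in team_messages.items():
    # VADER's compound score lies in [-1, 1]; the mean is a crude team-level proxy.
    mean_score = sum(sia.polarity_scores(m)["compound"] for m in messages) / len(messages)
    print(team, round(mean_score, 2))
```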
Algorithmic surveillance techniques have also emerged in more totalizing organizations, which
are using wearables such as wristbands or harnesses to analyze workers’ performance and
nudge them towards desirable behaviors through vibration (Newlands, 2020). In warehouses,
for example, algorithms can automatically enforce pace of work, and in some cases result in
demoralization of the workers or even physical injuries (Dzieza, 2020). Truckers’ locations and
behaviors can also be monitored through GPS systems, allowing the dispatchers to
algorithmically evaluate their performance (Levy 2015). Algorithmic surveillance has also been
implemented to manage flexible and remote workforces. During the Covid-19 pandemic, which
required millions of workers to work remotely, organizations started to roll out systems such as
InterGuard to monitor the remote workforce by collecting and analyzing computer activities such
as screenshots, login times, and keystrokes as well as measuring productivity and idle time
(Laker et al., 2020; Newlands et al., 2020). Although researchers have identified strategies that
workers use to work around algorithmic control and reclaim their agency (Bucher et al., 2021),
workers often find them difficult to confront and change (Watkins, 2020).
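A minimal sketch of the kind of computation such monitoring tools perform is shown below, assuming a hypothetical activity log and an arbitrary idleness threshold; it is illustrative, not a reconstruction of InterGuard’s logic.

```python
# Sketch: deriving 'idle time' from activity-event timestamps, as a remote
# monitoring tool might; the log and the 5-minute threshold are hypothetical.
from datetime import datetime, timedelta

events = [  # timestamps of keyboard/mouse activity for one worker
    datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 9, 2),
    datetime(2021, 3, 1, 9, 25), datetime(2021, 3, 1, 9, 26),
]
IDLE_THRESHOLD = timedelta(minutes=5)

# Sum all gaps between consecutive events that exceed the threshold.
idle = sum(
    ((b - a) for a, b in zip(events, events[1:]) if b - a > IDLE_THRESHOLD),
    timedelta(),
)
print(f"Flagged idle time: {idle}")  # here: the 23-minute gap from 9:02 to 9:25
```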
Surveillance through algorithms in the workplace can also take a ‘refractive’ form, in which ‘monitoring of one party can facilitate control over another party that is not the direct target of data collection’ (Levy and Barocas, 2018: 1166). Retailers, for example, are known to leverage customer-derived data (gathered from many data points, such as close monitoring of foot traffic) and algorithm-empowered operational analysis to optimize scheduling decisions, automate self-service, and even replace workers. Brayne (2020) examines how, in the rollout of big data policing strategies, algorithmic surveillance technologies introduced to monitor crime came to be used to surveil police officers themselves.
The officers' reaction against this effort is telling of the ways that surveillance can reconfigure
work and professional identity. Officers reacted not only against the surveillance as an
entrenchment of managerial oversight and a threat to their independence, but also as a move
away from the kind of experiential knowledge that an officer brings to their work.
Decreased power to managers
While emerging as a powerful mechanism to increase the control of management as a whole,
algorithmic management can also decrease the power and agency of individual managers. For
years, research has documented the decline of middle managers in post-bureaucratic
organizations (Lee and Edmondson, 2017; Pinsonneault and Kraemer, 1997) and has sought to
define what functions and identity these positions may entail (Foss and Klein, 2014). Recent
advances in AI and the prospect of introducing algorithmic management may further complicate
these roles (Noponen, 2019).
In many traditional organizational settings, the implementation of algorithms is still based on
dashboards or 'decision support' systems, which generate recommendations to managers on
actions to take. Explaining how managers may work alongside algorithms, Shrestha et al.
(2019) describe three categories: full delegation, sequential decision-making, and aggregated
decision-making. In cases of full delegation, managerial agency is almost entirely subsumed
into the algorithmic systems. The key moment of managerial intervention becomes the design
and development of the system.
Yet, delegating decision-making to algorithms may deprive managers of critical opportunities to develop tacit knowledge, which derives primarily from experiential practice: from opportunities to exercise judgment while directly involved in decision making. Social practices of decision making through trial and error help humans retain and internalize
practical knowledge. A recent empirical study about recruiters using AI-based hiring software
provides a nuanced picture (Li et al., 2021). While AI-based sourcing tools provided
opportunities to identify new talent pools and newer keywords and skill sets associated with
different jobs, recruiters did not have enough control over algorithmic recommendation criteria.
This lack of control put recruiters in a position where they merely accepted good recommendations, and it sometimes deprived them of the chance to develop their own strategies.
Taking away these first-hand experiences may risk turning managers into ‘artificial humans’ that
are shaped and used by the smart technology, not the other way around (Demetis and Lee,
2018). While a significant amount of work on algorithmic management has focused on decision-
making, contemporary notions of management imply a large array of other skills and
responsibilities. This provokes us to think about what it would mean for an algorithm to attempt
the role of liaison, spokesperson, or figurehead, for instance.
An interesting research direction along these lines focuses on whether and how algorithmic
management may extend into the realm of what Harms and Han (2019) call ‘algorithmic
leadership,’ where smart machines assume leadership activities of managers such as
motivating, supporting, and transforming workers. Future research may ask whether algorithms
can go beyond effective handling of task-related elements of leadership and automate relational
aspects as well, given the self-learning capacities of cutting-edge techniques such as
AlphaZero.2 Future research may explore ‘how humans come to accept and follow a computer
leader,’ (Wesche and Sonderegger, 2019: 197) or ‘whether, or when, humans will prefer to work
with AIs that appear to be human or are clearly artificial’ (Harms and Han, 2019: 75). There is
an alternative argument that there will be a premium on soft skills and human intuition (Ferràs-Hernandez, 2018). In a more collective cultural sense, workers and their supervisors would
enter relationships based on social exchange and would view their supervisor’s decisions in the
context of norms of reciprocity and commitment (Cappelli et al., 2019). However, such an
empathetic relationship based on goodwill is almost impossible to develop with an algorithmic
manager (Duggan et al., 2019).
2 The AlphaZero algorithm is seen as ushering in a new era of learning algorithms that build on Deep Learning. AlphaZero was designed to master games such as the centuries-old game of Go and uses reinforcement learning, an ML technique capable of iteratively learning how to make a sequence of decisions to reach an optimal outcome.
Competencies for an algorithmic organisation
Shifting Roles
The emergence of algorithmic management is changing the roles of workers and managers
based on the need to manage and interact with algorithmic systems. A sociotechnical
perspective argues that social actors play an active role in shaping and appropriating
technological systems. In other words, workers and managers are not passive recipients of
algorithmic results; they could find ways to develop a functioning understanding of algorithmic
systems, work around issues such as trust and align the system to their needs and interests.
Addressing the need for new organizational roles to handle algorithmic systems, Wilson et al.
(2017) identify three emerging jobs: trainers, explainers, and sustainers. Workers need to teach
algorithms how to perform organizational tasks, open the ‘black boxes’ of algorithmic systems by explaining their decision-making approach to business leaders, and finally ensure the
fairness and effectiveness of algorithms to minimize their unintended consequences. Gal et al.
(2020) similarly suggest that organizations need to establish what they term ‘algorithmists’, who
monitor the algorithmic ecosystems. They are ‘not just data scientists, but the human
translators, mediators, and agents of algorithmic logic’ (Gal et al., 2020: 10). Since a common
culture obsessed with efficiency and the extraction of maximum value from workers has long existed, there is a risk of siloing algorithmic competencies within a small number of organisational members. A consequence of this culture is a mixture of upskilling and deskilling of disparate sets of workers with regard to the managerial systems. While some groups in organisations will be trained to use the new algorithmic systems, thus increasing their power and position, others will not be trained and will instead be forced to implement decisions that they previously had the power to shape. Both the organization and the worker may come to view the deskilled manager as only an ‘appendage to the system’ (Zuboff, 1988).
Indeed, years of research demonstrate common system-design mindsets that define workers’ roles and responsibilities based on automation demands: the designer seeks to automate as many sub-components of the sociotechnical work system as possible and leaves the rest to human operators. An unintended consequence of such design approaches is to deskill humans and relegate them to uninspiring roles. Insights from the use of algorithmic management in the gig economy reflect the same findings, where workers are considered replaceable ‘human computers’ and may engage in (ghost) work that is repetitive, does not result in any new learning, and serves only to train AI systems (Gray and Suri, 2019).
Algorithmic competencies
In a workplace where algorithmic management is implemented, it is essential for managers and
workers to develop algorithmic competencies (Jarrahi and Sutherland, 2019). However,
research on algorithmic competencies, particularly in standard work contexts, is still in its
infancy. Algorithmic competencies can be understood as skills that help workers develop
symbiotic relationships with algorithms. In addition to data-centered analytical skills that facilitate
interactions between workers and algorithms, algorithmic competencies involve critical thinking.
Workers ‘have a growing need to understand how to challenge the outputs of algorithms, and
not just assume system decisions are always right’ (Bersin and Zao-Sanders, 2020). Since
algorithmic management involves a complicated network of people, data, and computational
systems, the ability to understand, manipulate, and address the algorithmic systems on both a
technical and organisational level will determine an individual’s agency and power at work.
Grønsund and Aanestad (2020), for instance, have observed that the role of human workers
shifted with the introduction of algorithmic systems, and that algorithmic competencies include
auditing and altering algorithms.
An individual’s power in relation to the algorithm, and in relation to the organisation as a whole,
will therefore depend on their ability to understand and interact with the algorithmic systems. A
lack of competency with the tools of work can reduce workers' sense of autonomy over their
work as well as their ability to make informed decisions and self-reflect (Jarrahi et al., 2019).
Without algorithmic competencies and the active role of workers in constructing them, artificial
and human intelligence do not yield ‘an assemblage of human and algorithmic intelligence’
(Bader and Kaiser, 2019). As such, algorithmic opacity can be employed as a control mechanism
to maximize the organization's objectives. Indeed, recent research shows how gig work
platforms withhold information about how algorithms operate to maintain soft control of the
workforce (Shapiro, 2018). Withholding information on how platforms’ payment algorithms operate, for instance, is used to prevent gig workers from collectivizing, as detailed by Van Doorn (2020) in
his study of Berlin-based delivery workers.
Attitudes towards algorithms (algorithm aversion and cognitive complacency)
The development of algorithmic competencies is, however, bounded by individual attitudes
towards algorithms (Lichtenthaler, 2019). These attitudes, namely algorithm aversion and
cognitive complacency, can be viewed as oppositional attitudes as they reflect a resistance to
either using algorithms or to understanding and actively shaping their deployment.
Algorithm aversion refers to people’s hesitance to use algorithmic results, usually after
observing imperfect performance by algorithms; this can even occur in situations where the
algorithm may outperform humans in certain accuracy measures (Dietvorst et al., 2016; Prahl
and Van Swol, 2017). Algorithm aversion reveals a lack of trust in algorithm-generated advice
and has been observed in the work of professional forecasters in various domains where
managers resisted the integration of available forecasting algorithms in their work practices. The
full array of reasons behind algorithm aversion is not completely known. Recent experimental studies suggest that, after receiving bad advice from human and computer advisors, utilization of the computer-provided advice decreases more significantly, and human decision-makers still find
more common ground with their human counterparts than with non-human, artificial systems
(Prahl and Van Swol, 2017). Dietvorst et al. (2016) also suggest that algorithm aversion
decreases if decision makers can retain some control over the outcome and the option to modify
the algorithms’ inferences. Algorithm aversion can lead to a lack of functioning understanding of how algorithms actually operate and of their impact on one’s own work.
However, we can observe a critical distinction between aversion to algorithms in general, and
an aversion to specific algorithmic systems. For instance, acceptance of workplace monitoring is more likely if it enhances labour productivity, but there is a tendency to reject it if it is used for
monitoring health and performance (Abraham et al., 2019). It must be underlined that the
implementation of algorithmic management is usually top-down and imposed upon the majority
of the workforce who have limited power to resist. Kolbjornsrud et al. (2017), for instance,
demonstrate that support for the introduction of algorithmic management correlates with rank and is impacted by cultural and national differences. As they explain, ‘top managers relish
the opportunity to integrate AI into work practices, but mid-level and front-line managers are less
optimistic’. Isomorphic pressures to ‘keep up’ with rival organizations, or to maintain a sense of
digital innovation, can thus result in the implementation of algorithmic management systems by
top management that are neither welcomed nor necessary.
Counterbalancing algorithm aversion, however, is a trend toward cognitive complacency (Logg et al.,
2019). Cognitive complacency may manifest itself when human decision-makers do not inquire
into the factors driving inferences made by algorithms (Newell and Marabelli, 2015). In
organizational contexts, algorithmic decision-making can ‘lead to very superficial
understandings of why things happen, and this will definitely not help managers, as well as ‘end
users’ build cumulative knowledge on phenomena’ (Newell and Marabelli, 2015: 10). Previous
research suggests that organizations imposing increased workload and time pressure may raise
the possibility of workers’ overreliance on automated systems and overuse of automated advice.
That is, as workload increases, the workers may cut back on actively monitoring automated
systems and uncritically accept the decision outputs to keep up with task demands (Chien et al.,
2018). The procedural character of the algorithm may also be regarded as a kind of neutrality or
objectivity, leading its decisions to be taken as authoritative by default.
This oppositional attitude towards either understanding or critiquing algorithmic systems echoes
similar observations regarding a ‘machine heuristic’, which refers to mental shortcuts used to
imply that interactions with machines (as opposed to another human being) can be more
trustworthy, unbiased, or altruistic (Sundar and Kim, 2019). As such, people may overtrust
intelligent agents and defer to them in high-stakes decision making such as in emergency
situations (Wagner et al., 2018). Earlier research on automation suggests, in some contexts,
that humans may assign more value and less bias to decisions generated by automated
systems compared to other sources of expertise (Parasuraman and Manzey, 2010). Moreover, specific elements of an organizational context can fuel cognitive complacency. What is termed
cognitive complacency may, in this regard, often be a manifestation of helplessness due to
power imbalances.
Opacity
In a context where knowledge is power, how algorithmic management develops in standard
work settings is fundamentally shaped by access to knowledge, specifically information and
understanding regarding the algorithmic systems. This is primarily an issue of algorithmic
opacity, which operates sociotechnically through an intertwinement of technical and
organizational features. For example, Burrell (2016) draws distinctions between opacity as
intentional concealment, such as with corporate trade secrets, opacity due to a lack of technical
literacy, and opacity due to algorithmic complexity and scale. Here, we discuss two important
intertwined facets of opacity that impact the performance of algorithmic management in
organizations: technical opacity and organizational opacity. While technical opacity is rooted in
the specific material features and design of emerging algorithmic systems, organizational
opacity has to do with how opaque algorithms may reinforce the opacity of the broader
organizational choices.
Technical opacity
In terms of technical opacity, a frequent concern around algorithmic management is the opaque
or ‘black box’ character of AI systems (Burke, 2019). This term should be disaggregated,
however, as the literature discusses a number of different ways in which algorithms are ‘black-
boxed’ (Obar, 2020). Firstly, the algorithm may be black-boxed in the way that any
computational or bureaucratic system can be. Once in place, its functioning is taken for granted,
and the decisions or contingencies that went into designing the algorithm may not be engaged
again. Newell and Marabelli (2015: 5) found that: ‘discriminations are increasingly being made
by an algorithm, with few individuals actually understanding what is included in the algorithm or
even why.’ There is also an aspect of black-boxing that has been ascribed to algorithms as an
inherent property of the computationally convoluted ways in which they operate. Many of the
techniques used in ML, such as neural networks, are immensely complex, often involving many
layers of computational processing in high dimensionality. Such complexities often make it
difficult for those who are not AI experts to understand, even at a high level, how these
techniques operate (Shrestha et al., 2019).
In response to this need for greater human comprehension of AI systems, there has been a
rapidly growing subfield within the technical AI community called ‘Explainable AI’ (XAI) (Adadi
and Berrada, 2018). Fueled by the U.S. Defense Advanced Research Projects Agency
(DARPA) program of the same name, XAI techniques attempt to render immensely complex
models more understandable by humans through various technical methods. These can
produce ‘local’ or ‘global’ explanations, with a local explanation accounting for a particular input/output prediction (e.g., why did the model assign label X to input Y?) and a global explanation relating to the model overall (e.g., what prominent features does the model take into account when reaching a prediction?) (Adadi and Berrada, 2018). Such methods are largely aimed at the
version of black-boxing associated with ML’s inherent complexity. Yet, scholars have noted a
gap between technical XAI advances (which often are most useful to AI developers in
processes of debugging) and the types of explanations that workers might need or want in
actual use settings (Gilpin et al., 2019). In considering algorithmic management, different sets of
concerns around explainability become salient, one of which is the varying technical literacies of
workers who might interact with AI systems (Wang et al., 2019).
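To ground the local/global distinction, the sketch below probes a hypothetical model in both ways: permutation importance (available in scikit-learn) as a simple global explanation, and a crude single-feature perturbation of one instance as a stand-in for dedicated local methods such as LIME or SHAP. The data and features are synthetic.

```python
# Sketch of 'global' vs 'local' explanation on a hypothetical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                  # three anonymous features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global: which features does the model rely on overall?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importance:", imp.importances_mean.round(3))

# Local (crude): how does one instance's prediction shift if a feature is zeroed?
x = X[:1].copy()
base = model.predict_proba(x)[0, 1]
for j in range(3):
    x_perturbed = x.copy()
    x_perturbed[0, j] = 0.0
    print(f"feature {j}: delta = {model.predict_proba(x_perturbed)[0, 1] - base:+.3f}")
```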
Organizational opacity
In addition to technical opacity, there is the issue of organizational opacity: the withholding of information by the organisation due to strategic interests and intellectual property concerns. Algorithmic opacity can
be compounded by organizational relationships or power dynamics within and outside the
organization. Developing algorithmic management systems relies on high upfront costs with
uncertain long-term benefits (Keding, 2021). The scarcity and high cost of AI talent explain why
most gig-work platforms require extensive venture capital funding to develop the underlying
technological infrastructures. While gig work platforms are opaque in their processes, most of
their algorithms are developed ‘in-house’, giving their management the ability to continuously
‘tweak’ the algorithm. Indeed, Uber has become notorious for continuous changes to their
algorithms, to the detriment of both workers and customers. In standard organisations (non-
platform based), however, technological infrastructures are often provided through the Cloud by
third party AI suppliers on an AI-as-a-service basis (Parsaeefard et al., 2019). Organisations
using Microsoft Teams, for instance, may rely on the ‘365 productivity score’ to rate worker
performance based on how much they use Microsoft 365 products (e.g., Word, Excel, or
Teams). This externalization, however, not only restricts managerial agency in ‘tweaking’ the algorithms but also prevents managers from understanding how the algorithmic systems operate.
From an intra-organizational perspective, a major concern is also the opacity of the broader
organizational workflow within which AI systems are embedded and the ‘high’ or ‘low’ stakes
involved along that flow (Rudin, 2019). The overarching workflow itself may be complex, with
several processes unfolding across time and space. Entwined with the concept of algorithmic
authority, Polack (2020) refers to how professional gatekeeping may hinder algorithmic
transparency since only ML developers or top managers may feel the need (or get permission)
to understand how the algorithmic system works.
Overcoming opacity
Recently, regulatory effort has been put into figuring out how to make algorithms more
transparent and explainable (Felzmann et al., 2019). Such techniques can be applied to some
degree within organisational settings. One approach is the algorithmic audit, which is predicated
on the idea that algorithmic biases may produce a record that can be read, understood, and
altered in future iterations (Diakopoulos, 2016; Silva and Kenney, 2018). Conversely, Polack
(2020) has proposed that, as opposed to reverse-engineering algorithms, effort be invested in
‘forward-engineering’ algorithms to establish how their design is shaped by varying constraints.
Auditing is especially important in enterprise applications of AI, such as in regulated industries
(e.g., health/medical, finance), as well as in the European Union where the GDPR grants
individuals a ‘Right to Explanation’ (Casey et al., 2019). There are several significant types of
information, the disclosure of which could help provide a kind of algorithmic literacy (including
human involvement, data, the model, inferencing, and algorithmic presence) (Diakopoulos,
2016; Jarrahi and Sutherland, 2019). As organizations increasingly adopt (or even rely on)
algorithms as the backbone for people management, workplace-related concerns may even
raise algorithmic management as a pressing concern for contemporary labor laws.
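As one indication of what a lightweight internal audit could look like, the sketch below computes group-level selection rates from a hypothetical decision log and flags disparities using the ‘four-fifths’ rule of thumb; a real audit would of course be far broader in scope.

```python
# Sketch of a simple internal audit: compare selection rates across groups
# in a (hypothetical) log of algorithmic decisions and flag disparities.
from collections import defaultdict

decisions = [  # (group, selected) pairs from a hypothetical decision log
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in decisions:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: s / t for g, (s, t) in counts.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates)
# A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8.
print(f"disparate impact ratio: {ratio:.2f}", "-> flag" if ratio < 0.8 else "-> ok")
```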
However, in the absence of labor-specific and organisation-specific policies that govern
algorithmic management, access to knowledge about which algorithms are deployed, how
they are enacted, and what impact they have on each worker, is limited. In platform-mediated
gig work, considerable attention has been devoted to ‘unboxing’ the algorithms, trying to ascertain
how the different algorithmic systems function (Chen et al., 2015; Van Doorn, 2020). Algorithmic
management has become a target of intense analysis and critique among the disparate
workforce, as well as by the media and research community. As a result, more information is
available about how gig work platforms operate than about how the more everyday instances of
algorithmic management function, which are easier to overlook and not centralised within a
single platform. Continuous audits of algorithmic systems, within an organisation, may provide
an internal form of inquiry into how algorithms are organizing work-related processes (Buhmann
et al., 2020). Organizations can also organize regular communication and deliberation
opportunities to allow stakeholders to collectively assess the development and impact of
algorithmic management systems. The transparency by design framework (Felzmann et al.,
2020) or a participatory AI design framework (Lee et al., 2019) can provide procedures and
tools that enable such stakeholder participation. Yet, one caveat is that developing such forms
of transparency still needs the active engagement of workers to uncover the information. As
demonstrated from the gig work setting, algorithmic opacity is not overcome without struggle,
effort, and risk.
Discussion and conclusion
Algorithmic management is changing the workplace and the relationships between workers,
managers, and algorithmic systems. Three key trends have supported the rise of algorithmic
management. Firstly, shifting norms of what constitutes work (e.g., project-centric work
arrangements, platform work, and non-standard contracts); secondly, the expanding technical
capabilities of machine-learning based algorithms to replace discrete managerial tasks (Khan et
al., 2019); and thirdly, wide-scale micro-instances of organizational choice to use algorithmic
management, due to local economic and strategic goals.
While intelligent systems continue to reshape our conceptions of work (its knowledge, boundaries, power structures, and overall organization), self-learning algorithms have the potential to chip away at several foundational notions of work and organization. For example, the deployment of intelligent systems will usher in new work systems with different divisions of work between machines and humans, where more mundane and data-centric tasks are assigned to machines while humans engage in tasks requiring social intelligence, tacit understanding, or imagination. Moreover, new roles will be defined for managers as strategic and creative thinkers rather than only coordinators of organizational transactions. While public concerns today focus on machines taking jobs from humans, care and concern are also needed around how these
machines will contribute to the management of human actions.
Adopting a sociotechnical perspective, we have argued that algorithmic management embodies
both social and technological elements that interact with one another and together shape
algorithmic outcomes. Algorithms must be recognized in the social and political context of
employment relationships (Orlikowski and Scott, 2016). Issues such as opacity around the application of algorithms in organizations derive from the interaction of the technological and material characteristics of algorithms with their surrounding organizational dynamics. For
example, unique characteristics of the emerging AI algorithms (powered by deep learning) set
them apart from the previous generations of AI systems, and render their inferences intrinsically
opaque (due to the complicated nature of underlying neural networks). However, organizational
politics and interests in minimizing public disclosure of decision-making processes can further
deepen or even build on the opacity of algorithmic management to deflect accountability. As
such, it is often hard to pinpoint the origin of these problems and separate the two elements as
both the social and technical are entangled in current practices and are considered co-
constitutive.
A driving concern throughout the implementation of algorithmic management is organisational
accountability: holding algorithmically-driven decisions accountable to various stakeholders
within and outside the boundaries of the organisation (Diakopoulos, 2016; Mateescu and
Nguyen, 2019). Discussions surrounding algorithmic accountability frequently refer to the
transparency of algorithms, how organizations practically engage with opaque algorithms, and
how to develop a sense of trustworthiness among organizational stakeholders (Buhmann et al.,
2020). When addressing systemic discrimination, even simple oversight of algorithmic functions
is problematic due to a multiplicity of stakeholders, interests, and sociopolitical factors innate to
algorithms. For example, blame was placed on a distribution ‘algorithm’ at Stanford Medical Center when controversy erupted over the misallocation of Covid-19 vaccines among different stakeholders. Administrators, in this case, were prioritized over frontline doctors. However, the algorithm was in fact not a complicated deep learning one, but quite a simple decision-tree algorithm designed by a committee (Lum and Chowdhury, 2021). A technocentric culture often fails to provide full rationales for decision making and presents algorithmic results as ‘facts’ to users rather than as probabilistic predictions. Rubel et al. (2019) refer to this as agency laundering, wherein human
decision makers launder their agency by distancing themselves from morally suspect decisions
and by assigning the fault to automated systems. This organizational approach goes against
algorithmic accountability, which ‘rejects the common deflection of blame to an automated
system by ensuring those who deploy an algorithm cannot eschew responsibility for its actions’
(Garfinkel et al., 2017).
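To illustrate how unassuming the ‘algorithm’ in such cases can be, the sketch below presents a hypothetical committee-style point system that loosely echoes reported features of the Stanford case (it is not the actual rule set): seemingly neutral rules can systematically deprioritize frontline staff.

```python
# Illustrative only (not the actual Stanford algorithm): a committee-style
# rule set can quietly deprioritize frontline staff while looking neutral.
def priority_score(age: int, positivity_rate_pct: float, on_site_share: float) -> float:
    """Hypothetical point system combining age and department-level factors."""
    score = 0.0
    score += age / 10                      # older staff score higher
    score += positivity_rate_pct           # department Covid positivity rate
    score += 2.0 if on_site_share > 0.5 else 0.0
    return score

# A young resident rotating across departments (low per-department on-site
# share) can end up below an older administrator in a fixed department.
print(priority_score(age=29, positivity_rate_pct=1.0, on_site_share=0.3))  # resident: 3.9
print(priority_score(age=58, positivity_rate_pct=0.5, on_site_share=0.9))  # administrator: 8.3
```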
Algorithmic management touches upon many stakeholders, but no one inside the work context
is impacted by algorithmic management more than managers and workers themselves. The
relationship between the two parties will continue to be reconfigured and negotiated through
uses of algorithms in organizations. Not only does algorithmic management reconfigure the
power dynamics in the workplace but it also embeds itself in already existing power and social
structures of the organization. In short, the deployment of algorithmic management in
organizational work introduces novel ‘human-machine configurations’ that could transform
relationships between managers and workers and their respective roles (Grønsund and
Aanestad 2020). For example, algorithmic management may translate into cost-cutting and
consequently labor-cutting measures (Mateescu and Nguyen 2019).
With the rise of algorithmic systems, human managers are tasked with deciding what kinds of
algorithmic software to adopt in their organization, whether it is for performance reviews,
incentive amounts, or departure alerts. While these do not entirely remove human decision-
making from the equation, they do encourage new ways of approaching, understanding, and
acting upon such information. Workers, though, who are faced with algorithmic management
processes within their workplace, as of now have little recourse to detect, comprehensively
understand, or work around undesirable outcomes. Protecting the dignity and the wherewithal of
workers, and of those who manage their work, remains a critical concern as algorithmic
management becomes a more commonplace phenomenon. Yet, if the gig economy is any
template, reward-oriented algorithmic management processes are unlikely to favor workers over
other stakeholders. While this scenario is hypothetical, more public awareness and scrutiny is
needed about the role of learning and self-learning algorithms in influencing algorithmic
management trends.
Throughout this article, we argued that algorithmic management is a sociotechnical
phenomenon. This means that while algorithms have a central bearing on how work is
managed, their outcomes in transforming management and the relationship between workers and
managers are socially constructed and enacted. Future research is, however, needed to
investigate the mutual shaping of algorithms that manage work and the unique dynamics of different
work contexts. In this regard, we envision three specific directions for future research.
First, as noted, much of the discourse around algorithmic management is rooted in platform-
mediated gig work. Gig work’s unique characteristics, such as a lack of pre-existing
management systems and non-standard work arrangements, limit the transferability of current
findings to standard work settings. It is therefore important to explore contextual variations in
how algorithmic management unfolds and corresponding sociocultural differences across
industries and organizational contexts. For example, algorithmic management may appear
differently in organizations with flat hierarchies and more democratic organizational cultures
than in bureaucratic organizations. Studies of non-platform parallels to popular gig work industries, such as food delivery and freelancing, would be particularly valuable, since we could observe the differences in power dynamics when algorithmic management is implemented from the start or later in an organization’s history.
Second, the use of intelligent systems empowered by deep learning in more traditional
organizations is still embryonic. As such, most research on algorithmic management and the impact of algorithms in organizations concerns previous generations of AI (i.e., rule-based AI), so this work may not fully reflect some of the technological characteristics of emerging AI systems and their growing applications in algorithmic management. Future research is needed to examine
the interplay of these technological capabilities and ways that organizations manage work in
practice.
Thirdly, empirical research into the worker perspective, such as through ethnography, is vitally
important in determining how algorithmic management is experienced ‘on the front lines’ in
standard organizations. Participatory action research, to understand the multi-stakeholder and
socio-technical nature of algorithmic management implementation, would also offer
considerable value in determining how to ensure a fair and democratic roll-out of algorithmic
management. Conversely, direct participatory research may identify normative parameters of
algorithmic management, highlighting where organizations should resist isomorphic pressures
and avoid implementing algorithmic management at all.
References
Abraham M, Niessen C, Schnabel C, Lorek K, Grimm V, Möslein K and Wrede M (2019)
Electronic monitoring at work: The role of attitudes, functions, and perceived control for the
acceptance of tracking technologies. Human Resource Management Journal 29(4): 657-675.
Adadi A and Berrada M (2018) Peeking inside the black-box: A survey on explainable artificial
intelligence (XAI). IEEE Access 6: 52138-52160.
Bader V and Kaiser S (2019) Algorithmic decision-making? The user interface and its role for
human involvement in decisions supported by artificial intelligence. Organization 26(5): 655-672.
Bersin J and Zao-Sanders M (2020, February 12) Boost your team's data literacy. Harvard
Business Review. Available at: https://hbr.org/2020/02/boost-your-teams-data-literacy
(accessed 9 March 2021).
Bucher E, Fieseler C and Lutz C (2019) Mattering in digital labor. Journal of Managerial
Psychology 34(4): 307-324.
Bucher E, Schou P and Waldkirch M (2021) Pacifying the algorithm: Anticipatory compliance in
the face of algorithmic management in the gig economy. Organization 28(1): 44-67.
Buhmann A, Paßmann J and Fieseler C (2020) Managing algorithmic accountability: Balancing
reputational concerns, engagement strategies, and the potential of rational discourse. Journal of
Business Ethics 163: 265-280.
Burke A (2019) Occluded algorithms. Big Data & Society 6(2): 1-15.
Burrell J (2016) How the machine ‘thinks’: Understanding opacity in machine learning
algorithms. Big Data & Society 3(1): 1-12.
Burton JW, Stein M and Jensen TB (2020) A systematic review of algorithm aversion in
augmented decision making. Journal of Behavioral Decision Making 33(2): 220-239.
Cappelli P (2018, February 20) Are Algorithms Good Managers? Human Resource Executive.
Available at: http://hrexecutive.com/are-algorithms-good-managers/ (accessed 9 March 2021).
Cappelli P, Tambe P and Yakubovich V (2019) Artificial intelligence in human resources
management: Challenges and a path forward. California Management Review 61(4): 15-42.
Casey B, Farhangi A and Vogl R (2019) Rethinking explainable machines: The GDPR's right to
explanation debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law
Journal 34: 143.
Chan J and Wang J (2018) Hiring preferences in online labor markets: Evidence of a female
hiring bias. Management Science 64(7): 2973-2994.
Chen L, Mislove A and Wilson C (2015) Peeking beneath the hood of Uber. In: Proceedings of
the 2015 Internet Measurement Conference, pp. 495-508.
Chien S-Y, Lewis M, Sycara K, Liu J-S and Kumru A (2018) The effect of culture on trust in
automation: Reliability and workload. ACM Transactions on Interactive Intelligent Systems 8(4):
Article 29.
Christin A (2017) Algorithms in practice: Comparing web journalism and criminal justice. Big
Data & Society 4(2): 1-14.
Crowston K and Bolici F (2019) Impacts of machine learning on work. In: Proceedings of the
52nd Hawaii International Conference on System Sciences. Hawaii, USA.
Darr A (2019) Automatons, sales-floor control and the constitution of authority. Human Relations
72(5): 889-909.
Demetis D and Lee A (2018) When humans using the IT artifact becomes IT using the human
artifact. Journal of the Association for Information Systems 19(10): 929-952.
Diakopoulos N (2016) Accountability in algorithmic decision making. Communications of the
ACM 59(2): 56-62.
Dietvorst BJ, Simmons JP and Massey C (2016) Overcoming algorithm aversion: People will
use imperfect algorithms if they can (even slightly) modify them. Management Science 64(3):
1155-1170.
Dourish P (2016) Algorithms and their others: Algorithmic culture in context. Big Data & Society
3(2): 1-11.
Duggan J, Sherman U, Carbery R and McDonnell A (2019) Algorithmic management and app
work in the gig economy: A research agenda for employment relations and HRM. Human
Resource Management Journal 30(1): 114-132.
Dzieza J (2020, February 27) How hard will the robots make us work? The Verge. Available at:
https://www.theverge.com/2020/2/27/21155254/automation-robots-unemployment-jobs-vs-
human-google-amazon (accessed 9 March 2021).
Felzmann H, Fosch-Villaronga E, Lutz C and Tamò-Larrieux A (2019) Transparency you can
trust: Transparency requirements for artificial intelligence between legal norms and contextual
concerns. Big Data & Society 6(1): 1-14.
Felzmann H, Fosch-Villaronga E, Lutz C and Tamò-Larrieux A (2020) Towards transparency by
design for artificial intelligence. Science and Engineering Ethics: 1-29.
Ferràs-Hernández X (2018) The future of management in a world of electronic brains. Journal of
Management Inquiry 27(2): 260-263.
Fleming P (2019) Robots and organization studies: Why robots might not want to steal your job.
Organization Studies 40(1): 23-38.
Foss NJ and Klein PG (2014) Why managers still matter. MIT Sloan Management Review 56(1):
73.
Frischmann B and Selinger E (2018) Re-Engineering Humanity. Cambridge, UK: Cambridge
University Press.
Gal U, Jensen TB and Stein M-K (2020) Breaking the vicious cycle of algorithmic management:
A virtue ethics approach to people analytics. Information and Organization 30(2): 100301.
Galliers RDS, Newell S, Shanks G and Topi H (2017) Datification and its human, organizational
and societal effects: The strategic opportunities and challenges of algorithmic decision-making.
The Journal of Strategic Information Systems 26(3): 185-190.
Garfinkel S, Matthews J, Shapiro SS and Smith JM (2017) Toward algorithmic transparency and
accountability. Communications of the ACM 60(9): 5.
Gilpin LH, Testart C, Fruchter N and Adebayo J (2019) Explaining explanations to society. arXiv
Repository. Available at: https://arxiv.org/abs/1901.06560 (accessed 9 March 2021).
Gray ML and Suri S (2019) Ghost work: How to stop Silicon Valley from building a new global
underclass. San Francisco, CA: Houghton Mifflin Harcourt.
Grønsund T and Aanestad M (2020) Augmenting the algorithm: Emerging human-in-the-loop
work configurations. The Journal of Strategic Information Systems 29(2): 101614.
Harms PD and Han G (2019) Algorithmic leadership: The future is now. Journal of Leadership
Studies 12(4): 74-75.
Huws U (2016) Logged labor: A new paradigm of work organisation? Work Organisation,
Labour & Globalisation 10(1): 7-26.
Jabagi N, Croteau AM, Audebrand LK and Marsan J (2019) Gig-workers’ motivation: Thinking
beyond carrots and sticks. Journal of Managerial Psychology 34(4): 192-213.
Jarrahi MH (2018) Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making. Business Horizons 61(4): 577-586.
Jarrahi MH (2019) In the age of the smart artificial intelligence: AI's dual capacities for
automating and informating work. Business Information Review 36(4): 178-187.
Jarrahi MH and Sutherland W (2019) Algorithmic management and algorithmic competencies:
Understanding and appropriating algorithms in gig work. In: International Conference on
Information. Cham: Springer, pp. 578-589.
Jarrahi MH, Sutherland W, Nelson SB and Sawyer S (2019) Platformic management, boundary
resources for gig work, and worker autonomy. Computer Supported Cooperative Work (CSCW)
29: 153-189.
Kahneman D, Rosenfield AM, Gandhi L and Blaser T (2016) Noise: How to overcome the high,
hidden cost of inconsistent decision making. Harvard Business Review 94: 38-46.
Keding C (2021) Understanding the interplay of artificial intelligence and strategic management:
Four decades of research in review. Management Review Quarterly 71: 91-134.
Kellogg KC, Valentine MA and Christin A (2020) Algorithms at work: The new contested terrain
of control. Academy of Management Annals 14(1): 366-410.
Kessler S (2017, March 13) The influence of Uber ratings is about to be felt in the hallways of
one of the world’s largest banks. Quartz. Available at: https://qz.com/930080/jp-morgan-chase-
is-developing-a-tool-for-constant-performance-reviews/ (accessed 9 March 2021).
Khan M, Jan B and Farman H (2019) Deep learning: Convergence to big data analytics.
Singapore: Springer.
Kleinberg J, Ludwig J, Mullainathan S and Sunstein CR (2018) Discrimination in the age of
algorithms. Journal of Legal Analysis 10: 113-174.
Köchling A and Wehner MC (2020) Discriminated by an algorithm: A systematic review of
discrimination and fairness by algorithmic decision-making in the context of HR recruitment and
HR development. Business Research 13: 795-848.
Kolbjørnsrud V, Amico R and Thomas RJ (2017) Partnering with AI: how organizations can win
over skeptical managers. Strategy & Leadership 45(1): 37-43.
Laker B, Godley W, Patel C and Cobb D (2020, November 20). How to monitor remote workers
- ethically. MIT Sloan Management Review. Available at: https://sloanreview.mit.edu/article/how-
to-monitor-remote-workers-ethically/ (accessed 9 March 2021).
Lambrecht A and Tucker C (2019) Algorithmic bias? An empirical study of apparent gender-
based discrimination in the display of STEM career ads. Management Science 65(7):
2966-2981.
Lee MK (2018) Understanding perception of algorithmic decisions: Fairness, trust, and emotion
in response to algorithmic management. Big Data & Society 5(1): 1-16.
Lee MK, Kusbit D, Kahng A et al. (2019) WeBuildAI: Participatory framework for algorithmic
governance. Proceedings of the ACM on Human-Computer Interaction 3(CSCW): 1-35.
Lee MK, Kusbit D, Metsky E and Dabbish L (2015) Working with machines: The impact of
algorithmic and data-driven management on human workers. In: Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems, pp. 1603-1612.
Lee MY and Edmondson AC (2017) Self-managing organizations: Exploring the limits of less-
hierarchical organizing. Research in Organizational Behavior 37: 35-58.
Leicht-Deobald U, Busch T, Schank C, Weibel A, Schafheitle S, Wildhaber I and Kasper G
(2019) The challenges of algorithm-based HR decision-making for personal integrity. Journal of
Business Ethics 160: 377-392.
Lepri B, Oliver N, Letouzé E, Pentland A and Vinck P (2018) Fair, transparent, and accountable
algorithmic decision-making processes. Philosophy & Technology 31(4): 611-627.
Levy K (2015) The contexts of control: Information, power, and truck-driving work. Information
Society 31: 160-174.
Levy K and Barocas S (2018) Privacy at the margins - refractive surveillance: Monitoring
customers to manage workers. International Journal of Communication 12(0): 23.
Li L, Lassiter T, Oh J and Lee MK (2021) Algorithmic hiring in practice: Recruiter and HR
professional's perspectives on AI use in hiring. In: Proceedings of the AAAI/ACM Conference on
Artificial Intelligence, Ethics, and Society (AIES 2021).
Lichtenthaler U (2018) Substitute or synthesis: the interplay between human and artificial
intelligence. Research-Technology Management 61(5): 12-14.
Logg JM, Minson JA and Moore DA (2019) Algorithm appreciation: People prefer algorithmic to
human judgment. Organizational Behavior and Human Decision Processes 151: 90-103.
Lum K and Chowdhury R (2021, February 26). What is an ‘algorithm’? It depends whom you
ask. MIT Technology Review. Available at:
https://www.technologyreview.com/2021/02/26/1020007/what-is-an-algorithm/ (accessed 9
March 2021).
Mateescu A and Nguyen A (2019) Algorithmic management in the workplace. Data & Society.
Available at: https://datasociety.net/wp-
content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf (accessed 9 March
2021).
Meijerink J and Keegan A (2019) Conceptualizing human resource management in the gig
economy: Toward a platform ecosystem perspective. Journal of Managerial Psychology 34(4):
214-232.
Möhlmann M and Henfridsson O (2019) What people hate about being managed by algorithms,
according to a study of Uber drivers. Harvard Business Review. Available at:
https://hbr.org/2019/08/what-people-hate-about-being-managed-by-algorithms-according-to-a-
study-of-uber-drivers (accessed 9 March 2021).
Newell S and Marabelli M (2015) Strategic opportunities (and challenges) of algorithmic
decision-making: A call for action on the long-term societal effects of "datification." The Journal
of Strategic Information Systems 24(1): 3-14.
Newlands G (2020) Algorithmic surveillance in the gig economy: The organization of work
through Lefebvrian conceived space. Organization Studies. Epub ahead of print 9 July 2020.
DOI: 10.1177/0170840620937900.
Newlands G, Lutz C, Tamò-Larrieux A, Villaronga EF, Harasgama R and Scheitlin G (2020)
Innovation under pressure: Implications for data privacy during the Covid-19 pandemic. Big
Data & Society 7(2): 2053951720976680.
Noponen N (2019) Impact of artificial intelligence on management. Electronic Journal of
Business Ethics and Organization Studies 24(2): 43-50.
Obar JA (2020) Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data
black boxes (without assistance). Big Data & Society. Epub ahead of print 23 June 2020.
DOI: 10.1177/2053951720935615.
Orlikowski WJ and Scott SV (2016) Digital work: A research agenda. In: Czarniawska B (ed) A
Research Agenda for Management and Organization Studies. Edward Elgar Publishing,
pp. 88-96.
Parasuraman R and Manzey DH (2010) Complacency and bias in human use of automation: An
attentional integration. Human Factors 52(3): 381-410.
Parsaeefard S, Tabrizian I and Leon-Garcia A (2019) Artificial intelligence as a service (AI-aaS)
on software-defined infrastructure. In: 2019 IEEE Conference on Standards for Communications
and Networking (CSCN), Granada, Spain, pp. 1-7.
Pignot E (2021) Who is pulling the strings in the platform economy? Accounting for the dark and
unexpected sides of algorithmic control. Organization 28(1): 208-235.
Pinsonneault A and Kraemer KL (1997) Middle management downsizing: An empirical
investigation of the impact of information technology. Management Science 43(5): 659-679.
Polack P (2020) Beyond algorithmic reformism: Forward engineering the designs of algorithmic
systems. Big Data & Society. Epub ahead of print 20 March 2020. DOI:
10.1177/2053951720913064.
Prahl A and Van Swol L (2017) Understanding algorithm aversion: When is advice from
automation discounted? Journal of Forecasting 36(6): 691-702.
Rubel AP, Castro C and Pham A (2019) Agency laundering and information technologies.
Ethical Theory and Moral Practice: An International Forum 22: 1017-1041.
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions
and use interpretable models instead. Nature Machine Intelligence 1(5): 206-215.
Sawyer S and Jarrahi M (2014) Sociotechnical approaches to the study of information systems.
In: Topi H and Tucker A (eds) Computing Handbook. Chapman and Hall/CRC, pp. 1-27.
Schoukens P and Barrio A (2017) The changing concept of work: When does typical work
become atypical? European Labour Law Journal 8(4): 306-332.
Shapiro A (2018) Between autonomy and control: Strategies of arbitrage in the "on-demand"
economy. New Media & Society 20(8): 2954-2971.
Shestakofsky B (2017) Working algorithms: Software automation and the future of work. Work
and Occupations 44(4): 376-423.
Shrestha YR, Ben-Menahem SM and von Krogh G (2019) Organizational decision-making
structures in the age of artificial intelligence. California Management Review 61(4): 66-83.
Silva S and Kenney M (2018) Algorithms, platforms, and ethnic bias: An integrative essay.
Phylon 55(1 & 2): 9-37.
Sundar SS and Kim J (2019) Machine heuristic: When we trust computers more than
humans with our personal information. In: Proceedings of the 2019 CHI Conference on Human
Factors in Computing Systems, pp. 1-9.
Sutherland W and Jarrahi MH (2018) The sharing economy and digital platforms: A review and
research agenda. International Journal of Information Management 43: 328-341.
The Guardian (2018, October 10). Amazon ditched AI recruiting tool that favored men for
technical jobs. The Guardian. Available at:
https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-
engine (accessed 9 March 2021).
Van Doorn N (2020) At what price? Labour politics and calculative power struggles in on-demand
food delivery. Work Organisation, Labour & Globalisation 14(1): 136-149.
Von Krogh G (2018) Artificial intelligence in organizations: New opportunities for phenomenon-
based theorizing. Academy of Management Discoveries 4(4): 404-409.
Wagner AR, Borenstein J and Howard A (2018) Overtrust in the robotic age. Communications of
the ACM 61(9): 22-24.
Wang D, Yang Q, Abdul A and Lim BY (2019) Designing theory-driven user-centric explainable
AI. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,
Paper 601, pp. 1-15.
Watkins EA (2020) The “crooked set up”: Algorithmic fairness and the organizational citizen.
Available at: http://fair-ai.owlstown.com/publications/1428 (accessed 9 March 2021).
Wesche JS and Sonderegger A (2019) When computers take the lead: The automation of
leadership. Computers in Human Behavior 101: 197-209.
Wilson HJ, Daugherty P and Bianzino N (2017) The jobs that artificial intelligence will create.
MIT Sloan Management Review 58(4): 14-16.
Wolf CT and Blomberg JL (2019) Evaluating the promise of human-algorithm collaborations in
everyday work practices. Proceedings of the ACM on Human-Computer Interaction 3(CSCW):
Article 143. DOI: 10.1145/3359245.
Wood AJ, Graham M, Lehdonvirta V and Hjorth I (2019) Good gig, bad gig: Autonomy and
algorithmic control in the global gig economy. Work, Employment and Society 33(1): 56-75.
Yarger L, Payton FC and Neupane B (2019) Algorithmic equity in the hiring of underrepresented
IT job candidates. Online Information Review 44(4): 383-395.
Zuboff S (1988) In the age of the smart machine: The future of work and power. New York, NY:
Basic Books.