Original Research Article
Algorithmic management in a work
context
Mohammad Hossein Jarrahi¹, Gemma Newlands², Min Kyung Lee³, Christine T. Wolf⁴, Eliscia Kinder¹ and Will Sutherland⁵

¹University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
²BI Norwegian Business School, Oslo, Norway
³The University of Texas at Austin, Austin, TX, USA
⁴Independent Researcher, San Jose, CA, USA
⁵University of Washington, Seattle, WA, USA

Corresponding author: Gemma Newlands, BI Norwegian Business School, Oslo 0442, Norway. Email: gemma.e.newlands@bi.no
Abstract
The rapid development of machine-learning algorithms, which underpin contemporary artificial intelligence systems, has
created new opportunities for the automation of work processes and management functions. While algorithmic manage-
ment has been observed primarily within the platform-mediated gig economy, its transformative reach and consequences
are also spreading to more standard work settings. Exploring algorithmic management as a sociotechnical concept, which
reflects both technological infrastructures and organizational choices, we discuss how algorithmic management may influ-
ence existing power and social structures within organizations. We identify three key issues. First, we explore how algo-
rithmic management shapes pre-existing power dynamics between workers and managers. Second, we discuss how algo-
rithmic management demands new roles and competencies while also fostering oppositional attitudes toward algorithms.
Third, we explain how algorithmic management impacts knowledge and information exchange within an organization,
unpacking the concept of opacity on both a technical and organizational level. We conclude by situating this piece in broader
discussions on the future of work and accountability, and by identifying future research steps.
Keywords
Algorithmic competencies, algorithmic management, artificial intelligence, opacity, power dynamics, future of work
Introduction
From restaurants to try, movies to watch, or routes to
take, machine learning (ML) algorithms¹ increasingly
shape many aspects of everyday human experiences
through the recommendations they make and actions
they suggest. Algorithms also shape organizational
activity through semi- or fully automating the manage-
ment, coordination, and administration of a workforce
(Crowston and Bolici, 2019). Termed “algorithmic
management” or “management-by-algorithm”, this
trend has come to be understood as the delegation of
managerial functions to algorithms (Lee, 2018; Lee
et al., 2015; Noponen, 2019). A defining feature of
algorithmic management is the data which fuels the
predictive modeling techniques, with many acknowl-
edging that the political economy of data capture is a
significant driver in transforming labor norms
(Dourish, 2016; Newlands, 2020; Shestakofsky, 2017).
Prior research on algorithmic management has
focused on platform-mediated gig work, where workers
on non-standard contracts usually commit only
short-term to a given organization (Harms and Han,
2019; Jarrahi and Sutherland, 2019). In what Huws
(2016) describes as a “new paradigm of work”, algo-
rithmic systems in the gig economy track worker per-
formance, perform job matching, generate employee
rankings, and can even resolve disputes between work-
ers (Duggan et al., 2019; Wood et al., 2018). Operating
as the primary mechanism of coordination in the gig
economy (Bucher et al., 2021; Lee et al., 2015), plat-
forms can support millions of transactions a day across
disaggregated workforces (Mateescu and Nguyen,
2019). Much of what we know about algorithmic man-
agement comes from nascent research in this domain
(e.g. Meijerink and Keegan, 2019; Sutherland and
Jarrahi, 2018), where a particular focus has been
placed on how algorithmic management both substitutes
for and complements traditional managerial over-
sight (Cappelli, 2018; Newlands, 2020).
However, algorithmic management is not confined to
platform-mediated gig work (Möhlmann and
Henfridsson, 2019). Recent years have also witnessed
the parallel development of algorithmic management in
more standard work settings, referring to work
arrangements that are stable, continuous, full time,
and involve a direct relationship between the employ-
ee and their unitary employer (typically organizations
with clearer structures and boundaries) (Schoukens and
Barrio, 2017). In contrast to most gig work settings,
algorithmic systems in standard organizations emerge
within pre-existing power dynamics between managers
and workers. As a sociotechnical process emerging
from the continuous interaction of organizational
members and algorithmic systems, algorithmic man-
agement in standard work settings reflects and rede-
fines pre-existing roles, relationships, power
dynamics, and information exchanges. Deeply embed-
ded in pre-existing social, technical, and organizational
structures of the workplace, algorithmic management
emerges at the intersection of managers, workers, and
algorithms. As von Krogh (2018) explains, both tradi-
tional and non-traditional work settings will be increas-
ingly shaped by the “interaction of human and machine
authority regimes” (p.406).
For instance, algorithms can assist Human
Resources (HR) in filtering job applicants (Leicht-
Deobald et al., 2019), fire warehouse workers deemed
too slow, and improve work morale through fine-
grained people analytics (Gal et al., 2020). The
growth of algorithmic management should therefore
be understood as a plurality of decisions made by
human managers to alter work processes. Each deci-
sion occurs not in a vacuum but entwined with diverse
considerations and consequences. However, the in-situ
interactions between workers, managers, and algorithms
in standard work contexts remain relatively uncharted (Duggan et al., 2019;
Jarrahi, 2019; Wolf and Blomberg, 2019). As algorith-
mic management systems move from cutting-edge
research to routine aspects of everyday organizations,
research is needed to explore the moral implications of
algorithmic management and labor conditions such
systems create (Leicht-Deobald et al., 2019).
This article’s key contribution, therefore, is to look
at how the emergence of algorithmic management
across both standard and non-standard work settings
interfaces with pre-existing organizational dynamics,
roles, and competencies. This article is structured as
follows. First, we will establish algorithmic manage-
ment as a sociotechnical concept, discussing how it
develops through organizational choices. Following this,
we will explain how algorithmic management impacts
power dynamics at work, both increasing the power of
managers over workers, while simultaneously decreas-
ing managerial authority. Next, we will explore how
algorithmic management shapes organizational roles
through the development of both algorithmic compe-
tencies and oppositional attitudes, such as algorithm
aversion and cognitive complacency. In this, we direct-
ly address how both the intertwined technical and orga-
nizational opacity of algorithms shapes workers’
competencies, as well as information and knowledge
exchange. We will conclude by situating this piece in
broader discussions on the future of work and
accountability, and by identifying future steps.
Algorithmic management as a
sociotechnical concept
Discourse around algorithmic management often
translates into a simplified narrative of algorithmic sys-
tems progressively replacing human roles (Jabagi et al.,
2019). However, examining algorithmic management in
standard organizational contexts means looking
beyond the idea that algorithms will operate autono-
mously as technological entities (von Krogh, 2018).
Algorithmic management should rather be understood
as a sociotechnical process emerging from the continu-
ous interaction of organizational members and the
algorithms that mediate their work (Jarrahi and
Sutherland, 2019). This sociotechnical perspective
underscores the mutual constitution of technological
systems and social actors, where relationships are
socially constructed and enacted (Sawyer and Jarrahi,
2014). In this, humans and algorithms form “an assem-
blage in which the components of their differing origins
and natures are put together and relationships between
them are established” (Bader and Kaiser, 2019: 656).
Mutually constituted with organizational surround-
ings, algorithmic management both reflects and rede-
fines existing relationships between managers and
workers. The boundaries between the responsibilities
of managers, workers, and algorithms are not fixed
and are constantly negotiated and enacted in manage-
ment practices. In other words, understanding the
emerging role of algorithms in organizations means
taking a sociotechnical perspective and moving from
questions of replacement or substitution toward ques-
tions of balance, coordination, contestation, and
negotiation.
Standard organizations implement algorithmic man-
agement by drawing on a variety of data-driven tech-
nological infrastructures, such as automated
scheduling, people analytics, or recruitment systems.
Automated scheduling systems, for instance, have
been used widely in the retail and service industries to
predict labor demand and schedule workers based on
data regarding customer demands, seasonal patterns,
and past sales data (Pignot, 2021). In the case of
people analytics, algorithmic systems leverage data on
worker behavior to offer actionable recommendations
for managers regarding key decisions such as motiva-
tion, performance appraisal, and promotion (Gal et al.,
2020). Through systems such as Microsoft Teams,
organizations can also collect data about the minutiae
of a worker’s activity, productivity, and other granular
aspects of their performance ranging from daily pat-
terns of a specific employee’s behaviors all the way to
macro trends of how an organization manages its HR
over time toward strategic goals (Galliers et al., 2017).
Voice analysis algorithms, for example, are used in call
centers to decide whether workers express adequate
empathy. Meanwhile, algorithms in Amazon ware-
houses closely monitor workers’ performances, rate
their speed, and terminate workers if they fall behind
(Dzieza, 2020). Algorithmic decision-making is also
becoming popular for recruitment as manifested
through CV screening and algorithmic evaluations of
telephone or video interviews (Köchling and Wehner,
2020; Yarger et al., 2020).
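To make the logic of such systems concrete, the following minimal sketch shows, under heavily simplified assumptions, how an automated scheduler might turn historical sales into staffing decisions; the data, the service rate, and the staffing rule are all illustrative rather than drawn from any deployed product.

```python
# Illustrative sketch of automated scheduling logic: forecast hourly demand
# from past sales, then staff shifts to match. Deployed systems are far more
# elaborate; the figures and the service-rate assumption are hypothetical.
import statistics

past_sales = {9: [12, 15, 11], 12: [40, 44, 38], 17: [25, 27, 30]}  # units/hour
SALES_PER_WORKER_HOUR = 10  # assumed number of sales one worker can handle

for hour, history in sorted(past_sales.items()):
    forecast = statistics.mean(history)
    staff = max(1, round(forecast / SALES_PER_WORKER_HOUR))
    print(f"{hour:02d}:00 -> forecast {forecast:.1f} sales, schedule {staff} worker(s)")
```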
Focusing primarily on the technology involved in
algorithmic management, some perceive algorithms as
“technocratic and dispassionate” decision makers that
can provide more objective and consistent decisions
than humans (Kahneman et al., 2016; Kleinberg
et al., 2018). However, it is important to underline
how algorithms lack any sense of individual purpose:
they must have their objectives defined and algorith-
mic systems must be deliberately fed and trained on
organizational data. As such, there is a significant ele-
ment of organizational choice in where algorithmic
management is implemented and in terms of which
processes are replaced or augmented. Moreover,
there is considerable organizational choice around
whether to implement the suggestions and recommen-
dations made by the algorithmic systems. In this,
automated implementation of algorithmic suggestions
can be regarded as a distinct organizational choice. As
Fleming (2019: 27) notes: “It is not technology that
determines employment patterns or organizational
design but the other way around. The specific use of
machinery is informed by socio-organizational forces,
with power being a particularly salient factor.” We go
one step further, and as illustrated in Figure 1, present
algorithmic management as a sociotechnical
phenomenon shaped by both social and organization-
al forces. In the rest of this article, we broach how
algorithmic management may shift organizational
roles and power structures while being shaped by
organizational choices and competencies.
Figure 1. Algorithmic management as a sociotechnical phenomenon representing both social and technological forces.
Considering algorithms within a broader notion of
management thus makes visible the possible ramifications
of algorithmic management. It requires that we think
about interactions between humans and algorithms in
terms of the direction and development of organiza-
tions over the long term, and not just in the day-to-
day experiences of the worker. Understanding that
algorithmic systems are socially constructed and
embedded in existing power dynamics also means
these systems are neither created nor function outside
the biases rooted in the organizational cultures within
which they are implemented (Kellogg et al., 2020). For
this reason, algorithmic systems can inherit biases
embedded in previous decisions, and reenact historical
patterns of bias, discrimination, and inequalities (Chan
and Wang, 2018; Lepri et al., 2018). If an organization
has demonstrated historical biases in hiring, firing, pay,
or other managerial decisions, these biases will be
“learned” by the algorithm (Keding, 2021).
Lambrecht and Tucker (2019), for example, discovered
that fewer women than men were shown ads about STEM
careers even though the ads were gender neutral by
design. Amazon also scrapped its AI-powered recruit-
ing engine since it did not rank applicants for technical
posts in a gender-neutral manner (The Guardian,
2018). The company discovered the underlying algo-
rithm would build on the historical trends of resumes
submitted to the company over a 10-year period, which
were reflective of the broader male-dominated IT
industry.
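The mechanism of bias inheritance can be illustrated in a few lines. The sketch below is not a reconstruction of the Amazon system; it uses entirely synthetic data and hypothetical features to show how a model trained on biased historical labels acquires that bias.

```python
# Synthetic illustration of bias inheritance: a screening model trained on
# historically biased hiring labels assigns predictive weight to gender even
# though the underlying qualification signal is gender-neutral.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female (synthetic)
skill = rng.normal(0, 1, n)           # gender-neutral qualification signal
# Historical labels: equally skilled women were hired less often (the bias).
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # gender receives a large negative weight: the bias is "learned"
```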
Algorithmic management shapes power
relationships
Increased power to managers
Primarily, algorithmic systems aid managers to over-
come cognitive limitations in dealing with data
overload (Jarrahi, 2018). Algorithmic systems, such as
for CV filtering or time-scheduling, can streamline
processes and overall improve job quality for those
involved. However, the use of advanced algorithms cre-
ates new opportunities for managers to exercise control
over the workforce (Kellogg et al., 2020; Shapiro,
2018). As with the implementation of organizational
information systems, it is often the workers who are the
subjects of algorithmic management, while managers
are those implementing the decisions. In tune with
the long-standing spirit of Taylorism and scientific
management, algorithmic management carries the risk
of treating workers like mere “programmable cogs in
machines” (Frischmann and Selinger, 2018). As Bucher
et al. (2019) note in their study of Amazon Mechanical
Turk workers, this commodification and alienation of
workers is a widespread issue particularly in digitally-
mediated work where power imbalances can have wide
social impacts.
Platform-mediated gig work is critically dependent
upon algorithmic systems for control and there is a
clear hierarchy of power between the workers, the plat-
form, and the service recipients (organizations in some
cases). Current research highlights implicit and explicit
power asymmetries between the workers and service
recipients, as well as between the workers and plat-
forms (Newlands, 2020; Shapiro, 2018). As clients, ser-
vice recipients enjoy an upper hand in transactions.
Likewise, platforms use different mechanisms of con-
trol to withhold information from the workers or even
remove them from the platform. These control mecha-
nisms are visible and often focus on three major out-
comes: ensuring the integrity of transactions,
protecting the platform from disintermediation, and
monitoring workers’ performance (Jarrahi et al.,
2019; Newlands, 2020). Since many work activities
are mediated via platforms, gig work organizations
can utilize “soft” forms of workforce surveillance and
control to monitor how workers spend their time and
where they are located (Duggan et al., 2019; Shapiro,
2018). Wood et al. (2018) also found that algorithmic
management creates symbolic power structures around
features such as reputation and ratings. Yet, while
“analogue” parallels to gig work exist, such as standard
taxi driving, food-delivery work, or creative freelanc-
ing, gig work platforms usually emerged as new organ-
izations without pre-existing relationships among the
management and the workforce. Deliveroo, for
instance, was founded ex nihilo rather than as the dig-
ital transformation of a pre-existing food-delivery com-
pany. Uber, similarly, was never a taxi company with
“standard” workplace relationships. Since research to
date on algorithmic management has primarily focused
on gig work settings, it is therefore important to
highlight how key lessons and findings on gig work
are not perfectly translatable to a standard context.
In more standard work settings, where digital plat-
forms and algorithmic systems are not the primary
means of organizing work, algorithmic control adds
to pre-existing power dynamics and regimes of control.
In these contexts, most workers are connected to the
organization through more conventional coordination
mechanisms such as organizational hierarchies and tra-
ditional employment arrangements. As such, this socio-
technical emergence is a critical factor in determining
the motivation, scope, and implementation of algorith-
mic systems. Rather than mass-scale algorithmic man-
agement, as observed in gig work, in standard
organizations, only a small number of managerial func-
tions may be replaced or augmented with algorithmic
systems. One key reason is the high cost and uncertain
returns of algorithmic systems, which usually have to
be sourced from a third-party vendor (Christin, 2017).
The extra effort needed to draw on algorithmic deci-
sions also comes with heavy investments of time and
energy, such as aligning human and algorithmic cogni-
tive systems (Burton et al., 2020).
A primary consideration for the more gradual roll-
out of algorithmic management in standard work
settings is that augmenting or replacing managerial
processes has direct implications for either increasing
or decreasing the managerial prerogatives already in
place. As such, we can observe that the roll-out of algo-
rithmic systems is swiftest when increasing managerial
power. Automated scheduling systems, widely adopted
in the retail industry, shift more power from workers to
managers. The dynamic and “just-in-time” approach
presented by these systems provides a justification for
allocating shifts at short notice and in smaller increments
in response to changes in customer demands. Such fluc-
tuating schedules imposed by the automated systems
create negative consequences for workers, such as
higher stress, income instability, and work–family con-
flicts (Mateescu and Nguyen, 2019).
Using algorithms to nudge workers’ behaviors can
be more subtle, but no less effective. For example,
organizations may use sentiment analysis algorithms
in their people analytics efforts to assess the “vibes”
of teams and to identify ways to increase productivity
and compliance (Gal et al., 2020). Inspired by the gig
economy, where the gig workers may get pay raises
based on client reviews, firms such as JP Morgan
build on algorithms to collect and analyze constant
feedback to evaluate the performance of employees
and ascertain compensation (Kessler, 2017). Darr
(2019) also describes the use of “automatons” to con-
trol workers in a computer chain store, where the com-
pany leverages algorithmically generated sales contests
to control workers’ behavior.
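As a toy illustration of the kind of signal such sentiment-based people analytics might aggregate, the sketch below scores team messages against a small hand-built lexicon; the word lists and messages are invented, and production systems rely on trained language models rather than word lists.

```python
# Toy lexicon-based sentiment scoring of team messages -- a crude stand-in
# for the "vibe" signals people analytics tools aggregate. All word lists
# and messages here are illustrative assumptions.
import string

POSITIVE = {"great", "thanks", "good", "shipped"}
NEGATIVE = {"blocked", "late", "tired", "frustrated"}

def sentiment(message: str) -> int:
    # Strip punctuation, lowercase, and count lexicon hits.
    words = message.lower().translate(str.maketrans("", "", string.punctuation)).split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

channel = ["Great work, thanks, we shipped the release!", "I'm blocked and frustrated."]
team_score = sum(sentiment(m) for m in channel) / len(channel)
print(team_score)  # 0.5 here: > 0 reads as a positive "vibe", < 0 as negative
```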
Algorithmic surveillance techniques have also
emerged in more totalizing organizations, which are
using wearables such as wristbands or harnesses to ana-
lyze workers’ performance and nudge them toward
desirable behaviors through vibration (Newlands,
2020). In warehouses, for example, algorithms can
automatically enforce pace of work, and in some
cases result in demoralization of the workers or even
physical injuries (Dzieza, 2020). Truckers’ locations
and behaviors can also be monitored through GPS sys-
tems, allowing the dispatchers to algorithmically eval-
uate their performance (Levy, 2015). Algorithmic
surveillance has also been implemented to manage flex-
ible and remote workforces. During the Covid-19 pan-
demic, which required millions of workers to work
remotely, organizations started to roll out systems
such as InterGuard to monitor the remote workforce
by collecting and analyzing computer activities such as
screenshots, login times, and keystrokes as well as mea-
suring productivity and idle time (Laker et al., 2020;
Newlands et al., 2020). Although researchers have
identified strategies that workers use to work around
algorithmic control and reclaim their agency (Bucher
et al., 2021), workers often find them difficult to con-
front and change (Watkins, 2020).
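A hedged sketch of how such a monitoring tool might derive an "idle time" metric from gaps between input events is given below; the five-minute threshold and the timestamps are our own illustrative assumptions, not the behavior of any named product.

```python
# Sketch of deriving "idle time" from input-event timestamps, the kind of
# metric remote-monitoring software reports. Threshold and events are invented.
from datetime import datetime, timedelta

events = [datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 9, 2),
          datetime(2021, 3, 1, 9, 40), datetime(2021, 3, 1, 10, 0)]
IDLE_THRESHOLD = timedelta(minutes=5)

# Any gap between consecutive events longer than the threshold counts as idle.
idle = sum((b - a for a, b in zip(events, events[1:]) if b - a > IDLE_THRESHOLD),
           timedelta())
print(idle)  # 0:58:00 -- the 38- and 20-minute gaps are flagged as idle
```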
Surveillance through algorithms in the workplace
can also take a “refractive” form, which refers to the
approach in which “monitoring of one party can facil-
itate control over another party that is not the direct
target of data collection.” (Levy and Barocas, 2018:
1166). Retailers, for example, are known to leverage
customer-derived data (gathered based on many data
points such as close monitoring of foot traffic) and
algorithm-empowered operational analysis to optimize
scheduling decisions, automate self services, and even
replace workers. Brayne (2020) examines how the roll-
out of big data policing strategies and algorithmic sur-
veillance technologies introduced to monitor crime
came to be used to surveil police officers themselves.
The officers’ reaction against this effort is telling of the
ways that surveillance can reconfigure work and pro-
fessional identity. Officers reacted not only against the
surveillance as an entrenchment of managerial over-
sight and a threat to their independence, but also as a
move away from the kind of experiential knowledge
that an officer brings to their work.
Decreased power to managers
While emerging as a powerful mechanism to increase
the control of management as a whole, algorithmic
management can also decrease the power and agency
of individual managers. For years, research has docu-
mented the decline of middle managers in post-
bureaucratic organizations (Lee and Edmondson,
2017; Pinsonneault and Kraemer, 1997) and has
sought to define what functions and identity these posi-
tions may entail (Foss and Klein, 2014). Recent advan-
ces in AI and the prospect of introducing algorithmic
management may further complicate these roles
(Noponen, 2019).
In many traditional organizational settings, the
implementation of algorithms is still based on dash-
boards or “decision support” systems, which generate
recommendations to managers on actions to take.
Explaining how managers may work alongside algo-
rithms, Shrestha et al. (2019) describe three categories:
full delegation, sequential decision-making, and aggre-
gated decision-making. In cases of full delegation, man-
agerial agency is almost entirely subsumed into the
algorithmic systems. The key moment of managerial
intervention becomes the design and development of
the system.
Yet, delegating decision-making to algorithms may
deprive managers of critical opportunities to develop
tacit knowledge, which primarily emerges from experi-
ential practices. Tacit knowledge often derives from
opportunities to practice judgment when directly
involved in decision-making. Social practices of
decision-making through trial and error help humans
retain and internalize practical knowledge. A recent
empirical study about recruiters using AI-based hiring
software provides a nuanced picture (Lin et al., 2021).
While AI-based sourcing tools provided opportunities
to identify new talent pools and newer keywords and
skill sets associated with different jobs, recruiters did
not have enough control over algorithmic recommen-
dation criteria. The lack of control put recruiters in a
position where they merely accepted good recommen-
dations and sometimes deprived them of the chance to
develop their own strategies. Taking away these first-
hand experiences may risk turning managers into
“artificial humans” that are shaped and used by the
smart technology, not the other way around (Demetis
and Lee, 2018). While a significant amount of work on
algorithmic management has focused on decision-
making, contemporary notions of management imply
a large array of other skills and responsibilities. This
provokes us to think about what it would mean for an
algorithm to attempt the role of liaison, spokesperson,
or figurehead, for instance.
An interesting research direction along these lines
focuses on whether and how algorithmic management
may extend into the realm of what Harms and Han
(2019) call “algorithmic leadership,” where smart
machines assume leadership activities of managers
such as motivating, supporting, and transforming
workers. Future research may ask whether algorithms
can go beyond effective handling of task-related ele-
ments of leadership and automate relational aspects
as well, given the self-learning capacities of cutting-
edge techniques such as AlphaZero.² Future research
may explore “how humans come to accept and follow a
computer leader,” (Wesche and Sonderegger, 2019:
197) or “whether, or when, humans will prefer to
work with AIs that appear to be human or are clearly
artificial” (Harms and Han, 2019: 75). There is an alter-
native argument that there will be a premium on soft
skills and human intuition (Ferràs-Hernández, 2018).
In a more collective cultural sense, workers and their
supervisors would enter relationships based on social
exchange, and would view their supervisor’s decisions
in the context of norms of reciprocity and commitment
(Cappelli et al., 2019). However, such an empathetic
relationship based on goodwill is almost impossible
to develop with an algorithmic manager (Duggan
et al., 2019).
Competencies for an algorithmic
organization
Shifting roles
The emergence of algorithmic management is changing
the roles of workers and managers based on the need to
manage and interact with algorithmic systems. A socio-
technical perspective argues that social actors play an
active role in shaping and appropriating technological
systems. In other words, workers and managers are not
passive recipients of algorithmic results; they could find
ways to develop a functioning understanding of algo-
rithmic systems, work around issues such as trust, and
align the system to their needs and interests.
Addressing the need for new organizational roles to
handle algorithmic systems, Wilson et al. (2017) iden-
tify three emerging jobs: trainers, explainers, and sus-
tainers. Workers need to teach algorithms how to
perform organizational tasks, open the “black-boxes”
of algorithmic systems by explaining their decision-
making approach to business leaders, and finally
ensure the fairness and effectiveness of algorithms to
minimize their unintended consequences. Gal et al.
(2020) similarly suggest that organizations need to
establish what they term “algorithmists”, who monitor
the algorithmic ecosystems. They are “not just data
scientists, but the human translators, mediators, and
agents of algorithmic logic” (Gal et al., 2020: 10).
Given a long-standing organizational culture obsessed
with efficiency and the extraction of maximum value
from workers, there is a risk of siloing algorithmic compe-
tencies within a small number of organizational members. A
consequence of this culture is a mixture of upskilling
and deskilling disparate sets of workers with regard to
the managerial systems. While some groups in organ-
izations will be trained to use the new algorithmic
systems, thus increasing their power and position,
others will not be trained and rather be forced to imple-
ment decisions that previously they had the power to
shape. Both the organization and worker may come to
view the deskilled manager as only an “appendage to
the system” (Zuboff, 1988).
Indeed, years of research demonstrate common
system design mindsets that define workers’ roles
and responsibilities based on automation demands:
the designer seeks to automate as many sub-
components of the sociotechnical work system as pos-
sible and leave the rest to human operators. As a result,
an unintended consequence of such common design
approaches is to deskill humans and to relegate them
to uninspiring roles. Insights from the use of algorith-
mic management in the gig economy reflect the same
findings where workers are considered replaceable
“human computers” and may engage in (ghost) work
that is repetitive, does not result in any new learning,
and only intends to train AI systems (Gray and Suri,
2019).
Algorithmic competencies
In a workplace where algorithmic management is
implemented, it is essential for managers and workers
to develop algorithmic competencies (Jarrahi and
Sutherland, 2019). However, research on algorithmic
competencies, particularly in standard work contexts,
is still in its infancy. Algorithmic competencies can be
understood as skills that help workers in developing
symbiotic relationships with algorithms. In addition
to data-centered analytical skills that facilitate interac-
tions between workers and algorithms, algorithmic
competencies involve critical thinking. Workers “have
a growing need to understand how to challenge the
outputs of algorithms, and not just assume system deci-
sions are always right” (Bersin and Zao-Sanders, 2020).
Since algorithmic management involves a complicated
network of people, data, and computational systems,
the ability to understand, manipulate, and address the
algorithmic systems on both a technical and organiza-
tional level will determine an individual’s agency and
power at work. Grønsund and Aanestad (2020), for
instance, have observed that the role of human workers
shifted with the introduction of algorithmic systems,
and that algorithmic competencies include auditing
and altering algorithms.
An individual’s power in relation to the algorithm,
and in relation to the organization as a whole, will
therefore depend on their ability to understand and
interact with the algorithmic systems. A lack of com-
petency with the tools of work can reduce workers’
sense of autonomy over their work as well as their abil-
ity to make informed decisions and self-reflect (Jarrahi
et al., 2019). Without algorithmic competencies and the
active role of workers in constructing them, artificial
and human intelligence do not yield “an assemblage of
human and algorithmic intelligence” (Bader and
Kaiser, 2019). As such, algorithmic opacity can be
employed as a control mechanism to advance the
organization’s objectives. Indeed, recent research
shows how gig work platforms withhold information
about how algorithms operate to maintain soft control
of the workforce (Shapiro, 2018). Information on how
platforms’ payment algorithms operate, for instance, is
used to prevent gig workers from collectivizing as
detailed by Van Doorn (2020) in his study of Berlin-
based delivery workers.
Attitudes toward algorithms (algorithm aversion
and cognitive complacency)
The development of algorithmic competencies is, how-
ever, bounded by individual attitudes toward algo-
rithms (Lichtenthaler, 2018). These attitudes, namely
algorithm aversion and cognitive complacency, can be
viewed as oppositional attitudes as they reflect a resis-
tance to either using algorithms or to understanding
and actively shaping their deployment.
Algorithm aversion refers to people’s hesitance to
use algorithmic results, usually after observing imper-
fect performance by algorithms; this can even occur in
situations where the algorithm may outperform
humans in certain accuracy measures (Dietvorst
et al., 2016; Prahl and Van Swol, 2017). Algorithm
aversion reveals a lack of trust in algorithm-generated
advice and has been observed in the work of profes-
sional forecasters in various domains where managers
resisted the integration of available forecasting algo-
rithms in their work practices. The full array of reasons
behind algorithm aversion is not completely known.
Recent experimental studies suggest that after receiving
bad advice from human and computer advisors, utili-
zation of computer-provided advice decreased more
significantly, and human decision-makers still find
more common ground with their human counterparts
than with non-human, artificial systems (Prahl and
Van Swol, 2017). Dietvorst et al. (2016) also suggest
that algorithm aversion decreases if decision makers
can retain some control over the outcome and the
option to modify the algorithms’ inferences.
Algorithm aversion can lead to a lack of functioning
understanding about how algorithms actually operate
and their impact on one’s own work.
However, we can observe a critical distinction
between aversion to algorithms in general, and an aver-
sion to specific algorithmic systems. For instance,
acceptance of workplace monitoring is more likely if
it enhances labor productivity, but there is a tendency
to reject it if it is used for monitoring health and per-
formance (Abraham et al., 2019). It must be underlined
that the implementation of algorithmic management is
usually top-down and imposed upon the majority of
the workforce who have limited power to resist.
Kolbjørnsrud et al. (2017), for instance, demonstrate
that support for the introduction of algorithmic man-
agement correlates with rank and is impacted by cul-
tural and national differences. As they explain: “top
managers relish the opportunity to integrate AI into
work practices, but mid-level and front-line managers
are less optimistic”. Isomorphic pressures to “keep up”
with rival organizations, or to maintain a sense of dig-
ital innovation, can thus result in the implementation
of algorithmic management systems by top manage-
ment that are neither welcomed nor necessary.
Counterbalancing aversion to algorithms, however, is
a trend toward cognitive complacency (Logg et al., 2019).
Cognitive complacency may manifest itself when
human decision-makers do not inquire into the factors
driving inferences made by algorithms (Newell and
Marabelli, 2015). In organizational contexts, algorith-
mic decision-making can “lead to very superficial
understandings of why things happen, and this will def-
initely not help managers, as well as ‘end users’ build
cumulative knowledge on phenomena” (Newell and
Marabelli, 2015: 10). Previous research suggests that
organizations imposing increased workload and time
pressure may raise the likelihood of workers’ overre-
liance on automated systems and overuse of automated
advice. That is, as workload increases, the workers may
cut back on actively monitoring automated systems
and uncritically accept the decision outputs to keep
up with task demands (Chien et al., 2018). The proce-
dural character of the algorithm may also be regarded
as a kind of neutrality or objectivity, leading its deci-
sions to be taken as authoritative by default.
This oppositional attitude toward either understand-
ing or critiquing algorithmic systems echoes similar
observations regarding a “machine heuristic”, which
refers to the mental shortcut of assuming that interac-
tions with machines (as opposed to another human
being) are more trustworthy, unbiased, or altruistic
(Sundar and Kim, 2019). As such, people may over-
trust intelligent agents and defer to them in high-
stakes decision-making such as in emergency situations
(Wagner et al., 2018). Earlier research on automation
suggests, in some contexts, that humans may assign
more value and less bias to decisions generated by
automated systems compared to other sources of
expertise (Parasuraman and Manzey, 2010). However,
specific elements of an organizational context can fuel
cognitive complacency. What is termed cognitive com-
placency may, in this regard, often be a manifestation
of helplessness due to power imbalances.
Opacity
In a context where knowledge is power, how algorith-
mic management develops in standard work settings is
fundamentally shaped by access to knowledge, specifi-
cally information and understanding regarding the
algorithmic systems. This is primarily an issue of algo-
rithmic opacity which operates sociotechnically
through an intertwinement of technical and organiza-
tional features. For example, Burrell (2016) draws dis-
tinctions between opacity as intentional concealment,
such as with corporate trade secrets, opacity due to a
lack of technical literacy, and opacity due to algorith-
mic complexity and scale. Here, we discuss two impor-
tant intertwined facets of opacity that impact the
performance of algorithmic management in organiza-
tions: technical opacity and organizational opacity.
While technical opacity is rooted in the specific mate-
rial features and design of emerging algorithmic sys-
tems, organizational opacity entails how algorithms
may reinforce the opacity of broader organizational
choices.
Technical opacity
In terms of technical opacity, a frequent concern
around algorithmic management is the opaque or
“black box” character of AI systems (Burke, 2019).
This term should be disaggregated, however, as the lit-
erature discusses a number of different ways in which
algorithms are “black-boxed” (Obar, 2020). First, the
algorithm may be black-boxed in the way that any
computational or bureaucratic system can be. Once in
place, its functioning is taken for granted, and the deci-
sions or contingencies that went into designing the
algorithm may not be engaged again. Newell and
Marabelli (2015: 5) found that: “discriminations are
increasingly being made by an algorithm, with few indi-
viduals actually understanding what is included in the
algorithm or even why.” There is also an aspect of
black-boxing that has been ascribed to algorithms as
an inherent property of the computationally convolut-
ed ways in which they operate. Many of the techniques
used in ML, such as neural networks, are immensely
complex, often involving many layers of computational
processing in high dimensionality. Such complexities
often make it difficult for those who are not AI experts
to understand, even at a high level, how these techni-
ques operate (Shrestha et al., 2019).
In response to this need for greater human compre-
hension of AI systems, there has been a rapidly grow-
ing subfield within the technical AI community called
“Explainable AI” (XAI) (Adadi and Berrada, 2018).
Fueled by the U.S. Defense Advanced Research
Projects Agency program of the same name, XAI
techniques attempt to render immensely complex
models more understandable by humans through var-
ious technical methods. These can produce “local” or
“global” explanations, with a local explanation address-
ing a particular input/output prediction (e.g. why did
the model assign X label to Y input?) and a global
explanation relating to the model overall (e.g. what
are the prominent features the model will take into
account when reaching a prediction) (Adadi and
Berrada, 2018). Such methods are largely aimed at
the version of black-boxing associated with ML’s
inherent complexity. Yet, scholars have noted a gap
between technical XAI advances (which often are
most useful to AI developers in processes of debugging)
and the types of explanations that workers might need
or want in actual use settings (Gilpin et al., 2019). In
considering algorithmic management, different sets of
concerns around explainability become salient, one of
which is the varying technical literacies of workers who
might interact with AI systems (Wang et al., 2019).
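To ground the local/global distinction, the following sketch computes a global feature-importance summary and a crude local, per-instance explanation for a fitted model using standard scikit-learn tooling; it is a minimal illustration on synthetic data, not a survey of XAI methods.

```python
# Minimal sketch of "global" vs. "local" explanations for a fitted model,
# using synthetic data. Dedicated XAI toolkits (e.g. SHAP, LIME) are more
# principled; this only illustrates the distinction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # feature 1 is irrelevant
model = RandomForestClassifier(random_state=1).fit(X, y)

# Global: which features the model relies on across the whole dataset.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
print(result.importances_mean)                  # feature 0 dominates

# Local (crude): how one instance's score shifts when each feature is
# replaced by its dataset mean -- "why did the model score this input?"
x = X[0].copy()
base = model.predict_proba([x])[0, 1]
for j in range(X.shape[1]):
    perturbed = x.copy(); perturbed[j] = X[:, j].mean()
    print(j, round(base - model.predict_proba([perturbed])[0, 1], 3))
```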
Organizational opacity
In addition to technical opacity, there is the issue of
organizational opacity: the withholding of information by the
organization due to strategic interests and intellectual
property. Algorithmic opacity can be compounded by
organizational relationships or power dynamics within
and outside the organization. Developing algorithmic
management systems relies on high upfront costs with
uncertain long-term benefits (Keding, 2021). The scar-
city and high cost of AI talent explain why most
gig-work platforms require extensive venture capital
funding to develop the underlying technological infra-
structures. While gig work platforms are opaque in
their processes, most of their algorithms are developed
“in-house”, giving their management the ability to con-
tinuously “tweak” the algorithm. Indeed, Uber has
become notorious for continuous changes to their algo-
rithms, to the detriment of both workers and custom-
ers. In standard organizations (non-platform based),
however, technological infrastructures are often pro-
vided through the cloud by third-party AI suppliers
on an AI-as-a-service basis (Parsaeefard et al., 2019).
Organizations using Microsoft Teams, for instance,
may rely on the “365 productivity score” to rate
worker performance based on how much they use
Microsoft 365 products (e.g. Word, Excel, or Teams).
This externalization, however, not only restricts
managerial agency in “tweaking” the algorithms; it
also prevents managers from understanding how the algo-
rithmic systems operate.
From an intra-organizational perspective, a major
concern is also the opacity of the broader organization-
al workflow within which AI systems are embedded
and the “high” or “low” stakes involved along that
flow (Rudin, 2019). The overarching workflow itself
may be complex, with several processes unfolding
across time and space. Entwined with the concept of
algorithmic authority, Polack (2020) refers to how pro-
fessional gatekeeping may hinder algorithmic transpar-
ency since only ML developers or top managers may
feel the need (or get permission) to understand how the
algorithmic system works.
Overcoming opacity
Recently, regulatory efforts have attempted to make
algorithms more transparent and explainable
(Felzmann et al., 2019). Such approaches can be applied
to some degree within organizational settings. One
approach is the algorithmic audit, which is predicated
on the idea that algorithmic biases may produce a
record that can be read, understood, and altered in
future iterations (Diakopoulos, 2016; Silva and
Kenney, 2018). Conversely, Polack (2020) has pro-
posed that, as opposed to reverse-engineering algo-
rithms, effort be invested in “forward-engineering”
algorithms to establish how their design is shaped by
varying constraints. Auditing is especially important in
enterprise applications of AI, such as in regulated
industries (e.g. health/medical, finance), as well as in
the European Union where the GDPR grants individ-
uals a “Right to Explanation” (Casey et al., 2019).
There are several significant types of information, the
disclosure of which could help provide a kind of algo-
rithmic literacy (including human involvement, data,
the model, inferencing, and algorithmic presence)
(Diakopoulos, 2016; Jarrahi and Sutherland, 2019).
As organizations increasingly adopt (or even rely on)
algorithms as the backbone for people management,
workplace-related concerns may make algorithmic
management a pressing issue for contemporary
labor laws.
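As one concrete illustration of a single algorithmic audit step, the sketch below applies the "four-fifths" disparate-impact heuristic familiar from US employment testing to a screening system's outputs; the decisions, group labels, and threshold are illustrative assumptions.

```python
# Sketch of one audit check: compare a screening system's selection rates
# across groups using the "four-fifths" disparate-impact heuristic.
# The decisions, group labels, and 0.8 threshold are illustrative.
def disparate_impact(decisions, groups, reference):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = {g: sum(d for d, grp in zip(decisions, groups) if grp == g)
                / groups.count(g)
             for g in set(groups)}
    return {g: rate / rates[reference] for g, rate in rates.items()}

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = shortlisted by the algorithm
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
ratios = disparate_impact(decisions, groups, reference="m")
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios, "-> audit flag:", flagged)   # f selected at 1/3 the rate of m
```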
However, in the absence of labor-specific and
organization-specific policies that govern algorithmic
management, access to knowledge about which algo-
rithms are deployed, how they are enacted, and what
impact they have on each worker is limited. In
platform-mediated gig work, considerable attention
has been devoted to “unboxing” the algorithms, trying
to ascertain how the different algorithmic systems func-
tion (Chen et al., 2015; Van Doorn, 2020). Algorithmic
management has become a target of intense analysis
and critique among the disparate workforce, as well
as by the media and research community. As a result,
more information is available about how gig work plat-
forms operate than about how the more everyday
instances of algorithmic management function, which
are easier to overlook and not centralized within a
single platform. Continuous audits of algorithmic sys-
tems within an organization may provide an internal
form of inquiry into how algorithms organize work-
related processes (Buhmann et al., 2020).
Organizations can also coordinate regular communica-
tion and deliberation opportunities to allow stakehold-
ers to collectively assess the development and impact of
algorithmic management systems. The transparency by
design framework (Felzmann et al., 2020) or a partic-
ipatory AI design framework (Lee et al., 2019) can
provide procedures and tools that enable such stake-
holder participation. Yet, one caveat is that developing
such forms of transparency still requires the active
engagement of workers to uncover the information.
As demonstrated from the gig work setting, algorith-
mic opacity is not overcome without struggle, effort,
and risk.
Discussion and conclusion
Algorithmic management is changing the workplace
and the relationships between workers, managers, and
algorithmic systems. Three key trends have supported
the rise of algorithmic management. First, shifting
norms of what constitutes work (e.g. project-centric
work arrangements, platform work, and non-
standard contracts); second, the expanding technical
capabilities of machine-learning-based algorithms to
replace discrete managerial tasks (Khan et al., 2019);
and third, wide-scale micro-instances of organizational
choice to use algorithmic management due to local eco-
nomic and strategic goals.
While intelligent systems continue to reshape our
conceptions of work knowledge, boundaries, power
structures, and overall organization, self-learning algo-
rithms have the potential to chip away at several foun-
dational notions of work and organization. For
example, the deployment of intelligent systems will
usher in new work systems with different divisions of
work between machines and humans (where more
mundane and data-centric tasks are assigned to
machines while humans engage in tasks requiring
social intelligence, tacit understanding, or imagina-
tion). Moreover, new roles will be defined for managers
as strategic and creative thinkers rather than only
coordinators of organizational transactions. While
public concerns today focus on machines taking jobs
from humans, care and concern are also needed around
how these machines will contribute to the management
of human actions.
Adopting a sociotechnical perspective, we have
argued that algorithmic management embodies both
social and technological elements that interact with
one another and together shape algorithmic outcomes.
Algorithms must be recognized in the social and
political context of employment relationships
(Orlikowski and Scott, 2016). Issues such as opacity
around the application of algorithms in organizations
derive from the interaction of technological and mate-
rial characteristics of algorithms, as well as from their
surrounding organizational dynamics. For example,
unique characteristics of emerging AI algorithms (pow-
ered by deep learning) set them apart from previous
generations of AI systems, and render their inferences
intrinsically opaque due to the complicated nature of
underlying neural networks. However, organizational
politics and interests in minimizing public disclosure
of decision-making processes can further deepen or
even build from the opacity of algorithmic manage-
ment to deflect accountability. As such, it is often
hard to pinpoint the origin of these problems and sep-
arate the two elements as both the social and technical
are entangled in current practices and are considered
co-constitutive.
A driving concern throughout the implementation
of algorithmic management is organizational account-
ability: holding algorithmically-driven decisions
accountable to various stakeholders within and outside
the boundaries of the organization (Diakopoulos,
2016; Mateescu and Nguyen, 2019). Discussions sur-
rounding algorithmic accountability frequently refer
to the transparency of algorithms, how organizations
practically engage with opaque algorithms, and how to
develop a sense of trustworthiness among organiza-
tional stakeholders (Buhmann et al., 2020). When
addressing systemic discrimination, even simple over-
sight of algorithmic functions is problematic due to a
multiplicity of stakeholders, interests, and sociopoliti-
cal factors innate to algorithms. For example, blame
was placed on a distribution “algorithm” at Stanford
Medical Center when controversy erupted over the
misallocation of Covid-19 vaccines among different stakeholders.
Administrators, in this case, were prioritized over
frontline doctors. However, the algorithm was in fact
not a complicated deep learning one, but quite a simple
decision-tree algorithm designed by a committee (Lum
and Chowdhury, 2021). A technocentric culture often
fails to provide full rationales for decision-making and
presents algorithmic results as “facts” to users rather
than as probabilistic predictions. Rubel et al. (2019) refer
to this as agency laundering wherein human decision
makers launder their agency by distancing themselves
from morally suspect decisions and by assigning the
fault to automated systems. This organizational
approach goes against algorithmic accountability,
which “rejects the common deflection of blame to an
automated system by ensuring those who deploy an
algorithm cannot eschew responsibility for its actions”
(Garfinkel et al., 2017).
Algorithmic management touches upon many stake-
holders, but no one inside the work context is impacted
by algorithmic management more than managers and
workers themselves. The relationship between the two
parties will continue to be reconfigured and negotiated
through uses of algorithms in organizations. Not only
does algorithmic management reconfigure the power
dynamics in the workplace but it also embeds itself in
pre-existing power and social structures of the organi-
zation. In short, the deployment of algorithmic man-
agement in organizational work introduces novel
“human–machine configurations” that could transform
relationships between managers and workers and their
respective roles (Grønsund and Aanestad, 2020). For
example, algorithmic management may translate into
cost-cutting and consequently labor-cutting measures
(Mateescu and Nguyen, 2019).
With the rise of algorithmic systems, human managers
are tasked with deciding what kinds of algorithmic soft-
ware to adopt in their organization, whether it is for per-
formance reviews, incentive amounts, or departure
alerts. While these do not entirely remove human
decision-making from the equation, they do encourage
new ways of approaching, understanding, and acting
upon such information. Workers, though, who are
faced with algorithmic management processes within
their workplace, as of now have little recourse to detect,
comprehensively understand, or work around undesir-
able outcomes. Protecting the dignity and the wherewith-
al of workers—and those who manage their work—
remains a critical concern as algorithmic management
becomes a more commonplace phenomenon. Yet, if the
gig economy is any template, reward-oriented algorith-
mic management processes are unlikely to favor workers
over other stakeholders. While this scenario is hypothet-
ical, more public awareness and scrutiny are needed about
the role of learning and self-learning algorithms in
influencing algorithmic management trends.
Throughout this article, we argued that algorithmic
management is a sociotechnical phenomenon. This
means that while algorithms have a central bearing on
how work is managed, their outcomes in transforming
management and relationships between workers and
managers are socially constructed and enacted. Future
research is, however, needed to investigate the mutual
shaping of algorithms that manage work and unique
dynamics of different work contexts. In this regard, we
envision three specific directions for future research.
First, as noted, much of the discourse around algo-
rithmic management is rooted in platform-mediated gig
work. Gig work’s unique characteristics, such as a lack
of pre-existing management systems and non-standard
work arrangements, limit the transferability of current
findings to standard work settings. It is therefore
important to explore contextual variations in how
algorithmic management unfolds and corresponding
sociocultural differences across industries and organi-
zational contexts. For example, algorithmic manage-
ment may appear differently in organizations with
flat hierarchies and more democratic organizational
cultures than in bureaucratic organizations. Studies of
non-platform parallels to popular gig work industries,
such as food delivery and freelancing, would be
particularly valuable, since we could observe
the differences in power dynamics when algorithmic
management is implemented from the start or later in
an organization’s history.
Second, the use of intelligent systems empowered by
deep learning in more traditional organizations is still
embryonic. As such, most research on algorithmic
management and the impact of algorithms in organiza-
tions concerns previous generations of AI (i.e. rule
based AI); so this work may not fully reflect some
of the technological characteristics of emerging AI sys-
tems and their growing applications in algorithmic
management. Future research is needed to examine
the interplay of these technological capabilities and
ways that organizations manage work in practice.
Third, empirical research into the worker perspec-
tive, such as through ethnography, is vitally important
in determining how algorithmic management is experi-
enced “on the front lines” in standard organizations.
Participatory action research, to understand the multi-
stakeholder and socio-technical nature of algorithmic
management implementation, would also offer consid-
erable value in determining how to ensure a fair and
democratic roll-out of algorithmic management.
Conversely, direct participatory research may identify
normative parameters of algorithmic management,
highlighting where organizations should resist isomor-
phic pressures and avoid implementing algorithmic
management at all.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
This work was supported by the Research Council of Norway
within the FRIPRO TOPPFORSK (275347) project ‘Future
Ways of Working in the Digital Economy’ and by National
Science Foundation Award CNS-1952085.
ORCID iDs
Mohammad Hossein Jarrahi https://orcid.org/0000-0002-
5685-7156
Gemma Newlands https://orcid.org/0000-0003-0851-384X
Min Kyung Lee https://orcid.org/0000-0002-2696-6546
Will Sutherland https://orcid.org/0000-0002-3731-3129
Notes
1. Artificial Intelligence (AI) and Machine Learning (ML)
are often used interchangeably to refer to an array of com-
putational techniques that underlie contemporary “smart”
computational systems. AI is an umbrella term to broadly
refer to the discipline, of which ML is one branch (includ-
ing techniques such as Deep Learning and Random
Decision Forests). Unless referring specifically to an ML
technique, we use the broader colloquial term
“algorithms” throughout for clarity.
2. The AlphaZero algorithm is seen as ushering in a new era
of learning algorithms which builds on Deep Learning.
AlphaZero mastered the centuries-old game of Go (as
well as chess and shogi) using reinforcement learning, an ML technique
capable of iteratively learning how to make a sequence of
decisions to reach an optimal outcome.
References
Abraham M, Niessen C, Schnabel C, et al. (2019) Electronic
monitoring at work: The role of attitudes, functions, and
perceived control for the acceptance of tracking technolo-
gies. Human Resource Management Journal 29(4):
657–675.
Adadi A and Berrada M (2018) Peeking inside the black-box:
A survey on explainable artificial intelligence (XAI). IEEE
Access 6: 52138–52160.
Bader V and Kaiser S (2019) Algorithmic decision-making?
The user interface and its role for human involvement in
decisions supported by artificial intelligence. Organization
26(5): 655–672.
Bersin J and Zao-Sanders M (2020) Boost your team’s data
literacy. Harvard Business Review, 12 February. Available
at: https://hbr.org/2020/02/boost-your-teams-data-literacy
(accessed 9 March 2021).
Brayne S (2020) Predict and surveil: Data, discretion, and the
future of policing. New York: Oxford University Press.
Bucher E, Fieseler C and Lutz C (2019) Mattering in digital
labor. Journal of Managerial Psychology 34(4): 307–324.
Bucher E, Schou P and Waldkirch M (2021) Pacifying the
algorithm: Anticipatory compliance in the face of algorith-
mic management in the gig economy. Organization 28(1):
44–67.
Buhmann A, Paßmann J and Fieseler C (2020) Managing
algorithmic accountability: Balancing reputational con-
cerns, engagement strategies, and the potential of rational
discourse. Journal of Business Ethics 163: 265–280.
Burke A (2019) Occluded algorithms. Big Data & Society
6(2): 1–15.
Burrell J (2016) How the machine ‘thinks’: Understanding
opacity in machine learning algorithms. Big Data &
Society 3(1): 12.
Burton JW, Stein M and Jensen TB (2020) A systematic
review of algorithm aversion in augmented decision
making. Journal of Behavioral Decision Making 33(2):
220–239.
Cappelli P (2018) Are algorithms good managers? Human Resource Executive, 20 February. Available at: http://hrexecutive.com/are-algorithms-good-managers/ (accessed 9 March 2021).
Cappelli P, Tambe P and Yakubovich V (2019) Artificial
intelligence in human resources management: Challenges
and a path forward. California Management Review 61(4):
15–42.
Casey B, Farhangi A and Vogl R (2019) Rethinking explain-
able machines: The GDPR’s right to explanation debate
and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal 34: 143.
Chan J and Wang J (2018) Hiring preferences in online labor
markets: Evidence of a female hiring bias. Management
Science 64(7): 2973–2994.
Chen L, Mislove A and Wilson C (2015) Peeking beneath the
hood of Uber. In: Proceedings of the 2015 ACM Internet
Measurement Conference, Tokyo, Japan: The Association
of Computing Machinery (ACM), October 28–30, pp.
495–508.
Chien S-Y, Lewis M, Sycara K, et al. (2018) The effect of cul-
ture on trust in automation: Reliability and workload. ACM
Transactions on Interactive Intelligent Systems 8(4): 29.
Christin A (2017) Algorithms in practice: Comparing web
journalism and criminal justice. Big Data & Society 4(2):
1–14.
Crowston K and Bolici F (2019) Impacts of machine learning
on work. In: Proceedings of the 52nd Hawaii international
conference on system sciences, Hawaii, USA.
Darr A (2019) Automatons, sales-floor control and the con-
stitution of authority. Human Relations 72(5): 889–909.
Demetis D and Lee A (2018) When humans using the IT
artifact becomes IT using the human artifact. Journal of
the Association for Information Systems 19(10): 929–952.
Diakopoulos N (2016) Accountability in algorithmic decision
making. Communications of the ACM 59(2): 56–62.
Dietvorst BJ, Simmons JP and Massey C (2016) Overcoming
algorithm aversion: People will use imperfect algorithms if
they can (even slightly) modify them. Management Science
64(3): 1155–1170.
Dourish P (2016) Algorithms and their others: Algorithmic
culture in context. Big Data & Society 3(2): 1–11.
Duggan J, Sherman U, Carbery R, et al. (2019) Algorithmic
management and app-work in the gig economy: A
research agenda for employment relations and HRM.
Human Resource Management Journal 30(1): 114–132.
Dzieza J (2020) How hard will the robots make us work? The
Verge, 27 February. Available at: https://theverge.com/
2020/2/27/21155254/automation-robots-unemployment-
jobs-vs-human-google-amazon (accessed 9 March 2021).
Felzmann H, Fosch-Villaronga E, Lutz C, et al. (2020)
Towards transparency by design for artificial intelligence.
Science and Engineering Ethics 26: 3333–3361.
Felzmann H, Villaronga EF, Lutz C, et al. (2019)
Transparency you can trust: Transparency requirements
for artificial intelligence between legal norms and contex-
tual concerns. Big Data & Society 6(1): 1–14.
Ferràs-Hernández X (2018) The future of management in a world of electronic brains. Journal of Management Inquiry 27(2): 260–263.
Fleming P (2019) Robots and organization studies: Why
robots might not want to steal your job. Organization
Studies 40(1): 23–38.
Foss NJ and Klein PG (2014) Why managers still matter.
MIT Sloan Management Review 56(1): 73.
Frischmann B and Selinger E (2018) Re-Engineering
Humanity. Cambridge, UK: Cambridge University Press.
Gal U, Jensen TB and Stein M-K (2020) Breaking the vicious
cycle of algorithmic management: A virtue ethics
approach to people analytics. Information and
Organization 30(2): 100301.
Galliers RDS, Newell S, Shanks G, et al. (2017) Datification
and its human, organizational and societal effects: The
strategic opportunities and challenges of algorithmic deci-
sion-making. The Journal of Strategic Information Systems
26(3): 185–190.
Garfinkel S, Matthews J, Shapiro SS, et al. (2017) Toward
algorithmic transparency and accountability.
Communications of the ACM 60(9): 5.
Gilpin LH, Testart C, Fruchter N, et al. (2019) Explaining
explanations to society. arXiv Repository. Available at:
https://arxiv.org/abs/1901.06560 (accessed 9 March 2021).
Gray ML and Suri S (2019) Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. San Francisco, CA: Houghton Mifflin Harcourt.
Grønsund T and Aanestad M (2020) Augmenting the algo-
rithm: Emerging human-in-the-loop work configurations.
The Journal of Strategic Information Systems 29(2):
101614.
Harms PD and Han G (2019) Algorithmic leadership: The
future is now. Journal of Leadership Studies 12(4): 74–75.
Huws U (2016) Logged labour: A new paradigm of work organisation? Work Organisation, Labour & Globalisation 10(1): 7–26.
Jabagi N, Croteau AM, Audebrand LK, et al. (2019) Gig-
workers’ motivation: Thinking beyond carrots and sticks.
Journal of Managerial Psychology 34(4): 192–213.
Jarrahi MH (2018) Artificial intelligence and the future of
work: Human-AI symbiosis in organizational decision
making. Business Horizons 61(4): 577–586.
Jarrahi MH (2019) In the age of the smart artificial intelli-
gence: AI’s dual capacities for automating and informat-
ing work. Business Information Review 36(4): 178–187.
Jarrahi MH and Sutherland W (2019) Algorithmic manage-
ment and algorithmic competencies: Understanding and
appropriating algorithms in gig work. In: International Conference on Information, Washington, DC, USA, Cham, Switzerland: Springer, March 31–April 3, pp.578–589.
Jarrahi MH, Sutherland W, Nelson SB, et al. (2019)
Platformic management, boundary resources for gig
work, and worker autonomy. Computer Supported
Cooperative Work (CSCW) 29: 153–189.
Kahneman D, Rosenfield AM, Gandhi L, et al. (2016) Noise:
How to overcome the high, hidden cost of inconsistent
decision making. Harvard Business Review 94: 38–46.
Keding C (2021) Understanding the interplay of artificial intelligence and strategic management: Four decades of research in review. Management Review Quarterly 71: 91–134.
Kellogg KC, Valentine MA and Christin A (2020)
Algorithms at work: The new contested terrain of control.
Academy of Management Annals 14(1): 366–410.
Kessler S (2017) The influence of Uber ratings is about to be
felt in the hallways of one of the world’s largest banks.
Quartz, 13 March. Available at: https://qz.com/930080/jp-
morgan-chase-is-developing-a-tool-for-constant-perfor
mance-reviews/ (accessed 9 March 2021).
Khan M, Jan B and Farman H (2019) Deep Learning:
Convergence to Big Data Analytics. Singapore: Springer.
Kleinberg J, Ludwig J, Mullainathan S, et al. (2018)
Discrimination in the age of algorithms. Journal of Legal
Analysis 10: 113–174.
Köchling A and Wehner MC (2020) Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research 13: 795–848.
Kolbjørnsrud V, Amico R and Thomas RJ (2017) Partnering
with AI: How organizations can win over skeptical man-
agers. Strategy & Leadership 45(1): 37–43.
Laker B, Godley W, Patel C, et al. (2020) How to monitor
remote workers – ethically. MIT Sloan Management
Review, 20 November. Available at: https://sloanreview.
mit.edu/article/how-to-monitor-remote-workers-ethically/
(accessed 9 March 2021).
Lambrecht A and Tucker C (2019) Algorithmic bias? An
empirical study of apparent gender-based discrimination
in the display of STEM career ads. Management Science
65(7): 2966–2981.
Lee MK (2018) Understanding perception of
algorithmic decisions: Fairness, trust, and emotion in
response to algorithmic management. Big Data &
Society 5(1): 1–16.
Lee MK, Kusbit D, Kahng A, et al. (2019) WeBuildAI:
Participatory framework for algorithmic governance. In:
Proceedings of the 22nd annual ACM conference on
human–computer interaction (CSCW) 3, Austin, Texas,
USA. New York: The Association for Computing
Machinery (ACM), November 9–13, pp.1–35.
Lee MK, Kusbit D, Metsky E, et al. (2015) Working with
machines: The impact of algorithmic and data-driven
management on human workers. In: Proceedings of the
33rd annual ACM conference on human factors in
computing systems, Seoul, Korea, New York: The
Association for Computing Machinery (ACM), April
18–23, pp.1603–1612.
Lee MY and Edmondson AC (2017) Self-managing organi-
zations: Exploring the limits of less-hierarchical organiz-
ing. Research in Organizational Behavior 37: 35–58.
Leicht-Deobald U, Busch T, Schank C, et al. (2019) The
challenges of algorithm-based HR decision-making for
personal integrity. Journal of Business Ethics: JBE 160:
377–392.
Lepri B, Oliver N, Letouzé E, et al. (2018) Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology 31(4): 611–627.
Levy K (2015) The contexts of control: Information, power,
and truck-driving work. The Information Society 31:
160–174.
Levy K and Barocas S (2018) Privacy at the margins –
Refractive surveillance: Monitoring customers to manage
workers. International Journal of Communication Systems
12: 23.
Lichtenthaler U (2018) Substitute or synthesis: The interplay
between human and artificial intelligence. Research-
Technology Management 61(5): 12–14.
Lin L, Lassiter T, Oh J, et al. (2021) Algorithmic hiring in
practice: Recruiter and HR Professional’s perspectives on
AI use in hiring. In: The Proceedings of the AAAI/ACM
Conference on Artificial Intelligence, Ethics, and Society
(AIES 2021). A virtual conference. New York: Association
for Computing Machinery (ACM), May 19–21.
Logg JM, Minson JA and Moore DA (2019) Algorithm
appreciation: People prefer algorithmic to human judg-
ment. Organizational Behavior and Human Decision
Processes 151: 90–103.
Lum K and Chowdhury R (2021) What is an ‘algorithm’? It
depends whom you ask. MIT Technology Review, 26
February. Available at: https://technologyreview.com/
2021/02/26/1020007/what-is-an-algorithm/ (accessed 9
March 2021).
Mateescu A and Nguyen A (2019) Algorithmic management
in the workplace. Data & Society. Available at: https://
datasociety.net/wp-content/uploads/2019/02/DS_Algorith
mic_Management_Explainer.pdf (accessed 9 March 2021).
Meijerink J and Keegan A (2019) Conceptualizing human
resource management in the gig economy: Toward a plat-
form ecosystem perspective. Journal of Managerial
Psychology 34(4): 214–232.
Möhlmann M and Henfridsson O (2019) What people hate about being managed by algorithms, according to a study of Uber drivers. Harvard Business Review. Available at: https://hbr.org/2019/08/what-people-hate-about-being-managed-by-algorithms-according-to-a-study-of-uber-drivers (accessed 9 March 2021).
Newell S and Marabelli M (2015) Strategic opportunities (and
challenges) of algorithmic decision-making: A call for
action on the long-term societal effects of “datification”.
The Journal of Strategic Information Systems 24(1): 3–14.
Newlands G (2020) Algorithmic surveillance in the gig econ-
omy: The organization of work through Lefebvrian con-
ceived space. Organization Studies 42(5): 719–737.
Newlands G, Lutz C, Tamò-Larrieux A, et al. (2020) Innovation under pressure: Implications for data privacy during the COVID-19 pandemic. Big Data & Society 7(2): 2053951720976680.
Noponen N (2019) Impact of artificial intelligence on man-
agement. Electronic Journal of Business Ethics and
Organization Studies 24(2): 43–50.
Obar JA (2020) Sunlight alone is not a disinfectant: Consent
and the futility of opening Big Data black boxes (without
assistance). Big Data & Society. Epub ahead of print 23
June 2020. https://journals.sagepub.com/doi/full/10.1177/
2053951720935615
Orlikowski WJ and Scott SV (2016) Digital work: A research
agenda. In: Czarniawska B (ed) A Research Agenda for
Management and Organization Studies. Cheltenham, UK:
Edward Elgar Publishing, pp.88–96.
Parasuraman R and Manzey DH (2010) Complacency and
bias in human use of automation: An attentional integra-
tion. Human Factors 52(3): 381–410.
Parsaeefard S, Tabrizian I and Leon-Garcia A (2019)
Artificial intelligence as a service (AI-aaS) on software-
defined infrastructure. In: 2019 IEEE conference on stand-
ards for communications and networking (CSCN),
Granada, Spain: Institute of Electrical and Electronics
Engineers (IEEE), October 28–30, pp.1–7.
Pignot E (2021) Who is pulling the strings in the platform
economy? Accounting for the dark and unexpected sides
of algorithmic control. Organization 28(1): 208–235.
Pinsonneault A and Kraemer KL (1997) Middle management
downsizing: An empirical investigation of the impact of
information technology. Management Science 43(5):
659–679.
Polack P (2020) Beyond algorithmic reformism: Forward
engineering the designs of algorithmic systems. Big Data
& Society. Epub ahead of print 20 March 2020. https://
journals.sagepub.com/doi/pdf/10.1177/2053951720913064
Prahl A and Van Swol L (2017) Understanding algorithm
aversion: When is advice from automation discounted?
Journal of Forecasting 36(6): 691–702.
Rubel AP, Castro C and Pham A (2019) Agency laundering
and information technologies. Ethical Theory and Moral
Practice 22: 1017–1041.
Rudin C (2019) Stop explaining black box machine learning
models for high stakes decisions and use interpretable
models instead. Nature Machine Intelligence 1(5): 206–215.
Sawyer S and Jarrahi M (2014) Sociotechnical approaches to
the study of information systems. In: Topi H and Tucker
A (eds) Computing Handbook. Boca Raton, FL: Chapman
and Hall/CRC, pp.1–27.
Schoukens P and Barrio A (2017) The changing concept of
work: When does typical work become atypical? European
Labour Law Journal 8(4): 306–332.
Shapiro A (2018) Between autonomy and control: Strategies
of arbitrage in the “on-demand” economy. New Media &
Society 20(8): 2954–2971.
Shestakofsky B (2017) Working algorithms: Software auto-
mation and the future of work. Work and Occupations
44(4): 376–423.
Shrestha YR, Ben-Menahem SM and von Krogh G (2019)
Organizational decision-making structures in the age of
artificial intelligence. California Management Review
61(4): 66–83.
Silva S and Kenney M (2018) Algorithms, platforms, and
ethnic bias: An integrative essay. Phylon 55(1 & 2): 9–37.
Sundar SS and Kim J (2019) Machine heuristic: When we
trust computers more than humans with our personal
information. In: Proceedings of the 2019 CHI Conference
on human factors in computing systems, Glasgow,
Scotland, UK, New York: The Association for
Computing Machinery (ACM), May 2019, pp.1–9.
Sutherland W and Jarrahi MH (2018) The sharing economy
and digital platforms: A review and research agenda.
International Journal of Information Management 43:
328–341.
The Guardian (2018) Amazon ditched AI recruiting tool that
favored men for technical jobs. The Guardian, 10 October.
Available at: https://theguardian.com/technology/2018/
oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
(accessed 9 March 2021).
Van Doorn N (2020) At what price? Labour politics and
calculative power struggles in on-demand food delivery.
Work Organisation, Labour & Globalisation 14(1):
136–149.
Von Krogh G (2018) Artificial intelligence in organizations:
New opportunities for phenomenon-based theorizing.
Academy of Management Discoveries 4(4): 404–409.
Wagner AR, Borenstein J and Howard A (2018) Overtrust in
the robotic age. Communications of the ACM 61(9): 22–24.
Wang D, Yang Q, Abdul A, et al. (2019) Designing theory-
driven user-centric explainable AI. In: Proceedings of the
2019 CHI conference on human factors in computing systems,
Glasgow, Scotland, UK, New York: The Association for
Computing Machinery (ACM), 601, pp.1–15.
Watkins EA (2020) The “crooked set up”: Algorithmic fair-
ness and the organizational citizen. Available at: http://
fair-ai.owlstown.com/publications/1428 (accessed 9
March 2021).
Wesche JS and Sonderegger A (2019) When computers take
the lead: The automation of leadership. Computers in
Human Behavior 101: 197–209.
Wilson HJ, Daugherty P and Bianzino N (2017) The jobs that
artificial intelligence will create. MIT Sloan Management
Review 58(4): 14–16.
Wolf CT and Blomberg JL (2019) Evaluating the promise of
human-algorithm collaborations in everyday work practi-
ces. Proceedings of the ACM on Human-Computer
Interaction. Epub ahead of print. https://dl.acm.org/doi/
abs/10.1145/3359245
Wood AJ, Graham M, Lehdonvirta V, et al. (2018) Good gig,
bad gig: Autonomy and algorithmic control in the global
gig economy. Work, Employment & Society: A Journal of
the British Sociological Association 33(1): 56–75.
Yarger L, Payton FC and Neupane B (2020) Algorithmic
equity in the hiring of underrepresented IT job candidates.
Online Information Review 44(2): 383–395.
Zuboff S (1988) In the Age of the Smart Machine: The Future of Work and Power. New York, NY: Basic Books.