How Transparency and Value Affect Platform Workers’ Workaround Use
and Continuance Intention
Luc Becker
LMU Munich School of Management
Becker.luc@lmu.de
Abstract
Online labor platforms use algorithmic
management systems and algorithmic control to
automate management tasks traditionally performed by
humans. However, these platforms face a control-
performance dilemma: while granting workers control
over tasks increases satisfaction, a lack of algorithmic control over workers’ behavior impedes the system’s
performance. Although workers’ reactions to
algorithmic control, such as using workarounds or
leaving the platform, are well understood, the
underlying causes remain unclear. Addressing this
shortcoming, we tested our proposed research model
using survey data. We apply agency theory to examine
how transparency and value impact workaround use
and continuance intention. Our study reveals that
transparency reduces workaround use while its impact
is moderated by value, which also increases
continuance intention. However, we could not confirm a
significant relationship between workaround use and
continuance intention. We address this by introducing
the concepts of benevolent and malevolent
workarounds, prompting further discussion on
workarounds in the context of algorithmic management.
Keywords: Algorithmic Management, Algorithmic
Control, Workaround, Transparency
1. Introduction
Algorithms increasingly challenge our
understanding of what is human capability and what is
not (Schuetz et al., 2020). Digital systems armed with
algorithms assume traditional management
responsibilities (Möhlmann et al., 2021, p. 2001), such
as the coordination and control of business activities
(Tsoukas, 1994). Algorithmic management is most
prevalent in platform work, such as ride-hailing or food
delivery, where these systems can efficiently manage a
large number of workers (Benlian et al., 2022).
Currently, algorithmic management systems deployed
on platforms manage about 180 million workers
worldwide (Benlian et al., 2022; Manyika et al., 2016).
Platforms operate in a competitive market,
balancing customer demand and worker supply
(Möhlmann et al., 2021). When using algorithmic
management systems, platforms encounter a
performance control dilemma (Pregenzer et al., 2021;
Rühr, 2020): On the one hand, algorithmic management
systems allow workers to work autonomously and
efficiently by providing task-related information
(Göttel, 2021; Pregenzer et al., 2021; Tarafdar et al.,
2022). These systems ensure high-quality services
(Möhlmann et al., 2021) by controlling workers,
monitoring their behavior (Veen et al., 2020), and
sanctioning misbehavior (Wood et al., 2019). The
dilemma arises because granting workers too much control can backfire, resulting in low work performance and misuse (Chan et al., 2023; Pregenzer et al., 2021), while
excessive control by the platform can alienate workers
from the platform and reduce their intention to continue
working for it. This threatens the sustainability of
platforms, as they depend on the consistent work
performance of platform workers to provide service to
customers (Lee et al., 2015; Pregenzer et al., 2021;
Wiener et al., 2023).
The antecedents leading to workers’ reactions to algorithmic control on platforms remain inconclusive,
despite these reactions being well-documented.
Workers react by using workarounds, deviating from
prescribed processes (Möhlmann et al., 2023), and by
showing a decreased continuance intention, which
refers to their intention to keep working on a platform
(Kellogg et al., 2020; Wiener et al., 2023). While
fairness, privacy, and autonomy are tested antecedents
of workaround use and continuance intention (Wiener et
al., 2023), transparency, trust, and profitability are also
considered key influencing factors (Pregenzer et al.,
2021). When algorithmic management systems restrict
autonomy by controlling information and revenues, they
can trigger workers’ resistance (Kellogg et al., 2020).
However, the interplay of transparency and profitability
as antecedents of workaround use and continuance
intention is unexplored.
Similarly, the consequences of platform workers using workarounds as a reaction to algorithmic control are viewed in contradictory ways. On the one hand, workaround use is
viewed as a form of non-compliance (Ejnefjäll et al.,
2023; Pregenzer et al., 2021) with potentially
detrimental effects for both the platform and its
customers (Möhlmann et al., 2021). On the other hand,
workarounds are also seen as attempts by workers to
ease task execution and render systems workable
(Ejnefjäll et al., 2023; Zamani et al., 2021), suggesting
a willingness to continue working on the platform. This
dual perspective highlights the complexity of
understanding the true impact of workaround use.
To address these limitations, we investigate the
following two research questions (RQ):
RQ1: How do perceived transparency and
perceived value affect platform workers’ reactions to
algorithmic control?
RQ2: Why do platform workers employ
workarounds?
To answer these questions, we propose a research
model that we test using an online survey with 249
platform workers. We find perceived transparency and
perceived financial value to be antecedents of
workaround use and continuance intention.
Furthermore, we uncover an interaction effect of
perceived value on the relationship between perceived
transparency and workaround use. Finally, while we
find no support for the effect of workaround use on
continuance intention, we propose a differentiated view
on workaround use that future research can build on.
Our work sheds light on the consequences of
algorithmic control on platforms, contributing to the
growing body of knowledge on algorithmic
management (Benlian et al., 2022).
We organize the remainder of the paper as follows.
First, we explain the theoretical background of our
work. Second, we develop our hypotheses and research
model. Third, we present our methodological approach.
Fourth, we validate our results. Fifth, we discuss the
theoretical and practical implications of our results.
Finally, we conclude by summarizing the key
contributions of our paper.
2. Theoretical background
In this section, we provide the theoretical
background to our study. First, we summarize research
on algorithmic management. Second, we describe
workers’ reactions to algorithmic control. Finally, we
use agency theory to conceptualize the interaction of
workers and platforms.
2.1. Algorithmic management
Platform-based and traditional organizations have
started implementing algorithmic management systems,
that is, digital technologies that perform management
functions (Lee et al., 2015). Management functions
performed by algorithms range from supervising and
assigning tasks to workers (Möhlmann et al., 2021),
planning shifts (Lee, 2018), investigating worker
behavior (Veen et al., 2020), to coordinating workers
who collectively create complex products or services
(Kellogg et al., 2020). They also include evaluating
worker performance and managing staffing (Möhlmann
et al., 2021). One advantage over human managers is the
scale at which algorithmic management systems
manage workers (Benlian et al., 2022). They use data
collection and processing to make decisions and guide
workers towards the “optimal” approach. While this
optimization benefits the organization, one-sided
disclosure of data and algorithmic reasoning raises
concerns among workers who demand transparency
regarding datafication of the workplace (Gierlich-Joas
et al., 2024).
Organizations rely on managers to control and
coordinate work (Tsoukas, 1994). Control of behavior is
understood as the use of different mechanisms by
management to attain desired behavior from workers to
maximize the value created by workers (Kellogg et al.,
2020; Meijerink et al., 2023). When organizations use
algorithmic management systems to control the
workforce, we speak of algorithmic control. That is, “the
managerial use of intelligent algorithms and advanced
digital technology (e.g., mobile apps and sensors) as a
means to align individual worker behaviors with
organizational objectives” (Wiener et al., 2023, p. 3).
Similar to human managers in organizations,
algorithms coordinate and control workers. Currently,
algorithms mostly coordinate individual platform
workers and the execution of atomized tasks. However,
coordinating workers to cooperate on intertwined tasks
remains a technological challenge (Becker et al., 2023).
Algorithms automate the analysis of processes and
nudge worker behavior, effectively emulating the roles
of bosses (Möhlmann et al., 2021). Specifically, control
algorithms are used to monitor, recommend, require,
and rate worker conduct, restrict information linked to
task execution, or reward and sanction behavior in light
of organizational goals (Alizadeh et al., 2023).
2.2. Workers’ reactions to algorithmic management
Algorithmic management systems, at the same
time, support and control human work (Jabagi et al.,
2021; Möhlmann et al., 2021). Depending on the
configuration, workers can perceive these systems as
empowering or exploiting (Jabagi et al., 2021). When
algorithmic management systems restrict autonomy by
controlling information and revenues, they can trigger
workers’ resistance (Kellogg et al., 2020).
Workers face a continuum ranging from
algorithmic opacity to algorithmic transparency
(Möhlmann et al., 2023). Workers perceive algorithms
as opaque when the relationship between input data and
algorithmic output is undisclosed (Möhlmann et al.,
2023). For example, it is unclear what input and output
data is collected and how it is subsequently used to
control worker behavior (Möhlmann et al., 2021).
Furthermore, algorithmic decision rules and decision
criteria can be opaque to workers. For example, why a
certain task is assigned to them (Göttel, 2021), why they
are sanctioned by or even banned from a platform
(Möhlmann et al., 2023), or how profitable an assigned
task is (Pregenzer et al., 2021).
Platform workers engage in sensemaking to
understand how algorithmic management systems work.
That is, they evaluate the algorithm’s decisions and
organize their knowledge to better understand how they
work (Möhlmann et al., 2023). Individually, workers try
to make sense of the algorithm by testing it and
engaging with customers to identify underlying patterns
(Jiang et al., 2021). Collectively, workers seek to
exchange and coordinate with other platform workers by
using communication infrastructures outside the
platform, such as forums, chat groups, and associations
(Möhlmann et al., 2023).
Workers respond to algorithmic control by
complying or resisting (Jiang, 2023). Their resistance
ranges from cynicism (Pregenzer et al., 2021) to
deliberate actions like interfering with data collection or
processes. In this study, we refer to workers
circumventing instructions, exploiting loopholes, and
“gaming” the platform’s system (Möhlmann et al.,
2023, p. 49) as workarounds. Workers actively resist
control by falsifying data and switching off the
application to prevent adverse data collection (Jiang et
al., 2021; Lee et al., 2015), urging customers for
reviews, or hiring bots to manipulate ratings (Elbanna et
al., 2022). Furthermore, they act through
noncompliance with systems’ control directives by
rejecting tasks (Jiang et al., 2021; Pregenzer et al., 2021)
or disregarding instructions (Tarafdar et al., 2022).
As a last resort, workers decide to leave the
platform and join competing ones. Furthermore,
workers can multihome depending on platforms’ pricing
structures, governance, and exclusivity. That is, they can
choose to work for various platforms at the same time
(Tian et al., 2022). For example, workers switch to other
platforms if they feel that their current earnings are
suboptimal (Möhlmann et al., 2021). This practice
threatens platforms’ service and revenues, as they
depend on consistent work performance of platform
workers to deliver the service to customers (Lee et al.,
2015; Pregenzer et al., 2021; Wiener et al., 2023).
2.3. Agency theory
The relationship between a platform and workers
can be conceptualized using agency theory (Eisenhardt,
1989), wherein the platform acts as the principal and the
worker as the agent (Doğan et al., 2024).
Positivist agency theory focuses on conflicts and
governance mechanisms that limit the agent’s “self-
serving behavior” (Eisenhardt, 1989, p. 59). Central to this problem is the principal’s difficulty in controlling and verifying the agent’s actions while the agent performs their duties (Eisenhardt, 1989). As a result, the principal must
design an optimal contract that balances “behavior and
outcome” and its effectiveness depends on aligning
contract outcomes with the agent’s interests
(Eisenhardt, 1989, p. 60).
The principal mitigates agent opportunism while
observing actual behavior by combining outcome-based
contracts with information systems. Outcome-based
contracts are an effective way to align the agent’s
behavior with organizational goals. In these contracts,
both parties’ rewards (i.e., compensation) depend on the
successful completion of the agent’s tasks. In contrast, behavior-based contracts reward the agent’s behavior during task execution (Eisenhardt, 1989).
Platforms aim to maximize profits by offering high
levels of service to customers in terms of both quantity
and quality. To achieve this, they require many workers to complete tasks, with the platform earning a fixed commission on the generated revenue. On most platforms, the contract between both parties is outcome-based.
Consequently, workers strive to maximize earnings per
unit of time invested (Doğan et al., 2024), leading
platforms to employ algorithmic control to monitor and
align worker behavior to uphold service quality
(Kellogg et al., 2020). In turn, workers resist the
imposed control (Möhlmann et al., 2021). Platforms,
hence, face the challenge of aligning the interests of
both parties.
3. Research model
In this section, we derive our hypotheses on how
algorithmic control influences perceived value and
perceived transparency, how these latter two influence
workaround use and continuance intention, and how
workaround use affects continuance intention.
Our model builds on prior work by Wiener, Cram,
and Benlian (2023), who examined the effects of
gatekeeping algorithmic control and guiding algorithmic control on workers’ reactions. Our work
extends the model using a more general perspective on
algorithmic control (Alizadeh et al., 2023).
Furthermore, we do not focus on Uber drivers but regard
various algorithmic management systems. Finally, we
hypothesize that workaround use influences
continuance intention. We present our research model,
including the constructs and hypotheses, in Figure 1.
Figure 1. Structural equation model.
Prior research has highlighted the tensions
stemming from algorithmic control (Jabagi et al., 2021;
Kellogg et al., 2020; Meijerink et al., 2023; Pregenzer et
al., 2021). Algorithmic control affects how workers
perceive work execution and work compensation.
Prior research has highlighted that a system’s
control design affects how workers perceive work
execution (Kellogg et al., 2020; Meijerink et al., 2023).
Platforms control the information on input data,
algorithmic decisions, and outputs provided to workers.
While platforms communicate basic information on
input variables (e.g., Uber provides trainings), they
share less information on algorithmic decisions
(Möhlmann et al., 2023) or rating mechanisms
(Alizadeh et al., 2023). When users can observe inputs and outputs but not the “inner workings” of a system, they tend to question the transparency and, ultimately, the trustworthiness of that system (Von Eschenbach, 2021, p. 1608). We thus hypothesize:
H1a: Perceived algorithmic control negatively
affects perceived transparency.
We define perceived value as platform workers’
evaluation of returns, that is, the individual costs and
benefits of working on the platform (Lin et al., 2012).
When platforms limit choices and autonomy and create uncertainty regarding task remuneration (Möhlmann et al., 2021), workers’ perceived value of the system decreases. We thus hypothesize:
H1b: Perceived algorithmic control negatively
affects perceived value.
Low transparency of the algorithm negatively
influences the worker-employer relationship (Pregenzer
et al., 2021). Opaque or lacking instructions trigger
resistance behavior (Lee et al., 2015). Workarounds are
a worker’s resistance “attempt to wrest control back
from a technology” (Pollock, 2005, p. 497; Wiener et
al., 2023). For example, workers plan their own
transportation routes (Zheng et al., 2022) or turn off the
application and ignore notifications rather than “blindly
follow an algorithm while executing work” (Möhlmann
et al., 2021, p. 2013). Therefore, workers use
workarounds to overcome opacity (Möhlmann et al.,
2021; Zheng et al., 2022). We thus hypothesize:
H2: Perceived transparency negatively affects the
use of workarounds.
Workers constantly evaluate task value and switch
to the highest-paying platform to maximize their income
(Möhlmann et al., 2021; Pregenzer et al., 2021).
Platform workers’ continuance intention is defined as their willingness to continue working and offering their services on a specific platform (Goldbach et al., 2018). The perceived value of a system increases users’
continuance intention to work with it (Lin et al., 2012).
Similarly, if costs are not justified by earnings, workers
will quit (Doğan et al., 2024). For example, workers use
multiple smartphones to overview the bids for rides
offered by different platforms and choose the task that
offers the highest value (Möhlmann et al., 2021). We
thus hypothesize:
H3: Perceived value positively affects continuance
intention.
While perceived value and transparency have direct
effects, we argue that they interact. Algorithmic
management systems provide information, helping
workers perform their tasks efficiently (Cram et al.,
2020). With decreasing transparency, workers engage in
sensemaking, but they do not act upon it as long as
payments are acceptable (Pregenzer et al., 2021). For
workers, low work profitability leads to dissatisfaction
with the platform (Pregenzer et al., 2021). Furthermore,
when profitability drops below an acceptable threshold,
workers question the algorithm’s opaque decisions and
engage in workaround use to increase profitability
(Pregenzer et al., 2021). We argue that while perceived
transparency decreases workaround use (see hypothesis
H2), perceived value further strengthens this decreasing
effect of transparency on workaround use (Pregenzer et
al., 2021). We thus hypothesize:
H4: Perceived value strengthens perceived
transparency’s negative effect on workaround use.
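The moderation hypothesized in H4 corresponds to the standard interaction-term specification, in which workaround use is regressed on transparency, value, and their product. The following sketch is illustrative only: all data are simulated, and the coefficient values are chosen to mimic the hypothesized pattern rather than taken from our results.

```python
# Illustrative moderation sketch (not the authors' code): an interaction
# term captures how perceived value strengthens the negative effect of
# transparency on workaround use. All data below are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
transparency = rng.standard_normal(n)
value = rng.standard_normal(n)

# Simulated pattern: transparency lowers workaround use, and higher value
# makes that effect more strongly negative (the hypothesized moderation).
workaround = (-0.2 * transparency
              - 0.1 * transparency * value
              + 0.1 * rng.standard_normal(n))

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), transparency, value, transparency * value])
beta, *_ = np.linalg.lstsq(X, workaround, rcond=None)
b0, b_t, b_v, b_int = beta  # b_int < 0 indicates the strengthening effect
```

A negative estimate for the interaction coefficient (`b_int`) is what H4 predicts: the higher the perceived value, the more negative the slope of transparency on workaround use.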
Workaround use is understood as a form of non-
compliance (Ejnefjäll et al., 2023) and an attempt to ease
activities and make systems workable (Ejnefjäll et al.,
2023; Zamani et al., 2021). We argue that workers using
workarounds express dissatisfaction and resistance to
algorithmic management (Kellogg et al., 2020; Lee et
al., 2015). Workers use workarounds to escape the
control imposed by the system. For example, workers
establish relationships outside the applications to
circumvent fees (Lee et al., 2015) or switch between
multiple platforms (Möhlmann et al., 2021). In the long
run, if workers are dissatisfied with a platform, they will
leave it if they find suitable alternatives. We thus
hypothesize that:
H5: Workaround use negatively affects
continuance intention.
4. Methodology
To evaluate our model and hypotheses we
proceeded as follows. We selected suitable
measurement constructs, designed the survey, and
collected data. Subsequently, we assessed the validity of
our model and tested our hypotheses.
4.1. Measurement and data collection
We used five established scales with 40 items to
measure the constructs in our model. We measured
algorithmic control using Alizadeh et al.’s (2023) scale,
transparency with the items of Schnackenberg et al.
(2021), perceived value with items of Lin et al. (2012),
and workaround use and continuance intention with
items of Wiener et al. (2023). Finally, we controlled for
age, gender, education level, total tasks finished on Prolific.com, satisfaction with working on the platform, reported time working on platforms, and multihoming.
Our data collection process consisted of three steps:
a pre-survey, a filtering survey, and a final survey. We
built all surveys with Qualtrics software.
First, we created and tested an initial survey by
publishing a pre-survey containing quantitative and
qualitative items on SurveyCircle.com. We analyzed 26
responses and identified significant differences in
response quality (especially for qualitative responses),
which led us to further refine our survey to acquire well-
suited participants.
Second, we selected survey participants on
Prolific.com who had experienced algorithmic
management systems. We achieved this by following
Alizadeh et al.’s (2023) approach of using a filtering
survey, prompting 432 participants to rate the following
item: “At my workplace, algorithms and digital
technologies such as smartphones, tablets, scanners,
digital systems, etc. guide what work task I fulfill and/or
how I should fulfill my work tasks.” (Alizadeh et al.,
2023, p. 8). Participants’ average completion time for
the filtering survey was 1 minute and 14 seconds. We
paid them 0.13€, regardless of how they answered. We
invited all participants who had answered “somewhat
agree,” “agree,” or “strongly agree” to the final survey,
resulting in 315 invitations.
Finally, we performed our main study on
Prolific.com. We cleaned the responses of our 315
participants by removing 43 incomplete answers. We
also removed 16 that failed the consent check and five
answers that failed the attention check. Finally, we
removed two answers because of bot detection
(Q_RecaptchaScore below .5 on Google reCAPTCHA
V3).
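The cleaning steps above can be sketched as a simple filter over survey records. This is a hypothetical illustration, not our actual pipeline; the field names are invented.

```python
# Illustrative sketch (not the authors' pipeline) of the response-cleaning
# steps described above. Field names are hypothetical placeholders.

def clean_responses(responses, recaptcha_threshold=0.5):
    """Drop responses that are incomplete or that fail the consent,
    attention, or bot (reCAPTCHA score) checks."""
    kept = []
    for r in responses:
        if not r.get("complete"):
            continue  # incomplete answer
        if not r.get("consent_passed"):
            continue  # failed consent check
        if not r.get("attention_passed"):
            continue  # failed attention check
        if r.get("recaptcha_score", 1.0) < recaptcha_threshold:
            continue  # likely bot (e.g., Google reCAPTCHA V3 score below .5)
        kept.append(r)
    return kept
```

Each exclusion rule mirrors one of the filters reported above (incomplete answers, consent check, attention check, bot detection).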
The final data set consists of 249 responses. 26% of respondents reported mostly working on Uber, 14% on Freelancer, 14% on Fiverr, 12% on Upwork, 9% on Amazon MTurk, and 6% on Prolific. The remaining participants reported mostly working on platforms such as Gorillas, Deliveroo, Mjam, Clickworker, Wolt, or Foodora, each accounting for at most 2% of respondents. Respondents were, on average, 32 years old, 46.5% were female, they had finished 488 tasks on average, 42% had worked on Prolific for less than a year, and 47.6% multihomed. Participants finished within 6:42 minutes on average and were paid 0.70€.
4.2. Model evaluation
We used a structural equation model and SmartPLS
version 4 to assess our research model (Hair et al.,
2021). We used the partial least squares structural
equation model because of its ease of use to validate
measurements, its prediction strength (Ringle et al.,
2012), and because it is an established method in
algorithmic management research (Wiener et al., 2023).
We evaluated items’ internal consistency and
constructs’ composite reliability, convergent validity,
and discriminatory validity. The results are displayed in
Table 1. We assessed indicator reliability and compared
their outer loadings with the threshold of .708 (Hair et
al., 2021). All constructs’ item factor loadings but
perceived algorithmic control (5 items) fulfilled this
threshold. We iteratively dropped two items with an
outer loading below .5 (Chin, 1998). Our model
achieved composite reliability when rho alpha, rho_c,
and Cronbach’s alpha were greater than .7. Furthermore,
all constructs met the convergent validity threshold of .5
average variance extracted (AVE), except for perceived
algorithmic control (.491). We thus removed the
indicator with the lowest outer loading to increase its
AVE to .533 (Hair et al., 2021).
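Both criteria used here can be computed directly from standardized indicator loadings. The following sketch shows the standard formulas for AVE and composite reliability; it is not our analysis code, and the example loadings are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the convergent-validity
# and composite-reliability criteria, computed from standardized loadings.

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each indicator's error variance is 1 - loading^2."""
    explained = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return explained / (explained + error)

loadings = [0.85, 0.80, 0.75]  # hypothetical three-item construct
construct_ave = ave(loadings)                      # compare against the .5 threshold
construct_rho_c = composite_reliability(loadings)  # compare against the .7 threshold
```

Dropping a weakly loading indicator raises the AVE because the mean of the squared loadings no longer includes the smallest term, which is the mechanism behind removing the lowest-loading indicator above.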
Finally, we evaluated the discriminant validity of
our model through the heterotrait-monotrait ratio
(HTMT), where all values were below the .9 threshold,
supporting discriminant validity. Furthermore, our model withstood the Fornell-Larcker criterion, since the AVEs’ square roots are greater than the correlations between constructs. There are therefore no discriminant validity issues (Hair et al., 2021). Finally, we used
10,000 iterations of bootstrapping and set parameters as
suggested by Hair et al. (2021). The constructs used in
the model demonstrate discriminant validity as all
HTMT values at the 95% one-sided bootstrap
confidence interval were below .9 (Hair et al., 2021).
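The Fornell-Larcker check reduces to comparing each construct’s square root of AVE against its correlations with all other constructs. A minimal sketch follows (not our analysis code); the .533 AVE is the value reported above for perceived algorithmic control, while the other values are illustrative.

```python
# Illustrative sketch of the Fornell-Larcker criterion: sqrt(AVE) of each
# construct must exceed its correlations with every other construct.
import math

def fornell_larcker_holds(aves, correlations):
    """aves: dict mapping construct -> AVE.
    correlations: dict mapping (construct_a, construct_b) -> correlation.
    Returns True if sqrt(AVE) exceeds each relevant correlation."""
    for (a, b), r in correlations.items():
        if abs(r) >= math.sqrt(aves[a]) or abs(r) >= math.sqrt(aves[b]):
            return False
    return True
```

For example, with an AVE of .533, the square root is about .730, so any inter-construct correlation below that value passes the check for this construct.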
Table 1. Composite reliability and validity.

| Construct | Cronbach’s Alpha | rho_a | rho_c | AVE | (1) | (2) | (3) | (4) | (5) |
|---|---|---|---|---|---|---|---|---|---|
| (1) Algorithmic Control | .710 | .711 | .820 | .533 | .730 | | | | |
| (2) Perceived Value | .807 | .831 | .886 | .723 | .491 | .850 | | | |
| (3) Perceived Transparency | .800 | .800 | .882 | .714 | .381 | .646 | .845 | | |
| (4) Workaround Use | .842 | .853 | .888 | .613 | −.171 | −.286 | −.322 | .783 | |
| (5) Continuance Intention | .941 | .943 | .962 | .894 | .318 | .611 | .511 | −.271 | .946 |

Note: Diagonal values of columns (1)–(5) are the square roots of the AVE.
5. Results
In this section, we present our results. Table 2
provides an overview of hypotheses evaluation.
We assess our structural models’ validity by
examining path coefficient estimates, t-values, p-values,
and confidence intervals. The R²-values indicate that our structural equation model explains the constructs well (perceived transparency = .145, perceived value = .241, workaround use = .262, continuance intention = .432).
First, algorithmic control significantly affects
transparency (H1a: β=.385, t=6.452, p=.000) and
perceived value (H1b: β=.491, t=8.942, p=.000).
However, this does not support our hypotheses, as both effects are positive.
Next, we evaluate how value and transparency
affect workaround use and continuance intention.
Transparency decreases workaround use (H2: β=−.218,
t=2.740, p=.003). In turn, perceived value has a positive
effect on continuance intention (H3: β=.604, t=9.638,
p=.000). Additionally, value moderates the effect of
transparency on workaround use (H4: β=-.107, t=2.272,
p=.012). Thus, perceived transparency decreases
workaround use and perceived value strengthens this
effect.
Table 2. Overview of hypotheses.

| Hypothesis | Independent Construct | Effect | Dependent Construct | Supported? |
|---|---|---|---|---|
| H1a | Algorithmic Control | .385*** | Transparency | No |
| H1b | Algorithmic Control | .491*** | Perceived Value | No |
| H2 | Transparency | −.214** | Workaround Use | Yes |
| H3 | Perceived Value | .604*** | Continuance Intention | Yes |
| H4 | Perceived Value | −.107* | Transparency → Workaround Use | Yes |
| H5 | Workaround Use | −.026 | Continuance Intention | No |
Furthermore, we evaluate the effect of workaround
use on continuance intention. We hypothesized that
using workarounds negatively affects the continuance
intention of working for an algorithmic management
system. This hypothesis is not supported as the
relationship between the use of workarounds and
continuance intention is not statistically significant (H5:
β=−.026, t=.405, p=.343).
Finally, the tests of our control variables indicate
that experience with platform work decreases
workaround use (β=-.110, t=1.788, p=.037), and age
positively affects continuance intention (β=.108,
t=2.065, p=.019). The remaining control variables –
gender, level of education, total approvals, reported time
working on platforms, and multihoming – do not have
significant effects.
6. Discussion
In this study, we investigate the effect of
algorithmic control on workers’ perceived transparency,
perceived value, workaround use, and continuance
intention. We propose a structural equation model and
collect data via an online survey from 249 individuals
supervised by algorithmic management systems at
work. We then evaluate the model using SmartPLS. Our
findings reveal that perceived transparency decreases
workaround use, while perceived value increases
platform workers’ continuance intention. Additionally,
perceived value and perceived transparency interact to
influence workaround use, as perceived value
strengthens the effect of perceived transparency on
workaround use. Finally, we find no significant
relationship between workaround use and continuance
intention. In the remainder of this section, we discuss
the theoretical and practical implications as well as
limitations of our work.
6.1. Theoretical implications
In this study, we contribute to algorithmic
management literature by using agency theory to
explain workers’ reactions to algorithmic control.
Platforms use outcome-based contracts and digital
systems with algorithmic control capabilities to align
workers’ behavior and reward successful task
completion. We hypothesize that algorithmic
transparency and perceived value explain workers’
workaround use and continuance intention.
First, contrary to our hypotheses, algorithmic
control shows a significant positive association with
perceived transparency and value. Although we
measured perceived overall transparency, platforms
selectively disclose information on algorithmic control
inputs, mechanisms, and outputs (Möhlmann et al.,
2021). Workers who are aware of inputs but not
mechanisms may still perceive the platform as
transparent. Similarly, the positive link between
algorithmic control and value may stem from workers’
knowledge of pricing and payment mechanisms, which
affects their perceived value of the platform. Future
studies could explore how differentiated information on
algorithmic control affects perceived transparency and
how this affects perceived value of a platform using
algorithmic management.
Second, we find that perceived transparency
decreases workers’ workaround use. Opaque
instructions trigger resistance behavior (Lee et al.,
2015), and workers use workarounds to overcome
opacity. While our study shows the impact of
transparency on workaround use, future studies can
investigate how different information on input data,
algorithmic decisions, and output data (Möhlmann et al.,
2023) impact workaround use.
Third, we investigate the effect of perceived value
on worker behavior. Perceived value directly increases
workers’ continuance intention. From an agency theory
perspective, workers aim to maximize profitability to
achieve an optimal contract. The more closely the
contract’s outcome aligns with the agent’s interests, the
more likely the agent (the platform worker) will “behave
in the interests of the principal” (the platform owner)
(Eisenhardt, 1989, p. 60). While our study assumed
outcome-based contracts, the perception of behavior-
based contracts remains unknown (Doğan et al., 2024).
Future research can investigate how financial value
stemming from outcome-based and behavior-based
contracts affects workers’ continuance intention
differently. Furthermore, we encourage fellow
researchers to investigate what drives workers’ value
perception in the context of algorithmic management.
Fourth, we uncover that perceived value moderates
the relationship between transparency and workaround
use by increasing the effect of transparency. This
confirms qualitative observations that workers accept
opaqueness as long as the perceived value is satisfactory
(Pregenzer et al., 2021). Our results thus indicate that the more value workers perceive from a platform, the less important transparency becomes to them and, consequently, the fewer workarounds they use.
The interplay of value and transparency can be
described with the performance control dilemma:
Platforms want a performing system, yet workers want
to have autonomy and control over their work (Kellogg
et al., 2020; Pregenzer et al., 2021; Rühr, 2020). While
workers value autonomy (Wood et al., 2019), platforms
cannot allow workers control over task allocation as it
can backfire and lead to low performance (Chan et al.,
2023). However, participation in decision-making increases workers’ perceived power and satisfaction, leading to increased continuance intention. Therefore,
one avenue for future research could be allowing
workers to decide upon performance-unrelated metrics
referring to their work preferences, such as task
predictability, job security, and remuneration (Lee et al.,
2021). Consequently, granting workers transparency on
aspects that do not affect performance could decrease
workaround use and increase continuance intention. We
encourage researchers to investigate the modifiability of
algorithmic management systems, considering tradeoffs
between value and transparency.
Fifth, we contribute to the understanding of
workaround use in algorithmic management systems.
Our hypothesis that workaround use positively affects
continuance intention was not confirmed. We argue that
this might be due to two reasons: shortcomings caused
by social desirability bias and/or shortcomings caused
by the undifferentiated measurement of the workaround
use construct. We detail both arguments in the
following.
We argue that the first reason is that workers perceive workaround use as a socially undesirable practice. This perception could make them reluctant to openly admit past actions, and they might fear platform surveillance and sanctions (Möhlmann et al., 2021).
The second reason could be a lack of differentiation
between the workaround use concept and corresponding
constructs in the algorithmic management context. We
argue that distinction is necessary because distinct
perspectives exist in the information systems and
algorithmic management literature (Ejnefjäll et al.,
2023; Jiang, 2023). For instance, workaround use has
been described as a form of non-compliance, but it is also understood as an attempt by employees to ease
activities and task completion (Zamani et al., 2021).
Finally, the consequences of workaround use
could differ depending on the type and intended goal of
the workaround in the context of algorithmic control on
platforms. Platform workers’ resistance to algorithmic
control encompasses more practices than what has been
described in the general workaround literature in
information systems (Ejnefjäll et al., 2023; Jiang, 2023).
Thus, constructs for information systems workarounds
might not adequately represent resistance and
workaround use triggered by algorithmic management.
Table 3. Benevolent and malevolent workaround use.
Benevolent Workaround Use: Well-intended actions workers take to improve their task performance and increase their revenue on the platform.
Malevolent Workaround Use: Ill-intended actions workers take that harm the platform and the customer.
Building on the observation that workarounds can
have positive and negative consequences (Ejnefjäll et
al., 2023) and taking into perspective workers’
intentions, we propose distinguishing between
benevolent and malevolent workaround use, as
displayed in Table 3. We propose to conceptualize
benevolent workaround use as well-intended actions
that workers take to improve their task performance and
increase their revenue on the platform. This can be
understood as their attempt to ease activities and task
completion (Zamani et al., 2021), making systems
workable (Ejnefjäll et al., 2023; Zamani et al., 2021).
For example, platform workers use alternative routes or
applications to decrease travel time and thus improve
task performance (Tarafdar et al., 2022). In contrast, we
propose to conceptualize malevolent workaround use as
ill-intended actions with the same purpose as benevolent
workaround use but inflicting negative consequences on
the platform and the customer (Jiang, 2023; Möhlmann
et al., 2021), making workaround use an expression of
dissatisfaction (Kellogg et al., 2020; Lee et al., 2015).
For example, platform workers circumvent control
instructions or urge customers to switch to platforms
that are more favorable to them (Jiang, 2023).
Unusual routines theory suggests that workarounds
create subroutines like extra work, delays, errors, or
blame (Rice, 2008). However, the consequences of the two proposed types for continuance intention remain unclear: if workers know how to exploit the
system, could the use of workarounds actually increase
their intention to stay on a platform?
We encourage fellow researchers to further study
different types of workarounds in the context of
algorithmic management. While previous studies have
focused on antecedents of workarounds (Jiang, 2023), understanding their consequences is crucial from an
organizational perspective. Differentiating workarounds
based on their business impact could clarify their
theoretical and practical implications for algorithmic
management system design. We propose drawing
insights from both the information systems (Ejnefjäll et
al., 2023) and algorithmic management literature (Jiang,
2023) as academic understanding of algorithmic
management system design can benefit from a more
nuanced classification of workarounds in the context of
platform work.
6.2. Practical implications
Platforms operate in a competitive two-sided
market, balancing customer demand and worker supply
(Möhlmann et al., 2021). Implemented “algorithms
enable and constrain the actions of workers” (Meijerink
et al., 2023, p. 7). To mitigate worker resistance to
control, organizations designing and implementing
algorithmic management systems must thus consider the
effects of perceived algorithmic control (Alizadeh et al.,
2023). In this study, we uncover that worker behavior is
driven by their perceived value of a platform and the
transparency it imposes.
Our study informs practitioners on the interaction
of transparency and value with workaround use.
First, increasing algorithmic decision transparency helps
mitigate workaround use, and participation in decision-
making increases workers’ perceived power and
satisfaction. We argue that allowing workers to make
decisions based on performance-unrelated metrics, such
as work preferences (Lee et al., 2021), could decrease
workaround use and increase continuance intention. For
example, granting workers the opportunity to influence
certain aspects of their tasks while transparently
communicating the expected loss in revenue is likely to increase satisfaction. Second, value is a strong indicator of continuance intention. Our study shows that increasing the perceived value a platform has for workers increases their continuance intention. However,
as contracts are mostly outcome-based, effort remains
unrewarded (Doğan et al., 2024). Practitioners can
utilize this insight to re-evaluate their remuneration structures to attract and retain workers in the labor market.
6.3. Limitations
Our study has several limitations. First, our study
uses the perceived transparency of algorithmic decisions
as an antecedent for workaround use. However, the
rather low R² of workaround use implies that there may
be additional antecedents that we did not account for.
Adding antecedents such as trust (Pregenzer et al., 2021)
and autonomy (Wiener et al., 2023) could help to paint
a more complete picture of workaround use. Future
studies could address this limitation by conducting a
detailed study on all antecedents of continuance
intention and workaround use.
Second, individual algorithmic management
systems are designed differently regarding control,
transparency, and value. Our participants work with
various systems, and our results are not bound to one
specific platform or algorithmic management system.
This may have weakened the effects measured in our
study. Despite differences, our model demonstrates
significant relationships between constructs, indicating
that dynamics, effects, and consequences for workers
are similar across platforms.
7. Conclusion
In this paper, we investigate antecedents and
consequences of platform workers’ reactions to
algorithmic control. The model describes how perceived
value and perceived transparency affect platform
workers’ workaround use and continuance intention.
We empirically test our proposed research model with a
survey of 249 participants. Our analysis yields two key
results. First, we confirm our hypotheses that workers’
perceived value increases continuance intention and that
perceived transparency decreases workaround use. We
also demonstrate that perceived value moderates the
effect of transparency on workaround use. Second, we
dive into the nature of workarounds and find no
significant effect of workaround use on continuance
intention. However, we argue there is conceptual
unclarity in workaround use measurement. We propose
the concepts of benevolent and malevolent workaround
use, which we argue to have opposite effects on
platform workers’ continuance intention. Future
research can build on these initial results and further
analyze the antecedents and consequences of different
workaround types in the context of algorithmic
management.
8. References
Alizadeh, A., Hirsch, F., Benlian, A., Wiener, M., & Cram, W.
A. (2023). Measuring Workers’ Perceptions of
Algorithmic Control: Item Development and Content
Validity Assessment. HICSS 2023 Proceedings, Maui,
Hawaii, USA.
Becker, L., Wurm, B., & Hess, T. (2023). Will Algorithms
Replace Managers? A Systematic Literature Review on
Algorithmic Management. 44th International Conference
on Information Systems (ICIS) 2023, Hyderabad, India.
Benlian, A., Wiener, M., Cram, W. A., Krasnova, H.,
Maedche, A., Möhlmann, M., Recker, J., & Remus, U.
(2022). Algorithmic Management: Bright and Dark
Sides, Practical Implications, and Research
Opportunities. Business & Information Systems
Engineering, 64(6), 825-839.
Chan, J., Kyung, N., Yoon, S., & Kim, Y. (2023). Algorithm
as Boss or Coworker? Randomized Field Experiment on
Algorithmic Control and Collaboration in Gig Platform.
44th International Conference on Information Systems
(ICIS) 2023, Hyderabad, India.
Chin, W. W. (1998). The partial least squares approach to
structural equation modeling. Modern methods for
business research, 295(2), 295-336.
Cram, W. A., & Wiener, M. (2020). Technology-mediated
control: Case examples and research directions for the
future of organizational control. Communications of the
Association for Information Systems, 46(1), 70-91.
Doğan, O. B., Kumar, V., & Lahiri, A. (2024). Platform-level
consequences of performance-based commission for
service providers: Evidence from ridesharing. Journal of
the Academy of Marketing Science, 52, 1240–1261.
Eisenhardt, K. M. (1989). Agency theory: An assessment and
review. Academy of Management Review, 14(1), 57-74.
Ejnefjäll, T., Ågerfalk, P. J., & Hedrén, A. (2023). Workarounds in information systems research: A five-year update. 54th Hawaii International Conference on System Sciences, Hawaii.
Elbanna, A., & Idowu, A. (2022). Crowdwork, digital
liminality and the enactment of culturally recognised
alternatives to Western precarity: beyond epistemological
terra nullius. European Journal of Information Systems,
31(1), 128-144.
Gierlich-Joas, M., Baiyere, A., & Hess, T. (2024). Inverse
transparency and the quest for empowerment through the
design of digital workplace technologies. JAIS Preprints
(Forthcoming), 131.
Goldbach, T., Benlian, A., & Buxmann, P. (2018). Differential
effects of formal and self-control in mobile platform
ecosystems: Multi-method findings on third-party
developers’ continuance intentions and application
quality. Information & Management, 55(3), 271-284.
Göttel, V. (2021). Sharing Algorithmic Management
Information in the Sharing Economy–Effects on Workers’
Intention to Stay with the Organization. ICIS 2021
Proceedings, Austin, USA.
Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2021). A primer on partial least squares structural equation modeling (PLS-SEM). Sage publications.
Jabagi, N., Croteau, A.-M., Audebrand, L., & Marsan, J.
(2021). Who’s the Boss? Measuring Gig-Workers’
Perceived Algorithmic Autonomy-Support. ICIS 2021
Proceedings, Austin, USA.
Jiang, J. (2023). Covert resistance against algorithmic control on online labor platforms – A systematic literature review. ECIS 2023 Research Papers, Kristiansand, Norway.
Jiang, J., Adam, M., & Benlian, A. (2021). Algoactivistic practices in ridesharing – A topic modeling & grounded theory approach. ECIS 2021 Proceedings, Marrakech, Morocco.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020).
Algorithms at work: The new contested terrain of control.
Academy of Management Annals, 14(1), 366-410.
Lee, M. K. (2018). Understanding perception of algorithmic
decisions: Fairness, trust, and emotion in response to
algorithmic management. Big Data & Society, 5(1), 1-16.
Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015).
Working with machines: The impact of algorithmic and
data-driven management on human workers. Proceedings
of the 33rd annual ACM Conference on Human Factors
in Computing Systems, Seoul, South Korea.
Lee, M. K., Nigam, I., Zhang, A., Afriyie, J., Qin, Z., & Gao,
S. (2021). Participatory algorithmic management:
Elicitation methods for worker well-being models.
Proceedings of the 2021 AAAI/ACM Conference on AI,
Ethics, and Society, New York, USA.
Lin, T.-C., Wu, S., Hsu, J. S.-C., & Chou, Y.-C. (2012). The
integration of value-based adoption and expectation–
confirmation models: An example of IPTV continuance
intention. Decision Support Systems, 54(1), 63-75.
Manyika, J., Lund, S., Bughin, J., Robinson, K., Mischke, J.,
& Mahajan, D. (2016). Independent work: Choice, necessity, and the gig economy.
https://www.mckinsey.com/~/media/McKinsey/Featured
%20Insights/Employment%20and%20Growth/Independ
ent%20work%20Choice%20necessity%20and%20the%
20gig%20economy/Independent-Work-Choice-
necessity-and-the-gig-economy-Full-report.ashx
Meijerink, J., & Bondarouk, T. (2023). The duality of
algorithmic management: Toward a research agenda on
HRM algorithms, autonomy and value creation. Human
Resource Management Review, 33(1), 100876.
Möhlmann, M., Salge, C., & Marabelli, M. (2023). Algorithm
sensemaking: how platform workers make sense of
algorithmic management. Journal of the Association for
Information Systems, 24(1), 35-64.
Möhlmann, M., Zalmanson, L., Henfridsson, O., & Gregory,
R. W. (2021). Algorithmic Management of Work on
Online Labor Platforms: When Matching Meets Control.
MIS Quarterly, 45(4), 1999-2022.
Pollock, N. (2005). When is a work-around? Conflict and
negotiation in computer systems development. Science,
Technology, & Human Values, 30(4), 496-514.
Pregenzer, M., Wieser, F., Santiago Walser, R., & Remus, U.
(2021). Obscure oversight: opacity drives sensemaking
and resistance behavior in algorithmic management.
ICIS 2021 Proceedings, Austin, USA.
Rice, R. E. (2008). Unusual routines: Organizational (non)sensemaking. Journal of Communication, 58(1), 1-19.
Ringle, C. M., Sarstedt, M., & Straub, D. W. (2012). Editor's
comments: a critical look at the use of PLS-SEM in MIS
Quarterly. MIS Quarterly, 36(1), iii-xiv.
Rühr, A. (2020). Robo-advisor configuration: an investigation
of user preferences and the performance-control
dilemma. ECIS 2020 Research Papers, Marrakech,
Morocco.
Schnackenberg, A. K., Tomlinson, E., & Coen, C. (2021). The
dimensional structure of transparency: A construct
validation of transparency as disclosure, clarity, and
accuracy in organizations. Human Relations, 74(10),
1628-1660.
Schuetz, S., & Venkatesh, V. (2020). The Rise of Human
Machines: How Cognitive Computing Systems
Challenge Assumptions of User-System Interaction.
Journal of the Association for Information Systems,
21(2), 460-482.
Tarafdar, M., Page, X., & Marabelli, M. (2022). Algorithms as
co‐workers: Human algorithm role interactions in
algorithmic work. Information Systems Journal, 33(2),
232-267.
Tian, J., Zhao, X., & Xue, L. (2022). Platform compatibility and developer multihoming: A trade-off perspective. MIS Quarterly, 46(3), 1661-1690.
Tsoukas, H. (1994). What is management? An outline of a
metatheory. British Journal of Management, 5(4), 289-
301.
Veen, A., Barratt, T., & Goods, C. (2020). Platform-capital’s ‘app-etite’ for control: A labour process analysis of food-
delivery work in Australia. Work, employment and
society, 34(3), 388-406.
Von Eschenbach, W. J. (2021). Transparency and the black
box problem: Why we do not trust AI. Philosophy &
Technology, 34(4), 1607-1622.
Wiener, M., Cram, W., & Benlian, A. (2023). Algorithmic
control and gig workers: a legitimacy perspective of Uber
drivers. European Journal of Information Systems, 32(3),
485-507.
Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I.
(2019). Good gig, bad gig: autonomy and algorithmic
control in the global gig economy. Work, employment and
society, 33(1), 56-75.
Zamani, E. D., & Pouloudi, N. (2021). Generative
mechanisms of workarounds, discontinuance and
reframing: a study of negative disconfirmation with
consumerised IT. Information Systems Journal, 31(3),
384-428.
Zheng, Y., & Wu, P. F. (2022). Producing speed on demand:
Reconfiguration of space and time in food delivery
platform work. Information Systems Journal, 32(5), 973-
1004.