Human Resource Management Review xxx (xxxx) xxx
Please cite this article as: Waymond Rodgers, Human Resource Management Review,
1053-4822/© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license
An articial intelligence algorithmic approach to ethical
decision-making in human resource management processes
Waymond Rodgers, James M. Murray, Abraham Stefanidis, William Y. Degbey, Shlomo Y. Tarba

Hull University Business School, United Kingdom and University of Texas, El Paso, USA
Hull University Business School, University of Hull, United Kingdom
Peter J. Tobin College of Business, St. John's University, New York, USA
School of Management, University of Vaasa, Finland and Turku School of Economics, University of Turku, Finland
Birmingham Business School, University of Birmingham, United Kingdom
Keywords: Throughput model; Artificial intelligence
Abstract
Management scholars and practitioners have highlighted the importance of ethical dimensions in the selection of strategies. However, to date, there has been little effort aimed at theoretically understanding the ethical positions of individuals/organizations concerning human resource management (HRM) decision-making processes, the selection of specific ethical positions and strategies, or the post-decision accounting for those decisions. To this end, we present a Throughput model framework that describes individuals' decision-making processes in an algorithmic HRM context. The model depicts how perceptions, judgments, and the use of information affect strategy selection, identifying how diverse strategies may be supported by the employment of certain ethical decision-making algorithmic pathways. In focusing on concerns relating to the impact and acceptance of artificial intelligence (AI) integration in HRM, this research draws insights from multidisciplinary theoretical lenses, such as AI-augmented HRM (HRM(AI)) and HRM(AI) assimilation processes, AI-mediated social exchange, and the judgment and choice literature. We highlight the use of algorithmic ethical positions in the adoption of AI for better HRM outcomes in terms of intelligibility and accountability of AI-generated HRM decision-making, which is often underexplored in existing research, and we propose their key role in HRM strategy selection.
1. Introduction
Articial intelligence (AI) has the ability to make decisions in real time based on pre-installed algorithms and computing tech-
nologies constructed based on data analysis to learn and acclimate automatically to offer more rened responses to situations.
Encompassing both the human element and the adoption of AI applications, human resource management (HRM) can offer an
improved experience for an organization's employees (Pereira, Hadjielias, Christo, & Vrontis, 2021). As AI technology has advanced,
concerns with human control of the inherently opaque nature of AI systems have driven increasing interest regarding the ethicsAI
interface. A limited understanding of the theoretical basis for AI assimilation in HRM decision-making functions has not impeded the
Received 30 October 2020; Received in revised form 12 April 2022; Accepted 3 June 2022
replacement of HRM decision-making by AI systems (Prikshat, Malik, & Budhwar, 2021); however, the increased adoption of AI and advances in AI abilities have increased the focus on the ethical values and principles guiding AI development and use (Hermann, 2021). Past moral behaviors, new sets of agreed rules, or a mix of both, are framed by Loureiro, Guerreiro, and Tussyadiah (2021) under the trend of AI integration, law, and ethics. We propose that the interface of positions on AI and ethics with HRM is practically assimilated within organizational decision analyses of past and proposed systems. Decisions traditionally undertaken by HRM are increasingly being made by algorithms (Duggan, Sherman, Carbery, & McDonnell, 2020; Parent-Rocheleau & Parker, 2021). HRM has been encouraged to adopt data-driven predictive analytics to determine employee intentions and turnover (Haldorai, Kim, Pillai, Park, & Balasubramanian, 2019). To evaluate this potential, an understanding of an organization's ethical position and strategy, within a framework that allows for post-decision outcome analysis, is warranted. To contain the ethical and societal risks associated with the adoption of AI for HRM, the values and practical insights of human resource (HR) decision makers need to be considered (Charlwood & Guenole, 2022). With the changing nature of AI technologies, definitions of AI may not be static (Hermann, 2021). To differentiate the use of AI in HRM from similar technology-enabled HRM terms, Prikshat et al. (2021, p. 2) introduced the concept of HRM(AI) and further proposed that HRM practitioners consider all four stages of the assimilation process (initiation, adoption, routinization, and extension), incorporating antecedents (technology, organization, people) and consequences (operational, relational, transformational) within an HRM(AI) assimilation framework (Prikshat et al., 2021). While organizations have prioritized AI service quality, AI satisfaction, and AI job satisfaction in AI investments (Nguyen & Malik, 2021b), there appears to be a lack of studies on ethics in AI-driven HRM system algorithms. Concerns regarding AI developers neglecting ethics in favor of technical and commercial priorities (Charlwood & Guenole, 2022), and the impact of this on HRM practitioners who are considering using or are already using AI-driven HRM technology, suggest a need for a practical framework with which HRM practitioners can evaluate the incorporation of ethics into their algorithm-based HRM decision-making.
In this context, the objective of this study pertains to the need for an HRM accountability framework for the implementation and use of AI in the workplace. Since algorithmic design can induce bias in the decision-making process, our research infuses into the process an ethical decision-making platform that can guide HR (Langer & König, 2021). AI algorithms can incorporate ethical considerations, decision-making processes, and managers' knowledge to determine the most appropriate HRM strategy in each situation. To assist HRM in taking full advantage of the power and potential that AI offers, this paper focuses on designing and developing ethical HRM systems to eliminate AI design bias. Moreover, we aspire to enhance past literature by incorporating an AI algorithmic ethical decision-making model into HRM concepts and practices. This AI approach can provide valuable insights into how different pathways may influence the strategies employed by HRM decision makers. Finally, the proposed model offers a framework for practitioners to frame and assess decisions, providing an audit trail and structure to frame the root-cause analysis (RCA) of post-decision outcomes, such as pay-gap analysis, the effectiveness of diversity and inclusion policies, and performance measurement and reward systems (PMRS).
The focus of AI technology is precipitously shifting from decision-making to strategies (West, 2018; West & Allen, 2018). As AI technology is increasingly adopted within traditional professional roles, and not just in manufacturing and distribution, the trend of disintermediation (e.g., intermediary roles such as travel agents being replaced by websites) will continue to affect intermediary service providers (Susskind & Susskind, 2015). This may lead to AI technology challenging HRM when strategizing longer-term employee development. HRM is faced with addressing trends toward more systemization, the widespread distribution of professional expertise, and cost-benefit challenges in the adoption of technology data management. As we observe a major increase in the management of transactions occurring over the Internet, the emergence and widespread adoption of the "gig economy" provides new challenges to HRM practitioners, since employment status ambiguity and legal challenges result in a reassessment of HRM practices in managing the relationships between gig workers and organizations (Duggan et al., 2020). Duggan et al. (2020) highlight algorithmic management facilitating work relationships via incentives, HRM access to online platforms, dispute mediation, and app-focused performance, challenging our understanding of HRM concepts and practices. Given that the impact of workplace connectivity is driven by physical and behavioral environmental components (Haynes, 2008), the introduction of AI applications in HRM in isolation from the impact of workers' connectivity to an organization raises questions about the role and accountability of HRM practitioners.
Furthermore, algorithms tend to drive the communication process between two entities, which is a subset within the AI system. Engineers, computer scientists, and programmers use algorithms when designing a learning machine, with the algorithm constituting the mechanism through which the machine processes the data. Machine learning focuses on teaching machines to adapt to changes within the technology, or to adapt additional information to a current problem, and make rational decisions (Rodgers, 2020, 2022). Hence, in focusing on the under-investigated area of AI algorithms applied to HR decision choices (Prikshat et al., 2021), our study emphasizes algorithmic ethical positions incorporated into important HRM decision-making processes. Employees' perceptions of AI tools as having high levels of accuracy and current information (Nguyen & Malik, 2021a) suggest the need to explore more fully the ethical implications of these perceptions in order for HRM practitioners to use these tools effectively. The opaque nature of algorithmic processing may obfuscate biased inputs and outputs. Predictions, classifications, and recommendations require explainable and interpretable AI. A lack of intelligibility may impede decisions as to how and where to delegate decisions to AI systems (Hermann, 2021). We propose a more systematic manner of depicting algorithms by providing a precise understanding of the kinds of ethical behaviors we want to introduce into the AI system. Further, operational guidelines are considered for AI algorithmic processes, as well as an understanding of how to employ ethical theories in decision-making algorithms. The six dominant ethical theories that are implanted into algorithmic modeling are ethical egoism (preference-based), deontology (rule-based), utilitarianism (principle-based), relativism, virtue ethics, and the ethics of care (stakeholder perspective) (Rodgers, 2009; Rodgers & Al Fayi, 2019; Rodgers & Gago, 2001).
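To make the algorithmic framing concrete, the six ethical positions can be sketched as selectable strategies in an HRM decision pipeline. The following Python fragment is a minimal illustration: the descriptive labels for relativism and virtue ethics, the keyword mapping, and the default rule are our own assumptions, not a specification from the TP model literature.

```python
from enum import Enum

class EthicalPosition(Enum):
    """Six dominant ethical theories embedded in TP-model algorithmic
    pathways (Rodgers, 2009; Rodgers & Gago, 2001)."""
    ETHICAL_EGOISM = "preference-based"
    DEONTOLOGY = "rule-based"
    UTILITARIANISM = "principle-based"
    RELATIVISM = "context-based"
    VIRTUE_ETHICS = "character-based"
    ETHICS_OF_CARE = "stakeholder-based"

def select_position(policy_keyword: str) -> EthicalPosition:
    """Hypothetical lookup: map an organization's declared policy
    emphasis to the ethical position its HRM algorithms should apply."""
    mapping = {
        "rules": EthicalPosition.DEONTOLOGY,
        "outcomes": EthicalPosition.UTILITARIANISM,
        "self-interest": EthicalPosition.ETHICAL_EGOISM,
        "character": EthicalPosition.VIRTUE_ETHICS,
        "stakeholders": EthicalPosition.ETHICS_OF_CARE,
    }
    # Default to relativism: the appropriate position depends on context.
    return mapping.get(policy_keyword, EthicalPosition.RELATIVISM)
```

Representing the positions as an explicit, enumerable type is one way an audit trail can record which ethical algorithm governed a given decision.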
Therefore, the problem statement relates to a throughput (TP) modeling process that provides six dominant ethical algorithms addressing HRM issues, considering whether a better understanding of ethical algorithms would assist organizations in problem-solving. As organizations increasingly depend on algorithm-based HRM decision-making to observe their employees, this movement is buttressed by the technology industry, which maintains that its decision-making apparatuses are efficient and objective, thereby downplaying their potential biases. Our study identifies six ethical theories that undergird the efficiency-driven logic of algorithm-based HRM decision-making and may assist HRM practitioners to more fully understand and support the balance between employees' personal integrity and workplace compliance (Leicht-Deobald et al., 2019).
Following the six distinct ethical approaches to HRM, two objectives can be achieved. First, mainstream and critical approaches
will be challenged to address ethical issues in HRM more critically (Leicht-Deobald et al., 2019). Second, a stalwart forward-looking
research agenda for the ethical analysis of HRM will be advanced (Kandathil & Joseph, 2019).
2. Literature review
The intent of an organization is influenced by the "environment" within the organization (natural, social, and economic), and the adoption of AI technology incorporating an organization's "environmental variables" within HRM algorithms allows the opportunity for post-decision evaluation via RCA.
Nonetheless, Selbst, Boyd, Friedler, Venkatasubramanian, and Vertesi (2019) claimed that the elements of solutionism, the ripple effect, formalism, portability, and framing should be addressed when designing an AI-based machine-learning solution.
1. Solutionism is the failure to recognize the possibility that the best solution to a problem may not involve technology.
2. The ripple effect represents the failure to understand how the incorporation of technology into a prevailing social system changes the behaviors and embedded values of the former system.
3. Formalism indicates the failure to account for the full meaning of social concepts, such as fairness, which can be procedural, contextual, and contestable, and cannot be captured through mathematical formalisms.
4. Portability implies the failure to comprehend how algorithmic solutions conceived for one social context may be misleading, erroneous, or otherwise cause harm when harnessed to a dissimilar context.
5. Framing relates to the failure to model the complete system within which a social criterion, such as fairness, will be enforced (Selbst et al., 2019).
Moreover, ignorance of these issues may cause technical involvements to become ineffective, inaccurate, and perilously imprudent
when they enter the societal context that surrounds decision-making systems. In this context, the current research focuses on three
pillars. First, by identifying six distinct ethical approaches (solutionism) to HRM, the issue of framing in the AI algorithmic model that
deduces formalism is addressed. Second, as social and technical approaches can be challenged to address ethical issues in HRM more
critically, the issue of portability will be examined (Leicht-Deobald et al., 2019). Third, a stalwart and predictable forward-looking
research agenda for the ethical analysis of HRM is offered (ripple effect) in view of the incorporation of AI technology (Kandathil &
Joseph, 2019).
In addition, AI is a technology that attempts to simulate human reasoning in computers and other types of machines (Rodgers, 2019; Rodgers & Al Fayi, 2019). Algorithms used in AI are unambiguous specifications for performing calculations, data processing, automated reasoning, and other tasks. This conceptual study employs AI algorithmic pathways derived from TP model theory (Rodgers, 1997; Rodgers, Alhendi, & Xie, 2019), which highlights six dominant algorithmic pathways by employing the four major concepts of (1) perception (i.e., framing of the problem), (2) information, (3) judgment (analysis of perception and information), and (4) decision choice.
The TP model is engaged in this study because it embraces several vital issues in organizational behavior (Foss & Rodgers, 2011), accounting and management (Rodgers & Housel, 1992), education (Rodgers, Simon, & Gabrielsson, 2017), ethics/corporate social responsibility (Rodgers et al., 2019; Rodgers, Söderbom, & Guiral, 2014), consumer behavior (Rodgers & Nguyen, 2022), and ethical dilemmas in auditing (Guiral, Rodgers, Ruiz, & Gonzalo-Angulo, 2015; Rodgers, Guiral, & Gonzalo, 2009). Moreover, the TP model provides a broad conceptual framework for examining the interrelated processes influencing the decision choices that affect organizations. This model's unique contribution is that it illuminates essential pathways in ethical decision-making (i.e., a parallel process instead of a serial process). Finally, the model integrates the concepts of perception (framing situational conditions), information, judgment (analysis of information/situational conditions), and decision choice as they apply to organizations.
As Westerman, Edwards, Edwards, Luo, and Spence (2020) emphasize, with the rapid increase in the use of AI systems, interdisciplinary insights are required to understand interactions among people. In this context, the major contribution of our theoretical work is to enhance AI systems by highlighting ethical algorithms that can equip system designers, computer analysts, and HR practitioners with improved systems and accountability for their decisions.
AI can be described as the theory and development of computer systems that can undertake assignments typically driven by algorithms (Rodgers, 2020). These algorithms are often supported by machine learning to add significant power to HRM concepts and practices. Since algorithmic design can induce bias in the decision-making process, our research infuses into the process an ethical decision-making platform that can guide HR. AI algorithms can encompass ethics, decision-making, and managerial knowledge to identify appropriate HRM strategies.
To address an evident gap in the literature, this study explores the impact of ethical dimensions on the selection of ethical strategies. Employing Rodgers' (1997) TP model, which highlights dominant algorithmic pathways for ethical decision-making processes (Rodgers & Gago, 2001), we aspire to enhance past literature by incorporating an AI algorithmic ethical decision-making model into HRM concepts and practices. This AI approach can provide valuable insights into how different pathways may influence the strategies
Table 1
Relationship of key HRM articles to the TPM algorithmic approach.

Bader & Kaiser, 2019. "Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence."
Purpose: Assess the role of AI in workplace decisions.
Findings: Humans are increasingly detached from decision-making spatially and temporally, with rational distancing and cognitive displacement; yet humans remain attached to decision-making due to infrastructural proximity, imposed engagement, and affective adhesion. An imbalance in the use of AI and human decision-making results in deferred decisions, workarounds, and data manipulation.
Relationship to the TPM AI algorithmic approach: Addresses ontological distance and convergence of humans and AI, to consider, in more detail, the role of epistemologies in algorithmic decision-making.

Bankins & Formosa, 2020. "When AI meets PC: Exploring the implications of workplace social robots and a human–robot psychological contract."
Purpose: Examine social robots as likely future psychological contract partners, outlining potential implications of human–robot psychological contracts, to offer pathways for future research.
Findings: Understanding why humans anthropomorphize technology suggests that more complex processes exist. Employees may hold multiple psychological contracts with multiple organizational constituents. Psychological contract research on employees' interaction with increasingly sophisticated AI technologies is underexplored. Research in "synthetic relationships" requires an interdisciplinary approach. Research pathways are proposed to develop a more structural measure of the human–robot contract and to examine how individual differences influence the amount of reciprocity in human–social bot contracts and the implications of accountability for each party.
Relationship to the TPM AI algorithmic approach: Though a human–robot interdependent relationship may be established, AI programming accountability is not addressed. The TP AI algorithmic approach provides a framework for HRM accountability of employees' psychological contracts with AI and contributes to the suggested interdisciplinary approach to HRM AI use.

Barro & Davenport, 2019. "People and machines: Partners in innovation."
Purpose: Impact of AI in changing behavior and driving innovation.
Findings: Advances in technology are undermined by insufficient attention to integration and human capital. Proposes that organizations develop a road map for future initiatives involving technology and human capital.
Relationship to the TPM AI algorithmic approach: A TPM pathway roadmap for HRM decision integration with AI.

Bekken, 2019. "The algorithmic governance of data driven-processing employment: Evidence-based management practices, artificial intelligence recruiting software, and automated hiring decisions."
Purpose: To better understand the relationship between evidence-based management practices, AI recruiting software, and automated hiring decisions.
Findings: Convergence of computer-based data science with the investigation of human behavior has defined the sphere of people-analytics. HRM departments are instrumental in detecting relevant external data and transferring significant external input into the organization.
Relationship to the TPM AI algorithmic approach: The TPM AI algorithm approach introduces a framework for accountability of the decision-making process resulting from HRM input of external data.

Buzko et al. "Artificial intelligence technologies in human resource development."
Purpose: To determine the effectiveness of training costs using cognitive-system AI analytics.
Findings: Transition from information processing to AI is more relevant for decision-making.
Relationship to the TPM AI algorithmic approach: Use of TPM is a foundation for interactions between AI and self-
employed by HR decision makers.
The TP model offers insights from cognitive and social psychology into a descriptive model of how human constituents make decisions within organizations. It encompasses four components: perception (P), information (I), judgment (J), and decision choice (D). In the first stage, both perception and information influence judgment; then, in the second stage, perception and judgment influence decision choice (Foss & Rodgers, 2011).
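The two-stage flow just described (perception and information shape judgment; perception and judgment shape decision choice) can be sketched in code. The equal weighting and the [0, 1] scoring below are hypothetical simplifications for illustration, not Rodgers' specification:

```python
from dataclasses import dataclass

@dataclass
class ThroughputModel:
    """Minimal sketch of the TP model's two-stage flow:
    stage 1: perception (P) and information (I) -> judgment (J);
    stage 2: perception (P) and judgment (J) -> decision choice (D).
    Inputs are scores in [0, 1]; equal weighting is illustrative only."""
    perception: float
    information: float

    def judgment(self) -> float:
        # Stage 1: judgment analyzes both perception and information.
        return (self.perception + self.information) / 2

    def decision_choice(self) -> float:
        # Stage 2: decision choice is influenced by perception and judgment.
        return (self.perception + self.judgment()) / 2
```

The point of the sketch is structural: perception feeds both stages in parallel, which is what distinguishes the TP model's pathways from a purely serial process.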
The use of AI technologies by HRM practitioners raises questions regarding the acceptance and use of rules and policies that are imposed by programmers remote from the HRM team. The TP model provides a framework for practitioners to frame and assess decisions, employing an audit trail and structure to frame the RCA of post-decision outcomes, including pay-gap analysis, the effectiveness of diversity and inclusion policies, and PMRS. The effective design and use of AI by HRM practitioners will help the PMRS to motivate employees to strive to attain an organization's goals, as ineffective systems can lead to a wide range of problems (Lillis, Malina, & Mundy, 2015). The combination of AI technology and an ethical framework, such as the TP model, offers HRM practitioners the opportunity to account for both the objective and subjective components of performance measurement. In other words, the use of TP ethical pathways aims to explain the place of a particular AI algorithm in the overall decision-making process and how such algorithms work in general.
The integration of AI into HRM can be depicted in four categories: (1) a system that thinks like a human, (2) one that thinks rationally, (3) one that acts like a human, and (4) one that acts rationally (George & Thomas, 2019). For example, the Turing test, originally termed the "imitation game" by Alan Turing in 1950, is a test of an AI machine's capability to exhibit intelligent behavior comparable to, or indistinguishable from, that of a person. If the evaluator cannot reliably indicate that the machine is different from the human, the machine is said to have passed the test (Moor, 2003).
The incorporation of computer-mediated communication (CMC) in lieu of human interaction in an organization's HRM practices may strategically be used to achieve an organization's objectives (Westerman et al., 2020). Smart chatbots, which are AI-based technologies that can support HRM decision-making (Rodgers, 2020), can assist the HRM team in relaying consistent organization-related information to employees, while simultaneously offering them a global view of the organization. This study proposes the adoption of an algorithmic pathway model to help determine the appropriate use and accountability of such technology.
Westerman et al. (2020, p. 398) refer to human–machine communication (HMC) regarding the balance of privacy and disclosure, whereby "people balance their privacy concerns with the need to self-disclose in interpersonal relationships." As HMC is utilized, HRM practitioners will need to have a full understanding of any benefits of HMC over human–human contact. An example of HMC analysis is where data is obtained after human contact (e.g., post-phone-call performance feedback via text), and although this is influenced by
Table 1 (continued)

Duggan et al., 2020. "Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM."
Purpose: Proposes a classification of gig work, with new lines of enquiry into employment relationships and HRM. Provides questions for future research.
Findings: Algorithms are undertaking roles that were traditionally the role of HRM professionals, raising questions about the HRM function, responsibility, ethical appropriateness, and accountability of algorithmic management.
Relationship to the TPM AI algorithmic approach: The TPM AI algorithmic framework supports accountability in HRM.

Leicht-Deobald et al., 2019. "The challenges of algorithm-based HR decision-making for personal integrity."
Purpose: To identify how algorithm-based HR decision-making challenges the balance between employee personal integrity and compliance.
Findings: Algorithm-based HR decision-making can be ethically problematic, may harm employees' personal integrity, and can marginalize human sense-making in decision-making processes. Emphasizes the importance of data literacy and ethical awareness, and recommends participatory design methods and regulatory regimes.
Relationship to the TPM AI algorithmic approach: Mainstream and critical approaches will be challenged to take into account ethical issues in HRM AI decisions.

Tambe, Cappelli, & Yakubovich, 2019. "Artificial intelligence in human resources management: Challenges and a path forward."
Purpose: To identify challenges in using data science technologies for HR tasks.
Findings: Covers the complexity of HR phenomena, constraints imposed by small data sets, accountability questions associated with fairness and ethical/legal constraints, and employee reactions to HRM decisions. How data analytics managers decide which HR questions to investigate articulates the need for causation in AI use and the ethical use of data.
Relationship to the TPM AI algorithmic approach: The TPM AI algorithmic approach provides a framework for causation analysis in HRM AI decision-making.
ethical positioning, feedback may be adjusted by perceptions of AI anonymity.
Moreover, AI technology can benefit HRM in the following transaction areas:
1. Time-pressured decisions: the cost of unhurried decisions is high (speed being essential).
2. Accuracy: the cost of wrong decision choices is minimized.
3. Allocation of resources: the data size is too large for manual analysis or traditional algorithms.
4. Decisions where prediction accuracy is more important than explanation or clarification.
5. Provision of information where regulatory requirements are slight (Rodgers, 2020).
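The five transaction areas above can be expressed as a simple routing rule that flags when delegating a decision to an AI system is likely to be beneficial. The field names, the threshold, and the any-one-criterion rule in this sketch are our own assumptions for illustration, not part of the source framework:

```python
from dataclasses import dataclass

@dataclass
class HRMDecisionContext:
    """Characteristics of an HRM decision task; field names are illustrative."""
    time_pressure: bool            # speed is essential
    error_cost_minimized: bool     # wrong choices carry a low, contained cost
    records: int                   # size of the data to be analyzed
    prediction_over_explanation: bool
    lightly_regulated: bool

def suits_ai_delegation(ctx: HRMDecisionContext, manual_limit: int = 10_000) -> bool:
    # A task is a candidate for AI handling when it matches at least one of
    # the five transaction areas (Rodgers, 2020); the threshold for "too
    # large for manual analysis" is a hypothetical default.
    return any([
        ctx.time_pressure,
        ctx.error_cost_minimized,
        ctx.records > manual_limit,
        ctx.prediction_over_explanation,
        ctx.lightly_regulated,
    ])
```

In practice an organization would weight these criteria against its chosen ethical position rather than apply a flat any-match rule; the sketch only shows how the criteria can be made explicit and auditable.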
According to their review of the state-of-the-art literature on the role of AI in business, Loureiro et al. (2021) observe that employment and decision-making constitute major areas in which AI's impact is prevalent. The adoption of AI is changing the strategic direction of the recruitment industry, impacting cost control and the volume of candidates for clients (Upadhyay & Khandelwal, 2018), and automating repetitive administrative tasks. Scalability of HRM processes can be achieved using AI technology to increase the number of recruitment candidates, not only dramatically reducing the timescale and cost of recruitment but also increasing the socioeconomic diversity of new hires (Wilson & Daugherty, 2018).
Moreover, AI can play a role in HRM strategy and the analysis of organizational policies, such as by supporting organizational compliance (see Table 1). With the appropriate algorithms, AI-enabled systems can support management to recruit potential employees, to give prompt responses to candidates' queries and doubts, and to manage the submission and processing of applications. Furthermore, with the development of AI-enabled applications, HR-related cost savings and individualized employee experiences can be achieved (Malik, Budhwar, Patel, & Srikanth, 2020), also promoting personalized talent management practices, which can increase job satisfaction and reduce turnover intentions (Malik, De Silva, Budhwar, & Srikanth, 2021; Nguyen & Malik, 2021a).
The interdisciplinary literature highlighted in Table 1 illustrates an emerging pattern of concern with regard to the incorporation into the workplace of AI that replicates human thought processes in a more efficient manner. Three themes emerge from the literature: employee detachment from decision-making, human understanding and perception of AI processes, and the impact of AI interpretation of datasets.
As AI algorithms in the work environment are increasingly adopted in a convenient and accessible modality (e.g., smartphone apps), roles that traditionally belonged to HRM professionals are undertaken by AI algorithms (Duggan et al., 2020). This raises questions regarding the HRM function, and the pattern of concerns with AI adoption reinforces the need for an HRM accountability framework for the implementation and use of AI in the workplace. App work in the gig economy has also challenged the conceptualization of work and employment status, with algorithms exercising control over app workers' performance and scheduling based on real-time and predictive analytics, disrupting conventional workplace decisions and relationships (Duggan et al., 2020; Minbaeva, 2021).
Decision-making in a more traditional contract-worker environment using AI algorithms reflects similar patterns of cognitive detachment from the decision-making process. Upskilling in such an environment requires a level of understanding of the appropriate detachment from, and attachment to, AI decisions, dependent on the organization's strategies, hierarchy, and accountability. Ontological distance from decision-making (Bader & Kaiser, 2019) suggests that organizations should consider, in more detail, the role of epistemologies in algorithmic decision-making. Questions regarding employee integrity interfaced with AI decisions reinforce the importance of identifying and understanding employees' perceptions of their ethical position in the decision-making process (Leicht-Deobald et al., 2019).
Research on understanding why humans anthropomorphize technology illustrates that complex processes exist in employees'
psychological contracts with the use of AI technologies in organizations (Bankins & Formosa, 2020) and suggests that the positive
impact of utilizing AI technology is undermined by a lack of attention given to the integration of AI and human capital (Barro &
Davenport, 2019). The interdependent relationships between employees and AI technology rely on an understanding of programmed
decision algorithms. We add further to this area of research through the development of an accountability framework for organizations
to more fully understand AI-generated decisions. Decision-making is dependent on relevant and applicable datasets, and this is an area
where data literacy, the size of the data sets, and the incorporation of external data have an impact on HRM engagement and
accountability in the investigation and development of employees' performance (Bekken, 2019; Tambe et al., 2019). Exponential increases in data generation require an AI solution to information processing, as employees have a limited capacity to process information (Nguyen & Malik, 2021a). The acceptance of AI software in replacing the human role in repetitive and time-consuming tasks (Upadhyay & Khandelwal, 2018) has now developed from being a component for increasing workplace productivity to a key factor in regional economic growth strategies toward upskilling the workforce for the AI-led transformation of the economy (European Commission, 2020). This research indicates the need for introducing a decision accountability framework whereby HRM practitioners have a pathway to consider and account for components of the organizational environment, employee engagement, and ethics when incorporating AI decision-making to assist in achieving organizational goals.
2.1. Machine learning, deep learning, and artificial neural networks
The emerging and developing technologies increasingly utilized by HRM practitioners highlight the need to critically understand
the processes involved, as the benefits for organizations and employees may be undermined without an understandable roadmap for
future integration (Barro & Davenport, 2019). The TP model offers a framework for both communicating and understanding AI-driven
HRM decisions. Machine learning and deep learning are the key processes within the overall AI technology used in HRM. The roots of
W. Rodgers et al.
Human Resource Management Review xxx (xxxx) xxx
machine learning and deep learning are embedded in pattern recognition and in the concept that algorithms can learn from recorded
data without being programmed to do so (Rodgers, 2020).
Specically, key cases of machine learning in an HRM context include the following:
1. Anomaly detection: Identify items, events, or observations that do not conform to an expected pattern or to other items in a pool of job applicants.
2. Background verication: Machine learning-powered predictive models can extract meaning and highlight issues based on
structured and unstructured data points from applicants' resumes.
3. Employee attrition: Find employees who are at high risk of attrition, enabling HR to proactively engage with and retain them.
4. Content personalization: Provide a more personalized employee experience by using predictive analytics to recommend career paths (Bekken, 2019) or professional development programs, or to optimize the workplace environment based on prior employee actions.
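A minimal sketch of the anomaly-detection use case above, assuming a simple z-score screen over an invented pool of applicant assessment scores (real systems would use far richer features and models):

```python
from statistics import mean, stdev

def flag_anomalies(scores, threshold=2.0):
    """Return values lying more than `threshold` sample standard
    deviations from the mean of `scores`."""
    mu = mean(scores)
    sigma = stdev(scores)
    if sigma == 0:
        return []
    return [s for s in scores if abs(s - mu) / sigma > threshold]

# Invented assessment scores for a pool of applicants; one clear outlier.
pool = [71, 68, 70, 73, 69, 72, 70, 71, 20]
print(flag_anomalies(pool))  # [20]
```

The same screen could, in principle, be pointed at attrition-risk signals or resume data points; the ethical question raised throughout this section is which flagged cases warrant human review rather than automated action.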
Deep learning is a branch of machine learning that trains a computer to learn from large amounts of data through a neural network architecture. It is a more advanced form of machine learning that breaks data down into layers of abstraction. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn independently by recognizing patterns across multiple neural network layers of processing (similar to neurons in the brain) (Rodgers, 2020).
Artificial neural networks (ANNs) are a machine-learning technique that forms systems of artificial "neurons" (numerically connected virtual synapses) with numerical weights that are tuned based on experience, adaptive to inputs, and capable of learning (Buzko et al., 2016). After sufficient training, deep-learning algorithms can begin to make predictions or interpretations of very complex data with minimal human oversight, such as in financial trading (Barro & Davenport, 2019).
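To make the "weights tuned based on experience" idea concrete, a minimal single-neuron sketch follows; the toy data, learning rate, and epoch count are illustrative assumptions, not an HRM system:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single artificial neuron with the perceptron rule:
    numerical weights are adjusted whenever the output disagrees
    with the target, i.e., tuned from experience."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy "screening" data: both binary signals must be present.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Deep learning stacks many such neurons in layers; the learning principle (error-driven weight adjustment) is the same, which is why the opacity concerns discussed later grow with network depth.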
Key use cases of deep learning in an HRM context include questions of ethics and data management for HRM practitioners,
requiring a framework for decisions and accountability, including:
1. Image and video recognition: Deep-learning algorithms outperform humans in object classification. Given videos and photos of thousands of applicants, deep-learning systems can identify and classify candidates based on objective data. Employing historical data, behavioral analytics is used by some organizations to predict how behavioral antecedents may result in fraudulent practice (Cockcroft & Russell, 2018). The ethical positions of organizations and decision makers, incorporated within a decision framework, can be considered by HRM practitioners in organizations that utilize real-time AI psychological profiling systems measuring near real-time non-verbal behavior.
2. Speech recognition: While understanding the human voice and its myriad accents is difficult for most machines, deep-learning algorithms can be designed to recognize and respond to human voice inputs. Virtual assistants use speech-recognition algorithms to
process human voice characteristics and respond accordingly. Speech analytics software can help organizations ensure compliance
with statutory regulations, identify potential fraud, and review previous communication to provide a pathway for future
communication. However, these same data may capture sensitive characteristics, such as illnesses or social, economic, or racial origin, requiring an ethical framework for HRM in the collection, processing, and storage of workforce analytics records.
3. Chatbots: Natural language processing (NLP) trains chatbots and similar systems to understand human language, tone, and context.
NLP will emerge as a crucial capability for AI systems, as organizations continue to automate HRM service delivery with chatbots.
4. Recommendation engines: Digital learning experiences often involve personalized learning recommendations related to skill levels and professional interests. Using Big Data and deep learning, learning experience platforms can identify learning pathways that benefit both individual employees and their employer. Moreover, AI provides managers with a list of training exercises they can show to their employees (Matsa & Gullamajji, 2019). Reliance on AI decision-making may lead HRM practitioners to replicate human–machine contact in PMRS human–human engagement (Bankins & Formosa, 2020), deferring responsibility for HRM decisions to the AI HRM decision outcome when communicating with candidates or employees, resulting in a negative overall experience of engagement with the organization.
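As a miniature of the chatbot routing described in point 3, a keyword-matching sketch follows; the intent names and keywords are invented for illustration, and production chatbots use trained NLP models rather than keyword lookup:

```python
# Hypothetical HRM intents and trigger keywords (invented for illustration).
INTENTS = {
    "vacation": {"vacation", "leave", "holiday", "pto"},
    "payroll": {"salary", "pay", "payslip", "payroll"},
    "onboarding": {"onboarding", "orientation", "badge", "laptop"},
}

def route(message):
    """Route an employee message to the intent with the most keyword hits,
    falling back to a human when nothing matches."""
    words = {w.strip("?.,!").lower() for w in message.split()}
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handoff_to_human"

print(route("How do I request vacation leave?"))  # vacation
print(route("Where is my payslip?"))              # payroll
print(route("I want to dispute a decision"))      # handoff_to_human
```

The explicit fallback branch mirrors the accountability point above: a routing system should make visible where automated handling ends and human responsibility begins.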
The use of AI technology in HRM can improve organizational decision-making tasks, including HRM practitioners' everyday activities, such as scheduling vacation requests, team training, and recruitment (Matsa & Gullamajji, 2019). AI technology helps HRM teams by integrating recurring, low-value tasks, allowing the team to focus on more strategic workforce tasks. For example, when an organization hires a new employee, it traditionally needs to provide the employee with an office, a computer, and so on. Instead of being tasked with these components, the organization can use algorithm-based apps, allowing for a more flexible work environment and granting the HRM team more time to focus on mentoring new employees.
AI technology can also be used to identify which employees are contemplating leaving an organization by analyzing their computer
activities (e.g., emails and Internet browsing) (Matsa & Gullamajji, 2019). The adoption of new AI software assists an organization in
the automation of administrative tasks, such as reducing the time spent scheduling interviews or pre-screening candidates. In addition, algorithm-based email responses have helped HRM teams with their time management. By utilizing employee surveys to elicit meaningful feedback on job satisfaction, organizations can better manage and evaluate employees' roles (O'Connor, 2020), and with AI-enhanced feedback mechanisms, they can encourage the retention of valuable employees by using analytics to devise employee incentives.
HRM practitioners have increased their access to algorithm-based apps to monitor and address workforce accountability (Bekken, 2019). The deep-learning algorithms used in such AI technology are inherently opaque, and while HRM practitioners need a
framework to account for decisions generated by AI systems, they also need to have a clear understanding of the quality of the data
input and processes utilized by such systems (Haenlein & Kaplan, 2019).
Before machine learning and deep learning, HRM managed data in a manual or semi-automated manner. Nonetheless, HRM has been relatively slow to come to the table with machine learning and AI compared with other fields, such as marketing, communications, or healthcare (e.g., Hermann, 2021). The value of machine learning and deep learning in HRM can now be utilized, especially
due to advances in algorithms that can forecast employee attrition. For example, deep-learning neural networks are edging toward more transparent reasoning in displaying why a particular result or conclusion was reached (Bader & Kaiser, 2019). In recruiting, machine learning and deep learning can be implemented to analyze blog and social media profiles and identify candidate attributes that may not appear on resumes. Recruiters can also utilize machine learning and deep learning to proactively find the right people for openings with software that searches the Internet to source prospects. Moreover, video-based interviews analyzed by machine learning and deep learning can assist in determining an interviewee's mood and whether the candidate is telling the truth. The preliminary stages of interviewing can become much simpler with the implementation of machine learning- and deep learning-driven chatbots on a firm's website to provide applicant onboarding (Matyunina, 2020).
Organizations and employees may have misunderstandings and contradictory perceptions regarding the intent and use of AI technology in HRM (Bankins & Formosa, 2020). Ontological distance from AI-driven HRM decisions (Bader & Kaiser, 2019) raises the question of the role of epistemologies in algorithm-based decisions. Selbst et al. (2019) explore further how machine learning can incorporate notions of fairness, justice, and due process, and suggest a shift from a solution-oriented approach to a process-oriented one in helping decision makers understand the technology they use. It is in this process-oriented approach that the TP model allows HR decision makers to more fully understand the ethical considerations incorporated in AI-based decision-making.
2.2. Throughput model theory
The question of the accountability of organizations for errors in the algorithms they use is a real issue, resulting in ethical, legal, and
philosophical challenges that need to be addressed (Haenlein & Kaplan, 2019). Rodgers (1997, 2006) developed a decision-making model that acknowledges that decision makers do not always act rationally. The model highlights discrepancies between how decision makers behave and the intent and ethical position of their organization, and it portrays the importance of developing accurate descriptions of the algorithmic pathways used by decision makers. It thus identifies systematic ways in which decision makers may depart from rationality, simultaneously allowing for an analysis of what can be expected if they follow a particular pathway (Foss & Rodgers, 2011).
The TP model is a cognitive model, incorporated in machine learning, that explains the roles played by perception, information, and judgment in human decision-making (Rodgers, 1997, 2006, 2020). Further, the model provides a broad conceptual framework for examining the interrelated and parallel processes that impact decisions affecting individuals and organizations. Parallel processing depicts a knowledge representation in which perception and information can separately influence judgment, and perception and judgment can independently influence decision choice (Rumelhart & McClelland, 1986; Rumelhart & Ortony, 1977). Moreover, this model depicts a multi-stage, information-processing function in which cognitive, economic, and social processes are used to generate a set of outcomes via algorithmic pathways. Finally, the concept of a decision choice is a composite of mental or neural pathway activities that recognize and structure decision situations and then evaluate preferences to produce judgments and choices (Einhorn & Hogarth, 1981; Kahneman & Tversky, 1979).
2.2.1. Components of the throughput model
The TP model, which is presented in Fig. 1, has four components: perception (P), information (I), judgment (J), and decision (D).
According to the model, perception and information lead to judgment in the first stage, and then perception and judgment lead to a
decision. The perception concept indicates that individuals frame situations according to their experience, training, and education.
Further, based on the strengths or weaknesses of these elements, decision makers may employ heuristics and biases in the perception
stage (Tversky & Kahneman, 1981). This model proposes that information and perception are interrelated, as shown in Fig. 1 by the
double-ended arrow, and that judgment is a joint product of information and perception.
The interdependent relationship between perception and information (i.e., P←→I) is comparable to Bayesian statistics (Bolstad & Curran, 2016), in that the information concept continuously revises the decision maker's perception; that is, previous information is continually captured within the information construct. In addition, decision makers' previous decisions are immersed in information sources. Hence, the P←→I correlation functions, in part, in a manner similar to a neural network (Rodgers, 2020).
A neural network is a class of computer software that simulates humans' biological neurons (Barnett & Cerf, 2017). In addition, neural networks can buttress machine learning in that they can emulate pattern recognition, or match similarities, in the P←→I bond as they learn to decipher a problem (Rodgers, 2020). This methodology can provide a machine-learning apparatus (supervised or unsupervised) for HRM. The AI machine-learning characteristic of the TP model provides the algorithmic pathways with the capacity to automatically learn and improve from experience (i.e., P←→I) without being explicitly programmed.
Likewise, information is subjectively processed by humans through the five senses: vision, hearing, touch, taste, and smell. Nonetheless, through education, training, and experience (i.e., perception), we make sense of data and arrive at a societal consensus regarding the reliability and relevance of information as it pertains to our understanding. Therefore, the first part of the model (i.e., perception←→information) suggests that perception is updated by external information. This process is similar to Bayes' Theorem, which holds that our perception is constantly updated by incoming information. Moreover, Bayes' Theorem is at the heart of
neural networks that are utilized for AI applications of deep-learning tools (Cui & Wong, 2006). Furthermore, there is no need for a pathway from decision choice to information, since exogenous information is continually updated, which influences perception. From a statistical perspective, a loop from decision choice to information is problematic, since it will produce multiple solutions unless another variable influencing the decision-choice concept is introduced (see Rodgers & Housel, 1992, for a presentation and analysis of a non-recursive model). In addition, from a cognitive perspective, the P←→I neural network also suggests that perception influences information and that information is stored in memory (i.e., judgment) for further processing and encoding to be acted upon by compensatory or non-compensatory routines.
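The Bayesian character of the P←→I link can be illustrated numerically; the probabilities below are invented purely for illustration:

```python
def bayes_update(prior, likelihood, false_alarm):
    """One application of Bayes' Theorem: revise a belief after evidence.

    prior:       current perception, P(hypothesis)
    likelihood:  P(evidence | hypothesis)
    false_alarm: P(evidence | not hypothesis)
    """
    numerator = likelihood * prior
    evidence = numerator + false_alarm * (1 - prior)
    return numerator / evidence

# Hypothetical scenario: a manager's perception that an employee is a
# flight risk (prior 0.10) is revised as successive risk signals arrive.
belief = 0.10
for _ in range(3):
    belief = bayes_update(belief, likelihood=0.6, false_alarm=0.2)
print(round(belief, 3))  # belief rises toward 0.75 after three signals
```

Each pass plays the role the text assigns to the P←→I loop: incoming information updates perception, and the updated perception becomes the prior for the next piece of information.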
Accordingly, we conceptualize the operationalization of perception (P), information (I), judgment (J), and decision choice (D) in an HRM context. These components are also summarized in Table 2. Depending upon an individual's perspective, certain pathways may be weighted more heavily than, or dominate, other pathways. HRM managers can benefit considerably from using this model by observing which other pathways may need to be explored in order to examine and modify their decisions and determine the best outcome for their organization and employees. This fresh approach enables us to complement several ethical positions with distinctive decision-making paths leading to a decision. Nevertheless, as pointed out by Selbst et al. (2019), the unethical application of AI technology risks amplifying existing biases and inequalities within organizations. Hence, it is imperative that HRM leaders have an ethical framework with which to assess the use of new AI technologies and to embed ethical thinking in their evaluations.
Furthermore, ethical considerations as part of AI can benefit everyone in society and in organizations, not just those who control it. To circumvent unethical design problems, individuals from a wide range of disciplines can contribute to the social, societal, cultural, financial, and economic contexts in which AI operates and inject ethical thinking into every stage of the process, from design to deployment.
The TP model portrays the most influential algorithmic pathways employed to arrive at a decision choice. That is, what we hold as constructive enters into our perception and can, in turn, influence our judgment and decision choice. Furthermore, judgments process information sources, analyzing what is suitable as information, what evidence we frame (i.e., perception), and which information is relevant to answering questions, influenced by what we hold as valuable (Rodgers & Gago, 2001). Decision-making in the TP model is defined here as a multi-stage, information-processing function in which cognitive, economic, political, and social components influence data to generate a set of outcomes.
Perception involves the process by which individuals frame their problem-solving set or view of the world. Depending upon the task at hand, this framing involves individuals' expertise in using pre-formatted knowledge to direct and guide their search for, and assessment of, the incoming information necessary for problem-solving or decision-making. Rodgers (1997) argued that perception represents a person's expertise in classifying and categorizing information. This information is converted to knowledge once it is processed in the minds of individuals, and knowledge is consequently transferred as information once it is articulated and presented in the form of text, narratives, graphics, or other symbolic forms. Information includes the set of technical, managerial, economic, political, social, and environmental information available to a decision maker for problem-solving. The judgment stage contains the process by which individuals implement and analyze incoming information and the influences from their perception. From both these sources, rules are implemented to weigh, sort, and classify the knowledge and information for decision-making. Finally, in the decision-choice stage, an action is taken or not taken.
The stages of perception, information, judgment, and choice are always present in decision-making; however, their predominance
P = perception, I = information, J = judgment, and D = decision choice.
Fig. 1. Decision process diagram.
Table 2
The throughput model's algorithmic pathways.

Primary Ethical Pathways
(1) Preference-based (Ethical egoism): P → D
(2) Rule-based (Deontology): P → J → D
(3) Principles-based (Utilitarianism): I → J → D

Secondary Ethical Pathways
(4) Relativism-based: I → P → D
(5) Virtue ethics-based: P → I → J → D
(6) Ethics of care-based (stakeholders): I → P → J → D

For further discussion regarding the three secondary pathways (relativism, virtue ethics, and ethics of care), please see Rodgers et al. (2009).
or ordering influences decision-making. There are differences of opinion about how many stages, and subroutines within stages, exist and about the order in which the stages occur; however, the concepts in the model proposed here appear with some consistency in the literature (Hogarth, 1987). The model represents a parsimonious way to capture major concepts about organizations, yet it provides a more interpretative cognitive schema, in that basic information-processing modeling normally involves serial processing, whereas we take this approach one step further by assuming parallel processing. That is, the complete TP model posits that there are many (oftentimes simultaneous) pathways leading to a decision. Furthermore, this decision-making model has been shown to be useful in conceptualizing, in tandem, a number of different issues that are important to organizations (Foss & Rodgers, 2011; Rodgers, 1997). It is particularly relevant for clarifying critical pathways influenced by ethical positions (Rodgers & Gago, 2001).
2.2.2. The six algorithmic pathways of the throughput model
There are six algorithmic pathways, influenced by the ethical positions, that a decision maker can use. Table 2 illustrates the six algorithmic pathways: ethical egoism/preference-based (P→D), deontology/rule-based (P→J→D), utilitarianism/principles-based (I→J→D), relativism (I→P→D), virtue ethics (P→I→J→D), and ethics of care (stakeholder perspective; I→P→J→D) (Rodgers, 2009; Rodgers & Gago, 2001). These algorithmic pathways can assist in machine learning by providing computer apparatuses with the ability to learn ethical foundations without specific programming.
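A sketch of how these pathways might be encoded for use in software; the stage labels follow Table 2, but the encoding itself is our illustration rather than part of the TP model's formal specification:

```python
# The six pathways of Table 2, encoded as ordered stage sequences:
# P = perception, I = information, J = judgment, D = decision choice.
PATHWAYS = {
    "ethical_egoism": ["P", "D"],
    "deontology": ["P", "J", "D"],
    "utilitarianism": ["I", "J", "D"],
    "relativism": ["I", "P", "D"],
    "virtue_ethics": ["P", "I", "J", "D"],
    "ethics_of_care": ["I", "P", "J", "D"],
}

def trace(position):
    """Return the processing order a given ethical position implies."""
    return " -> ".join(PATHWAYS[position])

print(trace("utilitarianism"))  # I -> J -> D
print(trace("ethics_of_care"))  # I -> P -> J -> D
```

Representing the pathways as data makes the ethical position an explicit, auditable parameter of an AI pipeline, which is precisely the kind of accountability this section argues for.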
The six dominant ethical algorithmic pathways that influence a decision choice (Rodgers, 2009) reflect the problem statement in the introduction, whereby the modeling process may help arrest problems of transmitting and receiving HRM knowledge and information that arise when organizations seek different and comparative ethical solutions to a problem.
2.2.3. The six critical algorithmic pathways and related ethical theories
The TP model offers insights from social psychology into a descriptive model of how HRM managers make decisions. More specifically, the TP model helps identify and explain the impact of perceptions of the HRM situation (e.g., environmental contextual features, employer organizational characteristics) on the HRM process. The multiple objectives of organizations can be incorporated into AI decision-making. An analysis of the decision intent and anticipated outcome will help HRM practitioners understand the pathways to an organizational decision. Understanding the ethical pathways will give HRM practitioners insight into whether the organization's goals and objectives are reflected in evaluating or implementing AI decision-making systems. Among the four components outlined above (I, P, J, D), this model highlights six critical pathways in the decision-making process, eliminating rival alternative hypotheses. These pathways, which have been associated with six theories of ethical behavior (Rodgers et al., 2009; Rodgers & Gago, 2001), are as follows.
(1) The P→D algorithm encapsulates ethical egoism: In this algorithmic pathway, an action is considered ethically correct when it maximizes one's self-interest (Rodgers et al., 2009; Rodgers & Gago, 2001). According to this reasoning, the decision is based on the perceived circumstance, downplaying any relevant information and judgment. Thus, the decision maker's perception directly influences the decision. Generally, ethical egoism (or psychological egoism in psychology, or the utility-based position in economics/finance) suggests that P→D is the appropriate algorithmic pathway, since perception encapsulates one's wants, needs, and desires. These wants, needs, and desires are shaped by experience, training, and education. It is not that information does not exist, but that it is downplayed due to the dominance of perception over information (see Rodgers & Gago, 2001, 2003, 2004). For example, based on their own experience, parents may tell their children that they cannot play after school until they have finished their homework; here, the parent believes that s/he knows what is best for the children. There are many situations in which information is either fragmented or incomplete, or in which there is too much noise or disturbance in the information channel; hence, one's perception dominates.
(2) The P→J→D algorithm portrays the deontology position: In this algorithmic pathway, the decision maker is committed to independent moral rules or duties and, thus, equal respect is devoted to all individuals. The focus is on taking the right actions rather than on the consequences of the actions. In this pathway, rules and laws are framed (P), and a judgment (J) is made before a decision is reached (D). This position portrays the deontological perspective (see Guiral et al., 2015; Guiral, Rodgers, Ruiz, & Gonzalo-Angulo, 2010; Rodgers & Gago, 2001). In most cases, rules, procedures, guidelines, and laws are encapsulated in one's perception to be analyzed (i.e., judgment) before a decision choice is made. For example, when people drive, they do not have a set of rules written down before them; rather, the duality of physically controlling the mechanics of a vehicle and processing the observation of, and compliance with, traffic rules is embedded in people's perceptions whilst driving (and is not a single process of reading instructions and rules).
(3) The I→J→D algorithm denotes the utilitarian position: This algorithmic pathway emphasizes the maximization of good and the minimization of harm to society. Therefore, available information (I) is used in an objective manner throughout the analysis (J) before a decision is made (D). The decision maker's perception (P) is not considered. Guiral et al. (2010, 2015) advocated that I→J→D reflects the utilitarian position, which is concerned with consequences, as well as with the greatest good for the greatest number of people. Further, Rodgers and Gago (2001, p. 362) noted that utilitarianism is generally traced to Jeremy Bentham (1748–1832), who sought an objective basis for making value judgments that would provide a common and publicly acceptable norm for determining social policy and social legislation (see Bentham, 1962). This position is committed to the maximization of good and the minimization of harm and evil. Furthermore, this theory advocates that society should always produce the greatest possible balance of positive value, or the minimum balance of negative value, for all individuals affected. Therefore, the utilitarian principle infers that the quantities of benefit produced by an action can be measured and added, and the quantities of harm can be measured and subtracted; this determines which action produces the greatest total benefits or the lowest total costs. Finally, this process is considered backward-chaining from a consequentialist viewpoint, as opposed to the forward-chaining process indicated by a rule-based, or non-consequentialist, perspective.
For example, the preference-based ethical pathway (P→D) shows only the direct impact of perception on the decision. In addition, the rule-based ethical pathway (P→J→D) contains the direct impact of perception on judgment and the direct impact of judgment on the decision; together, these two relationships represent the indirect impact of perception on the decision through judgment. The same is true of the principles-based ethical pathway (I→J→D), which represents the indirect impact of information on the decision through the judgment stage (Rodgers & Al Fayi, 2019). The three secondary algorithmic pathways build upon the primary ones: the preference-based algorithm (P→D) advances to the relativism-based pathway (I→P→D), the principles-based algorithm (I→J→D) extends to the virtue ethics-based pathway (P→I→J→D), and the rule-based algorithm evolves into the ethics of care-based pathway (I→P→J→D).
(4) The I→P→D algorithm indicates the relativism position: This algorithmic pathway considers ethical standards to be based on the decision makers themselves or the people around them. In this light, ethical beliefs are not absolute but depend on circumstances. Therefore, available information (I) influences individual perception (P) before a decision is reached (D). Rodgers and Gago (2001, p. 361) argued that I→P→D highlights the relativist perspective, which assumes that decision makers use themselves or the people surrounding them as their foundation for defining ethical standards. They observe the dealings of members of some applicable group and endeavor to ascertain the group consensus on a given behavior. Relativism acknowledges that people live in a society in which they hold different views and positions with which to validate decisions as right or wrong. Therefore, ethical relativists hold that all ethical beliefs and values are relative to one's own culture, feelings, or religion.
(5) The P→I→J→D algorithm describes the virtue ethics position: This algorithmic pathway does not consider what makes a good action, but instead focuses on how a good person makes a decision choice. Perception (P) thus influences the selection of information (I), ensuring that the selected information is consistent with being a good person. This leads to the judgment stage (J), en route to a decision (D).
(6) The I→P→J→D algorithm depicts the ethics of care position: This algorithmic pathway assumes that people are willing to listen to distinct and previously unacknowledged perspectives. Thus, all relevant information (I) is considered, and it influences perception (P). The resulting perceptions are analyzed in a judgment (J), en route to a decision (D). Rodgers and Gago (2001, p. 364) maintained that I→P→J→D represents the ethics of care philosophy, which focuses on a set of character traits that are deeply valued in close personal relationships, such as sympathy, compassion, fidelity, love, friendship, and the like. This algorithm represents the last possible configuration of individuals' cognitive processes. In this sequence, an individual studies the given information, frames the problem, and then proceeds to analyze the problem before rendering a decision. Information guides the individual's perceptual perspective. That is, the ethics of care philosophy incorporates a willingness to listen to distinct and previously ignored or unfamiliar viewpoints. The authors further stated (p. 364) that, "In the I→P→J→D pathway, information dominates the perception in an 'open-minded' individual. The judgments used to decide on will be the result of the perceptions that the individual produced as a result of the information. The 'altruism' is modeled in this model by the information available to decide on."
In summary, HRM practitioners can be assisted by understanding that the AI tools utilized in programming parameters may incorporate biases relating to, for example, gender, age, race, or school attended (Upadhyay & Khandelwal, 2018). Biases in an HRM AI algorithmic system can be depicted in terms of two major progenies: type 1 and type 2 errors (Rodgers, 2020). Type 1 and type 2 errors may occur due to the design and programming bias of the AI system (observer, instrument, recall, etc.). Hence, in practical terms, this research has identified applicable ethical algorithmic pathways to implement in order to address type 1 and type 2 errors. Type 1 errors may fuel inefficiencies and increase transaction costs, which can cause inadequate algorithms, as depicted by an
Table 3
Ethical positions related to biases: Type 1 and 2 errors.

Ethical Egoism
Type 1 error/false positive: Overly rigid HRM AI algorithm presumption, thereby DENYING certain personnel opportunities.
Type 2 error/false negative: Overly accommodating HRM AI algorithm presumption; hence, ALLOWING inappropriate people to gain favors or opportunities.

Deontology
Type 1 error/false positive: HRM AI algorithms' guidelines and procedures are very restrictive. Result: PREVENT promotions, hiring, etc., of certain classes of employees.
Type 2 error/false negative: HRM AI algorithms' guidelines and procedures are too lax. Result: Wrong individuals RECEIVE benefits.

Utilitarianism
Type 1 error/false positive: Appropriate people in the same social networks (i.e., sharing some common experience, tradition, education, customs, culture, religion, etc.); others are NOT allowed to share benefits and opportunities, as suggested from the HRM AI algorithms.
Type 2 error/false negative: The wrong people in the same social networks (i.e., sharing some common experience, tradition, education, customs, culture, religion, etc.) are ALLOWED to share benefits and opportunities as suggested from the HRM AI algorithms.

Relativism
Type 1 error/false positive: Employees DENIED promotion, hiring, etc. opportunities due to overly critical use of supporting information sources (e.g., social media) for reliability and relevance implemented in HRM AI algorithms.
Type 2 error/false negative: Employees ENDOWED with opportunities due to weak supporting and relevant information implemented in HRM AI algorithms.

Virtue Ethics
Type 1 error/false positive: Personnel DENIED promotion, hiring, etc. opportunities due to overly critical formal structures, judging individual attributes executed in HRM AI algorithms.
Type 2 error/false negative: Personnel ENDOWED with opportunities due to weak formal structures, judging individual attributes executed in HRM AI algorithms.

Ethics of Care
Type 1 error/false positive: Workforce DENIED promotion, hiring, etc. opportunities due to the overly critical evaluation of relevant and reliable information about others to understand them and accurately predict their likely behavior via HRM AI algorithms.
Type 2 error/false negative: Workforce ENDOWED with opportunities due to a weak evaluation of relevant and reliable information about others to understand them and accurately predict their likely behavior via HRM AI algorithms.
W. Rodgers et al.
Human Resource Management Review xxx (xxxx) xxx
AI system. Likewise, the insertion of a type 2 error may allow inappropriate workforce individuals to receive opportunities (see Table 3). In selecting a particular HRM algorithmic pathway, organizations can utilize a cost–benefit analysis to control for type 1 and type 2 errors. Features, such as the size of the company, and budgetary and regulatory constraints, will factor into the decision-making processes in employing the appropriate AI algorithmic pathway for HRM design.
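The cost–benefit control described above can be sketched as choosing a screening threshold that minimizes the expected cost of the two error types. The sketch below is illustrative only; all scores, labels, and cost figures are hypothetical assumptions, not data from this article:

```python
# Illustrative sketch: pick a threshold for a hypothetical HRM AI screening
# score so that the combined cost of type 1 errors (false positives: suitable
# people wrongly DENIED) and type 2 errors (false negatives: unsuitable people
# wrongly ENDOWED) is minimized. All numbers below are made up.

def total_error_cost(threshold, candidates, cost_type1, cost_type2):
    """candidates: list of (score, qualified) pairs; reject if score < threshold."""
    cost = 0.0
    for score, qualified in candidates:
        rejected = score < threshold
        if rejected and qualified:            # type 1: deny an appropriate person
            cost += cost_type1
        elif not rejected and not qualified:  # type 2: endow an inappropriate person
            cost += cost_type2
    return cost

candidates = [(0.9, True), (0.7, True), (0.6, False), (0.4, True), (0.2, False)]
# Here denying a suitable person is assumed three times as costly as a wrong hire.
best = min((total_error_cost(t, candidates, cost_type1=3.0, cost_type2=1.0), t)
           for t in [0.1, 0.3, 0.5, 0.8])
print(best)  # (cost, threshold) with the lowest combined error cost
```

Changing the relative costs shifts the chosen threshold, which is the mechanism by which company size and budgetary or regulatory constraints would enter the pathway-selection decision.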
2.2.4. Incorporation of the throughput model in AI system analysis for HRM practitioners
Forming decisions is the process of assessing how a particular action was initiated, with an evaluation of anticipated results versus
measured results. Evaluating decision results is framed by the intent of the decision maker and is a measured variable that can be
dened by the decision maker's priorities (e.g., prot or performance). Individuals have a legitimate interest in knowing who to hold
accountable for AI-based decision-making (Hermann, 2021, p.10), which requires an understanding of the hierarchy levels of
decision-making in organizations. The organizational culture may direct the ethical positionality of decision-making and influence
accountability in the use of AI decision-making. Understanding the context and characteristics of the organization's decision makers
(Prikshat et al., 2021) will help HRM practitioners to assess the interface of ethics in AI-generated HRM decisions. Incorporation of the
TP model in AI system analysis with variables weighted in qualitative and quantitative data can provide an opportunity for HRM
practitioners to account for decision outcomes.
Due to the opacity of algorithms in AI systems, HRM practitioners need to mediate between both low and high levels of human involvement in decision-making (Bader & Kaiser, 2019, p. 656) to fully account for HRM decisions. Processes organizing the interactions between people and their organizations are integral when considering AI (Hermann, 2021), and we develop research to further address this by proposing the use of the TP model to examine the organizational level of AI decision-making and the organizational environment generating decision-making pathways.
This level of human involvement is dependent on the decision maker's understanding of their objectives and anticipated outcomes, framed by the organizational decision hierarchy, and originating from the social, economic, and natural environment of the organization and the decision maker. The decision-making process in organizations can be split into a hierarchy of three layers:
1. Strategic (to achieve an overall objective).
2. Tactical (to modify to align with changes in the environment).
3. Operational (programmed decisions to control activity).
Within an AI deep neural network, decision outcomes are continually fed back to the organizational environment and decision pathway in a continuous loop of reinforcement learning, with the flexibility of decision processing being dependent on experience from previous decisions.
Adoption of the TP model ethical pathways in AI decision processing is represented by the neural network diagram illustrated in Fig. 2, with the 'learning' component illustrated by the arrow direction.
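The feedback loop described above can be sketched minimally. The toy network below has one hidden layer, as in Fig. 2, and a simple reinforcement-style weight update standing in for the outcome feedback; it is an illustrative assumption, not the article's implementation:

```python
# Minimal single-hidden-layer network (hypothetical sketch of Fig. 2): inputs
# representing perception and influencing HRM decision pathways pass through
# one hidden layer to an output score, and each observed decision outcome is
# fed back as a reward that nudges the output weights.
import math
import random

random.seed(0)
N_IN, N_HID = 4, 3
w_in = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]

def forward(x):
    """Return (decision score in (0, 1), hidden activations)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_in]
    return 1 / (1 + math.exp(-sum(w * h for w, h in zip(w_out, hidden)))), hidden

def reinforce(x, reward, lr=0.1):
    """Feed an observed outcome back: push the score toward rewarded decisions."""
    out, hidden = forward(x)
    delta = lr * (reward - out)   # simple reinforcement-style update signal
    for j in range(N_HID):
        w_out[j] += delta * hidden[j]

x = [0.2, 0.8, 0.5, 0.1]          # hypothetical pathway/perception features
before, _ = forward(x)
for _ in range(50):
    reinforce(x, reward=1.0)      # environment repeatedly rewards this decision
after, _ = forward(x)
print(before < after)             # repeated positive feedback raises the score
```

The point of the sketch is the loop, not the arithmetic: outcomes re-enter the network as training signal, so the flexibility of future decision processing depends on the accumulated experience of previous decisions.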
An AI algorithmic ethics framework must be the foundation on which any AI technology is fashioned and implemented. Nonetheless, even in its presence, it may be a while before bias can be entirely addressed in the execution of AI-powered solutions for HRM. In addition, such a framework can assist organizations in creating AI technologies to minimize, if not eliminate, bias in their algorithms. Combined with human intervention, AI applications can spearhead unbiased recruitment, merits, promotions, and quality hiring. Table 4 provides an overlay of strengths and weaknesses when applying different AI ethical algorithmic pathways in an
Fig. 2. Artificial single-layer neural network for HRM decision algorithms (perception and influencing HRM decision pathway inputs feed Hidden Layer 1, which feeds the output).
Source: Adapted from Rodgers (2022).
organization's HRM system.
2.2.5. Ethical framework of the throughput model within the decision dashboard
HRM practitioners can evaluate the extent of their attachment and detachment from AI-generated decisions (Bader & Kaiser, 2019) by giving weight to the variables influencing a decision. Mechanisms depicting the flow of data and the use of AI techniques (Prikshat et al., 2021) help HRM practitioners not only in terms of intelligibility, but also in terms of communicating accountability. The 'awareness and evaluation' stage of the HRM framework proposed by Prikshat et al. (2021) can initiate an HRM analysis of intentions with anticipated outcomes utilizing AI systems prior to any commitment regarding adoption. Using the TP model, post-decision outcomes can be analyzed to determine which ethical pathways are to be taken, and whether these pathways support the delivery of the organization's goals.
HRM practitioners can help in interpreting and contextualizing different HR activities (Prikshat et al., 2021). Evaluating decision-making by employees as a consequence of algorithm-driven decision-making, and contextualizing the environmental antecedents (social, economic, physical) within the organizational decision hierarchy, will help HRM practitioners assist management in interpreting decision outcomes. We propose that organizations should first identify their ethical position on specific HRM activities (with a focus on agility and scenario planning), and then analyze whether this is reflected in AI-driven HRM activities. Reflecting on accountability, the decision level within the organization, and the organization's goals and objectives will guide HRM practitioners as to whether a specific decision path is to be taken or not, raising questions as to the decision-making impact of AI-driven HRM decisions on the organization.
Recent research suggests that "practical guidance on how to address incorporating principles into AI practice is required" (Hermann, 2021). The development of the TP model within a framework that incorporates influencing components, such as the environment, time, or the organizational level impacting decision choice, will help HRM practitioners to more fully account for the incorporation of ethical principles into AI decision outcomes.
The positioning of the organization and the decision maker when determining the intention and anticipation of an HRM decision outcome is framed by components that are both static and fluid, requiring an understanding of relationships with other disciplines. Each of the economic, social, and physical environments of the organization or decision maker may be subject to changes, impacting the quality or perception of the information available. Decision intent is framed by the anticipation of potential decision outcomes. In addition, the intent requiring a decision is influenced by the environment of the organization and the decision maker (social, economic, physical environment). The quality and weight given to the data of this environmental variable trigger the perception of, and information available to, the decision maker. As discussed earlier, the decision maker's decision pathway is a result of their priorities and ethical position; however, the decision maker's assessment of their decision pathway is also impacted by a time-based variable; that is, 'time pressure' and 'time restoration' impact decision choice. Deadlines for a decision may influence the decision maker to follow a particular ethical pathway; however, a restoration in time may allow the decision maker to reflect on and adopt an alternative pathway dependent on the decision to be made.
Table 4
Six Ethical AI Algorithmic Pathways as the Most Dominant and Influential for Decision-Making Governed by Particular Ethical Perspectives (Adapted from Rodgers, 2009).

Ethical Egoism
Aspects of ethical position: Wants, needs, and desires.
Aim: Maximizing one's utility or advantage, which is inserted into AI algorithms.
Positive example: Improving one's AI HRM algorithms by expressing expertise in ethical matters.
Negative example: Improving one's grade by obtaining examination questions before taking the exam.

Deontology
Aspects of ethical position: Laws, rights, procedures, guidelines, etc.
Aim: Ethical behavior is derived based on duty or responsibility, not on the consequences of resultant actions.
Positive example: AI algorithms are built on a framework of clarity and the ability to turn ethical philosophical questions into mathematical data that can tell right from wrong.
Negative example: Employees living in different global communities may not adhere to a particular rule-based approach.

Utilitarianism
Aspects of ethical position: Values, attitudes, and beliefs.
Aim: Consequences aimed at results are more important than procedures or duty.
Positive example: Morality is viewed as that which produces the greatest happiness for the greatest number.
Negative example: HRM policies induced by AI algorithms may cause harm to subgroups of employees.

Relativism
Aspects of ethical position: Wants, needs, and desires are modified by the environmental conditions.
Aim: Ethical consideration is relative to the norms of one's culture or situation.
Positive example: AI algorithms are robust to changes based on the situation or geographical area in the world.
Negative example: Changing AI algorithmic systems may cause problems in consistency and comparability across situations over time.

Virtue Ethics
Aspects of ethical position: Values, attitudes, and beliefs are based upon one's character.
Aim: Consequences or outcomes are based upon one's reputation or character.
Positive example: One's values, attitudes, and beliefs override the group function (i.e., IJD) to structure AI ethical algorithms.
Negative example: Values, attitudes, and beliefs may be biased due to culture, religion, or customs influencing AI algorithms.

Ethics of Care
Aspects of ethical position: Laws, rights, procedures, guidelines, etc. are a function of the stakeholders' interests.
Aim: Ethical behavior is derived based on duty or responsibility as it relates to stakeholders.
Positive example: Creating a diversified expert advisory board to guide ethical AI from multiple stakeholders, respecting differences of opinion.
Negative example: Stakeholders are not diversified regarding HRM policies.
Fig. 3. Decision dashboard diagram.
Attributing weight to data in each variable in the journey to a decision can be compared to operating the dashboard controls of a car while reacting to constantly changing information. As each variable on the 'dashboard' is adjusted, the journey to the decision outcome can be measured and assessed. The decision hierarchy frames the level of decision using the dashboard, responding to changes resulting from the environmental variable (information and perception). For instance:
Strategic decision = travel from A to B.
Tactical decision = truck.
Operational decision = operating brakes/steering, etc.
A simple HRM example is illustrated below:
Strategic decision: senior HRM allocation of financial resources
Tactical decision: HRM investment in appropriate communication software
Operational decision: HRM engagement with workforce operation of software
Each of these decisions is impacted by the environmental variable and time pressure or time restoration, and these will impact the information and perception catalysts for the ethical pathway to a decision choice. Specifically:
Physical environment (working in the office 'v' working from home)
Social environment (group work 'v' individual processing)
Economic environment (economic downturn 'v' niche service demand)
As AI is incorporated into the decision process, the journey from the weighting of data through to the decision choice (via evolutionary algorithms, and from machine learning through to deep learning in a deep neural network) requires a framework in order for HRM practitioners to more fully understand and account for the ethical pathway to a decision choice. The evolution of the TP model within a framework 'dashboard' to account for AI decision-making is illustrated in Fig. 3, with the sequential process explained in Table 5.
Organizational leaders and HRM practitioners who are considering adopting AI, or who are in the process of evaluating or implementing existing solutions, can follow the sequential process in Table 5 and Fig. 3 as an analytical guideline for evaluating ethical incorporation into algorithmic decision-making. By giving weight to variables within the ethical framework of the TP model within the Decision Dashboard, each variable on the 'dashboard' is adjusted on the journey to the decision outcome, which can be measured and assessed. These variables will have both static and fluid positions, impacting the decision process and accountability. However, by using root-cause analysis (RCA) with the Decision Dashboard, HRM practitioners can assess the weight given to the sequence of components resulting in a particular AI decision outcome, and also evaluate the extent of their attachment and detachment from these decisions.
The adoption of some AI technologies may challenge HRM in developing talent and career paths while achieving an organization's goals and objectives. It is reasonable for one of the organization's objectives to be to maintain cashflow and productivity by incorporating new AI technology; however, the use of the technology may reduce the workforce (World Economic Forum, 2018) and prohibit career development options (e.g., a company accountant replaced with a cloud-based accounting subscription service). Recent research by Nguyen and Malik (2021a, p. 21) reports how "reliability, flexibility and timeliness are the three dimensions of an AI system that frequently need to be checked to support the knowledge sharing process among employees." The time component affecting decision-making is reflected in the TP model's Decision Dashboard, allowing HRM practitioners to help determine if time is a factor affecting whether one decision pathway is chosen over another.
An analysis of organizational decisions using the TP ethical algorithms within the Decision Dashboard provides the opportunity for
HRM to analyze the ethical position and impact at each organizational level and whether time or environmental components affect the
decision choice and outcome.
An example of an HRM-based decision focused on staff training and development is that of an architectural firm required to use building information modeling (BIM) software in order for the firm to be retained on procurement consortiums. This investment, requiring staff development to operate it, may result in unanticipated outcomes due to a time pressure variable driven by a senior management-level decision for software adoption, raising questions about the awareness and skills at every level of the organization. Unanticipated outcomes may manifest only when the design project is on site (e.g., overly complex construction details relative to current market resources) as a result of the experience and knowledge of older management not being incorporated into the AI algorithms within the software operated by more junior staff.
The strategic decision intent in this scenario was driven by the economic environmental variable to retain contract opportunities,
Table 5
The decision dashboard.
Decision question process (follow the decision dashboard diagram arrows):
1. What is the organizational intent?
What are the strategic decisions to achieve the overall objectives?
What are the tactical and operational decisions required to react to changes in environmental data?
2. What is the time pressure on the decision, and is there time restoration to revisit the decision analysis?
3. What is the status of the economic, social, and physical environments affecting each level of decision to achieve the anticipated decision
outcomes? These may not be static and will have consequences on the quality of information and perception.
4. Which ethical pathway (preferences, rules, and principles) inuences the decision-making organization and actors?
5. Decision is chosen and implemented.
6. Root-cause analysis of the decision outcome.
7. Repeat, incorporating changes in environmental data for incorporation into a deep neural network.
and it was taken under time pressure, with management not fully aware of the time restoration required to develop staff training. The decision by management may be based on an anthropomorphized perception of AI processes (Bankins & Formosa, 2020), as opposed to an information-based decision. Operational-based judgment may have followed an analytical utilitarianism pathway driven by a peer-restricted social environment within the office, with outcome decisions replicated with consequential damage to the organization's reputation. In this scenario, HRM's use of the TP model within the Decision Dashboard would positively help this organization to develop the office social environment demographics, analyze perceptions held by the organizational hierarchy and their sharing of information to develop junior career training, and coordinate time management to implement change.
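The sequential process of Table 5 can be read as a feedback loop: frame the decision (steps 1-4), decide (step 5), analyze the outcome (step 6), and feed the changed environmental data back in (step 7). A schematic sketch follows; every function and field name here is a hypothetical illustration, not an API or data structure from the article:

```python
# Schematic sketch of the Table 5 decision-dashboard loop (all names are
# hypothetical). Steps 1-4 build the decision frame; step 5 decides; step 6
# runs a root-cause analysis; step 7 feeds changes back into the environment.

def dashboard_cycle(intent, environment, time_pressure, ethical_pathway,
                    decide, root_cause_analysis, rounds=3):
    """Run steps 1-7 of the decision dashboard as a feedback loop."""
    history = []
    for _ in range(rounds):
        frame = {
            "intent": intent,                    # 1. organizational intent
            "time_pressure": time_pressure,      # 2. pressure vs. restoration
            "environment": dict(environment),    # 3. economic/social/physical
            "ethical_pathway": ethical_pathway,  # 4. preferences/rules/principles
        }
        outcome = decide(frame)                  # 5. decision chosen and implemented
        lessons = root_cause_analysis(outcome)   # 6. RCA of the decision outcome
        environment.update(lessons)              # 7. feed changes back and repeat
        history.append(outcome)
    return history

# Toy usage: an environment whose economic signal improves after each RCA pass.
env = {"economic": 0.2, "social": 0.5, "physical": 0.8}
run = dashboard_cycle(
    intent="retain contract opportunities",
    environment=env, time_pressure="high", ethical_pathway="ethics of care",
    decide=lambda f: round(sum(f["environment"].values()) / 3, 2),
    root_cause_analysis=lambda o: {"economic": min(1.0, o + 0.3)},
    rounds=3,
)
print(run)
```

The loop structure mirrors the deep-neural-network reading given earlier: each pass updates the environmental data that frames the next decision, so outcomes improve (or degrade) cumulatively rather than in isolation.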
3. Conclusion
AI has impacted and changed many aspects of our everyday lives. Opportunities exist to address ethical, legal, and strategic challenges in HRM practice and research. AI software has influenced the HRM process by reducing the inefficiencies and time required to complete tasks, yet questions of trust from employees and management remain. If organizations' HRM teams do not keep up to speed with forthcoming advances in AI technology, their organizations may not be able to compete effectively in attracting and recruiting employees in effective roles (Barro & Davenport, 2019). To succeed, organizations will need to commit resources based on AI-impacted cost projections, rather than financing HRM development based purely on previous organizational income (Buzko et al., 2016). A framework for accountability in this investment decision will be required. To assist management in taking full advantage of the power and potential that AI offers, this paper focuses on designing and developing ethical HRM systems to eliminate design bias. To this end, a TP model providing six dominant algorithmic pathways is offered to provide possible solutions to reduce AI system bias, which may lead to unethical actions.
The application of analytics and algorithmic decision-making has delivered practical and conceptual problems to HRM, raising questions about accountability, which proves critical when biases may be introduced at the data-generation stage (Tambe et al., 2019). Issues with accuracy, reliability, and bias within the data can also generate additional problems for HRM practitioners, where organizational priorities differ from the algorithm-based decision outcomes. Current research indicates that the integration of AI-driven HRM decisions may de-bias human recruiting (Loureiro et al., 2021). Though algorithms may de-bias human judgments, increasing privacy concerns for both companies and individuals regarding sharing data online may not only skew data and drive biases in algorithmic processing but may also raise privacy concerns with regard to transparency in communicating accountability (Hermann, 2021). While there have been few interdisciplinary exchanges in advances in AI research (Loureiro et al., 2021), a multidisciplinary perspective provides insights into the adoption of an ethical framework in decision analysis. "Using the TP model is useful in uncovering algorithmic pathways management accountants use before arriving at a decision" (Rodgers, 2020, p. 117). Multidisciplinary knowledge sharing and encouraging collaboration for effective AI-mediated knowledge sharing suggest that the insertion of ethical considerations (such as monitoring and privacy) into AI processes by HRM practitioners will have a positive impact on decision-making (Malik et al., 2020).
4. Implications and directions for future research
As AI algorithms continue to evolve and grow, so do the associated risks. As data scientists, system designers, and programmers form an integral role within the core organizational strategy for HRM, responsibility for the design and development of HRM processes within the adoption of such inherently opaque AI technology needs to be clearly defined. Critical and legal questions arise when accountability for AI decisions is raised. A clear pathway for understanding the ethical position of organizations and their decision makers can help HRM practitioners interpret AI-generated HRM decisions. The traditional employment relationship based on reciprocity has eroded with the increasing reliance on AI algorithm-based technology (Duggan et al., 2020). An AI system fraught with unethical problems may emerge without employing a framework such as that of Selbst et al. (2019), which considers elements of solutionism, the ripple effect, formalism, portability, and the framing issues to be resolved.
The TP model's algorithms can significantly assist HRM by addressing accountability related to HRM decision-making in AI environments. Specifically, by embracing the six dominant algorithmic pathways offered by the TP model, HRM practitioners can considerably mitigate the risks associated with biases, which may be inherent in AI systems (Charlwood & Guenole, 2022), thus reducing the occurrence of unethical actions. The analysis of which decision-making ethical pathway is chosen is made relative to the others (i.e., the pathways are not to be analyzed in isolation). Depending on which stakeholder is concerned, each pathway can be considered in context, reflecting the organization's social, economic, and physical environments. These alternate pathways can act complementarily toward ensuring that ethics and fairness are actively promoted by HR professionals. Ethical egoism, for instance, can support the development of AI HRM algorithms based on decision-makers' existing experiences (e.g., regarding employee promotion procedures). Deontology can contribute toward transparently transferring ethical guidelines into mathematical codes that will support managers in distinguishing wrongdoing (e.g., in the recruitment and selection process). Utilitarianism would orientate HR decision makers toward the incorporation of moral standards that will favor sustainable welfare practices in the workplace (e.g., job satisfaction). Relativism can support HRM in making culturally conscious decisions that consider the specificities of international contexts (e.g., regarding ethical decision-making by international assignees). Virtue ethics can support the prevalence of HR leadership in initiating codes of ethics that will override ethically problematic established practices (e.g., eradicate groupthink). Ethics of care may promote the creation of expert task forces that monitor the implementation of ethical AI-related HR processes, integrating the perspectives of multiple stakeholders within and beyond the boundaries of the organization.
Although an organization's agents may take a position on whether an employment contract exists in the various manifestations of
app work in the gig economy, psychological contracts may exist from the workers' perspective (Duggan et al., 2020). In recent years, we have witnessed an increasingly wide range of workplace flexibility practices, such as part-time work, flextime, and telecommuting (Whyman, Baimbridge, Buraimo, & Petrescu, 2015). This trend may become more challenging from an HRM perspective as more organizations have been compelled to change from a traditional workplace environment to adopting 'working from home' due to the Covid-19 global pandemic, raising new questions and areas of research on the future role of HRM.
The adoption of AI technology has enabled this trend, bringing additional challenges for HRM practitioners. As employee satisfaction is critical to the retention of talent and key staff (Degbey, Rodgers, Kromah, & Weber, 2021), the impact of physical environmental factors has been seen to influence job satisfaction and productivity (Kwon & Remøy, 2020). Since HRM practitioners may have no input into the newly imposed work environments of an organization's employees, questions are raised regarding the framework of accountability for decision outcomes resulting from the adoption of workplace AI technology in the home environment. HRM decisions based on AI data analyzing employee performance in home environments may challenge data patterns established from traditional work environments, raising questions around performance accountability, and new challenges regarding teamwork, employee satisfaction, and employee development. Increased use of algorithm-based decision-making through remote access raises questions regarding perceptions of monitoring and privacy, suggesting that future research on employee perceptions of HRM practitioners accessing AI technology within organizations may assist in the development of ethical guidelines in the HRM use of AI. A clearly accountable decision model supports organizations in terms of corporate disclosure and corporate social responsibility, and not only provides regulators and educators with a framework for evaluating decisions, but also equips HRM leaders and practitioners with a framework to account for AI decisions in PMRS.
As emerging evidence illustrates that various professionals' traditional roles are also challenged due to the widespread development of AI algorithm-based technology (Susskind & Susskind, 2015), similar issues of automation and disintermediation faced by professionals, such as architects and lawyers, are faced by HRM practitioners. Without human oversight and intelligibility, HRM roles risk being de-skilled (Charlwood & Guenole, 2022), raising research questions on how HRM practitioners can address a potential loss of control as a result of organizational perceptions of AI decision-making abilities, and questions on the long-term outcomes in terms of the professional expertise and commercial approach of HRM practitioners delegating HRM decisions to AI technologies.
There are risks in the acceptance of AI decisions without a clearly understandable decision audit trail that is legible for non-programmers. Applying HRM practitioner domain knowledge during the design, development, and deployment stages of AI technologies will help ensure that ethical considerations are addressed in AI-generated HRM decisions and that they will likely result in positive outcomes (Charlwood & Guenole, 2022). Employee engagement with the testing and design of AI applications results in improvements in experience when interacting with technology (Malik et al., 2020). Future research on how HRM practitioners engage with AI developers, how organizations identify accountability internally and externally for AI-generated HRM decisions, and the impact on an organization's reputation due to AI-generated HRM decisions will inform HRM decision-making and professional development. Just as 'explainers' are required in evidence-based industries (e.g., law, medicine) to understand and communicate AI-generated recommendations (Wilson & Daugherty, 2018), the TP model supports HRM practitioners in understanding, managing, and communicating AI-generated HRM decisions. We hold that future studies that collect data to operationalize the TP model can contribute to developing a deeper understanding of algorithm-based HR decisions. By implementing the TP model, as highlighted by the Dashboard, HR practitioners can help shed light on the technology interface between workplace ethics, employee compliance, and decision outcomes.
Acknowledgments
William Degbey acknowledges the Kaute Foundation and Marcus Wallenberg Foundation in Finland for their support of this research.
Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672.
Bankins, S., & Formosa, P. (2020). When AI meets PC: Exploring the implications of workplace social robots and a human-robot psychological contract. European Journal of Work and Organizational Psychology, 29(2), 215–229.
Barnett, S. B., & Cerf, M. (2017). A ticket for your thoughts: Method for predicting content recall and sales using neural similarity of moviegoers. Journal of Consumer Research, 44, 160–181.
Barro, S., & Davenport, T. H. (2019). People and machines: Partners in innovation. MIT Sloan Management Review, 60(4), 22–28.
Bekken, G. (2019). The algorithmic governance of data driven processing employment: Evidence-based management practices, artificial intelligence recruiting software, and automated hiring decisions. Psychosociological Issues in Human Resource Management, 7(2), 25–30.
Bentham, J. (1962). The works of Jeremy Bentham. London: John Bowring.
Bolstad, W. M., & Curran, J. M. (2016). Introduction to Bayesian statistics. John Wiley & Sons.
Buzko, I., Dyachenko, Y., Petrova, M., Nenkov, N., Tuleninova, D., & Koeva, K. (2016). Artificial intelligence technologies in human resource development. Computer Modelling and New Technologies, 20(2), 26–29.
Charlwood, A., & Guenole, N. (2022). Can HR adapt to the paradoxes of artificial intelligence? Human Resource Management Journal.
Cockcroft, S., & Russell, M. (2018). Big data opportunities for accounting and finance practice and research. Australian Accounting Review, 28(3), 323–333.
Cui, G., Wong, M. L., & Lui, H.-K. (2006). Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Management Science, 52(4), 597–612.
Degbey, W. Y., Rodgers, P., Kromah, M. D., & Weber, Y. (2021). The impact of psychological ownership on employee retention in mergers and acquisitions. Human Resource Management Review, 31(3), 100745.
W. Rodgers et al.
Human Resource Management Review xxx (xxxx) xxx
Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2020). Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114–132.
Einhorn, H. J., & Hogarth, R. M. (1981). Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology, 32, 53–88.
European Commission. (2020). White Paper on Artificial Intelligence: A European Approach to Excellence and Trust (p. 6). Brussels: European Commission. COM 65 Final.
Foss, K., & Rodgers, W. (2011). Enhancing information usefulness by line managers' involvement in cross-unit activities. Organization Studies, 32, 683–703.
George, G., & Thomas, M. R. (2019). Integration of artificial intelligence in human resource. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 9(2). ISSN: 2278-3075. Retrieved April 1, 2020.
Guiral, A., Rodgers, W., Ruiz, E., & Gonzalo-Angulo, J. A. (2010). Ethical dilemmas in auditing: Dishonesty or unintentional bias? Journal of Business Ethics, 91(Supplement 1), 151–166.
Guiral, A., Rodgers, W., Ruiz, E., & Gonzalo-Angulo, J. A. (2015). Can expertise mitigate auditors' unintentional biases? Journal of International Accounting, Auditing and Taxation, 24, 105–117.
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.
Haldorai, K., Kim, W. G., Pillai, S. G., Park, T. E., & Balasubramanian, K. (2019). Factors affecting hotel employees' attrition and turnover: Application of pull-push-mooring framework. International Journal of Hospitality Management, 83, 46–55.
Haynes, B. P. (2008). Impact of workplace connectivity on office productivity. Journal of Corporate Real Estate, 10(4), 286–302.
Hermann, E. (2021). Leveraging artificial intelligence in marketing for social good: An ethical perspective. Journal of Business Ethics, 1–19.
Hogarth, R. M. (1987). Judgment and choice (2nd ed.). Chichester, UK: John Wiley & Sons.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263291.
Kandathil, G., & Joseph, J. (2019). Normative underpinnings of direct employee participation studies and implications for developing ethical reflexivity: A multidisciplinary review. Journal of Business Ethics, 157(3), 685–697.
Kwon, M., & Remøy, H. (2020). Office employee satisfaction: The influence of design factors on psychological user satisfaction. Facilities, 38(1/2), 1–19.
Langer, M., & König, C. J. (2021). Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review, 31.
Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160(2), 1–16.
Lillis, A., Malina, M. A., & Mundy, J. (2015). Performance measurement, evaluation, and reward: The role and impact of subjectivity.
Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2021). Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research, 129, 911–926.
Malik, A., Budhwar, P., Patel, C., & Srikanth, N. R. (2020). May the bots be with you! Delivering HR cost-effectiveness and individualised employee experiences in an MNE. The International Journal of Human Resource Management, 1–31.
Malik, A., De Silva, M. T., Budhwar, P., & Srikanth, N. R. (2021). Elevating talents' experience through innovative artificial intelligence-mediated knowledge sharing: Evidence from an IT-multinational enterprise. Journal of International Management, 27(4), Article 100871.
Matsa, P., & Gullamajji, K. (2019). To study impact of artificial intelligence on human resource management. International Research Journal of Engineering and Technology (IRJET), 6(8), 1231–1238.
Matyunina, J. (2020). How machine learning is changing HR industry. CodeTiburon.
Minbaeva, D. (2021). Disrupted HR. Human Resource Management Review, 31, Article 100820.
Moor, J. H. (2003). The Turing Test: The Elusive Standard of Artificial Intelligence. NY: Springer.
Nguyen, T. M., & Malik, A. (2021a). Impact of knowledge sharing on employees' service quality: The moderating role of artificial intelligence. International Marketing
Nguyen, T. M., & Malik, A. (2021b). A two-wave cross-lagged study on AI service quality: The moderating effects of the job level and job role. British Journal of
O'Connor, S. W. (2020). Articial intelligence in human resources management: What HR professionals should know.
Parent-Rocheleau, X., & Parker, S. K. (2021). Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resource Management Review, 31, Article 100838.
Pereira, V., Hadjielias, E., Christofi, M., & Vrontis, D. (2021). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 31.
Prikshat, V., Malik, A., & Budhwar, P. (2021). AI-augmented HRM: Antecedents, assimilation and multilevel consequences. Human Resource Management Review.
Rodgers, W. (1997). Throughput modeling: Financial information used by decision makers. Greenwich, CT: Jai Press.
Rodgers, W. (2006). Process thinking: Six pathways to successful decision making. NY: iUniverse, Inc.
Rodgers, W. (2009). Ethical beginnings: Preferences, rules, and principles inuencing decision making. NY: iUniverse, Inc.
Rodgers, W. (2019). Trust throughput modeling pathways. Hauppauge, NY: Nova Publication.
Rodgers, W. (2020). Evaluation of articial intelligence in a throughput model: Some major algorithms. Florida: Science Publishers (Taylor & Francis).
Rodgers, W. (2022). Algorithms shaping the world we live in: Throughput modeling of articial intelligence applications. UAE: Bentham Publishers (forthcoming).
Rodgers, W., & Al Fayi, S. (2019). Ethical pathways of internal audit reporting lines. Accounting Forum, 43(2), 220–245.
Rodgers, W., Alhendi, E., & Xie, F. (2019). The impact of foreignness on the compliance with cybersecurity controls. Journal of World Business, 54(6).
Rodgers, W., & Gago, S. (2001). Cultural and ethical effects on managerial decisions: Examined in a throughput model. Journal of Business Ethics, 31, 355–367.
Rodgers, W., & Gago, S. (2003). A model capturing ethics and executive compensation. Journal of Business Ethics, 48, 189–202.
Rodgers, W., & Gago, S. (2004). Stakeholder influence on corporate strategies over time. Journal of Business Ethics, 52, 349–363.
Rodgers, W., Guiral, A., & Gonzalo, J. A. (2009). Different pathways that suggest whether auditors' going concern opinions are ethically based. Journal of Business Ethics, 86, 347–361.
Rodgers, W., & Housel, T. (1992). The role of componential learning in accounting education. Accounting and Finance, 32, 73–86.
Rodgers, W., & Nguyen, T. (2022). Advertising benefits from ethical artificial intelligence algorithmic purchase decision pathways. Journal of Business Ethics. https://
Rodgers, W., Simon, J., & Gabrielsson, J. (2017). Combining experiential and conceptual learning in accounting education: A review with implications. Management Learning, 48(2), 187–205.
Rodgers, W., Söderbom, A., & Guiral, A. (2014). Corporate social responsibility enhanced control systems reducing the likelihood of fraud. Journal of Business Ethics, 131(4), 871–882.
Rumelhart, D. E., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). Cambridge, MA: MIT Press.
Rumelhart, D. E., & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson, R. J. Spiro, & W. E. Montague (Eds.), Schooling and the acquisition of knowledge. Hillsdale, NJ: Erlbaum.
Selbst, A., Boyd, D., Friedler, S., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In ACM Conference on Fairness, Accountability, and Transparency.
Susskind, R. E., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. USA: Oxford University Press.
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15–42.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
Upadhyay, A. K., & Khandelwal, K. (2018). Applying artificial intelligence: Implications for recruitment. Strategic HR Review, 17(5), 255–258.
West, D. M. (2018). The future of work: Robots, AI, and automation. Washington DC: Brookings Institution Press.
West, D. M., & Allen, J. R. (2018). How artificial intelligence is transforming the world. Brookings Institute.
Westerman, D., Edwards, A. P., Edwards, C., Luo, Z., & Spence, P. R. (2020). I-It, I-Thou, I-Robot: The perceived humanness of AI in human-machine communication. Communication Studies, 1–16.
Whyman, P. B., Baimbridge, M. J., Buraimo, B. A., & Petrescu, A. I. (2015). Workplace flexibility practices and corporate performance: Evidence from the British private sector. British Journal of Management, 26(3), 347–364.
Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123.
World Economic Forum. (2018). Insight report: The future of jobs report 2018. Switzerland: World Economic Forum Centre for the New Economy & Society.