Artificial Intelligence and the Future of Work: A Functional-Identity Perspective

Eva Selenko1, Sarah Bankins2, Mindy Shoss3,4, Joel Warburton5, and Simon Lloyd D. Restubog6,7

1School of Business and Economics, Loughborough University; 2Department of Management, Macquarie Business School, Macquarie University; 3Department of Psychology, University of Central Florida; 4Peter Faber Business School, Australian Catholic University; 5Lincoln International Business School, University of Lincoln; 6School of Labor and Employment Relations, University of Illinois at Urbana-Champaign; and 7UQ Business School, The University of Queensland

Corresponding Author: Eva Selenko, School of Business and Economics, Loughborough University. Email: e.selenko@lboro.ac.uk

Abstract

The impact of the implementation of artificial intelligence (AI) on workers’ experiences remains underexamined. Although AI-enhanced processes can benefit workers (e.g., by assisting with exhausting or dangerous tasks), they can also elicit psychological harm (e.g., by causing job loss or degrading work quality). Given AI’s uniqueness among other technologies, resulting from its expanding capabilities and capacity for autonomous learning, we propose a functional-identity framework to examine AI’s effects on people’s work-related self-understandings and the social environment at work. We argue that the conditions for AI to either enhance or threaten workers’ sense of identity derived from their work depend on how the technology is functionally deployed (by complementing tasks, replacing tasks, and/or generating new tasks) and how it affects the social fabric of work. How AI is implemented and the broader social-validation context also play a role. We conclude by outlining future research directions and potential applications of the proposed framework to organizational practice.
Keywords: artificial intelligence, complementing tasks, generating tasks, identity threat, meaning of work, replacing tasks, technological change

Current Directions in Psychological Science, 2022, Vol. 31(3), 272–279. https://doi.org/10.1177/09637214221091823. © The Author(s) 2022. Article reuse guidelines: sagepub.com/journals-permissions
Imagine a loan consultant working in a large bank and
proud of being responsible for complex loan decisions
involving millions of euros. Then, over time, manage-
ment replaces the previously human-only task of loan
decision making with a faster, automated, more precise,
but unintelligible algorithm. Now picture a surgeon,
operating with real-time artificial intelligence (AI) anal-
ysis of operative videos. This technique reduces the
duration of surgery and improves patients’ outcomes
(examples adapted from Hashimoto et al., 2018; Strich
et al., 2021). In both cases, the workers experience
dramatic changes to core work tasks that challenge
their understandings of their work and themselves in
relation to their work—their identities (Endacott, 2021).
Identities offer a system of self-reference for attitudes
and behaviors and define an individual’s place in soci-
ety (Tajfel & Turner, 1986). Understanding how workers
react to AI-related changes requires examining how
self-understandings are affected. In this article, we
develop an integrative functional-identity framework to
expand current understanding of AI’s effects on workers
and enable a constructive implementation of AI at work.
This analysis is crucial given the rapid expansion of
AI across business sectors, including health care (e.g.,
diagnostic scanning and analysis), operations and pro-
duction management (e.g., resource optimization),
retail (e.g., chatbots), defense and security (e.g., cyber-
crime detection), banking and finance (e.g., stock-
market predictions), and human resource management
(e.g., recruitment and selection). AI implementation is
often guided by business priorities, such as enhanced
efficiency, which have been criticized for placing too
little importance on workers’ identity processes when
AI supplements work (as in the example of the sur-
geon) or takes away work (as in the example of the
loan consultant). At the same time, AI can also generate
new tasks and create new roles. These changes are situated in a largely polarized discourse about AI, spanning highly optimistic views about its benefits
(e.g., freeing workers from laborious and repetitive
tasks) to more catastrophic predictions of human unem-
ployment. In this review of AI’s impact on workers’
identities and their subsequent attitudes and behaviors,
we begin by outlining the technology’s functionality.
What Is AI, and Why Is AI Different?
The term “AI” refers to “a collection of interrelated tech-
nologies used to solve problems that would otherwise
require human cognition” (Walsh et al., 2019, p. 2).
Advancements in AI are attributed to wider data access
and collection (big data), greater computational power,
and enhanced modeling approaches (e.g., neural net-
works). “AI” represents a range of technologies using
a variety of computational methods, particularly
machine learning, which involves computerized learn-
ing processes inspired by human intelligence (Walsh
etal., 2019). Through simple (e.g., decision trees) or more
complex (e.g., artificial neural networks, or deep learn-
ing) modeling methods, AI can analyze large data sets
via learning processes that are supervised (i.e., learning
guided by a human) or unsupervised (i.e., machine-
autonomous learning from the data; Walsh et al., 2019).
Common methods that are often referred to as AI also
include natural-language processing (e.g., analysis and
generation of text) and pattern recognition (e.g., identify-
ing associations in data sets; Walsh et al., 2019).
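
To make the distinction between the two learning modes concrete, the following minimal Python sketch (our illustration, not drawn from the cited sources) contrasts supervised and unsupervised learning. It assumes the scikit-learn library is installed, and the résumé-style features, labels, and numbers are invented toy data.

# A minimal sketch (illustrative, not from the article) contrasting the two
# learning modes described above. Assumes scikit-learn; all data are toy values.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised learning: a human supplies labeled examples
# (hypothetical resume features -> shortlisting labels).
X_labeled = [[2, 0], [5, 1], [1, 0], [8, 1]]  # [years_experience, has_degree]
y = [0, 1, 0, 1]                              # 0 = reject, 1 = shortlist
tree = DecisionTreeClassifier().fit(X_labeled, y)
print(tree.predict([[6, 1]]))                 # generalizes from the human labels

# Unsupervised learning: no labels; the algorithm finds structure on its own.
X_unlabeled = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [7.9, 8.1]]
print(KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled))

The decision tree stands in for the "simple" models named above and k-means clustering for machine-autonomous learning from unlabeled data; neural networks would follow the same supervised pattern with a more complex model.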
All forms of current AI fall into the category of narrow
AI. This means the technology can undertake a domain-
specific task (e.g., assessing résumés) but, unlike humans,
cannot translate its capabilities to new domains (e.g.,
driving a car after learning to assess résumés). Despite
what the term “narrow” suggests, AI already outperforms
humans in a range of functions through the speed, accu-
racy, and scale of its processing capabilities (Walsh et al.,
2019, p. 34). Debate remains regarding when (and
whether) AI will achieve human-equivalent, or general,
intelligence. Nevertheless, there is widespread societal
debate, often fearful, surrounding the rapid growth of
AI and what that means for the future. This is reflected
in, and largely influenced by, popular-culture depictions
of advancing technologies and debates regarding the
future of work (Cave et al., 2018).
Several aspects of AI differentiate it from prior tech-
nologies. The enhanced predictive and forecasting
abilities of machine-learning algorithms, particularly
through unsupervised learning, extend AI’s capabilities
into tasks traditionally viewed as human cognitive
work. Because AI is trained on large data sets, the nature of these data (e.g., their uncertain representativeness across populations) and how they are accessed and secured raise questions about the increasing “datafication” of workplaces and the fairness of the outcomes that AI generates. AI implementation also has implications for workers’ privacy and autonomy. The
use of neural network models means AI’s computations
are often a black box, unknowable to AI designers and
end users alike, which has implications for account-
ability and transparency when AI is used for decision
making. AI’s objective technical capabilities also gener-
ate subjective perceptions of the technology. For exam-
ple, perceptions of agency in AI processes, via its
self-learning nature and autonomous deployment, can
make it seem like a quasisocial actor that can act inde-
pendently on behalf of a human, and this has implica-
tions for workers’ self-understandings (Brunn et al.,
2020; Endacott, 2021).
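
As one hedged illustration of the representativeness and fairness concerns raised above, the short sketch below (our own; the group names and records are hypothetical) audits how groups are represented in a training set and how outcomes are distributed across them. It uses only the Python standard library.

# Illustrative sketch (assumption, not from the article): a minimal audit of
# whether training data are representative across groups. All records are
# hypothetical.
from collections import Counter

training_records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0},
]

counts = Counter(r["group"] for r in training_records)
total = sum(counts.values())
for group, n in counts.items():
    positives = sum(r["outcome"] for r in training_records if r["group"] == group)
    print(f"group {group}: {n/total:.0%} of data, "
          f"positive-outcome rate {positives/n:.0%}")
# Skewed shares or outcome rates across groups flag a risk that a model
# trained on these data will reproduce the skew in its decisions.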
A Functional-Identity Framework for AI
Examination of AI’s impact on workers should be
grounded in its functional capacities and the way the
technology affects specific work tasks (Brynjolfsson &
Mitchell, 2017; Das et al., 2020). Functionally, AI can
(a) complement and support existing human work tasks,
(b) replace existing human work, and/or (c) create new
human tasks and subsequently new work roles (Acemo-
glu & Restrepo, 2020; Brynjolfsson & Mitchell, 2017).
Depending on the nature of the tasks involved (e.g., their structure, repetitiveness, and outcomes), as well as on economic and structural factors, different occupations will be differentially affected by AI-related changes. For example, occupations that rely more heavily on information-technology tasks will be exposed more strongly, and earlier, to the effects of task supplementation, task replacement, and new-task generation, which will lead to more fundamental changes in occupational structure (Das et al., 2020). AI-related changes (supplementation, replacement, and new-task generation) may also happen simultaneously in different areas of an occupation (Brynjolfsson & Mitchell, 2017).
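
As a sketch only (the framework is conceptual, and the occupation and task names below are hypothetical), the three functional deployments can be written down as a small data model, which makes explicit that one occupation can experience all three simultaneously.

# Illustrative sketch (our own, hedged): the three functional deployments of
# AI named in the framework, applied to a hypothetical task audit.
from dataclasses import dataclass
from enum import Enum

class FunctionalChange(Enum):
    COMPLEMENT = "complements an existing human task"
    REPLACE = "replaces an existing human task"
    GENERATE = "creates a new human task"

@dataclass
class TaskChange:
    task: str
    change: FunctionalChange

loan_consultant = [
    TaskChange("credit-risk scoring", FunctionalChange.REPLACE),
    TaskChange("explaining algorithmic decisions", FunctionalChange.GENERATE),
    TaskChange("client communication", FunctionalChange.COMPLEMENT),
]

# Different functional changes co-occur within one occupation, as noted above.
for tc in loan_consultant:
    print(f"{tc.task}: {tc.change.value}")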
Figure 1 illustrates how changes and challenges asso-
ciated with AI implementation can be understood using
this functional-identity framework. The introduction (or
anticipated introduction) of a nonhuman “intelligent”
actor demands sensemaking, which will affect how
workers think about themselves and experience their
work—generating opportunities for both work-related
identity threat and work-related identity enhancement,
with subsequent effects on well-being, behavior, and attitudes (Fig. 1).

[Figure 1 is a flow diagram. Its left panel asks “What are the functional changes of AI?” and lists AI that complements tasks (Are new skills required? Do old skills or behaviors need to be unlearned? Do new interactions follow? Is the job role affected?), AI that replaces tasks (Which types of tasks are being replaced, arduous vs. highly skilled? Which tasks remain? How are the job role and the social work context affected?), and AI that generates new tasks (Do new tasks require new skills? Are there other people who must carry out the new tasks? Do new tasks create new roles?). These changes feed into work-related identity enactment, work-related identity functions (e.g., self-esteem, belonging, self-knowledge, self-verification, self-continuity, self-enhancement, uncertainty reduction, perceived meaningfulness), and the social fabric of work, which, via social validation from the social context, lead to identity threat or identity expansion and, in turn, to individual well-being and to behavior and attitudes toward AI, work, and society. The process of AI implementation (e.g., pace of change, number and types of tasks affected, availability of liminal spaces, uncanniness of change) and the broader context of AI and work (e.g., popular narratives, organizational sensemaking, occupational communities) condition these paths.]

Fig. 1. A framework of functional task changes related to implementation of artificial intelligence (AI), the effects of these changes on work-related identity, and individual, work-related, and societal outcomes. Functional task changes caused by AI can affect identity enactment, work-related identity functions, and/or the social fabric of work in various ways, which can lead to identity threat or enable identity expansion, depending on processes of implementation but also the broader context of work. These identity-change processes are embedded in a changing social context, which acts as a source of social validation and can support identity change.

To understand which responses are most likely, it is important to ask the following questions: (a) What are the functional work changes antici-
pated, and what challenges will arise from AI use in a
specific work context? and (b) How will these changes
and challenges affect the enactment of important work-
related identities (e.g., occupational, role, and organi-
zational identities), their functions (e.g., belonging,
self-esteem, and self-enhancement), and the social fab-
ric of work (e.g., team composition and organizational
status hierarchies)? Furthermore, responses depend on the conditions of change: the process by which AI is implemented will shape how employees react. Relevant factors include the number and
type of tasks affected, the pace of change, and the
social context of the implementation, both within and
outside work.
Work-related identities reflect “who you are” and
“what you do” regarding work. They are informed by
the social groups people feel part of and by enactment
of certain behaviors that are prototypical for those
groups, and they offer important social recognition for
those behaviors (Ashforth & Schinoff, 2016; Nelson &
Irwin, 2014). Work offers plenty of opportunities for
social self-categorization in that people can see them-
selves as part of an occupation, an organization, or a
work team. People act according to social-group norms
in their work and thereby gain social recognition. Fur-
thermore, work-related identities fulfill multiple impor-
tant identity functions. For example, they provide a
sense of self-esteem and offer opportunities to experi-
ence meaning, a sense of belonging, and competence
(see Ashforth & Schinoff, 2016). In addition, work
contexts—especially teams, colleagues, and supervi-
sors, along with their respective organizations and
occupational communities—can offer social validation
to ingrain those work-related identities and ensure that
they fulfill their functions.
To make sense of dramatic AI-induced changes,
workers will consider what these changes mean for
their work-related identities, for their ability to meet
identity functions (e.g., self-esteem), and for their
enactment of identity-relevant behaviors through work.
For this reason, consequences of AI-induced work
changes depend on whether they generate threats or
enhancements to identities and their functions. If identi-
ties and their functions are threatened, undermined, or
lost, this not only is upsetting for the individual, whose
well-being is affected, but also will result in a variety
of identity-protection responses (Petriglieri, 2011). Con-
versely, if AI-induced change supports identity func-
tions, and brings people closer to their ideal work
selves, people can restructure, adapt, and expand their
work identity (Endacott, 2021). Theoretically, all of this
will have consequences for the individual as well as for
the individual’s attitudes and behaviors toward AI,
toward the changed workplace, and perhaps toward
society at large (Craig et al., 2019; Nelson & Irwin, 2014;
Petriglieri, 2011). Individuals may vary in their identity
responses, and additional variation can result from the
specific process by which AI is implemented, as noted
earlier.
Sensemaking is not a singular process but rather
happens in a social context that offers validation for
new behaviors and new definitions of identity that
will make identity changes stick or unstick (Ashforth
& Schinoff, 2016). AI changes to key tasks can affect
occupational boundaries and, consequently, team and
organizational structures, thereby changing the social
fabric of work (Craig et al., 2019). Moreover, the
broader organizational, occupational, and societal con-
text will matter for identity changes, as it provides a
system of norms and expectations as reference points
for sensemaking (Endacott, 2021). In the following sec-
tions, we leverage this framework to examine potential
identity consequences associated with three main
workplace functions of AI.
AI that complements and supports
existing human tasks
AI offers new tools to complement and support existing
work, such as through real-time monitoring or interven-
tion in work environments (e.g., analyzing smart phone
data to identify workplace hazards; Howard, 2019) or
providing and structuring informational inputs (e.g.,
improving scheduling; Endacott, 2021). Workers using
AI may need to acquire new skills (e.g., fluency in data
analytics and evaluating data outputs) or unlearn old
routines, as the demand for certain tasks in their jobs
shifts (Lanzolla et al., 2020). Any resulting changes in
identity functions (e.g., self-esteem, belonging), in turn,
may affect work-related identities (Ashforth & Schinoff,
2016).
Task-related changes will also affect the social fabric
of work. For example, some researchers propose that
the use of AI in psychiatry requires data-management skills and close collaboration with software engineers, thereby redefining organizational hierarchies and what it means to be a (competent) medical professional (Brunn et al., 2020). Interview studies indicate
that imposed work changes in general are initially per-
ceived as identity threats, but workers can gradually
move toward acceptance if they manage to adapt to the
changes and modify their identities (Chen & Reay, 2021).
This outcome has been shown to depend on the manner
in which AI is implemented, such as whether people
have a voice in the implementation and whether there
is a gradual experience of change. Another relevant fac-
tor is the availability of liminal, or transitional, safe
spaces that allow for new learning and competency gain,
which can facilitate adaptation toward new work-related
self-understandings (e.g., seeing oneself as an informa-
tion specialist rather than a radiologist; Jha & Topol,
2016). Also, if AI improves people’s ability to enact cer-
tain identity motives (e.g., to become better in their jobs
and thereby gain self-enhancement), work-related identi-
ties can be extended, and “working with AI” can become
a positive identity category (Endacott, 2021).
AI that replaces human tasks
AI-enhanced processes can also replace various cogni-
tive and manual tasks previously done by humans,
including (a) arduous and repetitive tasks (e.g., pattern
recognition, stock refilling), (b) other routine tasks (e.g.,
scheduling, diagnostics, data search), and (c) more
highly skilled tasks associated with complex decision
making (e.g., AI-automated financial, legal, or policing
decisions; customer service). Such replacement brings
additional identity challenges, over and above those pre-
sented by AI that complements current tasks. When AI
replaces tasks, workers are no longer able to enact task-
related professional self-understandings. This can disrupt
a sense of self-continuity and possibly frustrate the sat-
isfaction of other related identity functions that carrying
out the replaced tasks previously served (e.g., gaining
self-esteem, certainty, meaning; Endacott, 2021). How-
ever, if the replacement of certain tasks by AI enables
workers to get closer to their aspired identities (e.g.,
because it removes an obstacle to accessing identity-
relevant functions by ameliorating a high failure rate or
social stigma), workers will find it easier to change their
identities, and the replacement will be more readily
accepted (Endacott, 2021). The replaced tasks may also
reshape the organization of remaining work. For exam-
ple, interacting with or being managed by a self-learning,
unintelligible algorithmic process that acts in a quasihu-
man way may feel uncanny (Schafheitle et al., 2020).
Moreover, if decisions are perceived as being made with-
out appropriate contextual information, or if they are
perceived as incorrect or arbitrary, they may not be
trusted (Raisch & Krakowski, 2021), which can result in
feelings of alienation or dehumanization.
If the replacement of tasks is accompanied by the
replacement of humans, this will also alter the social
fabric of work, which, in turn, will affect how remaining
workers can validate their existing work-related identi-
ties (Endacott, 2021). Workers who lose significant
aspects of their jobs, or their job roles, will face the
greatest identity challenge. How can they protect their
self-esteem and achieve a sense of self-continuity and
self-verification if the social self-categorizations
enabling those functions no longer exist?
AI that generates new human work tasks
Despite its opportunities for human replacement, the
implementation of AI can also create new tasks and job
roles. Various “algorithmic occupations” are emerging,
focused on training AI (e.g., getting tasks ready for
automatization, teaching the algorithm), explaining the
changes to workers (e.g., convincing them to use algo-
rithmic outputs), and sustaining the use of AI (e.g.,
considering its ongoing ethical implications; Wilson
etal., 2017). More small-scale changes due to AI will
also create new tasks for workers, which might demand
new skills. These new tasks are likely to be met with
a variety of reactions. For example, people have been
found to mourn the loss of changed work, try to con-
serve existing professional identities, and avoid new
tasks (Chen & Reay, 2021). However, if liminal spaces
are created for people to engage in learning and in
identity restructuring, then identity expansion and
adjustment to the changes are more likely to happen.
Identity conditionality
Whether functional changes lead to identity threat or
enhancement will depend not only on how they affect
workers’ self-understanding and their ability to enact
work-related identities and to enjoy their identity func-
tions, but also on (a) how AI-related task changes are
implemented (e.g., the pace or pervasiveness of change)
and (b) the broader social-validation context. The social
groups people feel part of provide a system of norms
and values that guide how they make sense of AI inter-
ventions at work and how they behave in regard to AI
implementation. For example, workers who feel that a
new AI tool runs against professional norms may report
frustration and show resistance (Chen & Reay, 2021;
Strich etal., 2021). This sensemaking will happen in a
changed work context, as the functional task changes
might recompose teams and organizational hierarchies
by creating new roles and replacing old ones. The
functional change may also shift the norms of what
constitutes esteemed, desirable, and knowledgeable
behavior in the eyes of other people. This change will foster identities that have expanded and changed, and it will threaten identities that are no longer adaptive. The wider popular narrative surrounding AI technologies will also play a role. Currently, popular opinions
on AI tend to fall into two camps: those that foretell
doom (i.e., opinions that are overly skeptical and dis-
trusting toward AI) versus those that foretell utopia (i.e.,
opinions that are overly excited and overly trusting
toward AI; Raisch & Krakowski, 2021). Both positions
can be problematic (Craig et al., 2019), and whether
workers are more likely to experience identity threat or
identity expansion will depend on which position more
closely resonates with them. Thus, organizational
attempts at sensemaking can be helpful in solidifying
the expansion of new identities (Ashforth & Schinoff,
2016). Also, occupational communities for new or
changed occupations can assist with collective sense-
making and redefining professional roles that will enable
gradual identity development (Chen & Reay, 2021).
A Way Forward: Recommendations
for Future Research and Practice
Our framework shows the importance of identity for
understanding workers’ reactions toward AI implemen-
tation and the outcomes of such implementation. If
AI-related changes modify or remove work that reflects valued components of people’s identities, or reduce the opportunity to enact these identities, AI implementation creates a greater risk of identity threat (Craig et al., 2019; Petriglieri, 2011). Conversely, if AI-related
changes bring people closer to their ideal work selves
or enable better job-related coping and positive self-
definitions, then positive work-related identity change
is more likely (Endacott, 2021).
Although we have identified several factors that influ-
ence workers’ reactions to AI implementation and the
outcomes of such implementation, more research is
needed to specify when, where, and by whom AI-
related changes are assessed as irrelevant, supportive,
or threatening for work-related identities and their func-
tions. For example, workers who are entrenched in stan-
dard procedures are likely to experience threat after AI
implementation (Nelson & Irwin, 2014), whereas those
with a more playful frame of reference (e.g., high levels
of openness to experience) are more likely to experi-
ence positive identity growth (Schneider & Sting, 2020).
Research confirms that senior experts tend to experience
greater identity threat from task replacement by AI than
beginners do (Strich et al., 2021). More research is also
needed to examine how workers dynamically respond
to AI-induced demands to adjust and recraft their identi-
ties, by redefining what they do and who they are in
relation to it (Strich et al., 2021), as well as to investigate
the consequences of AI implementation for workers’
well-being, attitudes and behavior toward AI, and work-
related outcomes (e.g., performance, commitment,
engagement; Craig et al., 2019). Future research may
also extend our framework to the team level to allow
an examination of disruption and the process of recov-
ery among teams during AI implementation.
Our proposed framework also offers several practical
recommendations for organizations. Best practices in AI
implementation often focus on identifying salient stake-
holders, such as workers, and their expectations and
needs (Wright & Schultz, 2018). Because identity threat is particularly likely to occur in situations of distrust, of “black-box-ism” (in which algorithmic decisions appear unintelligible), and of replacement, leaders who take workers’ needs seriously should adopt approaches to AI implementation that identify, mitigate, and compensate for these issues. For example, research
shows that employers can assist workers in forming new
identities conducive to acceptance and mastery of AI by
providing narratives that focus on sensemaking and identity development (e.g., “we are on the advanced side of technology”) and that help reduce workers’ fears of or aversion to AI (Tong et al., 2021). Employers must also
appropriately retool, retrain, and reskill workers (Brunn
etal., 2020), so that they can interact with AI in ways
that get them closer to their ideal work selves (Endacott,
2021). Offering social validation and a safe liminal space
to restructure and enact new identities can also help
sustain these efforts (Chen & Reay, 2021). Organizational
leaders also need to be mindful of social relationships
at work and beyond, as these will shape how people
see themselves and evaluate how AI may remove or
reconfigure social connections (Endacott, 2021).
As for the pace of AI implementation, identity
research suggests that workers would benefit from
paced replacement that is limited to particular tasks
(ideally those not relevant to identity), rather than
radical changes that affect aspects central to the job
(Ashforth & Schinoff, 2016). Replacement will be faster
when new technology can simply be “plugged in” and
slower when new technology would demand a redesign
of the work environment (Brynjolfsson & Mitchell,
2017). More research is needed to systematically com-
pare workers’ outcomes across different kinds of AI
implementation.
In conclusion, AI-related changes to work affect
workers’ understanding of work, of themselves in rela-
tion to work, and of their social environment. As the
use and capabilities of AI expand, workers, organiza-
tions, and broader society must manage these changes
to enable workers to grow and develop toward satisfy-
ing and meaningful work selves.
Recommended Reading
Ashforth, B. E., & Schinoff, B. S. (2016). (See References).
A useful starting point on work and identity that offers
insights into identity processes in organizations, particu-
larly in times of change and sensemaking.
Jha, S., & Topol, E. J. (2016). (See References). A discussion of
how clinicians in two specialties must focus on their core and inimitable
skills and shape their understandings of self in relation to
work when adapting to the use of artificial intelligence
(AI) that replaces and complements human tasks.
Nelson, A. J., & Irwin, J. (2014). (See References). A context-
specific account of the dynamics of identity change
among librarians as a form of technology (not AI in this
case) serves to replace, complement, and generate new
work within their occupation.
Strich, F., Mayer, A. S., & Fiedler, M. (2021). (See References).
An in-depth empirical examination of how workers expe-
rience the implementation of an AI system and the mecha-
nisms through which they both protect and strengthen
their professional identities.
Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels,
I., & Wood, F. (2019). (See References). A thoughtful,
extensive, and accessible overview of the nature of AI
technologies, the sectors to which they are being applied,
and their wider implications for individuals, society, and
work.
Transparency
Action Editor: Robert L. Goldstone
Editor: Robert L. Goldstone
Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of
interest with respect to the authorship or the publication
of this article.
ORCID iDs
Eva Selenko https://orcid.org/0000-0002-9579-9200
Joel Warburton https://orcid.org/0000-0002-5638-1514
References
Acemoglu, D., & Restrepo, P. (2020). The wrong kind of AI?
Artificial intelligence and the future of labour demand.
Cambridge Journal of Regions, Economy and Society,
13(1), 25–35. https://doi.org/10.1093/cjres/rsz022
Ashforth, B. E., & Schinoff, B. S. (2016). Identity under con-
struction: How individuals come to define themselves
in organizations. Annual Review of Organizational
Psychology and Organizational Behavior, 3, 111–137.
https://doi.org/10.1146/annurev-orgpsych-041015-062322
Brunn, M., Diefenbacher, A., Courtet, P., & Genieys, W.
(2020). The future is knocking: How artificial intelligence
will fundamentally change psychiatry. Academic Psychiatry,
44(4), 461–466. https://doi.org/10.1007/s40596-020-
01243-8
Brynjolfsson, E., & Mitchell, T. (2017). What can machine
learning do? Workforce implications. Science, 358(6370),
1530–1534. https://doi.org/10.1126/science.aap8062
Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J.,
Singler, B., & Taylor, L. (2018). Portrayals and perceptions
of AI and why they matter. The Royal Society. https://
royalsociety.org/~/media/policy/projects/ai-narratives/
AI-narratives-workshop-findings.pdf
Chen, Y., & Reay, T. (2021). Responding to imposed job redesign:
The evolving dynamics of work and identity in restructuring
professional identity. Human Relations, 74(10), 1541–
1571. https://doi.org/10.1177/0018726720906437
Craig, K., Thatcher, J. B., & Grover, V. (2019). The IT iden-
tity threat: A conceptual definition and operational mea-
sure. Journal of Management Information Systems, 36(1),
259–288. https://doi.org/10.1080/07421222.2018.1550561
Das, S., Steffen, S., Clarke, W., Reddy, P., Brynjolfsson, E.,
& Fleming, M. (2020). Learning occupational task-shares
dynamics for the future of work. In AIES ’20: Proceedings
of the AAAI/ACM Conference on AI, Ethics, and Society
(pp. 36–42). Association for Computing Machinery.
https://doi.org/10.1145/3375627.3375826
Endacott, C. G. (2021). The work of identity construction in
the age of intelligent machines [Doctoral dissertation, UC
Santa Barbara]. UC Santa Barbara Electronic Theses and
Dissertations. https://escholarship.org/uc/item/2kb6p061
Hashimoto, D. A., Rosman, G., Rus, D., & Meireles, O. R.
(2018). Artificial intelligence in surgery: Promises and
perils. Annals of Surgery, 268(1), 70–76. https://doi.org/10.1097/SLA.0000000000002693
Howard, J. (2019). Artificial intelligence: Implications for the
future of work. American Journal of Industrial Medicine,
62(11), 917–926. https://doi.org/10.1002/ajim.23037
Jha, S., & Topol, E. J. (2016). Adapting to artificial intelligence:
Radiologists and pathologists as information specialists.
Journal of the American Medical Association, 316(22),
2353–2354. https://doi.org/10.1001/jama.2016.17438
Lanzolla, G., Lorenz, A., Miron-Spektor, E., Schilling, M.,
Solinas, G., & Tucci, C. L. (2020). Digital transformation:
What is new if anything? Emerging patterns and manage-
ment research. Academy of Management Discoveries, 6(3),
341–350. https://doi.org/10.5465/amd.2020.0144
Nelson, A. J., & Irwin, J. (2014). “Defining what we do—All
over again”: Occupational identity, technological change,
and the librarian/Internet-search relationship. Academy
of Management Journal, 57(3), 892–928. https://doi.org/
10.5465/amj.2012.0201
Petriglieri, J. L. (2011). Under threat: Responses to and
the consequences of threats to individuals’ identities.
Academy of Management Review, 36(4), 641–662. https://
doi.org/10.5465/amr.2009.0087
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and
management: The automation–augmentation paradox.
Academy of Management Review, 46(1), 192–210. https://
doi.org/10.5465/amr.2018.0072
Schafheitle, S., Weibel, A., Ebert, I., Kasper, G., Schank, C.,
& Leicht-Deobald, U. (2020). No stone left unturned?
Toward a framework for the impact of datafication
technologies on organizational control. Academy of
Management Discoveries, 6(3), 455–487. https://doi.org/
10.5465/amd.2019.0002
Schneider, P., & Sting, F. J. (2020). Employees’ perspectives
on digitalization-induced change: Exploring frames of
Industry 4.0. Academy of Management Discoveries, 6(3),
406–435. https://doi.org/10.5465/amd.2019.0012
Strich, F., Mayer, A. S., & Fiedler, M. (2021). What do I do in
a world of artificial intelligence? Investigating the impact
of substitutive decision-making AI systems on employ-
ees’ professional role identity. Journal of the Association
for Information Systems, 22(2), 304–324. https://doi.org/
10.17705/1jais.00663
Tajfel, H., & Turner, J. C. (1986). The social identity theory of
intergroup behavior. In S. Worchel & W. G. Austin (Eds.),
Psychology of intergroup relations (pp. 7–24). Nelson-Hall.
Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of arti-
ficial intelligence feedback: Deployment versus disclosure
effects on employee performance. Strategic Management
Journal, 42(9), 1600–1631. https://doi.org/10.1002/smj.3322
Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels,
I., & Wood, F. (2019). The effective and ethical development
of artificial intelligence: An opportunity to improve our
wellbeing. Australian Council of Learned Academies.
https://acola.org/wp-content/uploads/2019/07/hs4_artificial-intelligence-report.pdf
Wilson, H. J., Daugherty, P., & Bianzino, N. (2017). The jobs
that artificial intelligence will create. MIT Sloan Manage-
ment Review, 58(4), 14–16.
Wright, S. A., & Schultz, A. E. (2018). The rising tide of arti-
ficial intelligence and business automation: Developing
an ethical framework. Business Horizons, 61(6), 823–832.
https://doi.org/10.1016/j.bushor.2018.07.001
... Thirdly, beyond the pandemic contexts, this study brings new knowledge on the link between individually experienced work-related changes and wider societal behaviors, by highlighting the role of identity (e.g., see also : Selenko & De Witte 2021;Van Hootegem et al. 2022). Particularly now, with the next major work-related change just happening, the lessons from this study seem pressing: The rapid implementation of generative AI tools at work brings radical changes to work, to concepts of expertise, and to occupations (see Selenko et al. 2022). We believe this study, albeit conducted in a unique context, can still be very informative for situations of major work-related changes and understanding their societal implications. ...
... Success needs to be measured by (long-term) changes, for instance, regarding HCPS functionality and security. Impact assessment should take place at various levels in the early stages of technology development and explicitly address aspects of human (psychological) well-being including understandability of, trust in and threat or expansion of human identity through CPS-functionalities (Selenko et al., 2022). HCPS development need to emphasise psychological facets because the development of cyber parts based on artificial intelligence (AI) will augment and challenge human cognitive abilities in a yet unprecedented way. ...
Article
Many cyber-physical systems face the challenge of appropriately integrating domain-specific human expert knowledge into the cyber part to create a shared sphere of knowledge and intelligent interactions between humans and the semi-autonomous technical system. Cognitive engineering contributes methods and insights into higher-order cognition that help to embed human knowledge in an appropriate way. The original research introduces a novel transdisciplinary framework called Human-CoMo, which demonstrates a systematic modelling process, different human perspectives, and the integration of expert knowledge at multiple hierarchical levels. Fundamental principles inspired by human cognition, such as conceptual chunking and knowledge precision, are characterised. Furthermore, it is shown how knowledge hierarchies can be methodically reflected in appropriate data analysis and modelling levels for small and big data applications including artificial intelligence approaches. Combined knowledge- and data-based modelling approaches offer more flexibility to integrate the strengths of humans and technology in a complementary way. The cognitive foundations and their computational reflections are outlined for the technical example process electroplating from the field of materials and surface engineering. The possibilities and limitations of integrating human knowledge through formalisation and implications for future forms of human-machine interaction are discussed.
Article
In this paper we discuss the possibility of robots having a mind and being able to act like human beings and even surpass the human intelligence, and in consequence taking over the world. It is possibility that has been put forward in human history long ago, and that has been accentuated with the new advances in technology from the last few years, of which Chat GPT is the last very well-known example. We base ourselves in a literature review made on eight basic features we define as characteristic of humans, namely: Reproduction, Creation, Belonging, Citizenship, Self-Awareness, Mortality, Rationality, Humour, Feelings and Emotions. We use a plurality of databases as Google and SCOPUS. As a result, we conclude that even if robots may express themselves as humans, and may beat humans in specific activities, they lack most of the features that define human beings and most probably they will ever do. As with time and space travelling, robots that would take power on Earth are a utopia that will probably never happen, but whose pursue will be beneficial for the human race. The paper has the limitation of being only theoretical, and the originality of being based on Philosophy of Artificial Intelligence and presented in a scientific environment.
Article
Full-text available
The increasing volume of unstructured data from social media, news articles, and financial reports presents a significant opportunity for market forecasting. This study explores the application of sentiment analysis as a predictive tool in financial markets using machine learning techniques. By employing natural language processing (NLP) algorithms, we analyze sentiment from diverse textual sources to gauge public opinion and its correlation with market trends. We utilize a range of machine learning models, including logistic regression, support vector machines, and neural networks, to classify sentiment polarity and predict market movements. The effectiveness of the proposed framework is evaluated through backtesting against historical market data, demonstrating its ability to enhance forecasting accuracy compared to traditional financial indicators. Our findings suggest that sentiment analysis, when integrated with machine learning methodologies, can serve as a valuable asset for investors and analysts, providing deeper insights into market dynamics and informing trading strategies. Background Information The financial markets are influenced by a myriad of factors, including economic indicators, company performance, and investor sentiment. In recent years, the advent of digital communication platforms has led to a substantial increase in the volume of textual data available for analysis. Social media, news outlets, and financial blogs serve as rich sources of public opinion, which can significantly impact market behavior. Understanding and quantifying this sentiment can provide crucial insights for market forecasting. Sentiment analysis, also known as opinion mining, is a subfield of natural language processing (NLP) that focuses on extracting subjective information from text. By classifying sentiments as positive, negative, or neutral, researchers and analysts can gauge public sentiment toward specific assets or the market as a whole. Traditional methods of sentiment analysis have often relied on lexicon-based approaches, where predefined lists of words are used to determine sentiment polarity. However, these methods can be limited in their ability to capture the nuances of language and context. With the advancement of machine learning algorithms, particularly in NLP, more sophisticated approaches to sentiment analysis have emerged. Techniques such as supervised learning, where models are trained on labeled datasets, allow for the automatic extraction of features and patterns that are indicative of sentiment. Recent developments in deep learning, including recurrent neural networks (RNNs) and transformers, have further enhanced the accuracy of sentiment classification. Market forecasting traditionally involves the use of quantitative indicators such as price trends, trading volume, and economic data. However, integrating sentiment analysis into forecasting models can provide a more comprehensive view of market dynamics. By analyzing the sentiment
Article
Full-text available
The integration of Artificial Intelligence (AI) into Human Resource Management (HRM) has transformed traditional practices, particularly in predicting employee retention and performance. This study explores the application of AI-driven analytics in HRM, emphasizing its potential to enhance decision-making processes related to workforce management. By leveraging machine learning algorithms and predictive modeling, organizations can analyze vast datasets encompassing employee demographics, engagement levels, performance metrics, and historical turnover patterns. This research highlights how AI tools can identify key factors influencing employee retention, thereby enabling HR professionals to implement targeted interventions to foster a more engaged and productive workforce. Furthermore, the study examines the implications of AI for performance assessment, demonstrating how data-driven insights can lead to more accurate evaluations and personalized development plans. The findings underscore the necessity for HR practitioners to embrace AI technologies to remain competitive in a rapidly evolving business landscape. Ultimately, this study contributes to the growing body of literature on AI in HRM, offering practical recommendations for integrating these tools into existing HR frameworks. Background Human Resource Management (HRM) has traditionally relied on qualitative methods and human judgment to manage workforce dynamics. However, the rapid advancement of technology, particularly Artificial Intelligence (AI), is reshaping the landscape of HRM. AI encompasses a range of technologies, including machine learning, natural language processing, and predictive analytics, which enable organizations to analyze large volumes of data efficiently. Employee retention and performance are critical metrics for organizational success, directly impacting productivity, employee morale, and overall business outcomes. High turnover rates can lead to significant costs associated with recruitment, training, and loss of institutional knowledge. Consequently, organizations are increasingly seeking innovative solutions to identify and mitigate factors contributing to employee attrition. AI presents a unique opportunity to enhance HRM practices by providing data-driven insights into employee behaviors and trends. By utilizing algorithms to process and analyze data from various sources-such as employee surveys, performance evaluations, and exit interviews-HR professionals can uncover patterns that predict retention risks and performance potential. This predictive capability allows for proactive measures to improve employee satisfaction, engagement, and performance outcomes. Despite the potential benefits, the implementation of AI in HRM also raises ethical considerations, such as data privacy, algorithmic bias, and the need for transparency in decision-making processes. Therefore, organizations must navigate these challenges while integrating AI technologies to optimize their HR practices.
Article
Full-text available
The integration of big data analytics in manufacturing has revolutionized the landscape of predictive maintenance, enhancing operational efficiency and minimizing downtime. This paper explores the pivotal role of big data in predictive maintenance strategies, emphasizing the ability to collect, process, and analyze vast amounts of data from various sources, including sensors, machinery, and production processes. By leveraging advanced analytics techniques such as machine learning and artificial intelligence, manufacturers can predict equipment failures, optimize maintenance schedules, and reduce costs. The study highlights case studies demonstrating the successful implementation of big data-driven predictive maintenance frameworks, illustrating their impact on productivity, safety, and sustainability. Additionally, it discusses the challenges of data management, integration, and security in adopting these technologies. The findings suggest that embracing big data analytics in predictive maintenance not only enhances operational reliability but also fosters a proactive maintenance culture, ultimately leading to improved competitive advantage in the manufacturing sector. Background In recent years, the manufacturing sector has undergone a transformative shift due to the rapid advancement of technology, particularly in the realm of data analytics. Big data refers to the vast volume of structured and unstructured data generated from various sources, such as sensors, machinery, and operational systems. This influx of data presents both opportunities and challenges for manufacturers aiming to enhance efficiency and competitiveness. Predictive maintenance, a proactive approach to equipment management, leverages big data analytics to predict when maintenance should be performed. Unlike traditional maintenance strategies, which are often reactive or scheduled at fixed intervals, predictive maintenance relies on real-time data to forecast equipment failures before they occur. This approach not only minimizes unplanned downtime but also optimizes maintenance resources and reduces operational costs. The importance of predictive maintenance has grown as manufacturing systems become increasingly complex and interconnected. The rise of the Industrial Internet of Things (IIoT) has enabled real-time monitoring of equipment health, allowing manufacturers to gather critical performance data continuously. This data, when analyzed effectively, can provide insights into equipment performance trends, identify potential issues, and facilitate timely interventions. Despite its potential benefits, the implementation of big data-driven predictive maintenance is not without challenges. Manufacturers face obstacles such as data integration from disparate sources, the need for advanced analytical skills, and concerns regarding data security and privacy. Additionally, organizations must cultivate a culture of data-driven decision-making to fully leverage the capabilities of predictive maintenance.
Article
Full-text available
The rise of digital transactions has intensified the risk of financial fraud, necessitating the development of robust detection mechanisms. Machine learning (ML) offers innovative approaches to identifying fraudulent activities by analyzing vast datasets for patterns indicative of fraud. This paper reviews various ML algorithms employed in financial fraud detection, including supervised learning methods such as logistic regression, decision trees, and ensemble techniques, as well as unsupervised methods like clustering and anomaly detection. We discuss the advantages of ML, including its ability to learn from historical data, adapt to evolving fraud strategies, and minimize false positives. Additionally, the paper addresses challenges such as data privacy, model interpretability, and the need for continuous model retraining. Through case studies and performance evaluations, we highlight successful applications of ML in real-world scenarios and propose future research directions to enhance the effectiveness of fraud detection systems. Background Information Financial fraud has emerged as a significant concern for individuals, corporations, and financial institutions worldwide, particularly with the rapid growth of online banking, e-commerce, and digital payment systems. Fraudulent activities, including identity theft, credit card fraud, and money laundering, have led to substantial financial losses, estimated to be in the billions of dollars annually. Traditional fraud detection methods, which often rely on rule-based systems and manual reviews, have proven inadequate in keeping pace with the sophisticated tactics employed by fraudsters. The advent of big data and advancements in computational power have paved the way for more effective solutions. Machine learning, a subset of artificial intelligence, enables systems to learn from historical data and identify complex patterns that may indicate fraudulent behavior. Unlike conventional approaches, ML algorithms can adapt to new fraud patterns over time, making them particularly suited for dynamic and evolving financial environments. The application of machine learning in financial fraud detection involves the use of various algorithms, such as supervised learning techniques (e.g., classification algorithms) and unsupervised learning techniques (e.g., clustering and anomaly detection). Supervised learning models are trained on labeled datasets, allowing them to predict fraudulent activities based on historical instances. In contrast, unsupervised learning approaches analyze data without pre-labeled outcomes, identifying unusual patterns that may warrant further investigation. Despite the potential benefits, the integration of machine learning into fraud detection systems presents several challenges. Issues such as data privacy, the need for high-quality labeled datasets, and the interpretability of complex models must be addressed to enhance trust and usability in financial applications. Furthermore, the fast-paced nature of financial fraud requires continuous model updates and retraining to ensure optimal performance.
Article
Full-text available
As artificial intelligence (AI) increasingly permeates business analytics, it raises significant ethical considerations that merit urgent attention. This paper explores the multifaceted ethical implications of employing AI in business decision-making processes, particularly concerning data privacy, bias, accountability, and transparency. The integration of AI can enhance efficiency and drive informed decisions, yet it also poses risks of reinforcing existing biases and discrimination within datasets, which can lead to skewed outcomes. Furthermore, the lack of transparency in AI algorithms complicates accountability, making it difficult for businesses to ensure ethical compliance. By examining case studies and existing frameworks, this study aims to propose ethical guidelines for implementing AI in business analytics. Ultimately, fostering a responsible approach to AI can promote trust, safeguard stakeholder interests, and enhance the long-term sustainability of businesses in an increasingly data-driven world. Background The rapid advancement of artificial intelligence (AI) technologies has revolutionized various sectors, with business analytics being one of the most impacted areas. AI-driven tools enable organizations to process vast amounts of data, uncover patterns, and generate predictive insights that inform strategic decisions. However, as the reliance on AI in business grows, so does the necessity to address the ethical implications associated with its use. Business analytics encompasses a range of activities, from descriptive and diagnostic analytics to predictive and prescriptive analytics. In this context, AI enhances analytical capabilities by automating data processing, improving accuracy, and enabling real-time insights. Yet, the implementation of AI raises critical ethical concerns, particularly regarding data privacy, algorithmic bias, accountability, and transparency. Data privacy is a prominent issue as businesses increasingly collect and analyze personal information to drive insights. The use of sensitive data necessitates strict adherence to privacy regulations and ethical standards to protect individual rights. Furthermore, AI algorithms are often trained on historical data that may reflect systemic biases, leading to discriminatory outcomes in decision-making processes. This bias can perpetuate inequalities in hiring, lending, and customer service, undermining the fundamental principles of fairness and equity. Accountability in AI decision-making remains a complex challenge, as the opaque nature of many AI models complicates the attribution of responsibility for outcomes. This lack of transparency can lead to a diminished trust among stakeholders, including consumers, employees, and regulatory bodies. Therefore, the establishment of ethical guidelines and frameworks is essential to ensure that AI is deployed responsibly in business analytics, balancing innovation with ethical considerations.
Article
Full-text available
The integration of Artificial Intelligence (AI)-driven predictive analytics is revolutionizing business decision-making across industries. By leveraging vast datasets, AI systems apply machine learning algorithms, statistical models, and data mining techniques to forecast future trends, customer behaviors, and operational outcomes. This shift enables companies to transition from reactive to proactive strategies, optimizing decision-making processes. AI-driven insights can enhance supply chain management, marketing strategies, financial forecasting, and risk assessment, fostering data-driven decisions that improve efficiency and competitiveness. Key components such as pattern recognition, predictive modeling, and real-time data processing are central to the AI's capability to provide accurate and actionable insights. AI tools not only process historical data but also analyze dynamic, real-time inputs, allowing businesses to adapt quickly to changing conditions. This technology also mitigates human biases, uncovering hidden patterns that might go unnoticed by traditional analysis. However, challenges such as data privacy concerns, the need for quality data, and the complexity of integrating AI solutions with existing systems remain critical considerations. As businesses continue to harness the power of AI-driven predictive analytics, they stand to gain a competitive edge by anticipating market shifts, refining operational efficiency, and improving customer satisfaction. The future of business decision-making lies in the ability to make informed, data-backed predictions, with AI serving as a pivotal tool in this transformative process. INTRODUCTION Background Information In today's data-driven world, businesses are generating vast amounts of information from diverse sources such as customer transactions, social media interactions, market trends, and supply chain operations. Traditional data analysis methods, while effective in many cases, are often limited in their ability to handle the volume, variety, and velocity of this data. To address this challenge, predictive analytics, powered by Artificial Intelligence (AI), has emerged as a transformative tool for businesses looking to gain deeper insights and forecast future trends. Predictive analytics involves using historical data to predict future outcomes. It utilizes statistical algorithms, machine learning, and data mining techniques to identify patterns in historical data that can be used to anticipate future events. AI enhances this process by automating the analysis of large and complex datasets and continuously learning from new data, making predictions more accurate and timely. The evolution of AI has greatly expanded the capabilities of predictive analytics. Machine learning algorithms, a core part of AI, allow systems to learn from data without being explicitly programmed. This means that the more data AI systems process, the better they become at identifying trends and making accurate predictions. Furthermore, AI's ability to process real-time data allows businesses to make decisions based on the most current information available.
Article
Companies are increasingly using artificial intelligence (AI) to provide performance feedback to employees by tracking employee behavior at work, automating performance evaluations, and recommending job improvements. However, this application of AI has provoked much debate. On the one hand, powerful AI data analytics increase the quality of feedback, which may enhance employee productivity (the "deployment effect"). On the other hand, employees may develop a negative perception of AI feedback once its use is disclosed to them, harming their productivity (the "disclosure effect"). We examine these two effects theoretically and test them empirically using data from a field experiment. We find strong evidence that both effects coexist and that the adverse disclosure effect is mitigated by employees' tenure in the firm. These findings offer pivotal implications for management theory, practice, and public policy.

Managerial abstract

Artificial intelligence (AI) technologies are bound to transform how companies manage employees. We examine the use of AI to generate performance feedback for employees and demonstrate that AI significantly increases the accuracy and consistency of the analyses of the information collected, as well as the relevance of feedback to each employee. These advantages help employees achieve greater job performance at scale and thus create value for companies. However, our study also alerts companies to the negative effect of disclosing the use of AI to employees, which stems from employees' negative perceptions of AI deployment and offsets the business value AI creates. To alleviate this value-destroying disclosure effect, we suggest that companies communicate proactively with their employees about the objectives, benefits, and scope of AI applications in order to assuage their concerns. Moreover, the finding that the negative disclosure effect is allayed among longer-tenured employees suggests that companies may consider deploying AI in a tiered rather than uniform fashion, for example using AI to provide performance feedback to veteran employees while human managers provide feedback to novices.
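One way to picture the tenure moderation the study reports is a simple interaction model. The sketch below simulates data under an assumed process in which disclosure lowers productivity but the penalty shrinks with tenure, then recovers the effects with an ordinary-least-squares fit. The coefficients and the use of statsmodels are illustrative assumptions, not the paper's actual analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "disclosed": rng.integers(0, 2, n),   # 1 = AI use disclosed to the employee
    "tenure": rng.uniform(0, 10, n),      # years in the firm
})
# Assumed data-generating process: a disclosure penalty that fades with tenure.
df["productivity"] = (50
                      - 4.0 * df["disclosed"]
                      + 0.5 * df["disclosed"] * df["tenure"]
                      + rng.normal(0, 2, n))

fit = smf.ols("productivity ~ disclosed * tenure", data=df).fit()
print(fit.params)  # the disclosed:tenure term should come out near +0.5

A positive interaction term is what "mitigated by tenure" looks like in regression form: the longer the tenure, the smaller the net disclosure penalty.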
Article
Artificial intelligence (AI) is inducing a profound transformation of both the practice and the structure of medicine. This implies changes in tasks, where certain processes may be taken over by AI applications, as well as novel ways of collaborating and integrating information. Consider a recent example in which AI is used to prevent suicide attempts by drawing on smartphones' native sensors and signal-processing techniques [1]. This new suicide-prevention technique requires the psychiatrist to acquire new skills (handling and interpreting continuous patient data sent by a dedicated application) and to interact with new actors (programmers, data managers, etc.). Further, the abundance of individual patient data may contribute to a shift in how care is conceptualized, from the traditional identification of general risk factors towards more tailored prevention strategies in the sense of personalized medicine.
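To give a flavor of the signal processing involved, here is a deliberately simplified Python sketch that turns raw accelerometer samples into a daily activity feature and flags unusually low days. The sampling rate, window, and threshold are assumptions for illustration; the cited technique [1] is considerably more sophisticated:

import numpy as np

# Synthetic accelerometer data: 3 days of 1 Hz x/y/z samples.
rng = np.random.default_rng(1)
samples = rng.normal(0, 1, (3, 86_400, 3))

# Per-sample movement magnitude, summarized into one feature per day.
magnitude = np.linalg.norm(samples, axis=2)
daily_activity = magnitude.mean(axis=1)
baseline = daily_activity.mean()

for day, value in enumerate(daily_activity, start=1):
    if value < 0.7 * baseline:  # assumed alert threshold
        print(f"day {day}: activity well below baseline, flag for clinical review")

In a deployed system, features like these would feed a clinician-facing application rather than trigger automated decisions, which is part of why the psychiatrist's new interpretive skills matter.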
Article
How do employees perceive strategic technology initiatives? Understanding this is crucial for attaining employees' acceptance, and thus for successful initiative implementation. Drawing on timely cases of digitalization-induced change triggered by Industry 4.0, the digital networking of the manufacturing industry, we investigate manufacturing employees' thoughts and feelings about this proclaimed fourth industrial revolution. We employ the Zaltman Metaphor Elicitation Technique (ZMET), a semi-structured, in-depth interview format, to unearth individuals' deep-seated beliefs and values, and thereby identify five distinct frames (utilitarian, functional, anthropocentric, traditional, and playful) that drive employees' attitudes towards Industry 4.0. Based on this inductive approach and our further analysis of frame-adoption patterns, we take a first step towards a cognitive theory of the perception of digitalization-induced change that foregrounds employees' perspectives and helps explain why and when certain employees accept such change while others do not. Our findings inform managerial practice on (i) how to promote far-reaching digitalization initiatives across employees and (ii) how to address the individual employee via frame-contingent communication in order to increase the likelihood of successful implementation. Furthermore, our study adds to theory on cognitive frames, ambivalent attitudes towards change, and framing effectiveness. Finally, it makes a methodological contribution by adapting ZMET, a market-research technique geared towards customers, to the manufacturing shop floor, geared towards employees.
Article
How do professionals respond when they are required to conduct work that does not match with their identity? We investigated this situation in an English public services organization where a major work redesign initiative required professionals to engage in new tasks that they did not want to do. Based on our findings, we develop a process model of professional identity restructuring that includes the following four stages: (1) resisting identity change and mourning the loss of previous work, (2) conserving professional identity and avoiding the new work, (3) parking professional identity and learning the new work, and (4) retrieving and modifying professional identity and affirming the new work. Our model explicates the dynamics between professional work and professional identity, showing how requirements for new professional work can lead to a new professional identity. We also contribute to the literature by showing how parking one’s professional identity facilitates the creation of liminal space that allows professional identity restructuring.
Article
Artificial intelligence (AI) systems in the workplace increasingly substitute for employees' tasks, responsibilities, and decision-making. Consequently, employees must relinquish core activities of their work processes without the ability to interact with the AI system (e.g., to influence decision-making processes or to adapt or overrule decision outcomes). To deepen our understanding of how substitutive decision-making AI systems affect employees' professional role identity, and how employees adapt their identity in response to the system, we conducted an in-depth case study of a company in the area of loan consulting. We qualitatively analyzed more than 60 interviews with employees and managers. Our research contributes to the literature on information systems (IS) and identity by disclosing mechanisms through which employees strengthen and protect their professional role identity despite being unable to interact directly with the AI system. Further, we highlight the boundary conditions for introducing an AI system and contribute to the body of empirical research on the potential downsides of AI.
Article
Artificial intelligence (AI) is set to influence every aspect of our lives, not least the way production is organised. AI, as a technology platform, can automate tasks previously performed by labour or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labour can be productively employed. The consequences of this choice have been stagnating labour demand, a declining labour share in national income, rising inequality, and slowing productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the 'right' kind of AI, with better economic and social outcomes.
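The task-based logic of this abstract can be illustrated with toy arithmetic. Assume, purely for illustration, that income splits in proportion to task counts; the numbers below are invented:

# Labour share under pure automation versus automation plus new tasks.
def labour_share(labour_tasks, capital_tasks):
    return labour_tasks / (labour_tasks + capital_tasks)

tasks_labour, tasks_capital = 80, 20  # assumed initial allocation of tasks
print(f"before automation: {labour_share(tasks_labour, tasks_capital):.0%}")

# Automation reassigns 15 tasks from labour to capital.
tasks_labour, tasks_capital = 65, 35
print(f"automation only:   {labour_share(tasks_labour, tasks_capital):.0%}")

# The 'right' kind of AI also creates 10 new tasks for labour.
tasks_labour += 10
print(f"with new tasks:    {labour_share(tasks_labour, tasks_capital):.0%}")

Under these assumptions, pure automation drops the labour share from 80% to 65%; creating new labour tasks claws some of it back (to roughly 68% here), which is the intuition behind the stagnating labour demand and declining labour share the abstract describes.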
Article
The goal of this paper is to develop an empirically grounded framework for analyzing how new technologies, particularly those used for datafication, alter or expand traditional organizational control configurations. Datafication technologies for employee-related data gathering, analysis, interpretation, and learning are increasingly applied in the workplace, yet detailed insight into the effects of these technologies on traditional control remains lacking. To build a better understanding of such datafication technologies in employee management and control, we employed a three-step, exploratory, multi-method morphological analysis. In step 1, we developed a framework based on twenty-six semi-structured interviews with technological experts. In step 2, we refined and redefined the framework in four workshops conducted with scholars specializing in the topics that emerged in step 1. In step 3, we evaluated and validated the framework with potential and actual users of datafication technology controls. The resulting "Datafication Technology Control Configurations" (DTCC) framework comprises eleven technology-control dimensions and thirty-six technology-control elements, offering first insights into how datafication technologies can change our understanding of traditional control configurations.
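Structurally, a morphological framework like the DTCC is a set of dimensions each holding possible elements, and a control configuration is one element chosen per dimension. The Python sketch below shows that structure; the dimension and element names are hypothetical stand-ins, since the framework's actual eleven dimensions and thirty-six elements are detailed in the paper itself:

from itertools import product

# Hypothetical dimensions and elements (not the DTCC's real ones).
framework = {
    "data_source":   ["sensors", "software logs", "self-reports"],
    "analysis_mode": ["descriptive", "predictive"],
    "transparency":  ["disclosed", "undisclosed"],
}

# Every combination of one element per dimension is a candidate configuration.
configurations = list(product(*framework.values()))
print(f"{len(configurations)} possible configurations, e.g. {configurations[0]}")

This combinatorial view is what makes morphological analysis useful: it enumerates the full space of technology-control configurations before empirical work narrows it down.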