PSYCHOLOGICAL IMPLICATIONS OF AI-DRIVEN AUTOMATION FOR EMPLOYEES 1
Psychological Implications of AI-driven Automation for Employees
Insights from Healthcare and Financial Services in Germany
Antonia Sureth1,* and Jens Nachtwei1,2
1Humboldt-Universität zu Berlin, Department of Psychology, 10099 Berlin, Germany
2University of Applied Management, Department of Business Psychology, 85737 Ismaning,
Germany
*Corresponding author: Antonia Sureth, Department of Psychology, Humboldt-Universität zu Berlin,
Unter den Linden 6, 10099 Berlin, Germany. E-Mail: antonia.sureth@hu-berlin.de
Acknowledgements: We thank Jannis Berning, Ksenia Danilova, Ayleen Nowoczyn, and Sonja Weiß
who supported the development of materials, collected the data, and conducted the initial data
analysis. We would further like to show our gratitude to the numerous feedback-givers who
reviewed our study materials and shared their expertise with us.
Note: This is a preprint of a manuscript. It has not been peer-reviewed yet
and is therefore not the final manuscript.
Abstract
Advances in artificial intelligence (AI) are shaking up the world of work. The capabilities of AI
are becoming more and more sophisticated and AI systems are increasingly capable of performing
not only routine but also non-routine and cognitive tasks autonomously. This has triggered the
widespread adoption of AI to automate an increasing number of job tasks. Consequently, jobs are
changing, both in quantity and quality. While the debate on the impact of AI and automation on
employment has intensified, little attention has been paid to the subjective experience of employees
and the psychological implications of AI-induced changes in the workplace. Thus, we examined the
psychological implications of AI-driven automation with an explicit focus on employees’ experience
and behaviour. To do so, we investigated the application fields of AI, the resulting changes in job
tasks, and the associated psychological implications in two exemplary sectors, i.e., healthcare and
financial services. Using an exploratory approach, we conducted five interview studies with a total of
91 German experts and employees, examining three occupational groups, including physicians,
psychotherapists, and bank clerks. Using qualitative content analyses, we compiled comprehensive
and detailed data that provide insights into the psychological implications of AI-induced changes of
job tasks in the two investigated sectors. Our results serve as a starting point for further research in
this area and highlight the importance of taking a human-centric approach.
Keywords: automation, artificial intelligence, psychological implications, interview studies,
employees
1 Introduction
Technological advancement, further accelerated by the COVID-19 pandemic (Rodriguez-Bustelo et al. 2020), is leading to rapid, severe, and parallel changes in the world of work (Susskind
2020; Wang and Siau 2019). Advanced technologies, first and foremost artificial intelligence (AI), are
developing at a tremendous pace and rapidly expanding their range of capabilities, becoming
increasingly capable of performing not only routine but also non-routine and cognitive tasks
autonomously (Parker and Grote 2022; Smids et al. 2020). This has major implications for jobs, as
these systems are being used to automate an ever-growing number of job tasks that were originally
performed by humans (Autor 2015; Glikson and Woolley 2020). As a result, jobs are changing, both
in quality and quantity, with enormous social consequences. These developments have inspired a
plethora of books, reports, and articles that are fuelling a highly emotional and ambivalent debate in
academia, as well as society and politics (among others Ford 2015; Frey and Osborne 2017; Maslej et
al. 2023; McAfee and Brynjolfsson 2017). Proponents of these changes promise a new ideal world
with less social inequality, cost savings for organizations, and improved working conditions for
employees. Critics, on the other hand, fear that social inequality will increase, organizations will
either not be able to keep pace with technological progress or automate too fast, neglecting ethical
issues and risking failure, and employees will face worsened working conditions, job insecurity, and
unemployment.
Probably the most prominent discussion revolves around the question of whether there will
still be enough paid work for everyone in the future (i.e., the quantity of work). Some argue for the
loss of millions of jobs whereas others believe that new jobs will emerge and compensate for the
predicted loss (as with previous industrial revolutions). Ultimately, it is unclear whether there will be
more or fewer jobs in the future (Dengler and Gundert 2021). Where many agree, however, is that it
is not jobs but job tasks that will disappear or emerge, and that this will change jobs but hardly lead
to their entire disappearance (Kaplan and Haenlein 2019; Parker and Grote 2022).
At least as important is the question of changing working conditions at the individual level
(i.e., the quality of work). Regardless of whether and which jobs will disappear or emerge, job tasks
change so that employees see themselves confronted with changed working conditions (Spencer
2018). From a long history of work psychology research, we know that paid work, in addition to its
economic purpose, fulfils many psychological functions (Jahoda 1981), providing employees with
personal identity (Harpaz 2002), meaningfulness (Rosso et al. 2010), social integration, and much
more (Seubert et al. 2021). So, if job tasks change because of increased automation through the use
of AI, this has psychological implications for employees. These can be either positive or negative,
depending on how employees subjectively experience these changes (e.g., a reduced workload may
be perceived as less stressful or more boring, which may increase or decrease job satisfaction). On
top of that, not only the current experience of changed job tasks, but also expectations and beliefs
about possible changes that may occur in the future have psychological implications (Makarius et al.
2020; Schwabe and Castellacci 2020) (e.g., the expectation that the workload will decrease in the future can trigger either concern (expected increase in boredom) or relief (expected decrease in stress) and possibly influence job satisfaction in the present, even though the lower workload has not yet occurred and may never occur). Finally, both the current experience and expectations
and beliefs about the future impact employees’ behaviour (Baumeister et al. 2016; Sorgner 2017)
(e.g., looking out for additional tasks when experiencing or expecting boredom or finding ways to
hand off job tasks when experiencing or expecting stress).
Surprisingly, employees’ subjective experience of AI and AI-induced changes in the
workplace as well as the associated psychological implications have received comparatively little
attention so far (Ackerman and Kanfer 2020; Prahl and Van Swol 2021). The aim of our work is
therefore to investigate (1) the application fields of AI in the workplace, (2) the resulting changes in job tasks, and (3) the associated psychological implications, in two exemplary sectors, i.e., healthcare and
financial services. To this end, we conducted five interview studies with German experts and
employees and examined three occupational groups, i.e., physicians, psychotherapists, and bank
clerks. With this explorative approach, we aimed to collect extensive empirical data on our research
questions that would improve understanding as well as serve as a starting point for further research
and theory building in this area.
2 Background
2.1 Work, Occupations, Jobs, and Job Tasks
In the debate about the future of work, many different terms are being used, all of which
refer to work in a narrower or broader sense. What is usually missing, however, is a clear
classification and delineation of these terms. “Work” (or “labour”, prominent in economic contexts)
is very broad, usually referring to paid work in the sense of employment and mostly used as an
umbrella term. In contrast, “occupation”, “job”, and “job task” are subordinate terms and much
narrower concepts that are in a hierarchical order to each other. Occupations (e.g., physicians)
describe groups of different jobs (e.g., anaesthesiologist, neurologist) and jobs include job-specific as
well as occupation-specific job tasks (e.g., for an anaesthesiologist, writing a doctor’s report is an occupation-specific job task and administering anaesthesia a job-specific one). In the future of work debate, it is
argued that it is job tasks, and not jobs as a whole, that are being automated, and that this leads to
changes of jobs, occupations and work as a whole (Kaplan and Haenlein 2019; Parker and Grote
2022). In the debate, a common distinction is made between blue-, white- and pink-collar jobs,
which are characterized by different job tasks and differ in terms of their susceptibility to
automation (Toshav-Eichner and Bareket-Bojmel 2022). Blue-collar jobs are characterized by manual
job tasks and have long been threatened by automation (especially robots), whereas so-called “pink-
collar jobs”, referring to care-oriented jobs predominantly performed by women, are expected to be
least likely affected by automation. However, white-collar jobs covering clerical, administrative, or
managerial job tasks were long considered spared from automation. With the increasing capability
of AI technologies, this has changed, and many white-collar jobs are now considered to be affected
(Parker and Grote 2022).
2.2 AI, AI-Driven Automation, and Challenges of Empirical Assessment
AI is discussed as a general-purpose technology (Goldfarb et al. 2023) that is turning the
world of work upside down. The application fields of AI are enormously diverse and rapidly
broadening, affecting many different sectors, e.g., healthcare, sales, service, finance, education
(Trenerry et al. 2021), and transportation (Pfeiffer 2016). However, AI is extremely difficult to
conceptualize (Hudson et al. 2023) and there is no uniform definition of AI available (Wang 2019).
Instead, there are many different approaches to define AI. Often definitions of AI are either very
technical and therefore difficult for lay people to understand, or easy to understand but lack
technical sufficiency. Aiming for a definition that is both easy to understand and technically
sufficient, we have drawn on non-academic expert sources as well as academic literature (e.g.,
Dukino et al. 2019; Kaplan and Haenlein 2019; KI-Campus 2020; Kreutzer and Sirrenberg 2019; Laird
et al. 2009; Russell and Norvig 2010). Accordingly, we define AI as technical systems that independently and purposefully perform tasks that would otherwise require human intelligence. These include, for example, conducting dialogues, recognizing images, or writing texts. To be able to do this, AI systems must be supplied with data. This can happen in two main ways: either humans provide explicit instructions, or AI systems learn on their own from the data available to them and perform tasks without the human providing specific instructions.
Strictly speaking, however, it is not AI directly, but the use of AI to automate job tasks (i.e.,
the partial or complete takeover of a job task by AI) that leads to changes in job tasks and thus to
changed working conditions for many employees (for a definition of automation see Parasuraman
and Riley 1997; Parasuraman et al. 2000). To be precise here, we therefore use the term “AI-driven
automation” when referring to the automation of job tasks specifically through AI.
However, empirical research examining the (psychological) implications of AI-driven automation in the workplace is challenging due to several issues. First, there is much science fiction associated with AI that has nothing to do with reality but shapes people's mental models of what AI is and can do. This can lead to false expectations about the possibilities of using AI in the workplace, affecting the validity of participant responses.
Second, besides incorrect mental models, some people may have no mental model of AI at all and are thus unable to provide information on the topic. Given the prominence of the topic, however, this is highly unlikely. Nevertheless, both incorrect and non-existent
mental models of AI need to be addressed when examining AI-driven automation in the workplace.
Both are particularly present when questioning people who are relatively unfamiliar with the topic.
However, there are many people who develop, research, or work with AI applications in various
sectors, and therefore have sophisticated mental models of AI as well as sector-specific knowledge.
These experts are valuable participants to start off with. Nevertheless, employees, who might be more or less naive to the topic of AI, remain an indispensable source of information. To address
potential incorrect or non-existent mental models here, participants can be provided with an
introduction to the topic and easily consumable definitions of AI and AI-driven automation (for
example in the form of a text vignette). In fact, providing definitions to ensure a common
understanding of AI is also useful for expert studies because experts’ mental models, while more
sophisticated, may still differ.
Third, regardless of the quality of participants’ mental models of AI, it is often extremely
difficult to identify whether AI is built into the systems, tools, or machines used and to distinguish AI
from digitization in general (e.g., Giering et al. 2021, for a German perspective). Providing definitions
that create a common understanding of AI is certainly helpful. In some cases, however, a clear demarcation between AI and digitization is not possible, so answers might refer to digitization in general instead of AI, as originally intended.
Fourth, examining the (psychological) implications of AI-driven automation in the workplace
requires the assessment of current experiences as well as future expectations and beliefs. Some
employees might already experience AI-driven automation, either personally or as observers of
colleagues, while others expect its future impact on their job. In both cases, this impacts how employees
experience their jobs and behave accordingly and can thus have psychological implications in the
present (Makarius et al. 2020; Schwabe and Castellacci 2020). This requires research methods that
not only examine the status quo or past experiences, but also capture expectations and beliefs about
the future (see, e.g., White et al. 2022, for timelier technology research in psychology). A common problem, however, is the susceptibility of such future-oriented assessments to error (e.g., forecasts are often subject to positivity bias; Baumeister et al. 2016; Monroe et al. 2017). Participants’ expectations and beliefs
about the future may therefore not accurately reflect future outcomes. However, when investigating
the impact of expectations and beliefs about the future on participants’ current experience and
behaviour, the accuracy of those expectations and beliefs is irrelevant to their effect (e.g., expected
unemployment can cause fear, regardless of whether it actually occurs in the future; Makarius et al.
2020).
2.3 Psychological Implications of AI-Driven Automation
AI-driven automation is changing job tasks. Some tasks vanish completely, some change, and new ones emerge. As a result, working conditions are changing, and thus the
quality of work is changing. Workload may increase or decrease, the complexity of tasks may fall or
rise, new competencies may be required, responsibilities may be shifted, and employees may need
to deal with new tools and team constellations. All this can have psychological implications that can
be either positive or negative, depending on how employees experience these changes
(Lichtenthaler 2019; Toshav-Eichner and Bareket-Bojmel 2022).
Currently, a new branch of research on the psychological impact of AI in the workplace is
emerging, discussing possible psychological implications and presenting sporadic initial empirical
results on selected psychological concepts. Giermindl et al. (2022), for example, argue that
technology could lead to a dehumanization of work that fosters a loss of autonomy and self-
determination. In line with that, Fréour et al. (2021) consider a decrease of self-determination at
work plausible when employees’ actions highly depend on instructions by technology. Ghislieri et al.
(2018) suggest that the introduction of innovative systems can make employees feel more controlled
leading to dissatisfaction, demotivation, and impaired well-being and Mirbabaie et al. (2022) assume
that the introduction of AI might challenge employees’ identification with their jobs. Demerouti
(2020) notes that job demands will increase with the introduction of technology, resulting in more
exhaustion and less work engagement, as job resources will not be able to compensate for new job
demands. Empirical evidence by Braganza et al. (2021) suggests that the introduction of AI indeed
results in lower work engagement, and a study by Brougham and Haar (2018) found that awareness of smart technology, AI, robotics, and algorithms was negatively associated with organizational commitment and career satisfaction, and positively related to turnover intentions, cynicism, and depression. However, when technology is perceived as an enabling factor, Toshav-
Eichner and Bareket-Bojmel (2022) argue that employees may also feel empowered to invest their
efforts in more meaningful tasks and by that are able to realize their full potential. Similarly, Floridi
et al. (2018) argue that AI could liberate people and enable self-realization, leaving more time for
interesting and rewarding work. At the same time, the authors believe it is plausible that old skills
are quickly devalued, creating a feeling of redundancy that affects personal identity, self-esteem,
and social standing. Other authors argue that especially when a particularly large number of tasks or predominantly difficult tasks are automated, self-worth and self-esteem as well as the sense of
purpose (Smids et al. 2020), perceived meaningfulness (Danaher and Nyholm 2021), and
professional identities (Strich et al. 2021) may be threatened. In fact, a study by Abeliansky and
Beulmann (2021) found that automation had a negative impact on employees’ mental health.
Further, increased automation could result in downsizing, leaving employees with fewer colleagues,
which could worsen the sense of community (Danaher and Nyholm 2021) and lead to a lack of social
support (Fréour et al. 2021). However, when low-skill job tasks get automated, work might also
become more stimulating (Parker and Grote 2022) and automation in general could free up time for
job tasks that require creativity, social skills, and emotional intelligence (Pham et al. 2018). Smids et
al. (2020) also consider a positive effect on self-esteem and social recognition to be possible, as
employees need to invest more in their professional development to acquire new skills in order to
successfully collaborate with new technologies.
As touched upon previously, not only the current experience of changed job tasks but also
the expectation of changes that may occur in the future has psychological implications and impacts
behaviour at present (Baumeister et al. 2016; Makarius et al. 2020; Schwabe and Castellacci 2020). A
long history of research shows that people are in fact able to picture themselves in the future: they construct possible selves (Hoyle and Sherrill 2006; Markus and Nurius 1986) and future work selves (Strauss et al. 2012), imagine possible future scenarios and events (Parmentier 2021), and use the comparison of these constructions with their current selves to guide their behaviour (Baumeister et al. 2016; Strauss et al. 2012). Future thinking can create intense emotions (Van Boven and Ashworth
2007) and influence psychological aspects such as job satisfaction, anxiety, mental stress, and
motivation (Schwabe and Castellacci 2020). A meta-analysis by DeWall et al. (2016) suggests that
anticipated emotion influences current behaviour, and that this influence appears to be even more
relevant than currently experienced emotion in driving behaviour. A very tangible and prominent
example in the context of AI-driven automation is fear. Fear of being replaced by smart machines in
the future can trigger mental stress and anxiety (Makarius et al. 2020) and reduce job satisfaction
(Schwabe and Castellacci 2020) as well as life satisfaction (Hinks 2021). Ivanov et al. (2020) assume
that the fear of replacement is based on concerns about whether it will be possible to find a new job and thus avert financial hardship, and they emphasize that in many cases the main concern of employees
may not be the fear of being replaced, but fear of worsening working conditions (e.g., more
responsibilities and less support).
In summary, there is a growing awareness of the psychological impact of AI in the workplace,
and researchers are starting to address this issue, albeit mainly theoretically so far. Empirical
psychological research uncovering concrete positive and negative effects, however, is scarce. Such research is urgently needed to convince decision-makers that the introduction of AI has psychological effects that can seriously affect employees’ quality of work and that these effects must be considered in times of change in order to ensure decent working conditions.
3 Research Gap and Relevance of Conducted Research
The examination of psychological phenomena in the future of work debate is
underrepresented (Ackerman and Kanfer 2020; Prahl and Van Swol 2021). Much research in the
context of new technologies, AI, and automation deals with questions on employment (Toshav-
Eichner and Bareket-Bojmel 2022). How many jobs will disappear and how many will be created? Which
jobs will be affected? Will there still be enough work for everyone in the future? Underrepresented,
however, is research on how employees subjectively experience these developments at work
(Dekker et al. 2017), how they affect their behaviour (Fukumura et al. 2021), and what specific
psychological implications can be expected (Toshav-Eichner and Bareket-Bojmel 2022). Although
there are some ideas, it is unclear what happens to professional identities (Mirbabaie et al. 2022;
Strich et al. 2021) and meaningfulness at work (Demerouti 2020; Smids et al. 2020), and there are
only some very preliminary attempts to understand the impact on psychological aspects such as self-
esteem (Smids et al. 2020), job satisfaction (Schwabe and Castellacci 2020), or health (Cheng et al.
2021). Not to mention employees’ expectations about the potential impact of AI-driven automation
on their job, which have barely been considered (Rodriguez-Bustelo et al. 2020) but are very likely to
impact employees’ experience and behaviour at present (Schwabe and Castellacci 2020). Thus,
regarding the psychological implications of AI-driven automation, it is still largely unknown which
psychological concepts are affected and whether these implications are predominantly positive or
negative and accordingly represent primarily opportunities or threats to employees. Furthermore, it
is unclear how these implications differ between jobs and occupational groups, and what job- and
occupation-specific implications can be expected.
To ensure a humane and psychologically healthy working world in times of massive
technological disruptions, psychological research is needed that is dedicated to the quality (as
opposed to quantity) of work and delves deeply into the social side of AI (see also, e.g., Sartori and
Bocca 2023). In fact, low quality jobs can have devastating effects on employees’ mental health, and
these effects can be even worse than those of unemployment (Blustein et al. 2016). On the one hand, the entire working world is geared towards ever greater efficiency, generating as much output in as little time as possible. Consequently, the many small, psychologically valuable details characterizing
work are eliminated step by step. Colleagues are transferred, teams are downsized, there is no
longer a fixed workplace, customer contact is centralized or outsourced. On the other hand, viewed historically, the psychological function of work is gaining importance. Work is no longer just an exchange of labour for money, but increasingly a source of meaning, identification, social integration, and
much more (Seubert et al. 2021). And this is largely independent of the type and skill level of the job.
To maintain this psychological function of work, it is essential to design working conditions in such a
way that they primarily serve the psychological needs of employees. Automation does not have to
be a destroyer here, but can and must be used in a way that increases the quality of work for
employees (Demerouti 2020). To achieve this, it is necessary to understand how AI-driven
automation changes job tasks and working conditions and how these changes affect employees.
4 Research Questions and Study Design
We therefore dedicated our research to the psychological implications of AI-driven
automation, with an explicit focus on employees’ experience and behaviour. We focused our studies
on two exemplary sectors that are particularly vulnerable and already significantly affected by AI-
driven automation, i.e., healthcare and financial services (Chui et al. 2016; Trenerry et al. 2021).
Within these sectors, we focused on white-collar jobs with high and medium qualification levels, i.e.,
psychotherapists, physicians, and bank clerks. Our explorative qualitative studies were devoted to
the following five research questions: (1) what are the application fields of AI-driven automation, (2)
what are opportunities and threats, (3) how are job tasks changing, (4) how do employees
experience current and future changes of their job tasks, and (5) how do they react to them?
5 Method
5.1 Study Overview
To investigate the research questions, we conducted five interview studies with German
experts and employees. The two perspectives (i.e., an overview with more focus on the future for experts vs. detail with more focus on current experiences for employees) complement each other and are thus well suited to tapping into this broad research field. All studies were conducted via
telephone in German using a semi-structured interview guideline that included either definitions of AI and AI-driven automation or a vignette containing these definitions as a basis for the interviews.
The studies were part of a larger research project designed, coordinated, and supervised by the
authors of this article, with one master’s student responsible for each of the studies. Despite their
common setting, the studies were conducted independently and, in some cases, had a distinct focus.
The study material was developed within the project group and in very close exchange with the
project coordinators. Since the studies were partly consecutive, later studies benefited from the
findings of the earlier studies.
Studies 1 and 2 were developed in parallel and conducted between May and August 2020, with
study 1 targeting experts in the field of AI and psychotherapy and study 2 targeting experts in the
field of AI and medicine. In both studies, application fields of AI-driven automation and associated
opportunities and threats were investigated. While study 1 focused on potential application fields,
study 2 differentiated between existing and potential application fields.
Incorporating the learnings from studies 1 and 2, studies 3 through 5 were developed and
conducted between September 2020 and January 2021. Study 3 targeted financial technology
experts and was conducted in conjunction with study 4, which targeted bank clerks as an exemplary
employee group in the financial services sector. Study 5, the healthcare counterpart to study 4,
targeted physicians as an exemplary group of healthcare workers. All three studies examined general
attitudes toward the use of AI in the world of work and its conceivable impact, followed by more specific questions about the current as well as expected impact of AI-driven automation on job tasks and about employees’ experience of, and behaviour in response to, the resulting changes.
5.2 Development of Materials
For all five studies, semi-structured interview guidelines were developed in numerous
rounds of feedback within the project group (see the Supplemental Material for all five interview
guidelines). All guidelines were pre-tested and, when necessary, revised accordingly. In the three
expert studies, an additional question at the end of each interview was used to explore one of three
selected job-related psychological concepts (i.e., self-esteem, perceived self-efficacy,
meaningfulness) and its potential relevance in the context of AI. For reasons of space and coherence,
this question will not be further considered in this article.
All guidelines included definitions of AI and AI-driven automation (studies 1 and 2) or a text vignette that embedded these definitions in an introduction to the topic (studies 3 through 5), to ensure a common understanding of the terms. The
definitions and the vignette were both developed in a comprehensive process within the project
group. The initial version of the definitions was based on an extensive literature review and went
through several feedback loops within the project group before being tested in studies 1 and 2. In
both studies, the experts were asked for their opinion on the definitions and for suggestions for
improvement. Their comments were evaluated and used to revise the definitions. In parallel, the
vignette was developed to provide participants, especially those less knowledgeable, with an easy-
to-understand introduction to the topic. The revised definitions were embedded in the vignette.
Both the revised definitions and the vignette were sent to experts in the field of AI, psychology, and
economics for professional feedback and suggestions for improvement (N = 9) and to people not
familiar with the subject for feedback on readability and comprehensibility (N = 10). All comments
were evaluated and used to revise the material that was then used in studies 3 through 5.
Based on the findings of studies 1 and 2, studies 3 through 5 were further developed as
follows. First, the definitions of AI and AI-driven automation were replaced with the vignette.
Second, throughout the interviews, the term AI-driven automation was avoided because of its
unwieldiness and paraphrased as the use of AI for the automation of job tasks. Third,
the interviews opened with questions on general attitudes towards AI and its conceivable impact
on the world of work, before moving on to more specific questions. Fourth, the general
questions about the opportunities and threats of AI were replaced by more specific questions about
the subjective experience of changing job tasks as a result of AI-driven automation and behaviour in
response to these changes. Finally, a distinction between present and future was added so that
participants were asked both about their current experience with AI-driven automation and its
impact, and about their expectations in this regard for the future.
5.3 Participants and Procedure
The study procedure was comparable for all studies. To identify potential participants, either
an online search was conducted via LinkedIn, XING, or ResearchGate (studies 1, 2, and 4) or personal
networks of the authors (study 5) or of the master’s student in charge of the study (study 3) were
used. Moreover, to further expand the sample, all participants were asked at the end of each
interview to recommend additional experts (snowball sampling). To include as many different perspectives
on the topic as possible, special care was taken during the acquisition process to assemble a
heterogeneous sample and to avoid nesting (i.e., participants were not part of a higher-level sample
or group, such as the same organization).
Potential candidates were contacted via email and LinkedIn (studies 1, 2, and 3), LinkedIn and
XING (study 4), or email (study 5). If they responded positively, further information about the study
was provided, an interview appointment was scheduled, and participants were asked to carefully
read and sign a privacy statement agreeing to study participation, interview recording, and data
processing (in accordance with the data protection and research process practices of the authors'
research institution). Regarding recording, in addition to written informed consent, all participants
were asked for their consent again at the beginning of the interview before recording began. In
addition to the audio recording, notes were taken during the interviews. In none of the studies did
participants receive financial compensation for their participation, but the provision of study results
was offered.
5.3.1 Study 1: AI and Psychotherapy Experts
Study 1 was conducted in June and July 2020. All participants were experts in the field of AI
and psychotherapy, psychiatry, and/or mental health in Germany. They were considered experts due
to their engagement with these topics in either (1) politically active associations of psychotherapists,
(2) research institutions, or (3) companies developing and selling AI applications. The interview
began with a brief introduction to the study, followed by the definitions of AI and AI-driven
automation and a request for comments on the definitions. Then four blocks of questions followed
with (1) questions on potential application fields of AI in diagnostics and therapy, (2) questions on
associated opportunities and threats, (3) the supplementary question on perceived self-efficacy, and
(4) demographic questions. The interview ended with a brief conclusion. The average interview
duration was M = 48 minutes (SD = 11.46, Min = 29, Max = 69).
The final sample comprised N = 20 participants, of which six were women and 14 were men.
The average age was M = 45.55 years (SD = 11.38, Min = 31, Max = 62) and all participants had a
university degree as the highest educational qualification. The average engagement with the topics
of AI and psychotherapy was M = 8.10 years (SD = 7.30) with three experts drawing their expertise
from politically active associations of psychotherapists, 11 experts from research institutions, and six
experts from companies developing and selling AI applications. In two cases, there was nesting, so
that in each case two experts worked in the same company or research institution.
5.3.2 Study 2: AI and Medical Experts
Study 2 was conducted between May and August 2020 and focused on chief physicians and how
AI-driven automation might affect their role conflict between being a physician and a manager at
the same time. All participants were experts in the field of AI
and medicine in Germany. They were considered experts due to their engagement with these topics
in either (1) research institutions or (2) companies that provide consulting services on AI
applications, develop, and/or sell them. The interview began with a brief introduction to the study
followed by a first block of questions on the role conflict of chief physicians. Then, the definitions of
AI and AI-driven automation were presented, comments on the definitions were requested and
participants were asked how AI-driven automation might impact the job tasks of chief physicians.
This was followed by three blocks of questions with (1) questions on existing and potential
application fields of AI-driven automation in medicine and on opportunities and threats of AI for
chief physicians with special emphasis on their role conflict, (2) the supplementary question on self-
esteem, and (3) demographic questions. The interview ended with a brief conclusion. The average
interview duration was M = 52.5 minutes (SD = 10.61, Min = 45, Max = 60).
The final sample comprised N = 20 participants, of which three were women and 17 were
men. The average age was M = 33.75 years (SD = 10.48, Min = 23, Max = 58) and all participants had
a university degree as the highest educational qualification. The average engagement with the topics
of AI and medicine was M = 4.95 years (SD = 4.16) with nine experts drawing their expertise from
research institutions and 11 experts from companies providing consulting services on AI applications,
developing and/or selling them.
5.3.3 Study 3: Financial Technology Experts
Study 3 was conducted between November 2020 and January 2021. All participants were
experts in the field of AI and financial technology in Germany. They were considered experts due to
holding strategic responsibility in a FinTech company in Germany. The interview guidelines for
studies 3 and 4 were developed and coordinated together. The guideline for study 3 consisted of
both a self-report and a third-party assessment of bank clerks (see below). The interview began with
a brief introduction to the study and the presentation of the vignette. This was followed by five
blocks of questions with (1) questions on attitudes toward the use of AI in the world of work and the
possible impact of that, as well as an initial third-party assessment of whether bank clerks are
currently affected by AI-driven automation or will be affected in the future, and how their job tasks
are changing as a result or will change in the future, (2) questions about how bank clerks experience
changing job tasks due to AI-driven automation currently and in the future, (3) questions about how
they respond to these changes currently and in the future, (4) the supplementary question on
meaningfulness, and (5) demographic questions. The interview ended with a brief conclusion. The
interviews lasted between 20 and 45 minutes.
The final sample comprised N = 7 participants, of which one was a woman and six were men.
Four experts drew their expertise from (co-)founding at least one FinTech, two from working as
business developers, and one from working as a FinTech consultant. The average age was M = 34.14
years (SD = 4.85, Min = 26, Max = 41) and all participants had a university degree. The average
professional experience in the current job profile was M = 3.86 years (SD = 1.65, Min = 0.5, Max = 5).
Three experts reported being moderately in touch with the topic of AI, two a lot, and two a great
deal. On average, participants had been involved with AI for M = 7.29 years (SD = 5.25, Min = 4, Max
= 19).
5.3.4 Study 4: Bank Clerks
Study 4 was conducted between September and December 2020. For this study, bank clerks
were selected as an exemplary employee group in the financial services sector. This target group was
chosen because the job profile is held by many people, is comparable across different
workplaces in Germany, and is particularly affected by AI-driven automation. At the time of the study,
all participants had successfully completed their training as bank clerks, were employed in Germany,
did not hold a management position, had at least four years of professional experience as a bank
clerk, and had been in their current position for at least one year. The interviews were conducted
using the same guideline as in study 3, except that it was a pure self-report by the bank clerks,
question block 4 (supplementary question on meaningfulness) was omitted, and some demographic
questions were adapted to the target group. The interviews lasted between 20 and 45 minutes.
The final sample comprised N = 25 participants, of which 12 were women and 13 were men.
The average age was M = 36.84 years (SD = 9.84, Min = 25, Max = 60) and the highest educational
qualification varied from middle school leaving certificate to a completed master’s degree. The
average professional experience as bank clerks was M = 16.72 years (SD = 10.73, Min = 4, Max = 42).
The workforce size of the banks varied from 120 to 200,000 employees (M = 24,052.2, SD =
50,209.67). Thirty-six percent of respondents said they planned to change their current job profile.
Thirty-six percent reported little, 20% moderate, 36% a lot, and 8% a great deal of contact with the topic of AI. In most cases
(60%), this contact took place in both professional and private settings. On average, participants had
been involved with AI for M = 4.18 years (SD = 2.72, Min = 0, Max = 10).
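As a quick plausibility check, the contact-with-AI percentages reported above can be converted back into whole respondent counts for N = 25. The category labels below simply mirror the reported scale; this is an illustrative arithmetic check, not part of the study materials:

```python
# Convert the reported percentages back to respondent counts (N = 25)
# to verify that they correspond to whole numbers of participants.
n = 25
reported = {"little": 36, "moderate": 20, "a lot": 36, "a great deal": 8}  # in %

counts = {label: round(pct / 100 * n) for label, pct in reported.items()}

assert sum(reported.values()) == 100  # the four categories are exhaustive
assert sum(counts.values()) == n      # 9 + 5 + 9 + 2 = 25 respondents
```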
5.3.5 Study 5: Physicians
Study 5 was conducted between November and December 2020 and placed a focus on how
AI-driven automation affects the role of empathy. For this study, physicians were selected as an
exemplary employee group in the healthcare sector. Originally, radiologists with at least 10 years of
working experience were targeted, as they are particularly affected by AI-driven automation.
However, due to the special circumstances during the COVID-19 pandemic, it was not possible to
obtain this sample in sufficient size. For this reason, the target group was expanded and physicians
with different professional backgrounds and different lengths of professional experience were
recruited. All participants were working as assistant or specialist physicians without a management
position in a clinic or hospital in Germany. The interviews were conducted using the same guideline
as in study 4, except the wording of one question in block 2 was minimally adjusted, an additional
question block on the role of empathy was added before the demographic questions, and some
demographic questions were adapted to the target group. The average interview duration was M =
49 minutes (SD = 6.12, Min = 36, Max = 58).
The final sample comprised N = 19 participants, of which 13 were women and six were men.
Seventeen participants worked as assistant and two as specialist physicians, and a total of 10 specialties
were represented. In two cases, there was nesting, so that in each case two participants worked in
the same hospital. The average age was M = 29.89 years (SD = 3.83, Min = 25, Max = 41) and all
participants had a university degree. The average professional experience was M = 2.61 years (SD =
3.16, Min = 0.3, Max = 13). Most participants (n = 13) reported little contact with the topic of AI.
5.4 Data Analysis
For all five studies, the audio files of the interviews were transcribed verbatim and the data
were analysed using qualitative content analysis. This was done separately for each study. The
interview transcription was done manually and followed the approaches by Kuckartz (2018) as well
as Dresing and Pehl (2018). Language and punctuation were smoothed to improve readability,
emphasis was placed on the content of what was said, and all written material was anonymized.
The data analyses followed Kuckartz (2018) as well as Walf and Nachtwei (2016) for studies
1, 2, and 5 and Mayring (2010) for studies 3 and 4. Depending on the study, the data analysis was
performed using MAXQDA Plus 2020 version 20.3.0 and Microsoft Excel (studies 1, 2, and 5) or
Microsoft Word and Microsoft Excel (studies 3 and 4). Each interview was considered a unit of
investigation. The data material was divided into segments. The content unit consisted of at least
one word and a maximum of one to three paragraphs (depending on the study). To develop the
marker guideline, a hierarchical category system was created for each study. For this purpose, a
deductive-inductive approach was followed: First, main categories were derived a priori based on
the research questions and literature (deductive). Then, based on the data material, the main
categories were further developed and differentiated into subcategories (inductive). Finally,
category definitions were formulated for all categories and subcategories and anchor examples were
added. The resulting coding guideline was discussed within the project team and revised
accordingly. All material was then coded again using the finalized coding guideline. Multiple
mentions by the same person were coded separately if they did not refer to exactly the same
semantic content or if they fell into different categories.
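As a minimal sketch, the deductive-inductive procedure described above can be thought of as a simple data structure: main categories fixed a priori, subcategories grown from the material, and each coded segment tallied per (category, subcategory) pair. The category names below are hypothetical illustrations, not the actual categories from our studies:

```python
from collections import Counter

# Deductive step: main categories derived a priori from the research questions.
# Inductive step: subcategories added while working through the material.
category_system = {
    "application fields": ["diagnostics", "therapy"],      # hypothetical names
    "opportunities": ["time savings", "error reduction"],  # hypothetical names
}

# Each coded segment is assigned one (main category, subcategory) pair;
# repeated mentions with distinct semantic content are coded separately.
codings = [
    ("application fields", "diagnostics"),
    ("application fields", "diagnostics"),
    ("application fields", "therapy"),
    ("opportunities", "time savings"),
]

# Tallies per subcategory, analogous to the c-values reported in the results
counts = Counter(codings)
```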
To determine the inter-rater reliability, between 10% and 25% of the material (depending on
the study) was coded by a second person (O'Connor and Joffe 2020), and kappa coefficients were
computed following Brennan and Prediger (1981; see also Quarfoot and Levine 2016). For studies 1 and 2,
five interviews each (25%) were coded by a second person. This resulted in a total of 255 codings in
study 1, of which 45 were deviant (𝜅n = .81), and a total of 441 codings in study 2, of which 79 were
deviant (𝜅n = .82). For study 3, two interviews (29%) and for studies 4 and 5, four interviews each
(16% in study 4 and 21% in study 5) were coded by a second person. This resulted in a total of 17
codings in study 3, of which 3 were deviant (𝜅n = .81), a total of 78 codings in study 4, of which 19
were deviant (𝜅n = .76), and a total of 334 codings in study 5, of which 64 were deviant (𝜅n = .81).
According to Landis and Koch (1977), the inter-rater reliability corresponds to almost perfect
agreement between raters for studies 1, 2, 3, and 5 and substantial agreement for study 4. Coding
discrepancies were discussed among coders, agreement was reached, and codes were adjusted
accordingly.
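The Brennan-Prediger coefficient fixes chance agreement at 1/k for k coding categories, so it can be recomputed directly from the number of double-coded segments and deviations. A minimal sketch follows; since the number of categories per study is not reported above, the value of k used below is purely illustrative:

```python
def brennan_prediger_kappa(n_codings: int, n_deviant: int, k: int) -> float:
    """Kappa_n following Brennan and Prediger (1981): chance agreement
    is fixed at 1/k for k coding categories."""
    p_observed = (n_codings - n_deviant) / n_codings
    p_chance = 1 / k
    return (p_observed - p_chance) / (1 - p_chance)

# Study 1: 255 double-coded segments, 45 deviant; k = 14 is an assumed,
# illustrative category count (the actual k is not reported in the text).
kappa_study1 = brennan_prediger_kappa(255, 45, k=14)  # ≈ .81
```

Under this assumption the function reproduces the reported value of .81 for study 1; with a different k, the coefficient shifts slightly while observed agreement stays fixed.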
5.5 Data Availability
The data that support the findings of our studies are available from the corresponding
author on reasonable request. The data are not publicly available because they contain information
that could compromise the privacy of research participants. The study materials used for data
collection are available in the Supplemental Material.
6 Results
The objective of our studies was to collect extensive empirical data on the psychological
implications of AI-driven automation in the workplace. This has been the subject of little empirical
research to date. Thus, our studies aim to provide a better understanding of the research topic and
to serve as a starting point for further research in this area. The number of studies conducted
and the richness of the data collected call for a more detailed presentation of results than is
possible in the following section. Because we believe that it is the detailed results from
the individual studies that are particularly fertile ground for future research, we supplement the
exemplary results report in the following section with an extensive appendix. In the following
section, we provide insights into the structure of the results and present selected examples that
were either mentioned particularly frequently by the participants or are particularly controversial in
the literature or in public discourse. The appendix then contains all the detailed results from the
individual studies. This way, readers who are particularly interested in individual study samples can
delve deeper into the corresponding detailed results.
6.1 Study 1: AI and Psychotherapy Experts
In study 1, experts in the field of AI and psychotherapy were interviewed about potential
application fields of AI-driven automation and associated opportunities and threats. With a total of
83 codings (c) in 20 subcategories, numerous application fields were named. Overall, the experts
considered the use of AI to be possible in both diagnostics (c=51) and therapy (c=32) (see Table A1
and A2 in the Appendix for the complete overview). In diagnostics, AI systems seem particularly
conceivable for anamnesis (c=9), testing (c=15), and indication (c=7), but monitoring everyday
life (c=4) and the therapist-patient interaction (c=3) also appear to be promising application fields.
therapy, AI could be used, e.g., for self-help programs and apps (c=9), monitoring during therapy
sessions (c=7), or all-around patient care (c=5). Even full adoption of AI was raised as an option
for both diagnostics (c=2) and therapy (c=1).
Regarding opportunities of AI-driven automation in psychotherapy, a total of 82 statements
were coded into 43 subcategories. These were assessed for the field of psychotherapy in general
(c=42, 20 subcategories) as well as specifically for diagnostics (c=20, 11 subcategories) and therapy
(c=20, 12 subcategories) (see Table A3 in the Appendix for the complete overview). Generally, the
experts mentioned particularly frequently that AI could support guideline-based implementation,
quality assurance, structuring and standardization of diagnostics and therapy and thus contribute to
error reduction (c=9). The use of AI would save both time (c=6) and costs (c=2), allowing the
treatment of more patients and better treatment for existing patients. The low-threshold
therapeutic offer of AI applications is a great opportunity, especially for people with problems
perceived as particularly shameful (c=4); psychotherapeutically underserved areas could benefit
from AI applications (c=2); and long waiting times could be bridged with AI-based therapy offers (c=1).
Therapists could benefit from AI by being relieved psychologically and physically (c=2), by being able
to focus more on job tasks that are essential for them (c=1), or by being strengthened in their
experience of competence (c=1).
In diagnostics, there seems to be a great opportunity to collect more differentiated data on
the patient via AI-based monitoring and thus make better diagnostic decisions (c=7). In therapy,
supporting the patient and therapist seems particularly promising. Patients could benefit from AI
applications in everyday life that support them, e.g., in reflecting on themselves (c=4). For therapists,
the greatest opportunity seems to lie in better intervention decisions informed by AI e.g., by
providing data on patients' behavioural patterns and mood changes in everyday life (c=2) or by
offering concrete intervention suggestions (c=2).
Regarding threats of AI-driven automation in psychotherapy, a total of 89 statements were
coded into 45 subcategories. Again, these were assessed for the field of psychotherapy in general
(c=67, 32 subcategories) as well as specifically for diagnostics (c=5, 4 subcategories) and therapy
(c=17, 9 subcategories) (see Table A4 in the Appendix for the complete overview). The most
frequently expressed concern was that AI-based recommendations would be blindly adopted by the
therapist without being checked for plausibility (c=12). Furthermore, different AI algorithms lack
standardization and depend heavily on the underlying training data, which can lead to incorrect
results and makes it difficult to compare different AI applications (c=7). So far, it is completely
unclear how liability issues are dealt with (c=5) and some experts expressed concern that acceptance
problems may arise (c=4). Some experts pointed out that there is a risk of data misuse due to a lack
of data protection concepts (c=3) and some emphasized that the quality of AI applications highly
depends on the quality of the data used (c=3).
In diagnostics, some experts indicated that AI cannot replace the therapist's subjective
experiential knowledge (c=2) and that pathologization may increase due to more sophisticated
data (c=1). In therapy, some experts suggested that AI cannot identify and implement
essential psychotherapeutic factors or replace the therapeutic relationship (c=5). Further, AI
applications are not individually tailored to the patient and could therefore reduce the quality of
therapy (c=3).
6.2 Study 2: AI and Medical Experts
In study 2, experts in the field of AI and medicine were interviewed about existing and
potential application fields of AI-driven automation in medicine, the impact of AI-driven automation
on the job tasks of chief physicians, and the opportunities and threats of AI-driven automation for
chief physicians. Regarding existing application fields, a total of 50 statements were coded into 12
subcategories. AI currently appears to be used primarily in imaging specialties (c=9). The experts
mentioned a total of nine specialties in which AI is already used (c=31), e.g., radiology (c=13),
pathology (c=4), dermatology (c=4), or oncology (c=4). Six experts said that AI does not yet play a
role in any specialty (c=8). Regarding future application fields, a total of 57 statements were coded
into 15 subcategories. Here, AI appears to be becoming relevant in a large proportion of specialties
(c=10), and again, imaging specialties are likely to be most affected (c=8). In total, the experts
named 12 specialties in which AI could be used in the future (c=32), e.g., radiology (c=11), surgery
(c=5), pathology (c=4), dermatology (c=3), or ophthalmology (c=2) (see Table A5 in the Appendix for
the complete overview of existing and future application fields).
On the question of how AI-driven automation could impact the job tasks of chief physicians,
a total of 29 statements were coded into 7 subcategories (see Table A6 in the Appendix for the
complete overview). Generally, the experts believed that AI-driven automation would have a greater
impact on clinical tasks rather than leadership tasks (c=16). For example, anamnesis and diagnostics
(c=4) as well as image processing (c=2) appear to be particularly promising areas of application. As
far as leadership tasks are concerned, AI-driven automation seems to be particularly useful for
administrative tasks (c=3).
Regarding opportunities of AI-driven automation for chief physicians, a total of 195
statements were coded into 25 subcategories (see Table A7 in the Appendix for the complete
overview). The greatest opportunity of AI appears to be in improving the quality of chief physicians'
working conditions (c=18) and patient care in general (c=27). Patient care, for example, could be
highly individualized and thus be significantly improved (c=15), and the automation of non-patient-
related job tasks could allow for more patient contact (c=12). Furthermore, AI-driven automation
could increase effectiveness and efficiency at work, saving time and costs (c=33). Physicians'
resources could be preserved, allowing them to focus exclusively on job tasks that are essential to
them (c=17). Specifically, AI could take over administrative (c=9), routine (c=3), and unloved tasks
(c=3), thus relieving physicians. Comprehensive data collection using AI would allow much more
targeted patient care (c=21) and AI-based predictions and recommendations could support decision-
making processes (c=19).
Regarding threats, a total of 119 statements were coded into 13 subcategories (see Table A8
in the Appendix for the complete overview). The experts see the greatest threat in the change in the
multiple roles of chief physicians (c=18). For example, it is conceivable that medical competence will
become less important, as AI can take over many job tasks here, while leadership tasks will become
significantly more demanding. Furthermore, the quality of AI output may be poor, leading to
problems and dissatisfaction or even conflicts of interest (c=13), and it is conceivable that some
physicians may reject AI applications (c=11). Different AI algorithms are not standardized and highly
depend on the underlying training data set, so that resulting applications may be faulty on the one
hand and cannot be compared with each other or generalized across different contexts on the other
(c=10). Data protection may also pose a threat (c=9): on the one hand, patient data must be
adequately protected; on the other hand, the data are central to training the algorithms and using
AI applications in a meaningful way.
6.3 Study 3: Financial Technology Experts
In study 3, experts in the field of AI and financial technology were interviewed about their
attitudes toward the use of AI in the world of work and its conceivable impact, as well as
their third-party assessment, both with regard to the present and the future, of whether bank clerks
are affected by AI-driven automation, how their job tasks change as a result, and how they
experience and react to these changes.
Regarding attitudes toward AI, a total of seven statements were coded into four
subcategories. All statements were positive. For example, three experts said that the use of AI is the
next industrial revolution that comes with great advantages for society (c=3). Regarding the
conceivable impact, a total of 21 statements were coded into 10 subcategories. The majority of
experts evaluated the impact as positive (n = 4). Among other things, the experts mentioned
downsizing (c=5), efficiency increases (c=3), automation of job tasks (c=3), and changes in job
profiles (c=2) as possible effects (see Table A9 and A10 in the Appendix for the complete overview of
statements on attitudes toward the use of AI and the conceivable impact).
The majority of experts (n = 6) believed that bank clerks are already affected by AI-driven
automation and all experts (n = 7) believed that they will be affected in the future. On the question
of how AI-driven automation is currently changing job tasks of bank clerks, a total of six statements
were coded into four subcategories. According to the experts, job tasks are currently changing in
customer service (c=3), e.g., through the use of AI systems for service tasks such as account openings
(c=2), in lending business by using AI systems for credit decisions (c=2), and in retail banking by using
AI systems in sales (c=1). Regarding the question of how AI-driven automation will change job tasks
of bank clerks in the future, a total of 12 statements were coded into seven subcategories.
Accordingly, job tasks will change, for example, in retail banking through the use of AI systems for
general product advice (c=3), in customer service by using AI systems to process customer inquiries
(c=3), and in clerical processing by using AI systems for general administrative tasks (c=2). Some
experts mentioned that new job tasks will emerge (c=3), e.g., involving more digital tasks (c=1) (see
Table A11 in the Appendix for the complete overview of statements on how job tasks are changing
currently and in the future).
Concerning the question of how bank clerks currently experience changes in their job tasks
due to AI-driven automation, a total of 16 statements were coded into 12 subcategories. With a total
of 11 statements with a negative connotation and only five statements with a positive connotation,
the majority of experts believed that bank clerks are currently experiencing the changes rather
negatively. For example, the changes are perceived as a threat to the job (c=3), lead to feelings of
resignation (c=2) and are accompanied by fear (c=1), exhaustion (c=1), and anger (c=1). On the other
hand, the experts also described that the changes are experienced as an opportunity to devote
themselves to more demanding tasks (c=2) and arouse interest, especially among those under the
age of 40 (c=1). On the question of how bank clerks will experience changes in their job tasks due to
AI-driven automation in the future, a total of eight statements were coded into six subcategories of
which five statements had a positive and three statements a negative connotation. On the one hand,
it is conceivable that bank clerks will see the changes as an opportunity (c=2) and awareness of
change will increase (c=2). On the other hand, the changes could also be perceived as a threat to
their jobs (c=1) and resignation could arise (c=1) (see Table A12 in the Appendix for the complete
overview of statements on current and future experiences).
Concerning the question of how bank clerks currently react to changes in their job tasks due
to AI-driven automation, a total of seven statements were coded into six subcategories. For
example, bank clerks, on the one hand, do not take any initiative (c=2) and, on the other hand, deal
with new systems (c=1), change jobs (c=1), or actively shape the changes (c=1). Regarding the
question of how bank clerks will react to changes in their job tasks due to AI-driven automation in
the future, a total of seven statements were coded into five subcategories. Again, it can be expected
that some bank clerks will not take any initiative (c=2), while others will invest in training (c=2),
change jobs (c=2), and adapt to the changes (c=1) (see Table A13 in the Appendix for the complete
overview of statements on current and future behaviour).
6.4 Study 4: Bank Clerks
In study 4, bank clerks were interviewed about their attitudes toward the use of AI in the
world of work and its conceivable impact, as well as their present- and future-related
assessment of whether they are affected by AI-driven automation, how their job tasks are changing
as a result, and how they experience and react to these changes.
Regarding attitudes towards the use of AI, a total of 42 statements were coded into 11
subcategories. Twenty-five statements were positive, 12 negative, and five neutral. On the one
hand, the respondents felt, for example, that the use of AI makes their jobs easier (c=10) and
makes sense in principle (c=10). On the other hand, they also described, for example, that the use of
AI can lead to job cuts (c=7) and the loss of one's own job (c=2). Regarding the conceivable impact, a
total of 49 statements were coded into 14 subcategories. Fourteen respondents evaluated the
impact as positive, two as negative, and nine were undecided. Possible effects of AI-driven
automation mentioned included, for example, job cuts (c=12), increased efficiency (c=9), and
changed job descriptions (c=6), but also the loss of face-to-face interaction (c=3) and less
appreciation for the work performed (c=2) (see Table A14 and A15 in the Appendix for the complete
overview of statements on attitudes toward the use of AI and the conceivable impact).
Thirteen respondents said they were already affected by AI-driven automation, 10 said they
were not affected so far, and two were unsure, while almost all respondents (n = 24) believed they
would be affected in the future. On the question of how AI-driven automation is currently changing
their job tasks, a total of 27 statements were coded into 10 subcategories. According to
respondents, job tasks are currently changing in investment business (c=8), e.g., through the use of
AI systems in the form of robo-advisors (c=4), in customer services (c=7), e.g., by using AI systems to
process customer inquiries (c=4), or in the lending business (c=6), e.g., by using AI systems to check
creditworthiness (c=5). Regarding the question of how AI-driven automation will change their job
tasks in the future, a total of 28 statements were coded into 15 subcategories. Accordingly, job tasks
will change, for example, in retail banking (c=9), e.g., through the use of AI systems for general
product advice (c=4), in customer services by using AI systems for processing customer inquiries
(c=5), or in clerical processing (c=4), e.g., by using AI systems for filling out contract forms (c=1).
Some respondents mentioned that certain job descriptions would be eliminated (c=4), and one
respondent thought it likely that AI systems would completely replace the "bank" business model
(c=1) (see Table A16 in the Appendix for the complete overview of statements on how job tasks are
changing currently and in the future).
Concerning the question of how they currently experience changes in their job tasks due to
AI-driven automation, a total of 45 statements were coded into 18 subcategories. With a total of 31
statements with a positive connotation and 14 statements with a negative connotation, more
respondents indicated positive experiences. For example, the changes are experienced as making
work easier (c=9), explicitly positive (c=7), or exciting (c=4). Then again, respondents also described
scepticism about the changes (c=4), disappointment due to less customer contact (c=2), and
pressure to change (c=2). On the question of how they will experience changes in their job tasks due
to AI-driven automation in the future, a total of 30 statements were coded into 13 subcategories, of
which 15 statements had a positive and 15 statements a negative connotation. It is conceivable, for
example, that the changes are experienced as explicitly positive (c=7) and that there is openness to
future change (c=5). Then again, the changes could also cause anxiety (c=2), trigger concern about
job loss (c=2), or have psychological effects (c=2) (see Table A17 in the Appendix for the complete
overview of statements on current and future experience).
Regarding the question of how they currently react to changes in their job tasks due to AI-
driven automation, a total of 43 statements were coded into 15 subcategories. For example, some
invest in in-house training (c=9) or a university degree (c=5) and actively engage with the new systems
and processes (c=9), while others resign (c=1) or change employers (c=1). Concerning the question
of how they will react to changes in their job tasks due to AI-driven automation in the future, a total
of 29 statements were coded into 14 subcategories. There are some respondents who do not plan to
take the initiative (c=7) and others who want to take advantage of in-house training (c=4) or are
thinking about specializing in their job (c=2) (see Table A18 in the Appendix for the complete
overview of statements on current and future behaviour).
6.5 Study 5: Physicians
In study 5, physicians were presented with the same questions as in study 4. Regarding
attitudes towards the use of AI in the world of work, a total of 19 statements were coded into three
subcategories. Nine respondents reported a positive attitude (c=9), perceiving AI as useful, helpful, or
complementary, and predominantly saw advantages in it. Ten respondents were undecided (c=10)
and saw both positive and negative consequences. Regarding the conceivable impact of using AI in
the world of work, a total of 153 statements were coded into 14 subcategories (see Table A19 in the
Appendix for the complete overview). Ten respondents evaluated the impact as positive, eight were
undecided, and one abstained. Possible effects of AI-driven automation mentioned included, for
example, support by AI (c=22), efficiency increases (c=17), job reduction (c=17), the emergence of
new job tasks (c=16), and error prevention and reduction (c=14).
Six respondents said they were already affected by AI-driven automation, six said they were
not affected so far, and seven were unsure, while almost all respondents (n = 18) believed they would
be affected in the future. On the question of how AI-driven automation is currently changing their
job tasks, a total of 23 statements were coded into six subcategories. Accordingly, job tasks are
changing in diagnostics (c=6), e.g., through the use of AI systems to detect pathological
abnormalities in MRI, in drug administration (c=6), e.g., by using AI systems for dosage control, in
administrative tasks (c=4), e.g., by using AI systems for speech recognition, in surgery (c=2), and in
monitoring vital parameters (c=2). Three respondents indicated that they have not noticed any
changes in their job tasks to date. Concerning the question of how AI-driven automation will change
their job tasks in the future, a total of 69 statements were coded into 11 subcategories. Accordingly,
job tasks will change, for example, in administration (c=14), in diagnostics (c=12), in surgery (c=9),
but also in anamnesis (c=7), e.g., by using AI systems that assist in the retrieval, structuring, and
processing of patient information, or in therapy (c=6), e.g., by AI systems proposing treatment
pathways based on specific symptom constellations. Two respondents (c=3) assumed that new job
tasks, such as the control of AI, would emerge in the future (see Table A20 in the Appendix for the
complete overview of statements on how job tasks are changing currently and in the future).
Regarding the question of how they currently experience changes in their job tasks due to
AI-driven automation, a total of 31 statements were coded into 10 subcategories, of which 14
statements had a positive and 17 statements a negative connotation. Among others, the
respondents reported relief (c=6), e.g., because AI systems take over part of the work, light-
heartedness (c=3), e.g., because the fear of making mistakes decreases due to a lower workload, or
satisfaction (c=3). On the other hand, they also reported, for example, frustration (c=6), e.g., due to
technical difficulties and slow operation of the AI systems, or doubts, scepticism, and mistrust (c=5),
e.g., because physicians are ultimately responsible for incorrect decisions of the AI systems.
Concerning the question of how they will experience changes in their job tasks due to AI-driven
automation in the future, a total of 56 statements were coded into nine subcategories, of which 42
statements had a positive and only 14 statements a negative connotation. On the one hand, the
respondents expected, for example, relief (c=11), light-heartedness (c=11), curiosity (c=10), and
optimism (c=6). On the other hand, the changes could also lead to doubts, scepticism, and mistrust
(c=12), e.g., due to concerns about privacy issues, or trigger fear (c=2) (see Table A21 in the
Appendix for the complete overview of statements on current and future experience).
On the question of how they currently react to changes in their job tasks due to AI-driven
automation, a total of 26 statements were coded into eight subcategories. For example, some avoid
dealing with AI systems (c=5) by, for example, switching to analogue systems or disabling the
warning function because they are convinced that the systems do not work properly. Others develop
trust in the systems (c=5), control and question them (c=4), or inform themselves about them (c=3).
Five respondents reported no specific behaviour (c=5). Concerning the question of how they will
react to changes in their job tasks due to AI-driven automation in the future, a total of 58 statements
were coded into nine subcategories. For example, some respondents plan to use (c=13) or try out
(c=9) the AI systems, others want to inform themselves and acquire necessary knowledge (c=12),
e.g., through lectures or exchanges with others, to understand how AI works, and still others want to
question and control the systems (c=10). One respondent was thinking about changing jobs (c=1) (see
Table A22 in the Appendix for the complete overview of statements on current and future
behaviour).
7 Discussion
7.1 Summary of Results
The aim of our studies was to investigate the application fields of AI-driven automation, the
associated opportunities and threats, and the psychological implications for employees (in terms of
experience and behaviour) in two exemplary sectors, i.e., healthcare and financial services.
Regarding application fields of AI-driven automation and resulting changes of job tasks (research
questions 1 and 3), imaging specialties as well as diagnostic and administrative tasks currently seem
to be most affected in the healthcare sector. In the future, it is likely that significantly more
specialties and a greater variety of job tasks will be affected. In the financial services sector, job tasks
are currently changing primarily in the investment business and in customer service, while other
areas, such as retail banking, are likely to be additionally affected in the future.
In terms of opportunities and threats (research question 2), the greatest opportunities in the
healthcare sector appear to be improved and highly individualized patient care that includes more
patient contact, better diagnostic and intervention decisions, and fewer errors; significant time and
cost savings; and improved working conditions for employees. The greatest threats seem to be the
lack of standardization of different AI algorithms and their high dependency on the underlying
training data, which can lead to incorrect results and make it difficult to compare different AI
applications. Furthermore, blind adoption of AI-based recommendations, potentially poor quality of
AI results, and acceptance problems on the part of employees seem to be relevant threats. In the
financial services sector, the biggest opportunity appears to be relief at work for employees, while
the biggest threat seems to be job cuts (note that opportunities and threats for the financial services
sector were derived from the question on attitudes towards the use of AI).
Concerning how employees experience the changes of their job tasks (research question 4),
in both sectors positive and negative aspects appear to be relevant. While in the healthcare sector
positive and negative aspects were named in equal measure (primarily relief and frustration),
significantly more positive aspects were reported in the financial services sector (primarily relief and
a positive attitude). This is reversed in the case of future expectations, so that significantly more
positive aspects (primarily relief, light-heartedness, and curiosity) were named in the healthcare
sector, while positive and negative aspects were named in equal measure in the financial services
sector (primarily a positive attitude and anxiety or worry). Interestingly, comparing the third-party
assessments of the financial technology experts with the self-reports of the bank clerks revealed
strong differences regarding current experiences: the experts named predominantly negative
aspects (primarily threat and resignation), while the bank clerks named mainly positive aspects.
Regarding how employees react to changes of their job tasks (research question 5), both
proactive and passive behaviours were reported in both sectors. In the healthcare sector, employees
currently appear to be reacting mainly with avoidance, trust building, or no behaviour at all, while
they are planning significantly more proactive behaviour for the future (primarily active use and self-
information). In the financial services sector, many employees already seem to be proactive,
investing mainly in training, education, and active engagement with the issues, while for the future
some have not yet planned any initiative and others plan mainly to be open and invest in training
and education.
7.2 Limitations
There are several limitations to our studies. The samples were all ad-hoc samples, so self-
selection bias was likely, as only individuals who had access to one of the platforms used were able
to participate in the studies. Due to our inclusion criteria, especially the focus on Germany and the
selected occupational groups, our samples were drawn exclusively from Western, educated,
industrialized, rich, and democratic (WEIRD) societies, a sampling approach known to limit
representativeness (Henrich et al. 2010). The specific timing of the studies during the COVID-19
pandemic may have influenced the composition of the samples (some individuals were in particular
demand at that time, while others had an unusual amount of free time) and the response behaviour
of participants (the mood of upheaval, fear of job loss, economic uncertainty, and technological
advancements were particularly salient at that time) (e.g., Becker et al. 2022, found higher survey
participation during COVID-19-related lockdowns). Further, the average age and professional
experience of respondents were rather low in some cases.
To account for the fact that participants may differ in their understanding of AI (see, e.g., Ngo and
Krämer 2021), all respondents were presented with a definition at the beginning of the interview.
Nevertheless, it remains unclear whether the definition was fully understood and internalized so
that answers were actually based on the provided definition and not on differing understandings of
AI. Furthermore, it cannot be assumed that respondents were always able to clearly distinguish
between applications that do or do not involve AI, as this is often not transparent to
users (according to a survey by Ipsos 2022, only 37% of German respondents said they know which
types of products and services use AI). Moreover, a specific challenge for the participants in our
studies was that they were asked to make statements about the future, which imposes a high
cognitive load (El Haj and Moustafa 2021) and may be associated with errors (Gilbert and Wilson
2007). Although human cognition inherently involves thinking about the future (Atance and O'Neill
2001), it remains unclear how challenging this was for participants.
Finally, the results of our studies cannot be generalized across countries, nor to sectors
other than healthcare and financial services. Furthermore, even within these sectors, the
results are limited to the occupational groups investigated and cannot be generalized to other
occupational groups (e.g., nursing or administration). Moreover, we have
considered occupational groups as a whole and have not taken into account any further
differentiation into, for example, departments or specialties, although the corresponding work
realities may well differ. Accordingly, the changes resulting from AI-driven automation and the
psychological implications may also differ to some extent.
7.3 Implications
Above all, our findings have relevant implications for further research. Generally, it seems
promising to investigate additional target groups such as specific job profiles within the occupational
groups we considered (e.g., radiologists; see Gillan et al. 2019, for an example) and other
occupational groups within the sectors we examined (e.g., nurses; see Liang et al. 2019, for an
example) as well as other sectors that are particularly affected by AI-driven automation (e.g.,
journalism; see Cloudy et al. 2022, for an example). Both our findings and the interview guidelines
we developed can serve as a starting point for this research. Additionally, it would be useful to consider
interindividual differences in future research to understand to what extent people with different
personality traits, values, interests, etc. differ with respect to the questions investigated here and to
understand which individuals are particularly vulnerable or resilient to the psychological implications
of AI-driven automation (see, e.g., Payne et al. 2018, who examined differential effects in the
context of mobile banking, or Toscanelli et al. 2022, who examined individual differences in clusters
of technology appraisal).
Finally, longitudinal studies (both qualitative and quantitative) are essential to complement
cross-sectional studies by accounting for the highly dynamic nature of the topic (see, e.g., the study
by Långstedt et al. 2023, on the dynamic of occupational values in the context of AI).
In contrast to our screening approach, further qualitative studies could examine selected
aspects in greater detail, e.g., in the form of in-depth interviews. Such studies could examine in more
detail how, for example, working conditions change due to AI-driven automation and what concrete
psychological implications this has (e.g., AI-driven automation takes over a job task reducing overall
workload, leading to less stress).
Further, following the initial qualitative exploration of the topic, a quantitative approach is
necessary to capture the psychological implications of AI-driven automation in the workplace in an
economical and scalable manner (see, e.g., Schepman and Rodway 2020, who developed a scale to
measure attitudes towards AI, or Roll et al. 2023, who developed a scale to measure occupation
insecurity due to automation). Our research results can aid the development of such questionnaire
instruments by serving as a basis for identifying psychological constructs that seem particularly
relevant and promising. Additionally, our results provide guidance on aspects that should be
considered and controlled for, such as the extent to which respondents believe they are currently or
in the future affected or threatened by AI-driven automation.
Finally, our research findings can provide impetus for theory development and revision. The
changes in the world of work due to technological developments are so profound that they call for
questioning and possibly revising classical models and theories of work psychology.
In addition to the research implications, our studies have practical implications for the
design of work and change management processes around the introduction of AI-based systems in
organizations. Our results can inform job design and job crafting as well as change management
processes that rely on understanding drivers, inhibitors, fears, and hopes as well as motivations of
employees. However, due to the exploratory nature of our studies, these practical implications
should be approached with caution.
7.4 Conclusion
We explored the psychological implications of AI-driven automation for employees in the
healthcare and financial services sector in five interview studies. By providing high-resolution
qualitative data, our research aims to create a differentiated empirical basis that can serve as a
starting point for further research in this area. Overall, it can be summarized that AI-driven
automation plays a role in both sectors and will become increasingly important, which is associated
with both opportunities and threats. The resulting changes in job tasks are diverse, but primarily
concern manual and thus easily automated job tasks. Employees experience these changes positively
and negatively and respond with proactive as well as passive behaviour. Our work highlights the
importance of taking a human-centric approach in the implementation of AI-based systems in the
workplace. The multi-layered psychological implications of AI-driven automation underscore the
need to prioritize the well-being and needs of employees in the design and implementation of such
systems. Overall, our work contributes to the growing body of knowledge on the psychological
implications of AI-driven automation in the workplace and serves as a call to action for further
research in this important and emerging area.
8 References
Abeliansky A, Beulmann M (2021) Are they coming for us? Industrial robots and the mental health of
workers. SSRN. https://doi.org/10.2139/ssrn.3438287
Ackerman PL, Kanfer R (2020) Work in the 21st century: New directions for aging and adult
development. Am Psychol 75(4):486–498. https://doi.org/10.1037/amp0000615
Atance CM, O'Neill DK (2001) Episodic future thinking. Trends Cogn Sci 5(12):533–539.
https://doi.org/10.1016/S1364-6613(00)01804-0
Autor DH (2015) Why are there still so many jobs? The history and future of workplace automation. J
Econ Perspect 29(3):3–30. https://doi.org/10.1257/jep.29.3.3
Baumeister RF, Vohs KD, Oettingen G (2016) Pragmatic prospection: How and why people think
about the future. Rev Gen Psychol 20(1):3–16. https://doi.org/10.1037/gpr0000060
Becker R, Möser S, Moser N, Glauser D (2022) Survey participation in the time of corona: An
empirical analysis of an effect of the COVID-19 pandemic on survey participation in a Swiss
panel study. Surv Res Methods 16(1):61–74. https://doi.org/10.18148/srm/2022.v16i1.7896
Blustein DL, Olle C, Connors-Kellgren A, Diamonti A (2016) Decent work: A psychological perspective.
Front Psychol 7. https://doi.org/10.3389/fpsyg.2016.00407
Braganza A, Chen W, Canhoto A, Sap S (2021) Productive employment and decent work: The impact
of AI adoption on psychological contracts, job engagement and employee trust. J Bus Res
131:485–494. https://doi.org/10.1016/j.jbusres.2020.08.018
Brennan RL, Prediger DJ (1981) Coefficient kappa: Some uses, misuses, and alternatives. Educ
Psychol Meas 41(3). https://doi.org/10.1177/001316448104100307
Brougham D, Haar J (2018) Smart technology, artificial intelligence, robotics, and algorithms (STARA):
Employees' perceptions of our future workplace. J Manag Organ 24(2):239–257.
https://doi.org/10.1017/jmo.2016.55
Cheng WJ, Pien LC, Cheng Y (2021) Occupation-level automation probability is associated with
psychosocial work conditions and workers' health: A multilevel study. Am J Ind Med
64(2):108–117. https://doi.org/10.1002/ajim.23210
Chui M, Manyika J, Miremadi M (2016) Where machines could replace humans and where they
can't (yet). McKinsey Quarterly.
http://dln.jaipuria.ac.in:8080/jspui/bitstream/123456789/2951/1/Where-machines-could-replace-humans-and-where-they-cant-yet.pdf
Cloudy J, Banks J, Bowman N (2022) AI journalists and reduction of perceived hostile media bias:
Replication and extension considering news organization cues. Technology, Mind, and
Behavior 3(3). https://doi.org/10.1037/tmb0000083
Danaher J, Nyholm S (2021) Automation, work and the achievement gap. AI Ethics 1(3):227–237.
https://doi.org/10.1007/s43681-020-00028-x
Dekker F, Salomons A, van der Waal J (2017) Fear of robots at work: The role of economic self-interest.
Socio-Econ Rev 15(3):539–562. https://doi.org/10.1093/ser/mwx005
Demerouti E (2020) Turn digitalization and automation to a job resource. Appl Psychol 71(4):1205–1209.
https://doi.org/10.1111/apps.12270
Dengler K, Gundert S (2021) Digital transformation and subjective job insecurity in Germany. Eur
Sociol Rev 37(5):799–817. https://doi.org/10.1093/esr/jcaa066
DeWall CN, Baumeister RF, Chester DS, Bushman BJ (2016) How often does currently felt emotion
predict social behavior and judgment? A meta-analytic test of two theories. Emot Rev
8(2):136–143. https://doi.org/10.1177/1754073915572690
Dresing T, Pehl T (2018) Praxisbuch Interview, Transkription & Analyse: Anleitungen und
Regelsysteme für qualitativ Forschende [Practice book interview, transcription & analysis:
Instructions and rule systems for qualitative researchers], 8th edn. Self-publishing
Dukino C, Ganz W, Hämmerle M, Renner T, Friedrich M, Kötter F, Meiren T, Neuhüttler J, Schuler S,
Zaiser H (2019) Künstliche Intelligenz in der Unternehmenspraxis: Studie zu Auswirkungen
auf Dienstleistung und Produktion [Artificial intelligence in business practice: Study on the
impact on service and production]. Fraunhofer Verlag.
https://biec.iao.fraunhofer.de/content/dam/iao/biec/documents/Digitalfestival-BeSmart/kuenstliche-intelligenz-in-der-unternehmenspraxis.pdf
El Haj M, Moustafa AA (2021) Pupil dilation as an indicator of future thinking. Neurol Sci 42:647–653.
https://doi.org/10.1007/s10072-020-04533-z
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U,
Rossi F (2018) AI4People – An ethical framework for a good AI society: Opportunities, risks,
principles, and recommendations. Minds Mach 28(4):689–707.
https://doi.org/10.1007/s11023-018-9482-5
Ford M (2015) The rise of the robots: Technology and the threat of mass unemployment. Oneworld
Publications
Fréour L, Pohl S, Battistelli A (2021) How digital technologies modify the work characteristics: A
preliminary study. Span J Psychol 24. https://doi.org/10.1017/SJP.2021.12
Frey CB, Osborne MA (2017) The future of employment: How susceptible are jobs to
computerisation? Technol Forecast Soc Change 114:254–280.
https://doi.org/10.1016/j.techfore.2016.08.019
Fukumura YE, Gray JM, Lucas GM, Becerik-Gerber B, Roll SC (2021) Worker perspectives on
incorporating artificial intelligence into office workspaces: Implications for the future of
office work. Int J Environ Res Public Health 18(4):1690.
https://doi.org/10.3390/ijerph18041690
Ghislieri C, Molino M, Cortese CG (2018) Work and organizational psychology looks at the fourth
industrial revolution: How to support workers and organizations? Front Psychol 9.
https://doi.org/10.3389/fpsyg.2018.02365
Giering O, Fedorets A, Adriaans J, Kirchner S (2021) Künstliche Intelligenz in Deutschland:
Erwerbstätige wissen oft nicht, dass sie mit KI-basierten Systemen arbeiten [Artificial
intelligence in Germany: Employees often do not know that they work with AI-based
systems]. DIW Wochenbericht 88(48):783–789. https://doi.org/10.18723/diw_wb:2021-48-1
Giermindl LM, Strich F, Christ O, Leicht-Deobald U, Redzepi A (2022) The dark sides of people
analytics: Reviewing the perils for organisations and employees. Eur J Inf Syst 31(3):410–435.
https://doi.org/10.1080/0960085X.2021.1927213
Gilbert DT, Wilson TD (2007) Prospection: Experiencing the future. Science 317(5843):1351–1354.
https://doi.org/10.1126/science.1144161
Gillan C, Milne E, Harnett N, Purdie TG, Jaffray DA, Hodges B (2019) Professional implications of
introducing artificial intelligence in healthcare: An evaluation using radiation medicine as a
testing ground. J Radiother Pract 18(1):5–9. https://doi.org/10.1017/S1460396918000468
Glikson E, Woolley AW (2020) Human trust in artificial intelligence: Review of empirical research.
Acad Manag Ann 14(2):627–660. https://doi.org/10.5465/annals.2018.0057
Goldfarb A, Taska B, Teodoridis F (2023) Could machine learning be a general purpose technology? A
comparison of emerging technologies using data from online job postings. Res Policy
52(1):104653. https://doi.org/10.1016/j.respol.2022.104653
Harpaz I (2002) Expressing a wish to continue or stop working as related to the meaning of work. Eur
J Work Organ Psychol 11(2):177–198. https://doi.org/10.1080/13594320244000111
Henrich J, Heine SJ, Norenzayan A (2010) The weirdest people in the world? Behav Brain Sci
33(2-3):61–83. https://doi.org/10.1017/S0140525X0999152X
Hinks T (2021) Fear of robots and life satisfaction. Int J Soc Robot 13(2):327–340.
https://doi.org/10.1007/s12369-020-00640-1
Hoyle RH, Sherrill MR (2006) Future orientation in the self-system: Possible selves, self-regulation,
and behavior. J Pers 74(6):16731696. https://doi.org/10.1111/j.1467-6494.2006.00424.x
Hudson AD, Finn E, Wylie R (2023) What can science fiction tell us about the future of artificial
intelligence policy? AI & Soc 38:197211. https://doi.org/10.1007/s00146-021-01273-2
Ipsos (2022) Global opinions and expectations about artificial intelligence A global advisor survey
https://www.ipsos.com/sites/default/files/ct/news/documents/2022-01/Global-opinions-
and-expectations-about-AI-2022.pdf
Ivanov S, Kuyumdzhiev M, Webster C (2020) Automation fears: Drivers and solutions. Technol Soc
63:101431. https://doi.org/10.1016/j.techsoc.2020.101431
Jahoda M (1981) Work, employment, and unemployment: Values, theories, and approaches in social
research. Am Psychol 36(2):184191. https://doi.org/10.1037/0003-066X.36.2.184
Kaplan A, Haenlein M (2019) Siri, Siri, in my hand: Who’s the fairest in the land? On the
interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 62(1):1525.
https://doi.org/10.1016/j.bushor.2018.08.004
KI-Campus (2020) Künstliche Intelligenz in 2 Minuten erklärt: Was ist eigentlich KI? [Artificial
intelligence explained in 2 minutes: What actually is AI?]. YouTube.
https://www.youtube.com/watch?v=sDt5bTQBJis. Accessed 03 June 2023
Kreutzer RT, Sirrenberg M (2019) Künstliche Intelligenz verstehen [Understanding artificial
intelligence]. Springer, Wiesbaden
Kuckartz U (2018) Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung [Qualitative
content analysis. Methods, practice, computer support], 4th, revised edn. Beltz
Laird JE, Wray III RE, Marinier III RP, Langley P (2009) Claims and challenges in evaluating human-
level intelligent systems. Proceedings of the 2nd Conference on Artificiel General
Intelligence. 8085. https://doi.org/10.2991/agi.2009.17
Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics
33(1):159–174. https://doi.org/10.2307/2529310
Långstedt J, Spohr J, Hellström M (2023) Are our values becoming more fit for artificial intelligence
society? A longitudinal study of occupational values and occupational susceptibility to
technological substitution. Technol Soc 72:102205.
https://doi.org/10.1016/j.techsoc.2023.102205
Liang H-F, Wu K-M, Weng C-H, Hsieh H-W (2019) Nurses' views on the potential use of robots in the
pediatric unit. J Pediatr Nurs 47:e58–e64. https://doi.org/10.1016/j.pedn.2019.04.027
Lichtenthaler U (2019) Extremes of acceptance: Employee attitudes toward artificial intelligence. J
Bus Strategy 41(5):39–45. https://doi.org/10.1108/JBS-12-2018-0204
Makarius EE, Mukherjee D, Fox JD, Fox AK (2020) Rising with the machines: A sociotechnical
framework for bringing artificial intelligence into the organization. J Bus Res 120:262–273.
https://doi.org/10.1016/j.jbusres.2020.07.045
Markus H, Nurius P (1986) Possible selves. Am Psychol 41(9):954–969.
https://doi.org/10.1037/0003-066X.41.9.954
Maslej N, Fattorini L, Brynjolfsson E, Etchemendy J, Ligett K, Lyons T, Manyika J, Ngo H, Niebles JC,
Parli V, Shoham Y, Wald R, Clark J, Perrault R (2023) The AI index 2023 annual report.
https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
Mayring P (2010) Qualitative Inhaltsanalyse: Grundlagen und Techniken [Qualitative content
analysis: Basics and techniques], 11th, revised edn. Beltz, Weinheim Basel
McAfee A, Brynjolfsson E (2017) Machine, platform, crowd: Harnessing our digital future. W. W.
Norton & Company
Mirbabaie M, Brünker F, Möllmann Frick NR, Stieglitz S (2022) The rise of artificial intelligence:
Understanding the AI identity threat at the workplace. Electron Mark 32(1):73–99.
https://doi.org/10.1007/s12525-021-00496-x
Monroe AE, Ainsworth SE, Vohs KD, Baumeister RF (2017) Fearing the future? Future-oriented
thought produces aversion to risky investments, trust, and immorality. Soc Cogn 35(1):66–78.
https://doi.org/10.1521/soco.2017.35.1.66
Ngo T, Krämer N (2021) It’s just a recipe? – Comparing expert and lay user understanding of
algorithmic systems. Technology, Mind, and Behavior 2(4).
O’Connor C, Joffe H (2020) Intercoder reliability in qualitative research: Debates and practical
guidelines. Int J Qual Methods 19:1–13. https://doi.org/10.1177/1609406919899220
Parasuraman R, Riley V (1997) Humans and automation: Use, misuse, disuse, abuse. Hum Factors
39(2):230–253. https://doi.org/10.1518/001872097778543886
Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction
with automation. IEEE Trans Syst Man Cybern Syst 30(3):286–297.
https://doi.org/10.1109/3468.844354
Parker SK, Grote G (2022) Automation, algorithms, and beyond: Why work design matters more than
ever in a digital world. Appl Psychol 71(4):1171–1204. https://doi.org/10.1111/apps.12241
Parmentier M (2021) Emotional anticipation of future vocational transitions: A person-centered
approach. PsyArXiv. https://doi.org/10.31234/osf.io/8mxy5
Payne EM, Peltier JW, Barger VA (2018) Mobile banking and AI-enabled mobile banking: The
differential effects of technological and non-technological factors on digital natives’
perceptions and behavior. J Res Interact Mark 12(3):328–346.
https://doi.org/10.1108/JRIM-07-2018-0087
Pfeiffer S (2016) Robots, industry 4.0 and humans, or why assembly work is more than routine work.
Societies 6(2):16. https://doi.org/10.3390/soc6020016
Pham Q, Madhavan R, Righetti L, Smart W, Chatila R (2018) The impact of robotics and automation
on working conditions and employment. IEEE Robot Autom Mag 25(2):126–128.
Prahl A, Van Swol LM (2021) Out with the humans, in with the machines?: Investigating the
behavioral and psychological effects of replacing human advisors with a machine. Hum
Commun Res 2:209–234.
Quarfoot D, Levine RA (2016) How robust are multirater interrater reliability indices to changes in
frequency distribution? Am Stat 70(4):373–384.
https://doi.org/10.1080/00031305.2016.1141708
Rodriguez-Bustelo C, Batista-Foguet JM, Serlavós R (2020) Debating the future of work: The
perception and reaction of the Spanish workforce to digitization and automation
technologies. Front Psychol 11. https://doi.org/10.3389/fpsyg.2020.01965
Roll LC, De Witte H, Wang H-J (2023) Conceptualization and validation of the occupation insecurity
scale (OCIS): Measuring employees’ occupation insecurity due to automation. Int J Environ
Res Public Health 20(3):2589. https://doi.org/10.3390/ijerph20032589
Rosso BD, Dekas KH, Wrzesniewski A (2010) On the meaning of work: A theoretical integration and
review. Res Organ Behav 30:91–127. https://doi.org/10.1016/j.riob.2010.09.001
Russell SJ, Norvig P (2010) Artificial intelligence: A modern approach, 3rd edn. Pearson
Sartori L, Bocca G (2023) Minding the gap(s): Public perceptions of AI and socio-technical
imaginaries. AI & Soc 38:443–458. https://doi.org/10.1007/s00146-022-01422-1
Schepman A, Rodway P (2020) Initial validation of the general attitudes towards artificial intelligence
scale. Comput Hum Behav Rep 1:100014. https://doi.org/10.1016/j.chbr.2020.100014
Schwabe H, Castellacci F (2020) Automation, workers’ skills and job satisfaction. PLoS One
15(11):e0242929. https://doi.org/10.1371/journal.pone.0242929
Seubert C, Hopfgartner L, Glaser J (2021) Living wages, decent work, and need satisfaction: An
integrated perspective. Eur J Work Organ Psychol 30(6):808–823.
https://doi.org/10.1080/1359432X.2021.1966094
Smids J, Nyholm S, Berkers H (2020) Robots in the workplace: A threat to or opportunity for
meaningful work? Philos Technol 33(3):503–522.
https://doi.org/10.1007/s13347-019-00377-4
Sorgner A (2017) The automation of jobs: A threat for employment or a source of new
entrepreneurial opportunities? Foresight STI Gov 11(3):37–48.
https://doi.org/10.17323/2500-2597.2017.3.37.48
Spencer DA (2018) Fear and hope in an age of mass automation: Debating the future of work. New
Technol Work Employ 33(1):1–12. https://doi.org/10.1111/ntwe.12105
Strauss K, Griffin MA, Parker SK (2012) Future work selves: How salient hoped-for identities motivate
proactive career behaviors. J Appl Psychol 97(3):580–598.
https://doi.org/10.1037/a0026423
Strich F, Mayer A-S, Fiedler M (2021) What do I do in a world of artificial intelligence? Investigating
the impact of substitutive decision-making AI systems on employees’ professional role
identity. J Assoc Inf Syst 22(2). https://doi.org/10.17705/1jais.00663
Susskind D (2020) A world without work: Technology, automation, and how we should respond.
Metropolitan Books, New York
Toscanelli C, Udayar S, Massoudi K (2022) Antecedents and consequences of technology appraisal: A
person-centered approach. Technology, Mind, and Behavior 3(3).
https://doi.org/10.1037/tmb0000081
Toshav-Eichner N, Bareket-Bojmel L (2022) Yesterday's workers in tomorrow's world. Pers Rev
51(5):1553–1569. https://doi.org/10.1108/PR-02-2020-0088
Trenerry B, Chng S, Wang Y, Suhaila ZS, Lim SS, Lu HY, Oh PH (2021) Preparing workplaces for digital
transformation: An integrative review and framework of multi-level factors. Front Psychol
12. https://doi.org/10.3389/fpsyg.2021.620766
Van Boven L, Ashworth L (2007) Looking forward, looking back: Anticipation is more evocative than
retrospection. J Exp Psychol Gen 136(2):289–300.
https://doi.org/10.1037/0096-3445.136.2.289
Walf J, Nachtwei J (2016) Qualitative Inhaltsanalyse: Ein Methodenbaukasten an einem
Forschungsbeispiel aus der Vertriebspsychologie [Qualitative content analysis: A toolbox of
methods using a research example from sales psychology]. In: Beyer R, Krause B, Nachtwei J
(eds) Empirische Evaluationsmethoden Band 20 Workshop 2015. Zentrum für empirische
Evaluationsmethoden e.V., Berlin, pp 53–68
Wang P (2019) On defining artificial intelligence. J Artif Gen Intell 10(2):1–37.
https://doi.org/10.2478/jagi-2019-0002
Wang W, Siau K (2019) Potential impact of artificial intelligence on mental well-being (Conference
session). 25th Americas Conference on Information Systems, Cancun, Mexico.
https://aisel.aisnet.org/amcis2019/treo/treos/73
White JC, Ravid DM, Siderits IO, Behrend TS (2022) An urgent call for IO psychologists to produce
timelier technology research. Ind Organ Psychol 15(3):441–459.
https://doi.org/10.1017/iop.2022.26
9 Appendix
This appendix contains all the detailed results of our five interview studies.
Table A1
Study 1 with AI and Psychotherapy Experts: Potential Application Fields of AI-Driven Automation in Diagnostics

Potential application fields in the diagnostic process of psychotherapy (number of codings in parentheses)

Phase-specific

Phase 1: Initial consultation
- Structured recording of symptoms, comparison with the ICD-10a as a preliminary classification of the disorder. Verification of the completeness of the recording of the symptomatology in the sense of quality assurance and for diagnostic support and indication. (1)
- Chatbot conducts the initial conversation with the patient, compares it with conversations from a big data pool, and makes a recommendation for the further procedure with the patient. (1)
- Chatbot conducts the initial consultation. AI analyses speech and incorporates data from monitoring the patient's everyday life into the analysis. (1)

Phase 2: Anamnesis
- The life history is written down by the patient on the computer and AI analyses the wording with regard to disease-relevant symptoms. (2)
- AI has access to electronic medical records and relates the information to the symptomatology. Looking into the future: access to all of a person's stored data. AI collects structured anamnestic information by interacting with patients on the computer. (2)
- AI as a support for anamnesis. (2)
- For anamnesis, EMAsb are used and evaluated with AI, or AI obtains anamnestic information by interacting with patients on the computer. (3)

Phase 3: Testing
- The use of test procedures was mentioned by all experts, referring to the implementation and evaluation of tests. Tests are filled out on the computer. Inclusion of gestures and facial expressions during the test to obtain additional information. AI should make diagnostic assessments based on this and give recommendations for further diagnostic procedures. This procedure would save time for psychotherapists. (4)
- AI helps to standardize test procedures. AI-based item-response testing, where AI calculates which further questions can be excluded after each answer. Automatic execution and evaluation of cognitive tests, e.g., word fluency. (4)
- AI evaluates tests by comparison with a data pool consisting of many other test results. (3)
- Use of highly structured test procedures or EMAsb. AI combines different questionnaires and evaluates them. The psychotherapist retains decision-making authority. (4)

Phase 4: Indication
- A treatment plan tailored to the patient: AI makes guideline-based treatment recommendations and creates an individualized treatment plan. Expert system that uses big data to suggest psychotherapy approaches that have benefited people of the same age or with the same disorder, for example. (4)
- AI provides a recommendation for the appropriate psychotherapy approach. (1)
- Data from the automated anamnesis and testing procedures flow into an AI evaluation program, which makes an ICD-10 diagnosis and recommends therapy based on these data. (2)

Phase 5: Patient education & treatment contract
- Assuming that these process steps are handled completely digitally, AI could analyse where an abandonment occurred in the process. (1)
- AI-based software communicates all necessary information. The patient must agree to all information. (1)

Total codings (phase-specific): 36

General

Monitoring patients' everyday life
- Digital phenotyping to enable personalized indication. In addition, the integration of personal values with environmental factors would be conceivable. Monitoring via mobile sensing (GPS, sleep data, smartphone usage data) and AI-based evaluation. (1)
- Capturing sentiment through mobile sensing and AI-based evaluation. (2)
- Automatic collection and AI-based readout of sensor, motion, and voice data over several weeks. (1)

Monitoring patient–therapist interaction
- AI-based evaluation of facial expressions and gestures. (1)
- Multimodal recording by microphones, cameras, and physiological measuring instruments during the conversation between patient and psychotherapist. (1)
- Real-time mirroring: the personal values of the patient are recorded and displayed graphically to the psychotherapist. (1)

Speech monitoring
- Speech analysis of the patient, where AI looks for correlations with pathologies. (2)

Monitoring whole life
- Video and audio recordings of the entire life as a basis for AI-based mental health assessment. (1)

Multimodal AI-based analysis
- Multimodal analysis of biomarkers, imaging, genetics, progression, and behavioural markers for personalized diagnosis and indication. (2)

Automated diagnosis via brain scan
- The entire diagnostic process uses AI-based pattern analysis of brain scans for clarification, indication, and prognosis. This expertise is based on a neurobiological model of mental disorders. (1)

Complete replacement
- Technical capabilities exist to use AI throughout the entire diagnostic process, but this is not socially accepted. (2)

Total codings (general): 15

Total codings: 51

Note. The diagnostic process in psychotherapy is divided into five phases. Potential application fields were reported specifically for these five phases and in general, i.e., phase-independent. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
aICD-10 stands for "International Statistical Classification of Diseases and Related Health Problems, Version 10" and is the most important, globally recognized classification system for medical diagnoses. bEMA stands for "Ecological Momentary Assessment" and is a method that assesses physiological and psychological data of people in real time in their everyday life.
Table A2
Study 1 with AI and Psychotherapy Experts: Potential Application Fields of AI-Driven Automation in Therapy

Potential application fields in the therapeutic process of psychotherapy (number of codings in parentheses)

Phase-specific

Phase 1: Initial phase
- Increase quality by using an AI-based avatar. (1)

Phase 2: Implementation phase
- AI supports process diagnostics. (1)
- AI-based support for the implementation of the interventions. (1)
- AI evaluates the patient's assessment of the therapy session and forwards it to the psychotherapist. (1)

Phase 3: Termination of the therapy
- AI-based support for the termination of therapy by identifying risk factors and health-promoting factors through everyday patient monitoring. (2)
- AI-based process evaluation at the end of therapy. (1)

Total codings (phase-specific): 7

General

Self-help programs/apps
- Chatbot that queries mood and educates about one's own clinical picture, or chatbot as an additional tool that sets tasks between therapy sessions. These tasks are evaluated by AI and assessed by the psychotherapist. (4)
- There is no AI yet that can process the psychotherapeutic process. The use of AI-based psychoeducation programs can lead to cost reduction. (2)
- Self-help programs support patients in their daily lives or in difficult situations. (1)
- AI-based chatbot as a support system for the patient. Use of AI-based chatbots conceivable for less severe diseases. (2)

Monitoring during the therapy session
- AI as a support system for the psychotherapist's self-reflection. Multimodal processing and evaluation of interaction events via speech, gestures, and facial expressions during therapy by AI to support intervention decisions, and thus higher quality assurance of the therapeutic process. (4)
- Monitoring of the therapy process by AI. (2)
- Processing and evaluation of interaction events during therapy by AI to support intervention decisions. (1)

Monitoring in the everyday life of the patient
- Capturing behaviour and mood by monitoring the patient in everyday life; AI-based evaluation enables the identification of critical moments and provides guidance for intervention. (1)
- Capturing behaviour and mood by monitoring the patient in everyday life; AI-based evaluation provides intervention advice. Additionally, behaviour and mood can be related to environmental factors. (2)

All-round care through AI/JITAIsa
- AI-based programs care for patients around the clock by constantly collecting mobile sensing data and intervening autonomously before problem phases, based on prior agreement between the patient and psychotherapist. (3)
- AI-based programs care for patients around the clock. EMIsb are used here, which intervene independently in situations or conditions previously defined with the psychotherapist. (2)

Complete replacement
- Every person has a personal AI-based psychotherapist from childhood on. (1)

Total codings (general): 25

Total codings: 32

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
aJITAIs stands for "Just-in-Time Adaptive Interventions". bEMIs stands for "Ecological Momentary Interventions". Both are methods in which mood or behavioural data, sometimes in combination with environmental data, are collected through monitoring in the person's daily life and analysed in real time by AI to immediately provide the person with individualized intervention suggestions tailored to the situation.
Table A3
Study 1 with AI and Psychotherapy Experts: Opportunities of AI-Driven Automation in Psychotherapy

Opportunities (number of codings in parentheses)

In general
- The use of AI can help to carry out the diagnostic and therapeutic process in a guideline-assured, quality-assured, structured, and standardized manner. (9)
- Time saving through AI; thus more patients can be treated or existing patients can be treated with better quality. (6)
- More objective data through AI. (3)
- AI applications represent a low-threshold offering for digitally affine people, young people due to their high digital affinity, people with very shameful problems, and patients with anxiety disorders or social-phobic disorders. (4)
- The use of AI can improve the care situation in psychotherapy. (2)
- Physical and psychological relief of the psychotherapist. (2)
- AI is available to the patient without a time limit. (2)
- Cost saving, allowing more patients to be treated or patients to receive better care. (2)
- The use of AI stimulates the psychotherapist's self-reflection (AI provides feedback). (1)
- Partial automation through AI allows psychotherapists to focus on activities they deem essential. (1)
- AI-based systems are useful for summarizing and analysing conversations. (1)
- The psychotherapist learns from AI when assisted by AI. (1)
- Quality improvement through competition between AI and the psychotherapist. (1)
- Use of AI has a positive impact on the psychotherapist's sense of competence. (1)
- Conversational interactions through chatbots are technically possible. (1)
- Control and regulation function of AI in terms of quality assurance. (1)
- AI-based therapy services as a bridging service due to long waiting times. (1)
- Use of AI-based chatbots conceivable for less severe diseases. (1)
- AI can compensate for deficits of the psychotherapist. (1)
- Human-looking AI systems appear more trustworthy to humans than non-human-looking AI systems. (1)

Total codings (in general): 42

In diagnostics
- More differentiated data through monitoring than the traditional differential diagnostic survey. (7)
- AI can better assess crisis escalations. (3)
- AI data analytics detects comorbidities and identifies them earlier than the psychotherapist through long-term monitoring. (2)
- Expert systems as a possibility, in which expert, experiential, and scientific knowledge is bundled and recommendations for action are given on this basis. (1)
- Improvement of test diagnostics through automated execution and evaluation. (1)
- Supporting the psychotherapist through AI in classification, diagnosis, and early detection. (1)
- AI-based systems are very well suited for the recording and evaluation of precise tasks and tests, the comparison of data, and the processing and summarization of data. (1)
- Quality enhancement through more precise recording of data and test performance and thus more precise validation of diagnosis and indication. This can lead to further development of diagnostic criteria. (1)
- Evaluation of video sequences with regard to the fit between psychotherapist and patient and, thereby, in connection with the success of the therapy. (1)
- Replacement of questionnaires with serious gaming. AI filters out important information. (1)
- Capturing individual resilience and risk factors through AI. (1)

Total codings (in diagnostics): 20

In therapy
- AI-based applications as a support system in the patient's daily life for self-examination of the problem and as therapy motivation. (4)
- AI can support with recommendations for interventions. (2)
- AI as a support system in the patient's daily life to identify behaviour patterns and mood changes to facilitate intervention decisions. (2)
- Monitoring at all levels facilitates reflection on the therapy process. (2)
- AI assists with process diagnostics and makes intervention suggestions. (2)
- Monitoring at all levels enables an individualized therapeutic approach. (2)
- AI applications can be helpful in the therapy of patients with autism, sexual disorders, or dementia. (1)
- Cost-effectiveness through interventions delegable to AI (psychoeducational interventions, mindfulness exercises). (1)
- All-round care through AI as an opportunity in terms of extending the therapeutic relationship to virtual therapy. (1)
- Long-term and comprehensive multimodal data collection can relate events from the past to current therapy events. (1)
- AI collects data on psychotherapeutic experience and derives intervention suggestions. (1)
- Monitoring the patient's everyday life provides intervention information. (1)

Total codings (in therapy): 20

Total codings: 82

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A4
Study 1 with AI and Psychotherapy Experts: Threats of AI-Driven Automation in Psychotherapy

Threats (number of codings in parentheses)

In general
- AI as a source of objective data must be checked for plausibility by the psychotherapist before it is used in a supportive manner. (12)
- Lack of standardization makes it much more difficult to compare the algorithms of different AI applications: the amount of data, the representativeness of the data, the AI method and the algorithm behind it, as well as the capacity of the data processing all have an impact on the output. Possible biases in the data would be adopted and reproduced by AI. This could lead to structural misjudgements. (7)
- Liability unresolved with respect to diagnostic and therapeutic decisions. (5)
- AI cannot replace human relationships. (4)
- Acceptance problems regarding the professional identity of the psychotherapist, lack of knowledge to understand AI, fear that the psychotherapist will be replaced by AI, patients rejecting digital applications, and different understandings of disease. (4)
- Risk of data misuse due to insufficient data protection and lack of data protection concepts for the use of AI in psychotherapy. (3)
- Quality of AI applications depends on the quality of the data used. (3)
- Basic ethical principles are not incorporated into the algorithms. (2)
- Lack of quality assurance with regard to the psychotherapeutic expertise of developers. (2)
- Without interdisciplinary collaboration, viable and functioning AI systems are not possible. (2)
- Results of AI-based algorithms are partly not comprehensible even to their developers. (2)
- Risk of absolute transparency of the patient. (2)
- For patients with traumatic experiences or paranoid disorders, video or audio recording can be retraumatizing or disturbing. (1)
- Quality of AI applications depends on the quality of the data used. (1)
- AI can only understand what AI can perceive; thus AI misses various aspects of communication that the psychotherapist does not miss. (1)
- Current AI applications have high abandonment rates. (1)
- Long-term effectiveness of AI applications not yet proven. (1)
- Possible limits of technical processing of the complex amount of data. (1)
- AI use may contribute to the economization of the healthcare system and replace psychotherapy. (1)
- Complete replacement of the psychotherapist by AI systems is too difficult due to complexity. (1)
- Development of psychotherapeutic AI applications is extremely complex, time-consuming, and expensive, and requires very large amounts of data. (1)
- Technology-shy patients could be put off by AI applications in psychotherapy. (1)
- Lack of psychotherapist qualification to review AI data. (1)
- Loss of the holistic human image and, associated with this, a fundamental ethical problem. (1)
- Non-use of AI may lead to a decline in the quality of psychotherapy. AI-based recommendations could become mandatory for psychotherapists. (1)
- Increased use of AI may lead to new care structures. (1)
- Use of AI could lead to qualification requirements for psychotherapists on the part of payers and become a quality feature of psychotherapy. (1)
- Use of AI could lead to a lack of separation between patients' life and the psychotherapeutic space. (1)
- There is insufficient research on data collection to make it useful. (1)
- No choice anymore between a human psychotherapist and AI as psychotherapist, especially problematic for the elderly. (1)
- AI cannot learn human capabilities such as acceptance, tolerance, and empathy. (1)

Total codings (in general): 67

In diagnostics
- AI cannot replace the subjective experiential knowledge of the psychotherapist. (2)
- Action based on suicidality should not be taken by AI. (1)
- Increased pathologization possible through differentiated information data. (1)
- If AI could only be used for mild cases, psychotherapists would have to take all severe cases. (1)

Total codings (in diagnostics): 5

In therapy
- The essential psychotherapeutic effective factors cannot be recognized and implemented by an AI; exclusive use of AI cannot replace the therapeutic relationship as an effective factor. (5)
- AI applications as a substitute for therapy are not individually tailored to patients and reduce the quality of the psychotherapeutic process. (3)
- AI could dominate therapeutic decisions. (2)
- The research and development level of AI is insufficient to perform psychotherapy. (2)
- Use of AI may threaten the therapist–patient relationship. (1)
- Monitoring in the patient's daily life can mean additional workload for psychotherapists because they have to act immediately in dangerous situations or situations that are difficult to assess. (1)
- Use of AI complicates or prevents work on patients' relational skills. (1)
- Mandatory use of AI could threaten psychotherapy process quality. (1)
- AI does not have personality traits that are a basis of relationship formation (sympathy–antipathy matching). (1)

Total codings (in therapy): 17

Total codings: 89

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A5
Study 2 with AI and Medical Experts: Existing and Future Application Fields of AI-Driven Automation in Medicine

Application fields (number of codings: existing / future)

General
- In imaging specialities: 9 / 8
- In medicine in general: 2 / 7
- In a large part or all of medicine: - / 10
Total codings (general): 11 / 25

Specific
- Radiology: 13 / 11
- Pathology: 4 / 4
- Dermatology: 4 / 3
- Oncology: 4 / 1
- Cardiology: 2 / 1
- Neurology: 1 / 1
- Internal medicine: 1 / -
- Orthopaedics: 1 / 1
- Ophthalmology: 1 / 2
- Surgery: - / 5
- Psychiatry: - / 1
- Intensive care medicine: - / 1
- Dentistry: - / 1
Total codings (specific): 31 / 32

None: 8 / -

Total codings: 50 / 57

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A6
Study 2 with AI and Medical Experts: Potential Impact of AI-Driven Automation on Job Tasks of Chief Physicians

Potential impact on job tasks (number of codings in parentheses)

Leadership tasks
- Administrative tasks (3)
- Unspecific application possibilities (1)
Total codings (leadership tasks): 4

Clinical tasks
- Decision support (1)
- Anamnesis/pre-diagnostics/diagnostics (4)
- Image processing (2)
- Unspecific application possibilities (9)
Total codings (clinical tasks): 16

Leadership and clinical tasks (9)

Total codings: 29

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A7
Study 2 with AI and Medical Experts: Opportunities of AI-Driven Automation for Chief Physicians

Opportunities (number of codings in parentheses)

Quality improvement of the working environment
- Execution of core job tasks (4)
- Optimization of working conditions and improvement of quality (14)

Quality improvement of patient care
- Personalized treatments and therapies (15)
- More contact and exchange with patients (12)

Increased effectiveness and efficiency at work (33)

Preservation of relevant resources
- Administrative tasks are taken over (9)
- Routine tasks are taken over (3)
- Use of AI for unpopular job tasks (3)
- Use of AI for diagnoses and/or interventions (1)
- Other (17)

Better data collection and/or data storage (21)

Decision support (19)

Speciality-specific opportunities
- Neurology (1)
- Dermatology (1)
- Ophthalmology (1)
- Gastroenterology (1)
- Oncology (1)
- Internal medicine (1)
- Radiology (4)
- Surgery (4)

Interdisciplinary information flow and/or transparency of patient data (10)

Bias mitigation and/or error reduction (6)

Securing junior staff (2)

New jobs and/or medical specialties (1)

Other opportunities (11)

Total codings: 195

Note. For better readability of the table, an originally existing intermediate category level was omitted in the presentation of the results. The affected categories are "Quality improvement of the working environment", "Quality improvement of patient care", and "Preservation of relevant resources". "Quality improvement of the working environment" and "Quality improvement of patient care" originally consisted of two levels: "Quality improvement" at the first level and "Working environment" and "Patient care" at the second level. For this table, these were combined at the first level. "Preservation of relevant resources" was originally followed by the subcategory "specific use for preservation of resources", but this only served as another heading for the same content. For the present presentation, this intermediate level was omitted.
The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A8
Study 2 with AI and Medical Experts: Threats of AI-Driven Automation for Chief Physicians

Threats (number of codings in parentheses)
- Change in multiple roles (18)
- Dissatisfaction and/or problems with AI output (13)
- Rejection and/or lack of acceptance of AI by chief physicians and/or in the team (11)
- Lack of generalizability and/or proneness to errors (10)
- Problems with data situation and/or data privacy (9)
- Error-proneness of AI (9)
- Lack of competencies of chief physicians in dealing with AI (9)
- Job loss and/or elimination of socially relevant job tasks (8)
- High effort and/or costs (8)
- Ethical and/or moral aspects of concern (8)
- Liability question in case of AI errors (7)
- Lack of transparency (5)
- General problems in the development of AI (4)

Total codings: 119

Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are an English translation.
Table A9
Study 3 with Financial Technology Experts: Attitudes Toward the Use of AI in the World of Work

| Attitudes | Number of codings |
|---|---|
| *Positive* | |
| Next industrial revolution | 3 |
| Useful for the automation of processes | 2 |
| Improving the customer interface for companies | 1 |
| Elimination of routine tasks and focus on demanding tasks | 1 |
| Total codings | 7 |
Note. The interviews and the qualitative content analysis were conducted in
German. The results presented here are an English translation.
Table A10
Study 3 with Financial Technology Experts: Conceivable Impact of Using AI in the World of Work

| Impact | Number of codings |
|---|---|
| Job cuts | 5 |
| Efficiency increase | 3 |
| Automation of job tasks | 3 |
| Change of job profiles | 2 |
| Improvement of customer satisfaction | 2 |
| Need for individual development | 2 |
| Tendency towards higher qualification | 1 |
| Need for other competencies and attitudes | 1 |
| Emergence of ethical and legal issues in dealing with AI | 1 |
| Emergence of new jobs | 1 |
| Total codings | 21 |
Note. The interviews and the qualitative content analysis were conducted in
German. The results presented here are an English translation.
Table A11
Study 3 with Financial Technology Experts: Third-Party Assessment of Current and Future Changes in Job Tasks of Bank Clerks Due to AI-Driven Automation

| Changes in job tasks | Currently | In the future |
|---|---|---|
| *Customer service* | | |
| AI systems for service tasks (e.g., account opening) | 2 | - |
| AI systems for customer care | 1 | - |
| AI system for processing customer inquiries | - | 3 |
| *Lending business* | | |
| AI systems for credit decision making | 2 | 1 |
| *Retail banking* | | |
| AI systems in sales | 1 | - |
| AI systems for general product advice | - | 3 |
| *Creation of new job tasks* | | |
| Development towards a life coach | - | 1 |
| Increase in digital tasks | - | 1 |
| Important competence: dealing with the latest media | - | 1 |
| *Clerical work* | | |
| AI systems for general administrative tasks | - | 2 |
| Total codings | 6 | 12 |
Note. The interviews and the qualitative content analysis were conducted in German. The results
presented here are an English translation.
Table A12
Study 3 with Financial Technology Experts: Third-Party Assessment of Bank Clerks' Current and Future Experience of Changes in Job Tasks Due to AI-Driven Automation

| Experience of changes in job tasks (third-party assessment) | Currently | In the future |
|---|---|---|
| *Positive* | | |
| Perception as an opportunity to devote oneself to more demanding tasks | 2 | - |
| No fear of change | 1 | - |
| Interest among under-40s | 1 | - |
| Awareness of digitization | 1 | - |
| Change is perceived as an opportunity | - | 2 |
| Awareness of change will increase | - | 2 |
| Openness to change | - | 1 |
| Total positive codings | 5 | 5 |
| *Negative* | | |
| Threat to own job | 3 | 1 |
| Resignation during the change | 2 | 1 |
| Fear of job loss | 1 | - |
| Exhaustion | 1 | - |
| Anger at having to deal with new systems | 1 | - |
| Worry about automation of job tasks | 1 | - |
| Denial of reality | 1 | 1 |
| Reticence, particularly among over-40s | 1 | - |
| Total negative codings | 11 | 3 |
| Total codings | 16 | 8 |
Note. The interviews and the qualitative content analysis were conducted in German. The results presented here are
an English translation.
Table A13
Study 3 with Financial Technology Experts: Third-Party Assessment of Bank Clerks' Current and Future Behaviour in Reaction to Changes in Job Tasks Due to AI-Driven Automation

| Behaviour in reaction to changes in job tasks (third-party assessment) | Currently | In the future |
|---|---|---|
| No initiative | 2 | 2 |
| Dealing with new systems | 1 | - |
| Job change | 1 | 1 |
| Actively shaping the changes | 1 | 1 |
| Job training | 1 | 2 |
| Initial defensive behaviour | 1 | - |
| Adaptation to change | - | 1 |
| Total codings | 7 | 7 |
Note. The interviews and the qualitative content analysis were conducted in German. The results
presented here are an English translation.
Table A14
Study 4 with Bank Clerks: Attitudes Toward the Use of AI in the World of Work

| Attitudes | Number of codings |
|---|---|
| *Positive* | |
| Work relief | 10 |
| Use of AI makes sense in principle | 10 |
| Elimination of standardized job tasks | 2 |
| Competitive advantage | 1 |
| More effective work | 1 |
| Learning through engagement with machines | 1 |
| Total positive codings | 25 |
| *Negative* | |
| Job cuts | 7 |
| Loss of one's own job | 2 |
| Loss of face-to-face interaction | 2 |
| Danger from autonomous systems | 1 |
| Total negative codings | 12 |
| *Neutral* | |
| Change unstoppable | 5 |
| Total codings | 42 |
Note. The interviews and the qualitative content analysis were conducted
in German. The results presented here are an English translation.
Table A15
Study 4 with Bank Clerks: Conceivable Impact of Using AI in the World of Work

| Impact | Number of codings |
|---|---|
| Job cuts | 12 |
| Efficiency increase | 9 |
| Change of job profiles | 6 |
| Societal changes | 3 |
| Individual willingness to change required | 3 |
| Elimination of face-to-face exchange | 3 |
| Changed customer behaviour | 3 |
| Creation of new jobs | 2 |
| Less appreciation for the work performed | 2 |
| Increasing competition | 2 |
| Change of business models | 1 |
| Cybercrime | 1 |
| Work demands increase | 1 |
| Work relief for employees | 1 |
| Total codings | 49 |
Note. The interviews and the qualitative content analysis were conducted
in German. The results presented here are an English translation.
Table A16
Study 4 with Bank Clerks: Current and Future Changes in Job Tasks Due to AI-Driven Automation

| Changes in job tasks | Currently | In the future |
|---|---|---|
| *Investment business* | | |
| Robo-Advisor | 4 | - |
| AI system for portfolio management | 2 | - |
| AI system for determining risk propensity | 2 | - |
| AI systems to provide general investment advice | - | 3 |
| AI systems for the implementation of asset structures | - | 1 |
| *Customer service* | | |
| AI system for processing customer inquiries (including chatbots) | 4 | 5 |
| AI system for legitimizing customer orders | 3 | - |
| *Lending business* | | |
| AI system for checking creditworthiness | 5 | - |
| AI system in online banking for loans | 1 | - |
| AI systems for credit decisions | - | 1 |
| *Retail banking* | | |
| AI system for product suggestions | 4 | - |
| Avatar in sales | 1 | - |
| AI systems for general product advice | - | 4 |
| AI systems for advising on construction financing | - | 2 |
| AI systems in online banking | - | 2 |
| Avatar as first contact | - | 1 |
| *Clerical work* | | |
| AI system for archiving of files | 1 | - |
| AI systems for filling in contract documents | - | 1 |
| AI systems for garnishment processing | - | 1 |
| AI systems for data analysis | - | 1 |
| AI systems for general administrative tasks | - | 1 |
| *Elimination of job profiles* | | |
| Job profile "Bank clerk" | - | 3 |
| Job profile "Clerk" | - | 1 |
| *Elimination of the "bank" business model* | | |
| AI systems replace the "bank" business model | - | 1 |
| Total codings | 27 | 28 |
Note. The interviews and the qualitative content analysis were conducted in German. The results
presented here are an English translation.
Table A17
Study 4 with Bank Clerks: Current and Future Experience of Changes in Job Tasks Due to AI-Driven Automation

| Experience of changes in job tasks | Currently | In the future |
|---|---|---|
| *Positive* | | |
| Work relief | 9 | - |
| Explicitly positive | 7 | 7 |
| Exciting/thrilling | 4 | - |
| Personal acceptance | 3 | - |
| Openness to changes | 3 | 5 |
| Awareness of the changes | 2 | - |
| Enthusiasm for the changes | 1 | - |
| Joyful anticipation of change | 1 | - |
| Motivated for job training | 1 | - |
| Confidence | - | 3 |
| Total positive codings | 31 | 15 |
| *Negative* | | |
| Initial scepticism | 4 | - |
| Disappointment due to less customer contact | 2 | - |
| Pressure to change | 2 | - |
| Fear of competence loss | 1 | - |
| Wistfulness | 1 | - |
| Overload due to changes | 1 | - |
| Performance pressure | 1 | 1 |
| Frustration when dealing with new systems | 1 | - |
| Harmful to mental health | 1 | - |
| Fear | - | 2 |
| Worries about job loss | - | 2 |
| Psychological effects | - | 2 |
| Rejection of current developments | - | 2 |
| Scepticism about change | - | 2 |
| Poor well-being due to lack of customer contact | - | 1 |
| Pressure to learn continuously | - | 1 |
| Worry due to uncertainty | - | 1 |
| Increased workload | - | 1 |
| Total negative codings | 14 | 15 |
| Total codings | 45 | 30 |
Note. The interviews and the qualitative content analysis were conducted in German. The
results presented here are an English translation.
Table A18
Study 4 with Bank Clerks: Current and Future Behaviour in Reaction to Changes in Job Tasks Due to AI-Driven Automation

| Behaviour in reaction to changes in job tasks | Currently | In the future |
|---|---|---|
| In-house job training | 9 | 4 |
| Dealing with new systems/processes | 9 | - |
| University studies | 5 | - |
| No private training | 5 | - |
| *Explicit dealing with the topic* | | |
| Read up on the topic in general | 2 | - |
| Having dealt with the topic of Robo-advisors | 1 | - |
| Read about Fintechs | 1 | - |
| Exchange with colleagues on new, internal systems | 3 | - |
| *Private training* | | |
| Further training out of interest | 1 | - |
| Training as a Scrum Master | 1 | - |
| State degree in business administration | - | 1 |
| Part-time studies | - | 1 |
| University studies in business informatics | - | 1 |
| Coursera courses on AI | - | 1 |
| No change in behaviour | 2 | - |
| Reflection of the situation | 1 | - |
| Resignation | 1 | - |
| Change of employer | 1 | - |
| More customer interaction | 1 | - |
| No planning/initiative | - | 7 |
| Facing change with openness | - | 5 |
| Taking initiative independently | - | 2 |
| *Job specialization* | | |
| In the direction of innovation | - | 1 |
| Specializing in principle | - | 1 |
| Expanding network | - | 2 |
| Possible job change | - | 1 |
| Planned job change | - | 1 |
| Maintaining willingness to learn | - | 1 |
| Total codings | 43 | 29 |
Note. The interviews and the qualitative content analysis were conducted in German. The
results presented here are an English translation.
Table A19
Study 5 with Physicians: Conceivable Impact of Using AI in the World of Work

| Impact | Number of codings |
|---|---|
| AI support | 22 |
| Efficiency increase | 17 |
| Job cuts | 17 |
| Creation of new job tasks | 16 |
| Error prevention and error reduction | 14 |
| Takeover of job tasks by AI | 11 |
| Quality improvement | 11 |
| Creation of new jobs | 9 |
| Knowledge expansion | 9 |
| More errors and more difficult error checking | 8 |
| Elimination of interpersonal interaction | 6 |
| Data privacy | 5 |
| Time saving | 5 |
| Loss of competence and knowledge | 3 |
| Total codings | 153 |
Note. The interviews and the qualitative content analysis were conducted
in German. The results presented here are an English translation.
Table A20
Study 5 with Physicians: Current and Future Changes in Job Tasks Due to AI-Driven Automation

| Changes in job tasks | Currently | In the future |
|---|---|---|
| Diagnostics | 6 | 12 |
| Drug administration | 6 | 2 |
| Administrative tasks | 4 | 14 |
| Surgeries | 2 | 9 |
| Monitoring of vital parameters | 2 | 2 |
| Continuation of the medical profession | - | 10 |
| Anamnesis | - | 7 |
| Therapy | - | 6 |
| Control of AI by physicians | - | 3 |
| Patient education | - | 1 |
| No change | 3 | 3 |
| Total codings | 23 | 69 |
Note. The interviews and the qualitative content analysis were
conducted in German. The results presented here are an English
translation.
Table A21
Study 5 with Physicians: Current and Future Experience of Changes in Job Tasks Due to AI-Driven Automation

| Experience of changes in job tasks | Currently | In the future |
|---|---|---|
| *Positive* | | |
| Relief | 6 | 11 |
| Light-heartedness | 3 | 11 |
| Satisfaction | 3 | - |
| Joy | 1 | - |
| Fascination | 1 | - |
| Interest, curiosity | - | 10 |
| Confidence | - | 6 |
| Joyful anticipation | - | 4 |
| Total positive codings | 14 | 42 |
| *Negative* | | |
| Frustration | 6 | - |
| Doubts, scepticism, mistrust | 5 | 12 |
| Anger, fury | 4 | - |
| Fear | 2 | 2 |
| Total negative codings | 17 | 14 |
| Total codings | 31 | 56 |
Note. The interviews and the qualitative content analysis were conducted
in German. The results presented here are an English translation.
Table A22
Study 5 with Physicians: Current and Future Behaviour in Reaction to Changes in Job Tasks Due to AI-Driven Automation

| Behaviour in reaction to changes in job tasks | Currently | In the future |
|---|---|---|
| Avoidance behaviour | 5 | - |
| Developing trust in AI | 5 | - |
| Controlling and questioning AI | 4 | 10 |
| Informing oneself and acquiring knowledge | 3 | 12 |
| Using AI | 2 | 13 |
| More simultaneous work | 1 | 9 |
| Trying out AI | 1 | 5 |
| Participation in AI development process | - | 4 |
| Maintaining own competencies | - | 2 |
| Patient education | - | 1 |
| Considering a job change | - | 2 |
| No specific behaviour | 5 | - |
| Total codings | 26 | 58 |
Note. The interviews and the qualitative content analysis were conducted in
German. The results presented here are an English translation.