Academic Editor: Maja Meško
Received: 15 December 2024; Revised: 12 January 2025; Accepted: 25 January 2025; Published: 29 January 2025
Citation: Frenkenberg, A.; Hochman, G. It’s Scary to Use It, It’s Scary to Refuse It: The Psychological Dimensions of AI Adoption—Anxiety, Motives, and Dependency. Systems 2025, 13, 82. https://doi.org/10.3390/systems13020082
Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
It’s Scary to Use It, It’s Scary to Refuse It: The Psychological
Dimensions of AI Adoption—Anxiety, Motives, and Dependency
Adi Frenkenberg and Guy Hochman *
Baruch Ivcher School of Psychology, Reichman University, Herzliya 4610101, Israel;
adi.frenkenberg@post.runi.ac.il
*Correspondence: ghochman@runi.ac.il; Tel.: +972-9-9602422
Abstract: The current study examines the psychological factors shaping AI adoption,
focusing on anxiety, motivation, and dependency. It identifies two dimensions of AI
anxiety: anticipatory anxiety, driven by fears of future disruptions, and annihilation anxiety,
reflecting existential concerns about human identity and autonomy. We demonstrate a U-
shaped relationship between AI anxiety and usage, where moderate engagement reduces
anxiety, and high or low levels increase it. Perceived utility, interest, and attainment
significantly correlate with AI engagement, while frequent AI usage is linked to high
dependency but not to anxiety. These findings highlight the dual role of psychological
factors in hindering and alleviating AI usage. This study enriches the understanding
of emotional and motivational drivers in AI adoption and highlights the importance of
balanced implementation strategies to foster sustainable and effective AI integration while
mitigating the risks of over-reliance.
Keywords: digital transformation; AI anxiety; anticipatory anxiety; annihilation anxiety;
AI usage; motivational factors; problematic use; technology adoption; AI dependency
1. Introduction
Artificial intelligence (AI) has rapidly become a transformative force, shaping industries and societies alike. Its adoption has sparked significant debate, with some tools facing regulatory bans, such as temporary restrictions in Italy and ongoing prohibitions in China [1]. AI is predicted to contribute $15.7 trillion to the global economy by 2030—surpassing the current output of China and India combined [2]. While offering immense opportunities, AI also raises critical concerns, including distinguishing real from artificial content, addressing ethical issues like bias and transparency, managing uncontrolled growth, and mitigating job displacement risks [3]. Moreover, there is growing recognition that algorithms, in specific contexts, may improve decision-making accuracy compared to human judgment. These complexities highlight the need for a deeper understanding of the psychological and societal dimensions of AI adoption.
AI has been widely recognized as the current technological revolution, provoking both interest and debate. Over 73% of U.S. companies have adopted or plan to adopt AI technologies by 2024 [4], and 75 to 375 million workers—3–14% of the global workforce—may need to change occupations or upgrade their skills by 2030, depending on the pace of AI adoption [5]. PwC’s 2024 Global CEO Survey provides similar indications, with 69% of global CEOs expecting AI to necessitate workforce reskilling [6]. These projections reflect the profound impact of AI on both industries and the global workforce.
The benefits of AI, e.g., enhanced efficiency, effectiveness, and decision-making accuracy, are well-documented [7–10]. Still, many individuals and organizations remain
hesitant to adopt AI tools despite their advantages. For instance, a 2024 U.S. survey found that 37% of respondents had never used AI tools, and 20% were unsure if they had [11]. Cautious (54%), concerned (49%), and skeptical (40%) attitudes toward AI are prevalent [11], particularly among small and medium-sized enterprises, with half reporting no plans to implement AI [12]. This reluctance reflects “algorithm aversion” [13], where individuals resist relying on AI for decision-making, even when it could improve outcomes, such as in healthcare [14,15]. Addressing this hesitation is crucial for realizing AI’s potential benefits.
As AI continues to advance, its constraints and ethical considerations are becoming increasingly critical in shaping adoption and integration. AI’s reliance on massive amounts of human data raises significant concerns about data privacy [16] and questions of ownership over AI-generated outputs [17]. A major barrier to adoption is the lack of trust in AI-driven systems, which is essential for both managers and employees as trust directly influences acceptance and usage [18]. One source of distrust is the perception that AI systems, trained on vast datasets of human behaviors, often inherit and amplify human biases. For instance, machine learning algorithms trained on historical hiring data have been shown to perpetuate gender and racial biases, creating ethical and operational challenges for organizations striving for fairness [19]. AI has also been found to produce biased decisions based on individuals’ dialects and even fabricate fake references, further eroding trust in its outputs [20,21]. Such issues contribute to perceptions of AI as opaque, unreliable, and unfair, especially for those negatively impacted by its biases [22].
Ethical concerns surrounding AI extend beyond bias to include transparency, control, accuracy, and accountability [23,24]. Many view AI systems as “black boxes,” leading to confusion and anxiety about their outputs [25]. Employees may distrust these systems, fearing errors or a lack of clear accountability pathways [26]. Concerns about data misuse or exposure also amplify resistance to adoption, especially regarding data privacy and surveillance [27]. To address these challenges, organizations are increasingly implementing ethical guidelines for AI. Initiatives like the Organization for Economic Co-operation and Development’s “Principles for AI” and the European Commission’s “Ethical Guidelines for Trustworthy AI” aim to promote responsible AI usage, reduce public anxieties, and foster trust [28,29]. These measures emphasize transparency, fairness, and ethical governance to enable sustainable AI adoption.
Although several concerns about AI may seem unique to the current technological landscape, they resonate with historical patterns of anxiety accompanying disruptive technologies. From the Industrial Revolution to the rise of digital technologies, these anxieties often stem from fears of displacement, erosion of social norms, or loss of control, which mirror contemporary apprehensions about AI systems. This phenomenon, termed “technology anxiety,” increases cognitive load and impedes adoption [30]. Over time, various forms of technology anxiety have been conceptualized, like “Technophobia” [31], reflecting a fear of new technology; “Computer Anxiety” [32], describing apprehension toward human–computer interaction; and “Technostress” [33], characterized by an inability to cope with technological change, often leading to mental health issues [34]. The important effect of these anxieties on education [35,36] and consumer behavior [37,38] was demonstrated in earlier technological transitions. This historical context highlights the importance of understanding how user perceptions and anxieties affect the adoption of novel technologies.
Wach et al. [39] provide a thorough examination of the factors shaping perceptions of AI systems. They categorize these barriers into seven critical domains, including regulatory shortcomings, algorithmic biases, and technostress. These barriers emphasize the intricate interplay between ethical, social, and psychological concerns in fostering or hindering AI adoption. For instance, the absence of robust regulations exacerbates trust issues, while algorithmic biases erode perceptions of fairness and equity. Furthermore, the psychological condition stemming from difficulties in adapting to new technologies [33] underscores the mental toll that AI integration can impose on individuals and organizations. Together, these dimensions frame the complex environment within which AI adoption must be understood.
The current study addresses these multifaceted challenges by focusing on the psychological dimensions of adopting AI conversational agents (e.g., ChatGPT), decision-support systems, and productivity tools. These systems, widely used by individuals and organizations, directly engage users in decision-making and cognitive tasks. Drawing from the existing literature, we explore how AI adoption is shaped by two distinct AI-anxiety constructs: anticipatory anxiety, conceptualized as a forward-looking apprehension about the potential disruptions AI might bring, particularly concerning job displacement and ethical dilemmas, and annihilation anxiety, which reflects existential fears about the erosion of human identity and autonomy in the face of increasingly advanced AI systems. The study hypothesizes distinct patterns of interaction between these anxieties and AI usage, emphasizing both immediate apprehensions and deeper existential concerns that influence adoption behaviors (see Figure 1).
Figure 1. An illustration of the proposed theoretical model and hypotheses.
Building on cognitive-behavioral theories, we posit that the relationship between AI usage and anxiety follows a U-shaped trajectory due to distinct psychological processes. Initially, exposure to AI can alleviate anxiety by fostering familiarity and competence, consistent with the principles of exposure therapy, where repeated interactions with a perceived threat diminish fear over time [40]. Studies on technology adoption and human–computer interaction show that increased usage correlates with reduced anxiety and improved confidence in digital tools [41,42]. This suggests that familiarity can reduce uncertainty and build trust in AI systems.
However, as engagement with AI deepens, users may encounter unforeseen complexities, ethical dilemmas, or dependencies that reignite anxiety. Research shows that overexposure to technology can amplify concerns about trust, autonomy, and reliability [43,44]. For instance, studies indicate that high reliance on AI systems can lead to perceptions of diminished control and increased vulnerability to system failures [18]. Ethical concerns, such as bias or opacity in decision-making processes, may exacerbate anxieties among users who feel over-reliant on AI for critical tasks [45,46]. These dynamics illustrate the dual effects of AI adoption: while moderate engagement reduces anxiety through
familiarity, excessive reliance can heighten anxiety due to dependency and trust-related
concerns. Accordingly, we hypothesize the following:
Hypothesis 1. There is a U-shaped relationship between AI usage and AI anxiety, with a turning
point at which increased usage is associated with an increase in AI anxiety.
This hypothesis provides a nuanced perspective on how engagement with AI in-
fluences user reactions. Understanding this relationship will offer critical insights for
organizations striving to balance usage and familiarity-building initiatives with strategies
to address emergent concerns as AI integration deepens.
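To make the hypothesized shape concrete, Hypothesis 1 corresponds to a quadratic regression of anxiety on usage. The functional form below is a sketch implied by the hypothesis, written in the same notation used for the results later in the paper, not an equation reported by the authors:

AI Anxiety = b0 + b1 × AI Usage + b2 × (AI Usage)^2 + error, with b2 > 0.

A negative linear term combined with a positive quadratic term yields a curve on which anxiety first declines and then rises as usage increases; the sign and significance of b2 are what the quadratic regression reported in Section 3 is designed to test.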
1.1. The Dual Nature of AI Anxiety
AI anxiety can lead to a myriad of cognitive and emotional responses to AI adoption,
reflecting both its potential benefits and inherent risks. Another aim of the current work
is to show that two important dimensions of AI anxiety are anticipatory anxiety, which
arises from fears of future disruptions or challenges, and annihilation anxiety, which reflects
deeper existential concerns about human identity and autonomy. These dimensions provide
a framework for understanding the dual nature of AI anxiety, where moderate engagement
may alleviate fears while deeper reliance could amplify them.
In recent years, researchers have focused on understanding the dimensions of AI anxiety [3,47,48]. Studies highlight concerns such as fear of being superseded, anxiety about AI integration into the workplace, and inadequate preparation for the AI era [3]. Wang and Wang [47] developed the AI Anxiety Scale (AIA) to measure the fear or anxiety that inhibits individuals from engaging with AI. They identify four key dimensions: job replacement anxiety, anxiety over human dependency on AI, anxiety toward humanoid AI, and anxiety about acquiring AI-related skills. These dimensions reflect both anticipatory and annihilation anxieties: anticipatory anxiety is evident in fears about job replacement and inadequate preparation, while annihilation anxiety manifests in concerns about sociotechnical dependency and the erosion of human autonomy. A critical implication of these anxieties is their potential to hinder technology adoption. For instance, Chang et al. [49] found a significant negative relationship between AI anxiety and AI adoption intention, underscoring the importance of addressing these fears to foster broader acceptance and effective integration of AI into organizational and societal contexts.
Barlow, Chorpita, and Turovsky [50] describe anticipatory anxiety as the inclination to anticipate or predict a possible future threat, danger, or negative event, leading to anxiety in response to that expectation. Anticipatory anxiety, the shadow side of AI anxiety, reflects a future-oriented unease about what AI might disrupt or replace. This fear, much like past technology-related anxieties, often stems from uncertainty about the implications of AI adoption [3,51]. Anticipatory anxiety introduces unease, just like earlier instances of computer anxiety, where concerns about dehumanization were prominent [52]. This type of anxiety presents significant challenges for AI adoption by influencing individual decisions and shaping organizational and governance strategies. Thus, understanding the relationship between the shadow side of AI anxiety and technology acceptance is crucial to informing policymakers and designing strategies to facilitate effective AI system implementation [51].
The current study posits that anticipatory anxiety may serve as a foundational driver of AI anxiety, particularly during the early adoption phase, when users have limited or no direct experience with these systems. According to the fear extinction model [53], avoidance behaviors can reinforce these anxieties, preventing individuals from challenging unrealistic beliefs about AI. This perpetuates mistrust, hinders adoption, and underscores the need for strategies to address anticipatory fears during initial interactions with AI.
When linked to AI, anticipatory anxiety reflects a cognitive state focused on future uncertainties rather than immediate realities. Research on AI anxiety [47,49] highlights primary fears, including job loss, unemployment, and AI surpassing human intelligence. While valid, these concerns often remain abstract, influencing users’ perceptions more than their everyday experiences. This aligns with findings that anticipatory anxiety frequently exacerbates concerns about potential rather than current threats [51]. As a result, anticipatory anxiety represents a psychological barrier that influences both individual and organizational decision-making processes.
Given the growing implementation of AI tools in our daily and organizational lives,
understanding the intersection of AI and anticipatory anxieties can inform strategies to
reduce psychological barriers to AI adoption. For example, addressing anticipatory anxiety
could mitigate avoidance behaviors, foster public confidence, and promote adoption, which
are key factors for leveraging AI systems effectively in organizational and decision-making
contexts. Furthermore, reducing anticipatory anxiety could improve trust in AI-driven
decision-making, mitigate user resistance, and facilitate ethical and effective system design.
By addressing this anxiety, organizations can create a pathway for broader acceptance and
more sustainable implementation of AI technologies.
Hypothesis 2. Anticipatory anxiety is positively correlated with AI anxiety.
The hypothesized positive correlation between AI anxiety and anticipatory anxiety is grounded in theories that connect anticipatory emotions with domain-specific anxieties. Cognitive-behavioral frameworks suggest that anticipatory anxiety reflects heightened emotional arousal triggered by perceived threats, even when those threats remain hypothetical [54,55]. This theoretical perspective aligns closely with the psychological mechanisms underlying AI anxiety, where individuals experience fear related to potential future disruptions caused by AI technologies. These disruptions often include concerns about the loss of human agency or autonomy, further amplifying anticipatory anxiety in contexts where AI systems are perceived as unpredictable or overly autonomous [3].
Based on the role that anticipatory anxiety plays in shaping technology adoption, it is crucial to understand the connection between anticipatory anxiety and digital tool usage. Digital transformation initiatives often rely on managers’ ability to evaluate engagement levels and identify resistance among employees and stakeholders. Proactively addressing this question allows for a more effective allocation of resources, mitigates barriers, and refines implementation strategies, thus maximizing the efficiency and impact of these initiatives [56]. A pivotal aspect of this process is addressing anticipatory anxiety, which often arises from unfamiliarity with new technologies. When individuals lack exposure to emerging digital tools, they tend to overestimate risks and complexity, exacerbating their unease [51]. In the current work, we argue that experience plays a critical role in mitigating such anxieties. This is based on the premise that as individuals interact more frequently with digital tools, they become less intimidated by their perceived complexity [57]. Familiarity reduces both cognitive and emotional uncertainty, fostering confidence and diminishing avoidance behaviors. In line with this claim, research consistently shows that repeated exposure to technology reduces uncertainty, builds confidence, and decreases emotional resistance [53,54]. This aligns with findings that familiarity fosters competence and control, thereby alleviating domain-specific anxieties [55]. Accordingly, we hypothesize the following:
Hypothesis 3. Increased usage of AI is associated with decreased anticipatory anxiety.
While anticipatory anxiety focuses on the immediate uncertainties and risks of AI
adoption, a deeper layer of concern emerges as AI becomes increasingly integrated into
daily life. This deeper layer, referred to as annihilation anxiety, shifts the focus to the abyss
side of AI anxiety, which represents more profound psychological concerns. The shadow
and abyss sides form a spectrum of anxieties, ranging from fears about adoption and
adaptation to existential threats. On the surface, anticipatory anxiety reflects individuals’
apprehension about the uncertainties and potential disruptions caused by AI. On the other
side, annihilation anxiety delves into fears about the erosion of human identity, autonomy,
and the boundaries that have traditionally distinguished humans from machines. As AI
continues to reshape human interaction and even foster dependency, these existential fears
underscore the psychological depth of AI anxiety.
Annihilation anxiety reflects a fear for the survival of the self or ego [58], encompassing the dread of mental, physical, or symbolic destruction and extinction [59]. Richardson [60] explored this concept in the context of AI and machines, proposing that annihilation anxiety is triggered when humans and nonhumans become comparable. This comparison threatens to “erase the difference between humans and nonhumans” (p. 6), symbolically undermining what makes humans unique. The rapid advancements in AI systems that mimic or surpass specific aspects of human cognition intensify this psychological unease. As AI is increasingly perceived not only as a tool but also as a potential rival, it fosters a profound sense of vulnerability and existential threat [61,62].
This existential fear is not merely theoretical. The rapid development of AI challenges traditional visions of harmonious human–machine collaboration, raising concerns about whether synergy and mutual enhancement can be achieved [63]. Instead, there is growing apprehension that AI might dominate or undermine human roles, fueling a broader narrative of technological displacement. This narrative exacerbates annihilation anxiety, as individuals fear not only the loss of jobs or societal roles but also the erosion of what it means to be human in the face of increasingly intelligent and autonomous machines.
To fully grasp the complexities of AI anxiety, it is crucial to address both its immediate
(shadow) and existential (abyss) dimensions. Annihilation anxiety delves into deeper,
more abstract concerns about the erosion of human identity, autonomy, and the uniqueness
of human intelligence. Together, these dimensions offer a comprehensive framework for
examining AI anxiety, underscoring its dual nature—rooted both in present-day interactions
and in long-term existential threats.
Hypothesis 4. Annihilation anxiety is positively correlated with AI anxiety.
This hypothesis highlights the interplay between existential fears and broader concerns
about AI adoption. Annihilation anxiety, rooted in apprehensions about the erosion of
human identity and autonomy, amplifies general AI anxiety by framing AI systems as both
a technological tool and a potential existential threat. Individuals experiencing heightened
annihilation anxiety may perceive AI as challenging fundamental human distinctions,
which intensifies their overall sense of unease. These findings emphasize the importance of
understanding how existential dimensions of anxiety shape attitudes toward AI, offering
critical insights for designing interventions that address these deeper concerns.
Hypothesis 5. Increased usage of AI is associated with an increase in annihilation anxiety.
This hypothesis suggests that deeper engagement with AI may intensify existential
concerns by challenging the perceived boundaries between human and machine capabil-
ities. As individuals interact with systems that mimic or surpass human skills, fears of
obsolescence and the erosion of human distinctiveness may heighten [60]. Such anxieties
often stem from a perceived loss of autonomy and control as AI systems increasingly perform tasks traditionally associated with human expertise [18,45]. This dynamic reflects the broader psychological impact of advanced technologies, where the blurring of human–machine boundaries can evoke a sense of vulnerability and existential threat [61,62].
Understanding the interplay between annihilation anxiety and AI usage provides valuable
insights into how individuals and organizations navigate these psychological barriers.
Such insights are crucial for developing governance frameworks that optimize AI’s benefits
while mitigating its psychological and existential risks.
1.2. From Barriers to Drivers: Motivational Factors in AI Adoption
While addressing anticipatory and annihilation anxieties is crucial, focusing solely
on these barriers provides an incomplete perspective on AI adoption. Equally important
is the understanding of the motivations that drive the use of AI tools. As organizations
increasingly integrate AI into their operations, recognizing both the hindrances and enablers
of adoption becomes essential. Strategies to mitigate user concerns must be complemented
by initiatives that foster positive engagement and sustain motivation, thereby facilitating
smoother digital transformation efforts.
To this end, Yurt and Kasarci [64] developed the Questionnaire of AI Use Motives
(QAIUM), which identifies five key motivational dimensions influencing AI adoption.
These dimensions include (1) Expectancy, reflecting self-efficacy and the perceived ability
to effectively use AI; (2) Attainment, representing the value placed on mastering and
enhancing AI-related skills; (3) Utility Value, highlighting the practical benefits of AI for
personal and professional growth; (4) Intrinsic/Interest Value, capturing the enjoyment
and satisfaction derived from interacting with AI; and (5) Cost, which accounts for the
effort, time, and sacrifices required to adopt and use AI tools. Together, these dimensions
provide a robust framework for understanding the factors that motivate user engagement
with AI technologies.
The importance of these motivational dimensions is further supported by research identifying specific drivers of AI adoption. Competitive advantages and performance improvement have been highlighted as powerful motivators, particularly in professional contexts [65]. Similarly, digital competence, which encompasses the skills and confidence needed to use AI effectively [66], and hedonic motivation, the enjoyment derived from using AI tools [67], underscore the role of positive reinforcement in driving adoption. These motivators align with Self-Determination Theory (SDT) [68], which emphasizes the significance of perceived autonomy, competence, and relatedness in fostering intrinsic motivation [69]. Studies show that intrinsic motivation often yields greater engagement and sustained effort than extrinsic factors, providing critical insights for organizations aiming to enhance AI integration.
Building on this foundation, we propose that increased usage of AI is positively
associated with heightened AI use motives. As individuals engage more frequently with
AI tools, they may develop stronger intrinsic motivations, such as a sense of competence
and enjoyment, alongside extrinsic motivations, such as perceived utility and professional
benefits. This dynamic interaction creates a positive feedback loop, where motivation
reinforces engagement and fosters deeper, more sustained interactions with AI technologies.
These findings highlight the importance of addressing not only the barriers to AI adoption
but also the motivational factors that encourage its effective and responsible use.
Hypothesis 6. Increased usage of AI is associated with an increase in AI Use Motives.
1.3. AI Dependency: A New Behavioral Addiction
Building on the critical understanding of the psychological and motivational factors that influence AI usage, it is essential to examine how the use of AI technologies evolves, particularly as initial anxieties give way to dependency. Historically, technological advancements have followed a trajectory marked by early apprehension and eventual reliance, a process well described by the Diffusion of Innovations theory [70]. For example, when personal computers were first introduced, concerns about dehumanization and the potential loss of control were widespread [52]. Similarly, the early adoption of mobile phones was met with fears about their societal impact, including associations with crime, social anxiety, and uncertainty [71,72]. Over time, however, these technologies became indispensable tools for communication, work, and personal organization, transforming initial skepticism into deep-seated dependency [73–76]. The goal of this study is to identify the factors driving this progression, thereby informing strategies that promote healthy engagement with AI while mitigating the risks of maladaptive reliance.
Examining the transition from AI anxiety and motivators to dependency through the
lens of historical patterns offers valuable insights into the lifecycle of AI adoption. As
discussed earlier, AI systems, like their technological predecessors, are often met with skepticism and apprehension [3,45]. However, their integration into daily and organizational
life suggests the potential for a parallel trajectory, where increased usage alleviates initial
fears but also fosters dependency. This shift has profound implications for understanding
how users interact with AI over time. As familiarity with AI tools grows, initial anxieties
may diminish, driven by enhanced competence and perceived utility, as highlighted in
earlier sections. Nevertheless, this familiarity can increase reliance as users turn to AI
for decision-making, task execution, and productivity improvements. This dual dynamic,
where usage reduces barriers while simultaneously creating risks of over-reliance, rep-
resents a critical area for exploration. Understanding how these transitions occur will
illuminate the psychological and organizational factors contributing to dependency and guide the development of governance frameworks and ethical strategies that balance the benefits of AI adoption with its inherent challenges.
As individuals and organizations increasingly adopt AI, understanding this evolution
is essential to anticipate its broader impacts. If initial fears are gradually replaced by
dependency, the ethical and operational challenges surrounding AI will likely shift. For
instance, increased reliance on AI systems may exacerbate concerns about accountability,
transparency, and the unequal distribution of technological benefits. Dependency on
AI also raises critical questions about human agency in decision-making and the risks
associated with over-reliance on automated systems. By examining whether AI follows a
trajectory from initial anxiety to dependency, the current work aims to provide a deeper
understanding of the psychological and governance challenges posed by AI tools. These
insights are crucial for informing policies and practices that foster responsible AI adoption,
balancing innovation’s benefits with the need to address its risks and uncertainties.
With the increasing prevalence of AI in daily life, a distinct behavioral pattern known as AI Dependency is emerging. While earlier research often used terms like “overreliance”, “addiction”, and “dependency” interchangeably [77–79], AI dependency is now being recognized as a specific form of behavioral addiction. Behavioral addictions are defined by excessive engagement in activities that are generally acceptable when practiced in moderation. However, individuals experiencing such addictions often struggle to regulate their behavior, even when it leads to significant personal distress, impaired academic or occupational functioning, or strained relationships [80].
In the context of AI, problematic usage can manifest as compulsive engagement with
tools like ChatGPT, where excessive reliance impairs various aspects of an individual’s
life [81]. The Problematic ChatGPT Use Scale (PCUS) identifies key predictors of such behaviors, including increased weekly usage time and elevated levels of depression, illustrating the broader psychological toll of excessive AI use. This highlights the need to understand how certain patterns of interaction with AI evolve into maladaptive behaviors. However, AI dependency extends beyond general problematic use by reflecting a deeper, compulsive reliance on AI systems to perform everyday tasks, often at the expense of personal well-being and autonomy. For example, Zhang et al. [82] found that AI dependency in academic contexts is strongly linked to performance expectancy—the belief that using a specific system will enhance one’s job performance [42]. This suggests that individuals who perceive AI as indispensable for achieving success are more likely to develop dependency, especially in high-pressure environments where efficiency and outcomes are emphasized. AI dependency has significant consequences. Laestadius et al. [83] highlighted that excessive reliance on AI, especially in social media and digital technology, is associated with adverse mental health outcomes, including heightened anxiety and depression. These findings align with broader research on behavioral addictions, where dependency exacerbates underlying psychological vulnerabilities [79]. Consequently, AI dependency may create a feedback loop: increased usage leads to dependency, which, in turn, amplifies anxiety and reinforces reliance on AI systems as a coping mechanism.
In the context of the current work, prior studies indicate that AI anxiety may play a pivotal role in driving compulsive behaviors, such as excessive reliance on AI tools. For instance, findings from digital addiction research suggest that anxiety often exacerbates problematic usage patterns [84]. Users with heightened AI anxiety may over-engage with these tools, either as an ineffective coping strategy to manage their fears or due to heightened concerns about staying competent in an increasingly AI-driven world. While problematic usage patterns are often regarded as detrimental, they may also signal greater engagement and adaptability in certain contexts, such as digital transformation initiatives. For example, Qiao et al. [85] observed that problematic usage behaviors among digital leaders implementing AI adoption strategies could reflect proactive efforts to integrate emerging technologies effectively. This dual perspective highlights the nuanced nature of “problematic” use in professional environments, suggesting it can sometimes indicate a high level of commitment and adaptability rather than purely maladaptive behavior. This complexity points to the importance of examining how psychological factors like anxiety influence AI usage. By exploring the interplay between AI anxiety and dependency, we aim to provide a deeper understanding of this relationship, contributing to the broader discourse on AI adoption and its implications for organizational and individual behavior.
Based on the theoretical rationale outlined above, we hypothesize the following:
Hypothesis 7. AI dependency is positively correlated with AI anxiety.
Finally, the frequency and intensity of AI usage might also play a key role in driving the development of dependency. As individuals increasingly interact with AI tools, the line between productive use and compulsive engagement can become blurred. The accessibility and perceived utility of these tools, particularly for professional or personal tasks, may encourage habitual reliance, creating opportunities for dependency to emerge. At the global level, overuse of AI tools may lead to negative environmental consequences (e.g., energy use and dissipation of thermal energy into the atmosphere). This phenomenon aligns with findings from prior research, which highlight how repeated exposure can foster problematic usage patterns [81]. Building on these foundations, the current work also examines the role of AI usage in shaping dependency. As reliance on AI tools grows, individuals may become increasingly dependent on them to perform everyday tasks, particularly in contexts emphasizing efficiency, performance, and outcomes. This dynamic underscores the need to
understand how the frequency and intensity of AI interactions contribute to dependency
and its broader implications for users’ well-being and autonomy.
Hypothesis 8. Increased usage of AI is associated with increased AI dependency.
Building on behavioral addiction frameworks, we expect increased frequency and intensity of AI usage to contribute to the development of dependency as individuals increasingly perceive these systems as indispensable [86]. This dynamic may escalate from habitual reliance to compulsive behaviors, blurring the line between productive engagement and problematic use [87]. Understanding the interplay between AI usage, dependency, and anxiety is essential for addressing the psychological and organizational challenges posed by the widespread adoption of AI tools. By identifying these dynamics, this work aims to inform policymakers and design interventions that promote healthier relationships with AI, particularly in high-pressure environments where efficiency and performance demands may amplify the risks of over-reliance.
2. Materials and Methods
2.1. Participants
A total of 242 participants completed the study. The sample comprised individuals
aged 18 to 73 years old (M = 34.08, SD = 11.36), with a balanced representation of gender
identities: 48% female and 52% male. Participants were geographically diverse, represent-
ing multiple countries, with most residing in English-speaking regions. Participants were
recruited through the Prolific platform. To ensure data integrity, only individuals with a
prior approval rating of 95% or higher on the platform were invited to participate, adhering
to established best practices in online research [45]. Data collection took place in October
2024. All participants provided informed consent and completed the study questionnaire
in its entirety.
Ethical approval was obtained from the authors’ institutional review board. Participa-
tion was entirely voluntary, and the study complied with GDPR and other relevant data
protection laws. The survey was administered anonymously, with no personal identifying
information collected beyond what was managed by Prolific for payment purposes. This
approach ensured participant privacy and confidentiality. The sampling strategy was
guided by the study’s focus on organizational implications. While the general population
was targeted, additional efforts were made to include participants employed in organiza-
tional contexts where AI tools are increasingly relevant. Recruitment was not restricted by
occupational background, ensuring a diverse range of professional experiences with AI
tools. This approach aligns with our aim to generalize findings to organizational adoption
of AI tools.
2.2. Design and Procedure
The study utilized a cross-sectional survey design to explore the psychological dimen-
sions of AI adoption, including anxiety, motivational factors, and dependency. Data were
collected through an online survey administered via the Qualtrics platform. The survey
included five validated scales measuring distinct constructs related to AI adoption. To
maintain consistency and minimize potential biases, the scales were presented in the same
order to all participants. Clear instructions were provided at the beginning of each section
to ensure comprehension.
After providing informed consent, participants completed the survey, which was
designed to take approximately 15–20 min. Participants who indicated familiarity with AI
tools completed all sections of the survey, while those unfamiliar with AI were directed to
the end of the survey. This approach ensured that only relevant data were collected for the
study hypotheses.
The first part of the questionnaire was the AI Anxiety Scale (AIA) [47]. This scale
measured participants’ general anxiety about interacting with AI. Sample items included
statements such as, “Taking a class about the development of AI techniques/products
makes me anxious”, and “I am afraid that AI techniques/products will replace some-
one’s job”. Participants responded using a 7-point Likert scale (1 = Strongly Disagree to
7 = Strongly Agree). This scale demonstrated high reliability, with a Cronbach’s alpha
of 0.94.
The second part of the survey included the Anticipatory Anxiety Inventory (AAI) [88],
which evaluated anxiety experienced in anticipation of anxiety-provoking events. Sample
items included, “As the time for the anxiety-provoking event approaches, I feel more
panicky than I would feel earlier”, and “Before an anxiety-provoking event, I am under
a lot of pressure”. Responses were recorded on a 4-point Likert scale (1 = Not True to
4 = Totally True), with a Cronbach’s alpha of 0.95.
The third and fourth parts of the survey were the Annihilation Anxiety Scale (ANAS)
and the AI Dependency Scale (ADS), respectively. The ANAS [89] assessed existential
threats and fears, such as the loss of identity or group extermination. Sample items included,
“I sometimes worry that I will lose my sense of self” and “I feel personally threatened
by extreme inequalities in society”. Responses were captured on a 5-point Likert scale
(1 = Strongly Disagree to 5 = Strongly Agree). This scale demonstrated a Cronbach’s alpha
of 0.78. The ADS [90] measured reliance on AI tools in daily life. Sample items included “I
feel unprotected without AI tools” and “I rely heavily on AI for my tasks”. Responses were
recorded on a 5-point Likert scale (1 = Completely False to 5 = Completely True), with a
Cronbach’s alpha of 0.75.
The final part of the survey utilized the Questionnaire of AI Use Motives (QAIUM) [64],
which explored motives for AI usage across five dimensions: Expectancy, Attainment,
Utility Value, Intrinsic/Interest Value, and Cost. Sample items included, “I can learn the
skills that enable effective use of artificial intelligence applications” (Expectancy) and “I take
pleasure in using artificial intelligence applications” (Intrinsic/Interest Value). Participants
responded using a 5-point Likert scale (1 = Completely False to 5 = Completely True). This
scale demonstrated high reliability, with a Cronbach’s alpha of 0.96 across motives.
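Because each scale’s internal consistency is summarized with Cronbach’s alpha, a brief illustration of how that coefficient is computed may be useful. The sketch below assumes a respondents-by-items matrix of Likert ratings; the data and variable names are illustrative, not taken from the study, which computed its alphas in standard survey software.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: five respondents answering a four-item 7-point Likert scale.
ratings = np.array([
    [5, 6, 5, 6],
    [2, 3, 2, 2],
    [7, 6, 7, 7],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))  # prints ~0.98 for this toy data

The study’s scales fall between 0.75 and 0.96 on this coefficient, which is conventionally read as acceptable to excellent internal consistency.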
2.3. Data Analysis
The data were analyzed using SPSS software (version 29). Descriptive statistics were
calculated to summarize participant demographics and responses on the study scales. To
test the first research hypothesis, a quadratic regression analysis was conducted to examine
the U-shaped relationship between AI Usage and AI Anxiety. For subsequent hypotheses,
Pearson correlation analyses were employed to investigate the relationships between AI
Usage and anxiety-related variables, including Dependency and Motivation. In cases
where linear correlations were not significant, quadratic regression models were applied to
explore potential non-linear relationships. The criterion for statistical significance was set
at an alpha level of 0.05 (α = 0.05). These analytical approaches enabled a comprehensive
examination of both linear and non-linear associations across the study variables.
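For readers who prefer a concrete view of this pipeline, the following sketch reproduces its main steps in Python. The study itself ran the analyses in SPSS; the file and column names here are hypothetical placeholders.

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical survey export: one row per participant, one column per scale score.
df = pd.read_csv("ai_adoption_survey.csv")

# Descriptive statistics for the study scales (cf. Table 1).
scales = ["ai_usage", "ai_anxiety", "anticipatory_anxiety",
          "annihilation_anxiety", "ai_dependency", "ai_use_motives"]
print(df[scales].describe())

# Pearson correlations for the linear hypotheses (cf. Table 2).
r, p = stats.pearsonr(df["ai_usage"], df["anticipatory_anxiety"])
print(f"Usage x Anticipatory Anxiety: r = {r:.2f}, p = {p:.3f}")

# Quadratic regression used when a non-linear (U-shaped) relationship is expected.
model = smf.ols("ai_anxiety ~ ai_usage + I(ai_usage ** 2)", data=df).fit()
print(model.summary())  # reports the linear and quadratic coefficients, R-squared, and the F test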
3. Results
3.1. Descriptive Statistics
Table 1 presents the descriptive statistics for all study variables, including means,
standard deviations (SD), and observed ranges. AI Usage exhibited the highest mean score
(M = 4.42, SD = 0.93), reflecting frequent engagement with AI technologies within the
sample. AI Anxiety (M = 3.61, SD = 1.10) and AI Use Motives (M = 3.62, SD = 0.75) were
also moderately high, suggesting that both concerns about AI and motivations for using
AI were prevalent among participants. In contrast, the measures of Anticipatory Anxiety
(M = 2.38, SD = 1.07), Annihilation Anxiety (M = 2.42, SD = 0.80), and AI Dependency
(M = 2.29, SD = 0.87) indicated more moderate levels of these psychological dimensions.
These trends suggest that while participants frequently engaged with AI and demonstrated
significant motivational factors, concerns such as dependency and extreme anxieties were
less pronounced.
Table 1. Mean, SDs, Minimum, and Maximum for study variables.
Variable M SD Min Max
AI Usage 4.42 0.93 1.00 5.00
AI Anxiety 3.61 1.10 1.05 6.19
Anticipatory Anxiety 2.38 1.07 1.00 4.00
Annihilation Anxiety 2.42 0.80 1.00 5.00
AI Dependency 2.29 0.87 1.00 5.00
AI Use Motives 3.62 0.75 1.29 5.00
3.2. Hypothesis Tests
Hypothesis 1. There is a U-shaped relationship between AI usage and AI anxiety, with a turning
point at which increased usage is associated with an increase in AI anxiety.
To test the U-shaped relationship between AI Usage and AI Anxiety, we conducted a
quadratic regression analysis. The results confirmed the significance of the quadratic term,
indicating a U-shaped relationship (Figure 2). Specifically, the linear coefficient (b1 = –0.69, SE = 0.29, p = 0.047) and the quadratic coefficient (b2 = 0.94, SE = 0.04, p = 0.007) were both statistically significant. The model accounted for 8.6% of the variance in AI Anxiety (R2 = 0.086) and was statistically significant (F(2,229) = 10.83, p < 0.001). These results
suggest that AI Anxiety initially decreases as AI Usage increases, consistent with the notion
that moderate engagement fosters familiarity and reduces fear. However, at higher levels
of usage, AI Anxiety begins to rise again, potentially reflecting heightened concerns about
dependency, control, or ethical implications. This finding emphasizes the complexity of AI
adoption processes, highlighting the dual effects of usage on anxiety. Thus, organizations
should encourage balanced AI engagement, where users are supported in their adoption
journey to avoid over-reliance or heightened concerns at extreme usage levels.
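The turning point referred to in Hypothesis 1 is simply the vertex of the fitted parabola, located at –b1 / (2 × b2). A minimal sketch with placeholder coefficients, not the study’s estimates, illustrates the arithmetic:

# Vertex of a fitted quadratic y = b0 + b1*x + b2*x**2 lies at x* = -b1 / (2 * b2).
# Placeholder values for illustration only, not the study's estimates.
b1, b2 = -1.8, 0.3
turning_point = -b1 / (2 * b2)
print(turning_point)  # 3.0: anxiety would be lowest near a usage score of 3 and rise beyond it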
To explore the associations among all study variables and specifically address the
research hypotheses, we conducted a Pearson correlation matrix analysis (Table 2). This
method enabled us to assess the direction, strength, and significance of the relationships
between the variables, providing valuable insights into their connections.
Table 2. Pearson correlation matrix for study variables (N = 242).

Variable                    2          3          4          5          6
1. AI Usage                 0.24 ***   –0.15 *    –0.06      0.27 ***   0.38 ***
2. AI Anxiety               ---        0.45 ***   0.51 ***   –0.02      0.40 ***
3. Anticipatory Anxiety                ---        0.36 ***   0.01       0.18 **
4. Annihilation Anxiety                           ---        0.17 **    0.05
5. AI Dependency                                             ---        0.55 ***
6. AI Use Motives                                                       ---

* p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 2. U-shaped quadratic relationship between AI usage and AI anxiety.
Hypothesis 2. Anticipatory anxiety is positively correlated with AI anxiety.
A strong positive correlation was observed between Anticipatory and AI Anxieties
(r = 0.45, p< 0.001). This supports the hypothesis that individuals experiencing greater
anticipatory fears about potential disruptions caused by AI also report higher overall AI
Anxiety. This finding aligns with theoretical frameworks suggesting that anticipatory anxi-
ety amplifies broader anxieties when individuals perceive future threats as unpredictable
or significant.
Hypothesis 3. Increased usage of AI is associated with decreased anticipatory anxiety.
A statistically significant negative correlation was found between AI Usage and An-
ticipatory Anxiety (r = –0.15, p= 0.028), indicating that higher levels of AI engagement
are associated with reduced anticipatory fears. This finding suggests that familiarity with
AI tools may alleviate unease about their perceived complexity or disruptive potential,
consistent with the principles of exposure therapy. These results emphasize the role of early
exposure and training in reducing psychological barriers to AI adoption.
Hypothesis 4. Annihilation anxiety is positively correlated with AI anxiety.
Annihilation and AI Anxieties exhibited a strong positive correlation (r = 0.51,
p< 0.001), supporting the hypothesis that existential concerns about the erosion of human
identity and autonomy intensify broader anxieties about AI. This finding highlights the
role of deeper psychological fears in shaping attitudes toward AI technologies, particularly
as they challenge traditional human–machine distinctions.
Hypothesis 5. Increased usage of AI is associated with an increase in annihilation anxiety.
Hypothesis 5 was not supported, as no significant linear relationship was found between AI Usage and Annihilation Anxiety (r = –0.06, p = 0.364). However, a quadratic relationship was identified (Figure 3). Regression analysis revealed an inverted U-shaped curve, with the linear coefficient (b1 = 0.82, SE = 0.45, p = 0.038) and quadratic coefficient (b2 = –0.89, SE = 0.06, p = 0.024) both contributing to the model. The model accounted for 2.6% of the variance in Annihilation Anxiety (R2 = 0.026) and was statistically significant (F(2,229) = 3.00, p = 0.050).
Figure 3. Quadratic relationship between AI usage and annihilation anxiety.
These results suggest that initial increases in AI usage are associated with heightened
Annihilation Anxiety, potentially due to exposure to new and unfamiliar systems that
challenge human distinctiveness. However, at higher levels of usage, these anxieties
decrease, possibly reflecting users’ adaptation and acceptance of AI as a complement
to human capabilities. This finding highlights the nuanced relationship between usage
intensity and existential fears.
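To make the post hoc analysis above concrete, the sketch below shows how a quadratic (inverted U-shaped) relationship of this kind could be tested with ordinary least squares in Python. The simulated data, variable names, and coefficient values are illustrative assumptions, not the study's dataset or analysis code; only the general technique of regressing the outcome on linear and squared usage terms mirrors the reported model.

```python
# Minimal sketch: testing an inverted-U (quadratic) relation between AI usage
# and annihilation anxiety. Data and variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 232  # chosen to match the reported degrees of freedom, F(2, 229)

ai_usage = rng.uniform(1, 7, size=n)  # e.g., usage scored on a 1-7 scale
# Simulated outcome with a mild inverted-U shape plus noise (illustrative only)
annihilation_anxiety = 0.8 * ai_usage - 0.09 * ai_usage**2 + rng.normal(0, 1.5, size=n)

# Design matrix with intercept, linear term, and squared term
X = sm.add_constant(np.column_stack([ai_usage, ai_usage**2]))
model = sm.OLS(annihilation_anxiety, X).fit()

print(model.params)    # b0, b1 (linear), b2 (quadratic; negative implies an inverted U)
print(model.rsquared)  # proportion of variance explained
print(model.f_pvalue)  # overall model significance
```

A negative quadratic coefficient is what produces the inverted-U pattern in Figure 3: the fitted curve rises over low-to-moderate usage and turns downward at higher usage levels.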
Hypothesis 6. Increased usage of AI is associated with an increase in AI Use Motives.
A positive correlation was observed between AI usage and AI Use Motives (r = 0.27, p < 0.001), supporting the hypothesis that greater engagement with AI tools enhances
users’ motivations for their adoption. This finding aligns with motivational frameworks
suggesting that frequent interaction with AI fosters both intrinsic and extrinsic motivators,
such as perceived utility, enjoyment, and skill attainment. These results emphasize the
dynamic feedback loop between AI usage and user motivation, where increased exposure
strengthens users’ perceived benefits and enjoyment of AI tools. This insight is particularly
relevant for organizations aiming for sustained engagement by leveraging motivational
strategies during onboarding and implementation processes.
Hypothesis 7. AI dependency is positively correlated with AI anxiety.
Hypothesis 8. Increased usage of AI is associated with increased AI dependency.
No significant relationship was found between AI dependency and AI anxiety (r = −0.02, p = 0.701), failing to support Hypothesis 7. This result does not align with findings that link dependency and anxiety [83], suggesting that these constructs may operate independently in the context of AI adoption. One possible explanation is that while dependency may reflect a behavioral reliance on AI tools, it does not inherently evoke the emotional distress or existential concerns captured by AI Anxiety. This finding illustrates the complexity of AI-related psychological factors and highlights the need for further investigation into how dependency interacts with other dimensions of AI use, such as motivation or perceived utility.
Lastly, Hypothesis 8 was supported by a positive correlation between AI usage and AI dependency (r = 0.27, p < 0.001). This indicates that greater engagement with AI tools
is associated with higher reliance on these technologies. These findings suggest that as
individuals use AI more frequently, they may view these tools as indispensable, particularly
for tasks requiring efficiency or accuracy. While this dependency might be beneficial in
terms of productivity, it also raises concerns about over-reliance. Organizations should aim
to foster healthy engagement by ensuring AI tools are positioned as supportive aids rather
than replacements for human skills. Training programs emphasizing balanced use may
help mitigate the risks associated with excessive reliance.
Table 3 provides a comprehensive summary of the results for each hypothesis, detailing the associated variables, correlation coefficients (R and R²), type of correlation observed, significance levels, and the acceptance status of each hypothesis.
Table 3. Results of Hypotheses testing (H1 to H8).

| Hypothesis | Variables | R | R² | Correlation Type | Sig | Hypotheses Acceptance |
|---|---|---|---|---|---|---|
| H1 | AI usage and AI anxiety | 0.29 | 0.086 | U-shape | <0.001 | Accepted |
| H2 | Anticipatory and AI anxiety | 0.45 | 0.202 | Linear | <0.001 | Accepted |
| H3 | AI usage and anticipatory anxiety | 0.15 | 0.022 | Linear | 0.028 | Accepted |
| H4 | Annihilation anxiety and AI anxiety | 0.51 | 0.260 | Linear | <0.001 | Accepted |
| H5 | Annihilation anxiety and AI usage | 0.16 | 0.026 | Reversed U-shape | 0.050 | Declined (Post Hoc Analysis) |
| H6 | AI usage and AI Use motives | 0.38 | 0.144 | Linear | <0.001 | Accepted |
| H7 | AI dependency and AI anxiety | 0.02 | <0.001 | Linear | 0.710 | Declined |
| H8 | AI usage and AI dependency | 0.27 | 0.073 | Linear | <0.001 | Accepted |
3.3. Additional Findings
The correlation matrix presented in Table 2 highlights several noteworthy relationships. Notably, there were strong positive correlations between Anticipatory Anxiety and Annihilation Anxiety (r = 0.36, p < 0.001), as well as between AI dependency and AI Use Motives (r = 0.55, p < 0.001). Conversely, a strong negative correlation was observed between AI anxiety and AI Use Motives (r = −0.40, p < 0.001), suggesting that higher levels of anxiety toward AI are associated with reduced motivational factors for its use. These findings emphasize the intricate interplay between anxiety, dependency, and usage motives within the context of AI-related behaviors.
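As a minimal illustration of how such a correlation matrix could be produced, the sketch below computes pairwise Pearson correlations among scale scores in Python. The data frame, column names, and simulated values are hypothetical placeholders rather than the study's data; only the technique of correlating respondents' scale scores corresponds to the reported analysis.

```python
# Minimal sketch: Pearson correlation matrix across the study's scales.
# All values and column names below are illustrative placeholders.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 232  # illustrative sample size

df = pd.DataFrame({
    "anticipatory_anxiety": rng.normal(3.0, 1.0, n),
    "annihilation_anxiety": rng.normal(2.5, 1.0, n),
    "ai_anxiety": rng.normal(3.2, 1.0, n),
    "ai_use_motives": rng.normal(4.0, 1.0, n),
    "ai_dependency": rng.normal(2.8, 1.0, n),
    "ai_usage": rng.normal(3.5, 1.0, n),
})

# Full correlation matrix (Pearson r) across all scales
print(df.corr(method="pearson").round(2))

# Significance test for a single pair, e.g., AI anxiety and AI use motives
r, p = stats.pearsonr(df["ai_anxiety"], df["ai_use_motives"])
print(f"r = {r:.2f}, p = {p:.3f}")
```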
4. Discussion
In a world where AI increasingly integrates into our lives, the current study examines the dual role of AI use motives and AI anxiety—encompassing both anticipatory and annihilation dimensions—in successfully adopting conversational AI agents, decision-support systems, and productivity tools. While AI can enhance efficiency [91] and support strategic decision-making [92], our findings highlight significant emotional and motivational challenges that must be addressed to ensure smooth and sustainable digital transformation. These challenges are particularly pronounced in the U-shaped relationship observed between AI usage and anxiety, which is in line with our first hypothesis. This dual relationship demonstrates that moderate engagement reduces anxiety, but both low and high levels of usage amplify it. Such insights are crucial for organizations seeking to mitigate psychological barriers while maximizing the benefits of AI.
One significant finding of the current study is the strong association between anticipatory and AI anxiety, which signifies the shadow side of AI anxiety and supports Hypothesis 2. Anticipatory anxiety often stems from perceived risks, such as job displacement, loss of autonomy, or ethical concerns surrounding AI’s expanding role in decision-making [93]. This emotional response is not unique to AI but reflects broader anxiety patterns observed during technological disruption. Historical parallels, such as the mechanization anxieties of the Industrial Revolution [31], illustrate that such fears are deeply rooted in human responses to change. These findings align with prior research indicating that anticipatory emotions significantly influence technology adoption behaviors [94].
To address anticipatory anxiety, organizations should design interventions that reduce uncertainty, dissolve irrational fears, and set clear expectations. Transparent communication in the organization about the transition can reduce ambiguity and foster trust by providing regular updates about the purpose, scope, and impact of AI implementation [95]. Equally important can be targeted training programs, which offer employees hands-on experience and tailored support to demystify AI systems, enhance confidence, and reduce fear [96]. Additionally, establishing ongoing support systems, such as channels for feedback and dialogue, ensures that employees feel heard and supported throughout the transformation process [97]. Together, these interventions not only alleviate immediate concerns but also create an environment where employees are psychologically prepared to engage with AI systems.
The shadow side of AI anxiety highlights a critical gap in traditional technology adoption models. These models (e.g., the Unified Theory of Acceptance and Use of Technology—UTAUT [42]) prioritize rational factors like performance expectancy, effort expectancy, social influence, and facilitating conditions as primary determinants of user intentions. While valuable, such frameworks often fail to account for emotional and psychological factors influencing technology adoption. The current work emphasizes the need for a comprehensive approach that integrates strategies to overcome both cognitive and emotional barriers, ensuring the successful and sustainable incorporation of AI in the workplace or at home. This can be obtained, for example, with human-centric AI designs [98], which emphasize the development of transparent, user-friendly, and explainable systems. By clearly demonstrating how AI systems function and how decisions are made, this approach helps alleviate feelings of intimidation and uncertainty [99]. Transparency fosters trust in AI systems [100], while intuitive, user-friendly designs empower employees to interact confidently with AI, reducing the sense of being overwhelmed by technical complexities [101].
An essential component of this holistic approach is effective change management, which involves recognizing and addressing the emotional responses employees may experience during transitions to AI systems. Applying Kotter’s change management principles [102], managers can implement practical steps to support employees through the change process. This could include forming a guiding coalition to lead the change, identifying short-term goals that demonstrate early wins with AI deployment, and empowering employees through active participation. Ensuring employees’ voices are heard fosters a sense of ownership and psychological safety, making them more receptive to change and actively engaged in the transformation process. By addressing both the rational and emotional dimensions of AI adoption, organizations can create a balanced and inclusive approach that maximizes the benefits of AI while minimizing its psychological costs. This holistic approach enhances the effectiveness of AI integration and ensures employees feel supported and valued throughout the transition process.
The weak but significant negative correlation between the shadow side of AI anxiety
and adoption supports Hypothesis 3, indicating that heightened uncertainty about AI
discourages individuals from engaging with it. While anticipatory anxiety may not be a
primary determinant of AI adoption, its influence is notable. Individuals with higher levels
of shadow-side anxiety are less likely to use AI tools, potentially due to the discomfort
and hesitation arising from uncertainties inherent in emerging technologies. This finding
demonstrates the critical need to address emotional uncertainty as part of a comprehensive
digital adoption strategy. By mitigating anticipatory anxiety, organizations can foster an
environment of confidence and openness, ultimately encouraging more widespread and
effective AI engagement [103].
Another key finding is the strong association between annihilation and AI anxiety, which signifies the abyss side of AI anxiety. Supporting Hypothesis 4, this finding suggests that individuals who experience existential fears, such as concerns about losing personal identity or autonomy due to AI technologies, are more likely to experience heightened AI anxiety. The abyss side of AI anxiety highlights the profound psychological impact that AI technologies can evoke, extending beyond practical concerns to deeply ingrained existential fears [104]. The perception that AI threatens qualities unique to humans—such as autonomy, decision-making, and personal identity—intensifies anxiety. For managers, this finding indicates the importance of addressing not only the functional and procedural implications of AI adoption but also the existential concerns (abyss side) employees may harbor. Strategies to support employees could include fostering transparency and explainability through clear communication, promoting ethical self-governance with well-defined policies and procedures, and implementing continuous monitoring across the system lifecycle [105].
While Hypothesis 5 was not supported, a significant inverted U-shaped relationship was found between the abyss side of AI anxiety and AI adoption. Initially, individuals experience heightened existential fears as their use of AI increases. This may be attributed to a growing awareness of the potential existential threats posed by AI systems among those with low to moderate levels of familiarity and usage [104,106]. However, as users become more familiar with AI technology, existential anxieties tend to diminish [104]. In an organizational context, this relationship highlights the importance of framing AI as a supportive tool. Managers should emphasize that AI complements, rather than replaces, human work. By highlighting how AI can support employees’ existing roles and enhance efficiency rather than threatening their professional relevance or value, organizations can alleviate fears of obsolescence and foster a more productive and collaborative relationship with technology.
While the shadow and abyss sides of AI anxiety were found to be strong factors that hinder adoption, the current work also focused on identifying use motives. In line with Hypothesis 6, we found a significant positive correlation between AI usage and AI use motives [64]. These motives encompass expectancy, attainment, utility value, intrinsic/interest value, and cost. This finding suggests that individuals’ beliefs about their ability to successfully use AI tools, particularly to achieve personal or professional goals, are critical factors in AI adoption. Additionally, adoption is influenced by the perceived practical benefits of AI, such as increased efficiency or problem-solving capabilities, and by the inherent enjoyment or curiosity associated with engaging with AI systems.
This relationship suggests that as individuals’ motives to use AI across these dimensions increase, their engagement with AI tools also increases. For organizations aiming to enhance AI adoption, fostering these motivational components is crucial. For instance, they can improve expectancy by providing comprehensive training and support, emphasize attainment by aligning AI tools with employee goals, and promote utility value by showcasing the tangible benefits of AI in real-world tasks. Cultivating intrinsic value might encourage exploration and innovation with technologies while mitigating perceived costs through accessible and user-friendly designs. By addressing these motivational factors, organizations can drive sustained engagement with AI, ultimately boosting employee motivation, satisfaction, and productivity [107].
The third and final goal of the current study was to provide an initial understanding of the factors contributing to AI dependency. Two key findings emerged. First, in contrast to Hypothesis 7 and previous research [108], no correlation was found between AI dependency and AI anxiety. This lack of a significant correlation suggests that while anxiety may influence the decision to adopt AI, it does not necessarily translate into dependency among users. This aligns with research indicating that ChatGPT reduces technostress [109], potentially buffering against anxiety-related misuse. Instead, problematic use appears more closely tied to high levels of AI engagement, where over-reliance and excessive interaction may emerge as primary drivers of maladaptive behaviors.
Second, a significant positive correlation was found between AI usage and dependency, suggesting that frequent interaction with AI tools may foster reliance, which can escalate into maladaptive use. This demonstrates the danger of habitual use in cultivating dependency. If left unchecked, this habitual use could hinder critical thinking and reduce interpersonal engagement [78,110]. These findings highlight the dual-edged nature of AI adoption. While increased usage can enhance productivity [111] and innovation [112], it also poses risks, such as over-reliance, diminished social interaction, and uncritical acceptance of AI outputs. For organizations aiming to maximize the benefits of AI while minimizing its drawbacks, it is crucial to implement strategies that promote balanced and mindful adoption. This includes fostering a healthy relationship with AI by providing clear guidelines, monitoring usage patterns, and addressing the emotional and behavioral factors that shape engagement. Managers should remain vigilant in mitigating dependency risks, ensuring that AI is a tool to complement, rather than replace, essential human skills like critical thinking and collaboration. By adopting these strategies, organizations can harness the potential of AI while safeguarding employee well-being and performance.
4.1. Limitations
It is important to recognize several limitations of this study that may influence the
interpretation and generalizability of the findings. First, the research was conducted
during the early stages of AI adoption. Widely accessible AI tools have only recently
entered mainstream use, and as these technologies become more integrated and familiar
in daily life, anxiety levels may naturally decline over time. This limits our ability to fully
capture the long-term psychological and behavioral dynamics associated with AI usage.
Furthermore, concepts such as AI dependency, currently framed as emerging challenges,
may shift over time as they become normalized aspects of human–AI interactions.
Second, our cross-sectional design limited our ability to capture the dynamic and
longitudinal nature of psychological responses to AI. Factors such as anxiety, dependency,
and acceptance are likely to evolve as individuals gain more experience with AI technolo-
gies and as these tools become more ubiquitous in organizational and personal contexts.
Longitudinal studies are essential for understanding these temporal changes, offering a
clearer picture of how psychological responses develop parallel to AI integration. For
example, longitudinal data could illuminate whether anxieties in early adoption stages
diminish into confidence or intensify with usage due to dependency.
Third, we used a sample from Prolific, which is likely to have higher baseline techno-
logical familiarity and experience compared to the general population. This sample could
lead to an underestimation of AI anxiety and dependency levels and an overestimation of
usage. Thus, results might be limited to organizations where digital literacy is prominent.
Additionally, many AI applications operate seamlessly within everyday platforms without
explicit labeling, making it difficult for users to recognize when using AI-powered tools.
This lack of transparency introduces potential biases in participants’ responses, which can
be added to the biases that may stem from the reliance on self-reported data (e.g., social
desirability effects and recall inaccuracies).
Lastly, as noted earlier, we did not calculate effect sizes a priori, which could have
enhanced the study’s methodological rigor. While post hoc effect size estimates confirmed
the findings’ statistical significance, incorporating power analyses during the planning
phase would have ensured more precise sample size determinations. Future studies should
prioritize these methodological considerations to bolster transparency and reproducibility.
By addressing these limitations in future research, we can build a more comprehensive
understanding of the complex relationships between AI usage, psychological well-being,
and organizational outcomes.
4.2. Future Work
Future research can adopt a longitudinal approach to better understand how AI-related
anxiety and dependency evolve over time. Such studies would provide valuable insights
into the long-term psychological effects of increased familiarity with AI and its progressive
integration into everyday life. For instance, longitudinal designs could explore whether the
normalization of AI tools reduces anticipatory anxiety or, conversely, intensifies dependency
concerns. These findings would enhance our understanding of whether societal perceptions
of AI dependency and problematic use shift over time as these behaviors become more
ingrained in daily routines. Moreover, future studies should investigate demographic
factors such as age, education, and cultural context and differentiate between different
types of uses. This can help uncover variations in how different populations experience
AI adoption. For example, generational differences in technology familiarity or cultural
attitudes toward innovation may significantly influence levels of anxiety or dependency.
Examining these factors would provide a more nuanced understanding of how diverse
groups adapt to the challenges and opportunities presented by AI.
Research within organizational contexts, where AI adoption is often mandatory, repre-
sents another promising avenue for exploration. Understanding how employees respond
to AI integration under structured, professional conditions could reveal whether psy-
chological responses differ from voluntary or personal adoption scenarios. Such studies
could examine the effectiveness of tailored interventions, such as customized training
programs, skill-building initiatives, and transparent communication strategies, in miti-
gating AI anxiety and enhancing adaptability. By focusing on organizational contexts,
future research could also provide managers with practical tools to empower employees to
engage confidently with AI technologies. Strategies such as phased implementation, clear
articulation of AI’s role and benefits, and fostering psychological safety would support
smoother transitions and greater acceptance. These efforts are critical for fostering an inclu-
sive and productive environment where the potential of AI can be fully realized without
compromising employee well-being.
Finally, future research could employ more complex analytical approaches, such as
PLS-Structural Equation Modeling, to capture potential feedback loops and reciprocal
relationships between anticipatory anxiety, existential fears, and technological adaptation.
While our current study established directional relationships between these constructs,
the dynamic nature of human–AI interaction suggests that these relationships may also
be more intricate, with possible bidirectional influences and recursive patterns. For in-
stance, heightened abyss side of AI anxiety might not only result from existential fears but
could potentially reinforce them over time, creating self-reinforcing cycles that influence
technological adaptation.
4.3. Conclusions
The findings in the current work emphasize the importance of adopting a phased, strategic approach to AI integration within organizations. Gradual implementation minimizes disruption and anxiety among employees [113]. This phased approach ensures that employees first adapt to simpler digital tools before transitioning to complex AI systems [114]. Incremental exposure enables individuals to build confidence and familiarity, reducing cognitive and emotional resistance to change [115]. A phased implementation strategy also aligns with recent research emphasizing the value of transparent communication and hands-on training during technological transitions. For example, Phillips and Klein [116] highlight that training programs and open communication alleviate uncertainty and foster trust. Furthermore, providing opportunities for feedback ensures that employees feel supported throughout the transition process, mitigating the shadow side of AI anxiety [100]. Managers can foster psychological safety and trust by maintaining transparency about the potential benefits and challenges of AI adoption [117]. Clear communication about AI’s role in enhancing workflows, rather than replacing employees, reduces anticipatory fears of job displacement or obsolescence. Incorporating ethical guidelines and governance frameworks, such as those emphasized in global AI ethics initiatives, further bolsters trust [43].
Author Contributions: Conceptualization, writing—original draft preparation, formal analysis, software, data curation, A.F.; methodology, supervision, writing—reviewing and editing, validation, G.H. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Informed Consent Statement: Informed consent was obtained from all subjects involved in
the study.
Data Availability Statement: The data is available from the corresponding author upon request.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1.
Forbes. Available online: https://www.forbes.com/sites/siladityaray/2023/02/22/chatgpt-reportedly-blocked-on-chinese-
social-media-apps-as- beijing-claims-ai-is-used-to-spread-propaganda/ (accessed on 16 October 2024).
2.
PwC. Available online: https://www.pwc.com/gx/en/issues/artificial-intelligence/publications/artificial-intelligence- study.
html (accessed on 2 November 2024).
3.
Kim, J.; Kadkol, S.; Solomon, I.; Yeh, H.; Soh, J.Y.; Nguyen, T.M.; Ajilore, O.A. AI anxiety: A comprehensive analysis of
psychological factors and interventions. SSRN 2023, 4573394. [CrossRef]
4.
PwC. Available online: https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html (accessed on 2
November 2024).
5.
Manyika, J.; Lund, S.; Chui, M.; Bughin, J.; Woetzel, L.; Batra, P.; Ko, R.; Sanghvi, S. Jobs Lost, Jobs Gained: What the Future of
Work Will Mean for Jobs, Skills, and Wages. Available online: https://www.mckinsey.com/featured-insights/future-of-work/
jobs-lost-jobs-gained- what-the-future-of-work-will-mean-for-jobs-skills-and-wages (accessed on 2 November 2024).
6.
PwC. Available online: https://www.pwc.com/bs/en/press-releases/pwc-2024-global-ai-jobs-barometer.html (accessed on 2
November 2024).
7. Blair, A.; Saffidine, A. AI surpasses humans at six-player poker. Science 2019,365, 864–865. [CrossRef] [PubMed]
8.
Wamba-Taguimdje, S.L.; Wamba, S.F.; Kamdjoug, J.R.K.; Wanko, C.E.T. Influence of artificial intelligence (AI) on firm performance:
The business value of AI-based transformation projects. Bus. Process Manag. J. 2020,26, 1893–1924. [CrossRef]
9.
Grewal, D.; Guha, A.; Satornino, C.B.; Schweiger, E.B. Artificial intelligence: The light and the darkness. J. Bus. Res. 2021,136,
229–236. [CrossRef]
10.
Hubert, K.F.; Awa, K.N.; Zabelina, D.L. The current state of artificial intelligence generative language models is more creative
than humans on divergent thinking tasks. Sci. Rep. 2024,14, 3440. [CrossRef]
11.
YouGov. Available online: https://today.yougov.com/technology/articles/49099-americans-2024-poll-ai-top-feeling- caution
(accessed on 24 October 2024).
12.
eco—Association of the Internet Industry. Available online: https://international.eco.de/presse/eco-yougov-survey- shows-that-
small-and-medium-sized- enterprises-in-particular-are-reluctant-to-use-ai/ (accessed on 24 October 2024).
13.
Mahmud, H.; Islam, A.N.; Ahmed, S.I.; Smolander, K. What influences algorithmic decision-making? A systematic literature
review on algorithm aversion. Technol. Forecast. Soc. Chang. 2022,175, 121390. [CrossRef]
14.
Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to medical artificial intelligence. J. Consum. Res. 2019,46, 629–650.
[CrossRef]
15.
Gaczek, P.; Pozharliev, R.; Leszczy´nski, G.; Zieli´nski, M. Overcoming consumer resistance to AI in general health care. J. Interact.
Mark. 2023,58, 321–338. [CrossRef]
16.
Yanamala, A.K.Y.; Suryadevara, S.; Kalli, V.D.R. Evaluating the impact of data protection regulations on AI development and
deployment. Int. J. Adv. Eng. Technol. Innov. 2023,1, 319–353.
17.
Draxler, F.; Werner, A.; Lehmann, F.; Hoppe, M.; Schmidt, A.; Buschek, D.; Welsch, R. The AI ghostwriter effect: When users
do not perceive ownership of AI-generated text but self-declare as authors. ACM Trans. Comput.-Hum. Interact. 2024,31, 1–40.
[CrossRef]
18. Siau, K.; Wang, W. Building trust in artificial intelligence, machine learning, and robotics. Cut. Bus. Technol. J. 2018,31, 47–53.
19.
Fügener, A.; Grahl, J.; Gupta, A.; Ketter, W. Cognitive challenges in human–artificial intelligence collaboration: Investigating the
path toward productive delegation. Inf. Syst. Res. 2022,33, 678–696. [CrossRef]
20.
Hofmann, V.; Kalluri, P.R.; Jurafsky, D.; King, S. AI generates covertly racist decisions about people based on their dialect. Nature
2024,633, 147–154. [CrossRef] [PubMed]
21.
Christensen, J.; Hansen, J.M.; Wilson, P. Understanding the role and impact of Generative Artificial Intelligence (AI) hallucination
within consumers’ tourism decision-making processes. Curr. Issues Tour. 2024, 1–16. [CrossRef]
22.
Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data Soc. 2016,3,
2053951716679679. [CrossRef]
23.
Puntoni, S.; Reczek, R.W.; Giesler, M.; Botti, S. Consumers and artificial intelligence: An experiential perspective. J. Mark. 2021,85,
131–151. [CrossRef]
24.
Stahl, B.C.; Andreou, A.; Brey, P.; Hatzakis, T.; Kirichenko, A.; Macnish, K.; Wright, D. Artificial intelligence for human
flourishing—Beyond principles for machine learning. J. Bus. Res. 2021,124, 374–388. [CrossRef]
25.
Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016,3, 1–12.
[CrossRef]
26.
Binns, R. Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 2018 Conference on Fairness,
Accountability, and Transparency, New York, NY, USA, 23–24 February 2018.
27.
Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY,
USA, 2019.
28. OECD. Available online: https://www.oecd.org/en/topics/sub-issues/ai-principles.html (accessed on 4 November 2024).
29.
European Parliament. Available online: https://www.europarl.europa.eu/cmsdata/196378/AI%20HLEG_Policy%20and%20
Investment%20Recommendations.pdf (accessed on 4 November 2024).
30.
Steelman, K.S.; Tislar, K.L. Measurement of tech anxiety in older and younger adults. In Proceedings of the 21st International
Conference on Human-Computer Interaction (HCI International 2019), Orlando, FL, USA, 26–31 July 2019.
31. Bloch, H.S. Review of La France devant la reconstruction économique, by R. Mossé. J. Political Econ. 1947,55, 383–385. [CrossRef]
32.
Rohner, D.J.; Simonson, M.R. Development of an art index of computer anxiety. In Proceedings of the Association for Educational
Communications and Technology, Philadelphia, PA, USA, 6–10 April 1981.
33. Brod, C. Technostress: The Human Cost of Computer Revolution; Addison-Wesley: Reading, MA, USA, 1984.
34.
Bondanini, G.; Giorgi, G.; Ariza-Montes, A.; Vega-Muñoz, A.; Andreucci-Annunziata, P. Technostress dark side of technology in
the workplace: A scientometric analysis. Int. J. Environ. Res. Public Health 2020,17, 8013. [CrossRef]
35.
Alkhawaja, M.I.; Halim, M.S.A.; Afthanorhan, A. Technology Anxiety and Its Impact on E-Learning System Actual Use in Jordan
Public Universities during the Coronavirus Disease Pandemic. Eur. J. Educ. Res. 2021,10, 1639–1647. [CrossRef]
36.
Huang, X.; Zou, D.; Cheng, G.; Chen, X.; Xie, H. Trends, research issues, and applications of artificial intelligence in language
education. Educ. Technol. Soc. 2023,26, 112–131.
37.
Yang, K.; Forney, J.C. The moderating role of consumer technology anxiety in mobile shopping adoption: Differential effects of
facilitating conditions and social influences. J. Electron. Commer. Res. 2013,14, 334.
38.
Aziz, S.A.; Jusoh, M.S.; Amlus, M.H. The moderating role of technology anxiety on brand service quality, brand image, and their
relation to brand loyalty. Int. J. Internet Mark. Advert. 2018,12, 270–289. [CrossRef]
39.
Wach, K.; Duong, C.D.; Ejdys, J.; Kazlauskaitė, R.; Korzynski, P.; Mazurek, G.; Ziemba, E. The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrep. Bus. Econ. Rev. 2023,11, 7–30. [CrossRef]
40. Wolpe, J. Psychotherapy by reciprocal inhibition. Cond. Reflex Pavlov. J. Res. Ther. 1968,3, 234–240. [CrossRef] [PubMed]
41.
Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989,13,
319–340. [CrossRef]
42. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q.
2003,27, 425–478. [CrossRef]
43.
Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, challenges, and future directions. Humanit.
Soc. Sci. Commun. 2024 11, 1568.
44.
Fan, M.; Huang, Y.; Qalati, S.A.; Shah, S.M.M.; Ostic, D.; Pu, Z. Effects of information overload, communication overload, and
inequality on digital distrust: A cyber-violence behavior mechanism. Front. Psychol. 2021,12, 643981. [CrossRef]
45.
Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J.
Hum.-Comput. Stud. 2021,146, 102551. [CrossRef]
46.
Whittaker, M.; Crawford, K.; Dobbe, R.; Fried, G.; Kaziunas, E.; Mathur, V.; Myers West, S.; Richardson, R.; Schultz, J.; Schwartz, O.
AI Now Report 2018. AI Now Institute. 2018. Available online: https://ainowinstitute.org/AI_Now_2018_Report.pdf (accessed
on 10 November 2024).
47.
Wang, Y.Y.; Wang, Y.S. Development and validation of an artificial intelligence anxiety scale: An initial application in predicting
motivated learning behavior. Interact. Learn. Environ. 2019,30, 619–634. [CrossRef]
48.
Lemay, D.J.; Basnet, R.B.; Doleck, T. Fearing the Robot Apocalypse: Correlates of AI Anxiety. Int. J. Learn. Anal. Artif. Intell. Educ.
(iJAI) 2020 2, 24–33. [CrossRef]
49.
Chang, P.C.; Zhang, W.; Cai, Q.; Guo, H. Does AI-Driven technostress promote or hinder employees’ artificial intelligence
adoption intention? A moderated mediation model of affective reactions and technical self-efficacy. Psychol. Res. Behav. Manag.
2024,17, 413–427. [CrossRef] [PubMed]
50.
Barlow, D.H.; Chorpita, B.F.; Turovsky, J. Fear, panic, anxiety, and disorders of emotion. In Nebraska Symposium on Motivation,
1995: Perspectives on Anxiety, Panic, and Fear; Hope, D.A., Ed.; University of Nebraska Press: Lincoln, Nebraska, 1996; pp. 251–328.
51.
Huang, C.L.; Haried, P. An evaluation of uncertainty and anticipatory anxiety impacts on technology use. Int. J. Hum.–Comput.
Interact. 2020,36, 641–649. [CrossRef]
52.
Beckers, J.J.; Schmidt, H.G. The structure of computer anxiety: A six-factor model. Comput. Hum. Behav. 2001,17, 35–49.
[CrossRef]
53.
Graham, B.M.; Milad, M.R. The study of fear extinction: Implications for anxiety disorders. Am. J. Psychiatry 2011,168, 1255–1265.
[CrossRef]
54.
Grillon, C.; Lissek, S.; Rabin, S.; McDowell, D.; Dvir, S.; Pine, D.S. Increased anxiety during anticipation of unpredictable but not
predictable aversive stimuli as a psychophysiologic marker of panic disorder. Am. J. Psychiatry 2008,165, 898–904. [CrossRef]
[PubMed]
55.
Nitschke, J.B.; Sarinopoulos, I.; Oathes, D.J.; Johnstone, T.; Whalen, P.J.; Davidson, R.J.; Kalin, N.H. Anticipatory activation in the
amygdala and anterior cingulate in generalized anxiety disorder and prediction of treatment response. Am. J. Psychiatry 2009,166,
302–310. [CrossRef]
56.
Marwaha, J.S.; Landman, A.B.; Brat, G.A.; Dunn, T.; Gordon, W.J. Deploying digital health tools within large, complex health
systems: Key considerations for adoption and implementation. NPJ Digit. Med. 2022,5, 13. [CrossRef]
57.
Taylor, S.; Todd, P.A. Understanding information technology usage: A test of competing models. Inf. Syst. Res. 1995,6, 144–176.
[CrossRef]
58.
Freedman, N.; Geller, J.D.; Hoffenberg, J.; Hurvich, M.; Ward, R. Another Kind of Evidence: Studies on Internalization, Annihilation
Anxiety, and Progressive Symbolization in the Psychoanalytic Process; Routledge: London, UK, 2018.
59. Hurvich, M. The place of annihilation anxieties in psychoanalytic theory. J. Am. Psychoanal. Assoc. 2003,51, 579–616. [CrossRef]
[PubMed]
60. Richardson, K. An Anthropology of Robots and AI: Annihilation Anxiety and Machines; Routledge: London, UK, 2015.
61.
Galanos, V. Exploring expanding expertise: Artificial intelligence as an existential threat and the role of prestigious commentators,
2014–2018. Technol. Anal. Strateg. Manag. 2019,31, 421–432. [CrossRef]
62.
Federspiel, F.; Mitchell, R.; Asokan, A.; Umana, C.; McCoy, D. Threats by artificial intelligence to human health and human
existence. BMJ Glob. Health 2023,8, e010435. [CrossRef] [PubMed]
63.
Noble, S.M.; Mende, M.; Grewal, D.; Parasuraman, A. The Fifth Industrial Revolution: How harmonious human–machine
collaboration is triggering a retail and service [r]evolution. J. Retail. 2022,98, 199–208. [CrossRef]
64.
Yurt, E.; Kasarci, I. A Questionnaire of Artificial Intelligence Use Motives: A Contribution to Investigating the Connection
between AI and Motivation. Int. J. Technol. Educ. 2024,7, 308–325. [CrossRef]
65.
Krakowski, A.; Greenwald, E.; Hurt, T.; Nonnecke, B.; Cannady, M. Authentic integration of ethics and AI through sociotechnical,
problem-based learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, USA, 22 February–1
March 2022.
66.
Okuonghae, N.; Tunmibi, S. Digital competence as predictor for the motivation to use artificial intelligence technologies among
librarians in Edo and Delta States, Nigeria. J. Technol. Innov. Energy 2024,3, 1–11. [CrossRef]
67.
Qu, K.; Wu, X. ChatGPT as a CALL tool in language education: A study of hedonic motivation adoption models in English
learning environments. Educ. Inf. Technol. 2024,29, 19471–19503. [CrossRef]
68.
Deci, E.L.; Ryan, R.M. The general causality orientations scale: Self-determination in personality. J. Res. Personal. 1985,19, 109–134.
[CrossRef]
69.
Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemp. Educ. Psychol. 2000,25,
54–67. [CrossRef]
70. Rogers, E.M. Diffusion of Innovations; Free Press: New York, NY, USA, 2003.
71. Addo, A. The adoption of mobile phone: How has it changed us socially. Issues Bus. Manag. Econ. 2013,1, 47–60.
72.
Miller-Ott, A.E.; Kelly, L.; Duran, R.L. The effects of cell phone usage rules on satisfaction in romantic relationships. Commun. Q.
2012,60, 17–34. [CrossRef]
73.
Billieux, J. Problematic use of the mobile phone: A literature review and a pathways model. Curr. Psychiatry Rev. 2012,8, 299–307.
[CrossRef]
74.
MacCormick, J.S.; Dery, K.; Kolb, D.G. Engaged or just connected? Smartphones and employee engagement. Organ. Dyn. 2012,
41, 194–201. [CrossRef]
75.
King, A.C.; Hekler, E.B.; Grieco, L.A.; Winter, S.J.; Sheats, J.L.; Buman, M.P.; Cirimele, J. Harnessing different motivational frames
via mobile phones to promote daily physical activity and reduce sedentary behavior in aging adults. PLoS ONE 2013,8, e62613.
[CrossRef] [PubMed]
76.
CBS News. Available online: http://web.archive.org/web/20080412042610/http://www.wsbt.com/news/health/17263604
.html. (accessed on 10 November 2024).
77.
Adegbesan, A.; Akingbola, A.; Aremu, O.; Adewole, O.; Amamdikwa, J.C.; Shagaya, U. From Scalpels to Algorithms: The Risk of
Dependence on Artificial Intelligence in Surgery. J. Med. Surg. Public Health 2024,3, 100140. [CrossRef]
78.
Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic
review. Smart Learn. Environ. 2024,11, 28. [CrossRef]
79. Zhou, T.; Zhang, C. Examining generative AI user addiction from a CAC perspective. Technol. Soc. 2024,78, 102653. [CrossRef]
80.
Yates, J.R. Determinants of Addiction: Neurobiological, Behavioral, Cognitive, and Sociocultural Factors; Elsevier: Amsterdam, The
Netherlands, 2023.
81.
Yu, S.C.; Chen, H.R.; Yang, Y.W. Development and validation of the Problematic ChatGPT Use Scale: A preliminary report. Curr.
Psychol. 2024,43, 26080–26092. [CrossRef]
82.
Zhang, S.; Zhao, X.; Zhou, T.; Kim, J.H. Do you have AI dependency? The roles of academic self-efficacy, academic stress, and
performance expectations on problematic AI usage behavior. Int. J. Educ. Technol. High. Educ. 2024,21, 34. [CrossRef]
83.
Laestadius, L.; Bishop, A.; Gonzalez, M.; Illenˇcík, D.; Campos-Castillo, C. Too human and not human enough: A grounded theory
analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media Soc. 2024,26, 5923–5941.
[CrossRef]
84.
Hu, B.; Mao, Y.; Kim, K.J. How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination,
and mind perception. Comput. Hum. Behav. 2023,145, 107760. [CrossRef]
85.
Qiao, R.; Liu, C.; Xu, J. Making algorithmic app use a virtuous cycle: Influence of user gratification and fatigue on algorithmic
app dependence. Humanit. Soc. Sci. Commun. 2024,11, 775. [CrossRef]
86.
Xie, T.; Pentina, I.; Hancock, T. Friend, mentor, lover: Does chatbot engagement lead to psychological dependence? J. Serv. Manag.
2023,34, 806–828. [CrossRef]
87.
Yankouskaya, A.; Liebherr, M.; Ali, R. ChatGPT Addiction: From Support to Dependence in AI Large Language Models. SSRN
2024, 4972612. [CrossRef]
88.
Vural, S.; Ferreira, N. Development and psychometric properties of the Anticipatory Anxiety Inventory. Cogn. Brain Behav. 2021,
25, 261–288. [CrossRef]
89.
Kira, I.; Templin, T.; Lewandowski, L.; Ramaswamy, V.; Ozkan, B.; Mohanesh, J.; Hussam, A. Collective and Personal Annihilation
Anxiety: Measuring Annihilation Anxiety AA. Psychology 2012,3, 90–99. [CrossRef]
90.
Morales-García, W.C.; Sairitupa-Sanchez, L.Z.; Morales-García, S.B.; Morales-García, M. Development and validation of a scale
for dependence on artificial intelligence in university students. Front. Educ. 2024,9, 1323898. [CrossRef]
91.
Deep, S.; Athimoolam, K.; Enoch, T. Optimizing Administrative Efficiency and Student Engagement in Education: The Impact of
AI. Int. J. Curr. Sci. Res. Rev. 2024,7, 7792–7804. [CrossRef]
92.
Pérez-Campuzano, D.; Ortega, P.M.; Andrada, L.R.; López-Lázaro, A. Artificial Intelligence potential within airlines: A review on
how AI can enhance strategic decision-making in times of COVID-19. J. Airl. Airpt. Manag. 2021,11, 53–72. [CrossRef]
93. de Sio, F.S. Artificial Intelligence and the Future of Work: Mapping the Ethical Issues. J. Ethics 2024,28, 407–427. [CrossRef]
94.
Beaudry, A.; Pinsonneault, A. The other side of acceptance: Studying the direct and indirect effects of emotions on information
technology use. MIS Q. 2010,34, 689–710. [CrossRef]
95.
Vössing, M.; Kühl, N.; Lind, M.; Satzger, G. Designing transparency for effective human-AI collaboration. Inf. Syst. Front. 2022,
24, 877–895. [CrossRef]
96.
Dingel, J.; Kleine, A.K.; Cecil, J.; Sigl, A.L.; Lermer, E.; Gaube, S. Predictors of Health Care Practitioners’ Intention to Use
AI-Enabled Clinical Decision Support Systems: Meta-Analysis Based on the Unified Theory of Acceptance and Use of Technology.
J. Med. Internet Res. 2024,26, e57224. [CrossRef] [PubMed]
97.
Mishra, K.; Boynton, L.; Mishra, A. Driving employee engagement: The expanded role of internal communications. Int. J. Bus.
Commun. 2014,51, 183–202. [CrossRef]
98.
Rožanec, J.M.; Novalija, I.; Zajec, P.; Kenda, K.; Tavakoli Ghinani, H.; Suh, S.; Soldatos, J. Human-centric artificial intelligence
architecture for industry 5.0 applications. Int. J. Prod. Res. 2023,61, 6847–6872. [CrossRef]
99. Horvati´c, D.; Lipic, T. Human-centric AI: The symbiosis of human and artificial intelligence. Entropy 2021,23, 332. [CrossRef]
100.
Gkinko, L.; Elbanna, A. Designing trust: The formation of employees’ trust in conversational AI in the digital workplace. J. Bus.
Res. 2023,158, 113707. [CrossRef]
101. Kore, A. Designing Human Centric AI Experiences; Apress: Berkeley, CA, USA, 2022.
102. Kotter, J.P. Leading Change; Harvard Business School Press: Boston, MA, USA, 1996.
103.
Matsunaga, M. Uncertainty in the Age of Digital Transformation. In Employee Uncertainty Over Digital Transformation: Mechanisms
and Solutions; Springer Nature: Singapore, 2024; pp. 11–84.
104.
Alkhalifah, J.M.; Bedaiwi, A.M.; Shaikh, N.; Seddiq, W.; Meo, S.A. Existential anxiety about artificial intelligence (AI)—Is it the
end of humanity era or a new chapter in the human revolution: Questionnaire-based observational study. Front. Psychiatry 2024,
15, 1368122. [CrossRef]
105.
Hilliard, A.; Kazim, E.; Ledain, S. Are the robots taking over? On AI and perceived existential risk. AI Ethics 2024, 1–14. [CrossRef]
106.
RealClearScience. Available online: https://www.realclearscience.com/articles/2023/07/06/ai_is_an_existential_threat__just_
not_the_way_you_think_964205.html (accessed on 15 November 2024).
107.
Pratt, M.; Boudhane, M.; Taskin, N.; Cakula, S. Use of AI for Improving Employee Motivation and Satisfaction. In Proceedings of
the 23rd International Conference on Interactive Collaborative Learning (ICL2020), Tallinn, Estonia, 23–25 September 2020.
108.
Spatola, N. The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance. Comput.
Hum. Behav. Artif. Hum. 2024,2, 100099. [CrossRef]
109.
Duong, C.D.; Ngo, T.V.N.; Khuc, T.A.; Tran, N.M.; Nguyen, T.P.T. Unraveling the dark side of ChatGPT: A moderated mediation
model of technology anxiety and technostress. Inf. Technol. People 2024. [CrossRef]
110.
Shaikh, S.J.; Cruz, I.F. AI in human teams: Effects on technology use, members’ interactions, and creative performance under time
scarcity. AI Soc. 2023,38, 1587–1600. [CrossRef]
111.
Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing work productivity through generative artificial intelligence: A comprehensive
literature review. Sustainability 2024,16, 1166. [CrossRef]
112.
Truong, Y.; Papagiannidis, S. Artificial intelligence as an enabler for innovation: A review and future research agenda. Technol.
Forecast. Soc. Chang. 2022,183, 121852. [CrossRef]
113.
Verhoef, P.C.; Broekhuizen, T.; Bart, Y.; Bhattacharya, A.; Dong, J.Q.; Fabian, N.; Haenlein, M. Digital transformation: A
multidisciplinary reflection and research agenda. J. Bus. Res. 2021,122, 889–901. [CrossRef]
114.
Mukherjee, D.; D’Souza, D. Think phased implementation for successful data warehousing. Inf. Syst. Manag. 2003,20, 82–90.
[CrossRef]
115.
Tarafdar, M.; Roy, R.K. Analyzing the adoption of enterprise resource planning systems in Indian organizations: A process
framework. J. Glob. Inf. Technol. Manag. 2003,6, 21–51. [CrossRef]
116. Phillips, J.; Klein, J.D. Change Management: From Theory to Practice. TechTrends 2023,67, 189–197. [CrossRef]
117. Edmondson, A. Psychological safety and learning behavior in work teams. Adm. Sci. Q. 1999,44, 350–383. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.