No Person is an Island:
Unpacking the Work and After-work Consequences of Interacting with Artificial
Intelligence
Pok Man Tang
University of Georgia
pokmantang620@gmail.com
Joel Koopman¹
Texas A&M University
jkoopman@mays.tamu.edu
Ke Michael Mai¹
National University of Singapore
bizmeke@nus.edu.sg
David De Cremer
National University of Singapore
bizddc@nus.edu.sg
Jack Zhang
Nanyang Technological University
jack.zhang@ntu.edu.sg
Philipp Reynders
Cardiff University
reyndersp@cardiff.ac.uk
Chin Tung Stewart Ng
National Sun Yat-sen University
stewartstuartng256@gmail.com
I-Heng Chen
National Sun Yat-sen University
ihchen@cm.nsysu.edu.tw
Correspondence should be addressed to Pok Man Tang at pokmantang620@gmail.com.
Joel Koopman and Ke Michael Mai share equal authorship of this manuscript.
© 2023, American Psychological Association. This paper is not the copy of record and may
not exactly replicate the final, authoritative version of the article. Please do not copy or cite
without authors' permission. The final article will be available, upon publication, via its
DOI: 10.1037/apl0001103
Abstract
The Artificial Intelligence (AI) revolution has arrived, as AI systems are increasingly being
integrated across organizational functions into the work lives of employees. This coupling of
employees and machines fundamentally alters the work-related interactions to which employees
are accustomed, as employees find themselves increasingly interacting with, and relying on, AI
systems instead of human coworkers. This increased coupling of employees and AI portends a
shift towards more of an “asocial system” wherein people may feel socially disconnected at
work. Drawing upon the social affiliation model, we develop a model delineating both adaptive
and maladaptive consequences of this situation. Specifically, we theorize that the more
employees interact with AI in the pursuit of work goals, the more they experience a need for
social affiliation (adaptive)—which may contribute to more helping behavior towards coworkers
at work—as well as a feeling of loneliness (maladaptive), which then further impairs employee
well-being after work (i.e., more insomnia and alcohol consumption). In addition, we submit that
these effects should be especially pronounced among employees with higher levels of attachment
anxiety. Results across four studies (N = 794) with mixed methodologies (i.e., survey study, field
experiment, and simulation study; Studies 1 to 4) with employees from four different regions
(i.e., Taiwan, Indonesia, United States, and Malaysia) generally support our hypotheses.
Keywords: Artificial Intelligence, Need for Affiliation, Loneliness, Attachment Anxiety, Social
Affiliation Model
No Person is an Island:
Unpacking the Work and After-work Consequences of Interacting with Artificial Intelligence
In the 2009 movie Surrogates, a revolution in artificial intelligence (AI) has transformed
work. In this future, people accomplish their daily work tasks by interacting with AI systems that
have largely eliminated direct interpersonal contact with people. Yet the protagonist (played by Bruce Willis) finds these interactions empty and unfulfilling. The movie's climax occurs when he abandons his AI system to seek out a direct social connection with others. The movie depicts a sensationalized example of what is becoming a commonplace workplace phenomenon—that many work interactions are not with human coworkers, but rather with AI systems that give advice, make
decisions, and increasingly are employees’ primary collaborators in the pursuit of work goals
(e.g., Kellogg et al., 2020; Meister, 2019; Wilson & Daugherty, 2018)—indeed, Oracle estimates
half of employees already utilize AI in some form on a daily basis (Oracle, 2019), and McKinsey
Global Institute expects that the augmentation of organizational functions such as marketing,
sales, and manufacturing may generate $2 trillion in value over 20 years (The Economist, 2018).
This coupling creates a phenomenon captured poignantly by Bruce Willis’ character, as
the incorporation of AI systems alters the work interactions to which employees are accustomed
(Chitty, 2018; Miller, 2019; Raisch & Krakowski, 2021). For example, where employees may
once have consulted with a coworker to get a second opinion on a decision, or obtain additional
information, AI can (much more efficiently) perform these tasks (Davenport & Kirby, 2016). Yet
AI systems, in contrast to those coworkers, are deficient in terms of providing an engaging and
socially rich user-experience (Mühlhoff, 2020; Tizhoosh & Pantanowitz, 2018). We posit that
this may be problematic because interacting with AI may (similarly as interacting with human
coworkers) activate primitive processes within people that have evolved to monitor the nature
and quality of social interactions with others (Leary & Baumeister, 2000; Sussman et al., 2005).
We draw in particular from O'Connor and Rosenblood (1996), who describe these
processes as an evolutionary imperative (a “social drive” in their words; p. 531) that motivates
people to action when deprived of social affiliation. This drive is viewed as operating via a
regulatory mechanism (e.g., Johnson et al., 2006) that is sensitive to deviations from a desired
level of affiliation and is “satisfied by social interaction.” Drawing from their work, interactions
with coworkers are expected to trigger and satiate the aforementioned processes. Yet interactions
with AI complicate the picture. Importantly, both scholarly work across multiple disciplines
(D’Haussy, 2018; Kolbjørnsrud et al., 2016; Nyholm & Smids, 2020; Tang et al., 2022b), as well
as practitioner-focused publications (Mittal et al., 2019; Wilson & Daugherty, 2018) argue that
AI is also seen by employees as a “coworker.” This is because these systems are designed to
function as an autonomous work partner (De Cremer, 2020; Guzman & Lewis, 2020). Interactions
with AI may thus also trigger those regulatory processes. Yet unlike with a human coworker,
interactions with AI likely cannot provide the type of social feedback our primitive “social drive”
is designed to detect (O’Connor & Rosenblood, 1996). We thus expect employees may instead
feel socially deprived after interacting with AI (Dwivedi et al., 2019; Lee & Sathikh, 2013).
Unpacking the above is important because not only does it permit us the opportunity to
shine a light on a phenomenon that may have important implications for employee behavior, but
also it permits us the opportunity to integrate what is currently a somewhat fragmented literature
on social affiliation processes and their impact on human functioning (Lazarus, 1999; Leary,
2010; O’Connor & Rosenblood, 1996; Powell, 2022). In line with theorizing from O’Connor and
Rosenblood (1996), along with a recent extension (e.g., Hall, 2017), this body of work has
focused on the more active (what we term adaptive) consequences of social deprivation—the
subsequent motivation to cope by socially reconnecting with others. In summarizing that work,
however, Reissmann et al. (2021) noted that social deprivation may also drive passive (what we
term maladaptive) coping responses as well—a subsequent motivation to instead withdraw.
Applying this to the context of our model, socially-depriving situations (i.e., interactions
with AI) may on the one hand trigger an adaptive coping response to reconnect with others (as
we see with Bruce Willis’ character—reflected in a heightened need for affiliation; O’Connor &
Rosenblood, 1996) that prompts individuals to seek social contact with others via more active and
social behavior (i.e., helping; Van Dyne & LePine, 1998). Yet, on the other hand, such situations
may trigger a maladaptive coping response. Specifically, recent affiliation-based work has
identified the potential for heightened feelings of loneliness (Reissmann et al., 2021), which may
not only have negative implications for well-being by triggering insomnia (Barnes et al., 2015;
Graham & Schmidt, 1999; Moss, 2018), but also may prompt individuals to engage in more
avoidant types of behavior such as after-work drinking. If true, this can be problematic for
organizations as it may affect the mental fitness of the workforce, and may have long-term
implications for employees’ behaviors at work (e.g., Edwards, 1992; Sonnentag et al., 2022).
Importantly, the adaptive and maladaptive mechanisms discussed above that transmit the
effects of socially-depriving situations have largely been discussed in isolation in the affiliation
literature. Thus, beyond bringing them together in a single model, there remains a critical (as yet
unanswered) question—for whom will our hypothesized effects be more pronounced? A
consistent point of emphasis among affiliation scholars is that people should differ on the above
mechanisms based on their sensitivity to the absence of social connection in daily interactions
(Hall, 2017; O’Connor & Rosenblood, 1996). Notably, attachment theory speaks directly to this
point, in that individuals with stronger attachment anxiety should be particularly sensitive to the
absence of social connections (Bowlby, 1969, 1973). Thus, we extend research on the social affiliation model by integrating theory on attachment styles (Simpson, 1990; Simpson et al.,
1992; Simpson & Rholes, 2015).¹ Taken together, we submit that attachment-anxious employees
(i.e., those with a tendency to want close relationships who worry about being socially isolated or
abandoned; Liu et al., 2013; Mikulincer & Shaver, 2007) should experience a stronger social
deficit upon interacting with AI at work. This should amplify both the adaptive and maladaptive
responses (i.e., heightened need for affiliation and loneliness) among these employees.
To test our model (see Figure 1), we adopt a “full-cycle research approach” (Chatman &
Flynn, 2005, p. 434) by designing and conducting four studies that (a) employ different research
methodologies (i.e., field and experimental designs), (b) recruit participants from different
industries (i.e., biomedical and real-estate for Study 1 and 2, as well as a wide range of industries
and business functions in Studies 3 and 4), and (c) recruit participants from different regions
(i.e., Taiwan, Indonesia, United States, and Malaysia). By doing so, we make three important
contributions to scholarship on AI, the social affiliation model, and attachment theory, respectively.
First, we widen the scope of ongoing conversations on the integration of AI into the
workplace, which have tended to be more one-sided in terms of whether incorporating AI has
either beneficial (Murray et al., 2021; Wilson & Daugherty, 2018) or aversive (e.g., Efendić et
al., 2020; Newman et al., 2020) outcomes. We instead highlight that such integration may be a
double-edged sword (i.e., while interactions from this integration may have positive outcomes in
terms of affiliative behaviors, there may be negative outcomes as well in the form of impairing
employees’ well-being; Barnes et al., 2017; Graham & Schmidt, 1999). Thus, our research paints
a more complete picture about how work interactions with AI impact employees.
¹ Attachment theorists also note another style termed attachment avoidance (e.g., Simpson, 1990). Although we focus on the moderating effect of attachment anxiety, based on the arguments noted above, an anonymous reviewer emphasized that opposing arguments could potentially be made for attachment avoidance (i.e., while attachment-anxious employees may be sensitive to the absence of social connection, attachment-avoidant employees may be less so). Thus, although we focus on attachment anxiety for our primary theorizing, we develop post-hoc hypotheses for attachment avoidance and test its main and interactive effects in supplemental analyses (see our OSF repository).

Our second contribution lies in our incorporation of (the somewhat scattered) research on social affiliation across different disciplines—thus extending this body of work. That is, the
social affiliation model (O’Connor & Rosenblood, 1996) is one part of the broad spectrum of
theory and research on social affiliation (Leary, 2010; Powell, 2022). By holistically integrating
knowledge in this stream of research, not only do we show that it meaningfully explains the
consequences of this emerging phenomenon at work, but we also contribute back to this research
by simultaneously unpacking the (active) adaptive and (passive) maladaptive consequences of
social deprivation (along with further examining outcomes in multiple domains). In so doing, our
research sheds new light on the application of social affiliation research to understanding the
consequences of human-machine interactions both at work and at home.
Finally, we extend social affiliation research by integrating it with attachment theory
(Simpson, 1990; Simpson et al., 1992; Simpson & Rholes, 2015). In so doing, we add specificity
to what had previously been hinted at—that sensitivity to social interactions (or, their absence)
plays an important role in the social affiliation process (Hall, 2017; O’Connor & Rosenblood,
1996). In this way, our model and theory explain both why, and for whom work interactions with
AI may affect employee behaviors in both work and non-work contexts (Whetten, 1989). This is
important practically as well, as applied psychologists have been increasingly interested in
understanding the organizational implications of attachment styles (McClean et al., 2021;
Richards & Schat, 2011), and our results suggest this may be even more important given the
increased pairing of employees and AI systems as we enter the Fourth Industrial Revolution
(Brynjolfsson & McAfee, 2017; Jarrahi, 2018). Thus, our research provides timely insights for
managers seeking to better understand the effects of augmenting employee jobs with AI.
Theory and Hypotheses Development
Interaction with AI in the 21st Century Workplace
Generally speaking, AI refers to “a broad collection of computer-assisted systems for task
performance, including machine learning, automated reasoning, knowledge repositories, image
recognition, and natural language processing” (von Krogh, 2018, p. 405). Although myriad forms
of AI are used in organizations worldwide, a commonality among them is that they use machine
learning and other forms of algorithmic reasoning to interpret data and augment employee
decision-making processes (Brougham & Haar, 2018; Brynjolfsson & McAfee, 2014; Davenport
& Kirby, 2016; Donald, 2019).² These systems leverage AI's ability to process large quantities
of data and distill it into an interpretable and actionable form (Jarrahi, 2018). For example, using
machine-learning, AI systems make a series of micro-decisions (largely outside of the awareness
of human employees) about which pieces of information to emphasize, and which they use to make
suggestions or recommendations. In contrast to traditional technologies that operate on a “fixed
set of preprogrammed instructions,” AI systems “have the capacity to learn, and can therefore
improve and adapt based on experience” autonomously (Chalmers et al., 2021, p. 3). In this way,
AI and employees ideally form a connection that speeds up and streamlines the data-gathering and
decision-making process (Glaser, 2014; Gregory et al., 2021; Reeves, 2015). For this reason, AI
is seen as having large economic and organizational significance (The Economist, 2018).
Yet with this said, the relationship between employees and AI is complex (Cerulo, 2009;
Newell & Card, 1985; Waytz et al., 2014). As noted, these systems are quite intelligent and
sophisticated, which allows them to autonomously make decisions and suggestions (Davenport,
2018; Webster & Ivanov, 2020). Some employees find this aversive and feel that these systems
are inauthentic (Dietvorst et al., 2018; Jago, 2019), but at the same time, there is evidence that
people treat AI as a "social agent," in that they hold such agents to the same social expectations as they would another person (Epley et al., 2007). This may be in part because these systems are
designed to mimic the types of interactions that occur between coworkers (i.e., these systems communicate with employees and maintain "relational dynamics"; Guzman & Lewis, 2020, p. 70). Put differently, while interactions with traditional technology (i.e., phones, websites, or word processing software) are unidirectional—users provide input to which the technology responds in a predictable manner—interactions with AI systems are bi-directional (Tang et al., 2022b). That is, AI systems engage employees in a manner that attempts to mimic how employees would interact with their human coworkers (Anderson et al., 2018; Borges et al., 2021; Li & Du, 2016).

² Our Appendices (see OSF) provide pictures from our data collection sites across the field studies of the typical type of human-AI collaboration (Brynjolfsson & McAfee, 2017).
These dynamics of employee-AI interactions explain why we expect employees will find
these interactions to be socially isolating and devoid of the types of feedback that they would
obtain when interacting with human colleagues. Given the above, we expect interactions with AI
activate the social regulatory process we described earlier in much the same way as would occur
with a human coworker. Yet unlike interactions with a coworker, employees likely do not obtain
the expected social feedback when interacting with AI. For this reason, interactions with AI may
leave employees feeling deprived of social affiliation at work (Wirtz et al., 2019).
Interaction Frequency with AI and Research on Social Affiliation
A point of consensus across many different theories is that humans evolved to require
social interaction to both thrive and survive (e.g., Leary, 2000; Leary & Baumeister, 2000;
Sussman et al., 2005; Williams, 1997). Leveraging this work, O’Connor and Rosenblood (1996)
proposed what they called the ‘social affiliation model’ (p. 513), which describes an internal
regulatory process through which people monitor their affiliation with others. This process is
activated by social interactions with others, at which point people (automatically and outside of
conscious awareness) search for signals that provide information on the quality of those
interactions and their general social standing. For example, social interactions involve both
verbal and non-verbal signals that may suggest that a person is accepted and valued by the other.
Those signals (or, lack thereof) provide an assessment of a person’s current level of affiliation
which, per O’Connor and Rosenblood (1996), activate regulatory processes that drive subsequent
behavior to redress the situation (see also Leary, 2010). Although this system developed
millennia ago, the basic social context (and the operation of these internal processes) remains intact
in modern organizations (i.e., people engage in social interactions at work to perform work tasks,
and internal social affiliation processes monitor those interactions; Weick, 1979; Weiss, 1973).
Bridging back to our study context, the increasing augmentation of employee work with
AI upends this traditional social structure by coupling employees with intelligent machines with
whom they also interact (Wilson & Daugherty, 2018). As we noted, these systems have features
that may lead employees to also see them as a coworker (De Cremer, 2020; D’Haussy, 2018;
Mittal et al., 2019; Tang et al., 2022b). As a consequence, we submit that the primitive social
regulatory processes are similarly activated during these interactions. Yet because AI systems
cannot replicate the social and organic features of social interactions with other humans (e.g.,
Brynjolfsson & McAfee, 2017; Dietvorst et al., 2018; Lawless et al., 2017), those regulatory
processes are left searching for signals that are not present. Thus, employees do not receive the
social feedback they would otherwise expect (e.g., Carrigan & Porpora, 2021; Lazega, 2021).³
Thus, an unintended consequence of the increased frequency with which some employees
interact with AI (i.e., an employee’s involvement, usage, or engagement with AI systems in the
course of assigned work duties; Kacmar et al., 2003; McAllister, 1995; Webster & Ivanov, 2020)
is the potential to become deprived of social affiliation at work. With this as a backdrop, we
draw from O’Connor and Rosenblood’s (1996) work on social affiliation and their identification
of the underlying regulatory process. To explain the behavioral consequences of this process, we integrate this with the broad social affiliation literature (Reissmann et al., 2021; Revenson & Lepore, 2012), which has identified two mechanisms that transmit the effects of social deprivation.

³ An anonymous reviewer acknowledged that the arguments we develop around "unmet expectations" may be one explanation for our findings, but also argued that a plausible alternative is that employees are not interacting with their human colleagues. While we attempted to rule this explanation out both theoretically (as discussed above) and empirically (to foreshadow our study designs) by controlling for both the frequency and quality of interactions with coworkers, we acknowledge we cannot definitively tease apart these micro-mechanisms. We return to this in our discussion.
Adaptive Reaction—Increased Need for Affiliation and Helping Behavior
O’Connor and Rosenblood’s (1996) social affiliation model exists within a broader
constellation of theories and research pertaining to social affiliation, deprivation, and consequent
coping. Over the years, much of this research has focused on the more adaptive ways in which
people may cope with socially depriving situations (Reissmann et al., 2021). Building from this,
interactions with AI may instantiate within employees a recognition that their social affiliation
needs are unmet, which may drive compensatory behavior. People have a need to connect with
others (e.g., Brewer & Hewstone, 2004; O’Connor & Rosenblood, 1996), and this need is
unlikely to be met by interacting with AI systems (Nomura et al., 2006). That is, while AI can
reproduce words and meaning, it is generally unable to reproduce various aspects of social
interaction that foster affiliation (e.g., facial expressions, eye contact, body posture, and gestures;
Dodds et al., 2011; Graham et al., 1991; Phutela, 2015). Further, AI is unable to form meaningful
social connections with employees (Banerjee et al., 2018). While employees may observe a smile
that crosses a coworker’s face when they stop by to chat, typical AI systems are ill-equipped to
respond in unique and customized ways to different interaction partners. That is, most AI can
only “interact with humans in a uniform way” (Wirtz et al., 2019, p. 607)—a uniformity that
contrasts with the more personalized interactions that employees are used to having with coworkers
(Tanaka & Kobayashi, 2015). Interactions with AI may thus create a discrepancy between
employees’ desired level of affiliation at work and their actual state, resulting in an increased
need for affiliation (O’Connor & Rosenblood, 1996). Thus, we hypothesize:
Hypothesis 1: AI interaction frequency is positively related to employee's need for affiliation.
There are reasons to suspect possible adaptive consequences of this increased need for
affiliation—for example, Leary (2010) suggested that individuals in this situation may be more
eager and motivated to connect with others. This dovetails with theory from O’Connor and
Rosenblood (1996, p. 513) that suggests such deviations from desired levels of social affiliation
may trigger a “social drive” that motivates individuals to seek out social contact as a means of
coping. Indeed, one of the functions of this social regulatory process is to ensure that people
maintain adequate levels of social contact that are necessary for survival (Hall, 2017).
In particular, psychologists have posited that increased prosocial behavior (i.e., helping)
is a key means by which individuals connect with others (Crisp & Turner, 2014; Simpson &
Beckes, 2010). Helping (i.e., discretionary behavior involving actions oriented toward assisting
others at work; Organ, 1988) is an inherently social act that strengthens connections among
coworkers (Koopman et al., 2016). This behavior provides a context in which an employee can
both be in the presence of others, as well as contribute meaningfully to their relationship.
Notably, indirect evidence supports this argument, as scholars have noted that when individuals
feel the need to develop social connections with others, they are often motivated to seek social
contact and go the extra mile for others, as doing so serves as a form of social validation for their
contributions (Baumeister & Leary, 1995). For example, both Gest et al. (2005) and Pavey et al. (2011) found that when individuals feel a greater need to affiliate with others, they tend to
display more prosocial behavior. On this basis, we hypothesize:
Hypothesis 2: The relationship between AI interaction frequency and employee's helping behavior is mediated by employee's need for affiliation.
Maladaptive Reaction—Increased Loneliness and After-Work Alcohol Consumption and
Insomnia
While there may be adaptive outcomes of social deprivation stemming from interactions
with AI as described above, other (particularly recent) research has articulated a likelihood that
there may be an alternative mechanism that results in more maladaptive outcomes as well
(Reissmann et al., 2021, p. 2)—specifically, these authors noted that when there is a difference
“between a person’s desired and actually attained levels” of affiliation, one possible reaction may
be loneliness—a passive response to socially unfulfilling interactions (Cacioppo & Patrick, 2008;
Rokach & Brock, 1996). While interactions with AI should activate social regulatory processes,
these interactions are mechanical and inorganic in nature (Ackerman & Kanfer, 2020; Huang &
Rust, 2021), which may lead the social regulatory process to signal a sense of social isolation
that manifests as loneliness (Ozcelik & Barsade, 2018; Reissmann et al., 2021). Indeed, it seems
likely that interacting with AI may be a lonely endeavor in the workplace—for example,
previously routine activities, such as seeking a second opinion on a proposed solution for a client, can now be handled instantaneously (and more accurately) by an AI system (Ransbotham et al.,
2017). Therefore, more frequent interactions with AI may lead employees to feel socially
disconnected from others, which should increase feelings of loneliness. Thus, we hypothesize:
Hypothesis 3: AI interaction frequency is positively related to employee's loneliness.
In turn, this sense of loneliness may lead to a host of maladaptive consequences for
employees. This is due to what scholars have referred to as the “self-reinforcing cycle of
loneliness” (Cacioppo & Patrick, 2008; Gabriel et al., 2020, p. 3). That is, the experience of
loneliness is sufficiently distasteful that it may not only have negative implications for well-
being, but also may lead employees to cope with the experience by engaging in further isolating
behaviors. Importantly, as Gabriel et al. (2020, p. 4) note, the effects of loneliness may be
particularly likely to occur during “after work” hours. Thus, we examine two outcomes of
loneliness after work: alcohol consumption and insomnia.
First, clinical observations and findings across more than 60 years reveal that lonely
people are more likely to consume alcohol (Bell, 1956; Zwerling, 1959). To this point, Bryan et
al. (2017, p. 606) conclude that lonely individuals are likely to fall prey to a “downward spiral,”
wherein these people may resort to consuming alcohol in order to “help them escape awareness
of their lack of social connections” (see also Åkerlind & Hörnquist, 1992). Put differently, lonely
employees are more likely to see themselves negatively, which can then spill over to other
negative behaviors (e.g., alcohol use) in an attempt to escape from those feelings (Hawkley et al.,
2008). For this reason, scholars from multiple disciplines have highlighted the role of alcohol
consumption as a means of “coping with feelings of [social] isolation” (Segal, 1987, p. 303).
Second, we suspect loneliness may also lead to insomnia among employees. Indeed, a
long-standing stream of research suggests lonely individuals are more likely to have problems
sleeping at night (Mikulincer, 1990; Peplau & Perlman, 1979; Perlman & Peplau, 1981; Sermat,
1978). As recent organizational research suggests, loneliness may be accompanied by recurring
thoughts about employees’ lack of social connection at work, which render employees to be
“mentally activated” after work (Gabriel et al., 2020, p. 4). More specifically, this mental
activation may contain negative reflections on the self and the reasons for their detachment and
loneliness at work (Hawkley & Cacioppo, 2010). These reflections are problematic, as they may
mentally preoccupy employees at night, which gives rise to insomnia. Taking all of the above together, along with our theorizing in the prior hypotheses, we thus propose that:
Hypothesis 4a: The relationship between AI interaction frequency and employee's after-work alcohol consumption is mediated by employee's loneliness.
Hypothesis 4b: The relationship between AI interaction frequency and employee's after-work insomnia is mediated by employee's loneliness.
The Moderating Role of Attachment Anxiety and Insights from Attachment Theory
Our arguments thus far highlight that because interactions with AI can be devoid of the
meaningful components of social interaction that foster social connection between employees
(e.g., Kaplan & Haenlein, 2020; Wirtz et al., 2019), such interactions should trigger employees’
social affiliation systems to respond in both adaptive and maladaptive manners. In articulating
the social affiliation model, O’Connor and Rosenblood (1996, p. 514) further highlighted the
likelihood of “individual differences” in the nature of this system—specifically noting that some
individuals have a greater sensitivity to the absence of social connections in the workplace. This
directly connects with attachment theory, which is predicated on the notion that individuals differ
in the extent to which they desire social connections with others (Simpson et al., 1992). We,
therefore, extend the social affiliation model by integrating it with attachment theory.
Attachment theory posits that experiences and relationships during infancy, childhood,
and adolescence contribute to the development of relatively stable preferences regarding the
desire for social connection in daily life (Bowlby, 1969, 1973). This theory primarily identifies
two forms of attachment style: anxious and avoidant. Those with an avoidant attachment style
generally place less emphasis on social connections with others (Simpson et al., 1992), whereas
those with an anxious attachment style are seen as being sensitive to the nature and quality of their
social interactions. Moreover, in the absence of these interactions, those with an anxious attachment style tend to be more insecure about losing their connection with others (Simpson & Rholes, 2010). Based on this logic, our focus in this manuscript is on the anxious attachment style.⁴
Our examination of attachment anxiety also dovetails well with the nature of our focus on
interactions with AI at work, given our arguments that these interactions are largely distant and
devoid of social connection. For employees with higher levels of attachment anxiety, the absence
of social information during interactions with AI should be particularly salient. That is, these
employees may be sensitive to their inability to meaningfully relate to modern AI systems during
interactions, and thus may feel as if they risk losing important social connections (Brennan et al., 1998). Moreover, attachment-anxious employees may be prone to underestimating their social worth (Simpson & Rholes, 2010). This is problematic, as the inability of AI to provide social validation should further heighten the extent to which attachment-anxious employees see their interactions with AI systems at work as socially depriving (Ainsworth et al., 1978).

⁴ Although we focus on the moderating effect of attachment anxiety, a reviewer highlighted that opposing arguments could potentially be made for attachment avoidance, based on the notion that these individuals may have greater tolerance for an absence of social connections in the workplace. As such, although we focus on attachment anxiety for our primary theorizing, we also develop post-hoc hypotheses for attachment avoidance (please see the arguments and results in the materials available in OSF).
In contrast, those employees with low levels of attachment anxiety not only have more
confidence in their ability to develop social connections with others (Ainsworth, 1985), but also have a more stable sense of their own social worthiness. As a result, compared to their more attachment-anxious counterparts, they should be less sensitive to instances in
which they are unable to obtain strong signals of social connectedness (Simpson et al., 1992).
Given this, the absence of social information from interactions with AI should be less concerning
to these employees (Brennan et al., 1998; Simpson & Rholes, 1998). Overall, the sense of social
loss, deprivation, and isolation that may accompany interactions with AI should be greater for
those with higher levels of attachment anxiety, compared to those with lower levels. Taking these together, along with the mediation hypotheses that we proposed earlier, we hypothesize:
Hypothesis 5: Employee’s attachment anxiety moderates the indirect effect of AI
interaction frequency on employee’s helping behavior via employee’s need for affiliation,
such that this indirect effect will be stronger when attachment anxiety is higher compared
to when attachment anxiety is lower.
Hypothesis 6a: Employee’s attachment anxiety moderates the indirect effect of AI
interaction frequency on employee’s after-work alcohol consumption via employee’s
loneliness, such that this indirect effect will be stronger when attachment anxiety is
higher compared to when attachment anxiety is lower.
Hypothesis 6b: Employee’s attachment anxiety moderates the indirect effect of AI
interaction frequency on employee’s after-work insomnia via employee’s loneliness, such
that this indirect effect will be stronger when attachment anxiety is higher compared to
when attachment anxiety is lower.
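Before turning to the studies, the hypothesized first-stage moderated mediation implied by Hypotheses 1 through 6 can be summarized in standard regression notation (our compact restatement using conventional symbols, not equations taken from the article), where X is AI interaction frequency, W is attachment anxiety, M is the mediator (need for affiliation or loneliness), and Y is the respective outcome:

```latex
\begin{aligned}
M &= a_0 + a_1 X + a_2 W + a_3 (X \times W) + e_M \\
Y &= b_0 + b_1 M + c' X + e_Y \\
\text{conditional indirect effect at } W &= (a_1 + a_3 W)\, b_1
\end{aligned}
```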
Overview of Studies
We conducted four studies that (a) employ different research methodologies (i.e., field
and experimental designs), (b) recruit participants from different industries, and (c) recruit
participants from different geographical regions. In this way our research fits what Chatman and
Flynn (2005) term a “full cycle research approach” (p. 774)—examining a phenomenon in field
and experimental settings to enhance both the internal and external validity of the findings (e.g.,
Koopman et al., 2023; Tang et al., 2022c). Studies 1 and 2 provide an initial test of our model
with employees in a Taiwanese biomedical company (Study 1) and Indonesian real-estate
company (Study 2). Both studies incorporate multiple waves of surveys, as well as obtain reports
from two additional sources (i.e., a coworker and a member of the focal employee’s family).
Studies 3 and 4 then extend the generalizability of these two studies by assessing a broader
spectrum of working adults—working in a range of industries in the United States (Study 3) and
working in multiple business functions in a Malaysian technology services company (Study 4).
Together, these four studies robustly test our hypotheses across employees working in different
jobs, industries, and national cultures (Yam et al., 2023). Our studies were approved by the
National Sun Yat-sen University’s Departmental Review Committee (#200604; #1110414-02),
as well as Texas A&M University’s Institutional Review Committee (#IRB2021-0093M).
Transparency and Openness
We affirm that our study methods adhered to the Journal of Applied Psychology
methodological checklist. All measurement items and supplementary materials are available on
the Open Science Framework (OSF) website at
https://osf.io/acym6/?view_only=6dac73046224421c8af6781f733b3f95. Data were analyzed
using SPSS version 28, R Studio, and Mplus version 7.4. Study designs were not preregistered.
Study 1 Method
Sample & Procedure
We collected data in spring 2021 from engineers in a Taiwanese biomedical company
whose primary responsibilities include working with AI systems to design, test, and implement
new procedures and equipment (e.g., Park et al., 2018; Zhu, 2020). All engineers received an
announcement describing the study (including the need for a cohabiting family member to
participate in the study as well). Data were collected over three time points, each separated by
one week. In the first week (T1), employees reported their interaction frequency with AI,
interaction frequency and quality with coworkers (controls), and their attachment anxiety and
avoidance (controls). In the second week (T2), employees reported their need for affiliation and
loneliness (mediators) and their belongingness and relatedness needs (controls). In the third week
(T3), a coworker reported the focal employee’s helping behavior (coworkers reported only one
employee and were chosen for their work proximity to the focal employee; Tang et al., 2020),
and a cohabiting family member reported the employee’s after-work alcohol consumption and
insomnia. Overall, 166 engineers (53% male) completed the study. Average age was 34.3 years
old (SD = 5.75), average tenure was 2.96 years (SD = 1.62), and average years of using AI
systems at work was 2.14 years (SD = 0.91). Most (88%) were tertiary educated.
Measures⁵
Measures were translated into participants’ native language using recommended back-
translation procedures (Brislin, 1980). All measures, except attachment anxiety and avoidance,
asked employees to respond as appropriate “over the last week.” Anchors and wording for all
measures are in OSF and reliabilities are given with our descriptive statistics. At T1, we
measured attachment anxiety and avoidance (5 items each; Simpson et al., 1996), interaction
frequency with AI and coworkers (3 items each, adapted for the particular referent; Shi et al.,
2013), and interaction quality with coworkers (3 items; Mallet et al., 2008). At T2, we measured
need for affiliation (3 items; Hill, 1987; Wiesenfeld et al., 2001), loneliness (3 items; Gabriel et
al., 2020), need for belongingness (5 items; Puranik et al., 2021), and need for relatedness (3 items; LaGuardia et al., 2000). At T3, the focal employee's helping was rated by a coworker (3 items; Yue et al., 2017), while alcohol consumption (multiplying 2 items that assess how many [a] days alcohol was consumed and [b] drinks were consumed; Bacharach et al., 2010) and insomnia (4 items; Greenberg, 2006) were rated by a family member.

⁵ Please see our OSF repository for a series of content validation studies conducted for several of these measures.
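To make the T3 alcohol-consumption composite concrete, it is simply the product of the two family-rated items; the following one-line R sketch uses hypothetical column names rather than the actual item labels:

```r
# Hypothetical sketch: weekly alcohol-consumption composite formed as the
# product of (a) the number of days alcohol was consumed and (b) the number
# of drinks consumed.
study1$alcohol_consumption <- study1$days_drank * study1$num_drinks
```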
Analytic Strategy
We tested our model with and without control variables (Becker, 2005). Primary results
for all studies reflect models without control variables. The OSF repository shows results with
controls, as well as other post-hoc analyses conducted following recommendations from the
review team. A confirmatory factor analysis (CFA) on our primary model (attachment anxiety,
interaction frequency with AI, need for affiliation, loneliness, helping, and insomnia—alcohol
consumption was not included because it is a discrete number) indicated adequate model fit (χ2 =
272.03, df = 174, CFI = .95, RMSEA = .06, SRMR = .05). We used the Mplus default maximum
likelihood estimator for our model and tested all hypotheses simultaneously.⁶ Mediation and
moderated mediation were tested with a parametric bootstrap (with 20,000 replications to form
95% bias-corrected confidence intervals; Preacher et al., 2010; Selig & Preacher, 2008).
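To make this estimation approach concrete, the following is a minimal R sketch of the parametric (Monte Carlo) bootstrap described above (Selig & Preacher, 2008): path estimates are drawn from their sampling distributions and the resulting products form an empirical confidence interval for the indirect effect. The coefficient values and object names are illustrative placeholders, not estimates or code from the studies, and the sketch reports a simple percentile interval rather than the bias-corrected interval used in the paper.

```r
# Illustrative Monte Carlo (parametric bootstrap) CI for an indirect effect a*b;
# the values below are placeholders, not study estimates.
set.seed(1)
R <- 20000                                # replications, as in the paper

a <- 0.30; se_a <- 0.10                   # X -> M path estimate and standard error
b <- 0.40; se_b <- 0.10                   # M -> Y path estimate and standard error

a_draws <- rnorm(R, mean = a, sd = se_a)  # simulate the sampling distributions
b_draws <- rnorm(R, mean = b, sd = se_b)
indirect <- a_draws * b_draws             # empirical distribution of the indirect effect

quantile(indirect, probs = c(.025, .975)) # 95% Monte Carlo confidence interval
```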
Study 1 Results
Table 1 presents descriptive statistics for study variables, and Table 2 provides path-
analytic results of our primary model. In support of Hypothesis 1, interaction frequency with AI
was positively associated with need for affiliation (B = .34, p < .001). Need for affiliation was
positively associated with helping behavior (B = .35, p < .001) and the indirect effect confidence
interval excluded zero (indirect effect = .119, 95% CI [.061, .202]), supporting Hypothesis 2. Supporting Hypothesis 3, interaction frequency with AI was positively associated with loneliness (B = .28, p = .004). Loneliness was positively associated with after-work alcohol consumption (B = 1.28, p = .033) and after-work insomnia (B = .34, p < .001). Hypotheses 4a (indirect effect = .363, 95% CI [.073, .866]) and 4b (indirect effect = .097, 95% CI [.030, .198]) were supported.

⁶ The company's structure places employees in different units, which results in a nested data structure. We therefore used the "COMPLEX" analysis in Mplus 7.4 (Muthén & Muthén, 2015) to account for non-independence at the unit level. This approach allows intercepts to vary across clusters (Hofmann, 1997) and uses a sandwich estimator (Muthén & Satorra, 1995) to calculate robust standard errors (for recent examples, see Frieder et al., 2018; Yoon et al., 2021).
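As a rough R analogue of the design-based correction described in the footnote above (a sketch under assumed data and variable names, not the authors' Mplus code), cluster-robust standard errors for a single structural path can be obtained with the estimatr package:

```r
# Hypothetical sketch: cluster-robust (sandwich) standard errors for the
# AI interaction frequency -> need for affiliation path, with employees nested
# in units. This approximates, but is not identical to, Mplus's TYPE = COMPLEX
# adjustment applied in the paper.
library(estimatr)

fit <- lm_robust(
  need_affiliation ~ ai_interaction_freq,  # single path from the model
  data     = study1,                       # hypothetical data frame
  clusters = unit_id                       # CR2 cluster-robust SEs by unit
)
summary(fit)
```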
Attachment anxiety moderated the relationship between AI interaction frequency and
need for affiliation (B = .07, p = .037; Figure 2A). Specifically, the relationship between AI
interaction frequency and need for affiliation was stronger at higher (+1 SD) levels of attachment
anxiety (B = .44, p < .001), compared to lower (-1 SD) levels (B = .23, p = .002). The difference
between those slopes was also significant (difference = .21, p = .045). On this basis, Hypothesis 5 was supported: the indirect effect of AI interaction frequency on employee's helping behavior via need for affiliation was stronger at higher levels of attachment anxiety (conditional indirect effect = .155; 95% CI [.076, .271]) than at lower levels (conditional indirect effect = .082; 95% CI [.033, .158]), and a confidence interval for the difference in these indirect effects excluded zero (95% CI [.003, .059]). Attachment
anxiety did not moderate the relationship between AI interaction frequency and loneliness (B
= .06, p = .367), so we do not report confidence intervals as these hypotheses are unsupported.
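To show how the conditional effects reported above fit together, here is a small R sketch of the first-stage moderated-mediation arithmetic: the simple slope of AI interaction frequency on the mediator is evaluated at plus and minus one standard deviation of attachment anxiety and then multiplied by the mediator-to-outcome path. All coefficient and SD values are illustrative placeholders rather than the study's estimates.

```r
# Illustrative arithmetic for conditional indirect effects in first-stage
# moderated mediation; the numbers are placeholders, not results from Study 1.
a1   <- 0.30   # main effect of AI interaction frequency on the mediator
a3   <- 0.07   # AI interaction frequency x attachment anxiety interaction
b1   <- 0.35   # mediator -> outcome path
sd_w <- 1.00   # SD of (mean-centered) attachment anxiety

slope_high <- a1 + a3 * sd_w    # simple slope at +1 SD of the moderator
slope_low  <- a1 - a3 * sd_w    # simple slope at -1 SD of the moderator

cie_high <- slope_high * b1     # conditional indirect effect at +1 SD
cie_low  <- slope_low  * b1     # conditional indirect effect at -1 SD
cie_high - cie_low              # difference evaluated with the Monte Carlo CI
```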
Discussion of Study 1 Findings
Results from Study 1 support most of our study hypotheses. Specifically, we found that
interacting with AI at work leads to heightened experiences of need for affiliation and loneliness,
which triggers a series of adaptive (i.e., helping at work) and maladaptive behaviors (i.e., alcohol
consumption and insomnia after work). However, we only found support for the moderating role
of attachment anxiety on the adaptive mechanism (i.e., need for affiliation). Notably, our model
was robust to the inclusion or exclusion of a number of control variables (interaction frequency
and quality with coworkers, need for belongingness and relatedness, and attachment avoidance).
Yet despite these positive aspects, there are important limitations of Study 1 as well.
First, while the field design of Study 1 is useful for establishing external validity, it is
limited in its ability to establish internal validity. Although we employed time separation for the
measures and obtained other-reports, an experimental design may allow us to further draw causal
inferences from our model (Hekman et al., 2017; Liang et al., 2018). Second, we have only
assessed employees in a single job who work with one type of AI software (biomedical engineers
based in Taiwan), which may limit the generalizability of our findings. Third, attachment anxiety
did not moderate the relationship between interaction frequency with AI and loneliness, which
necessitates further examination (and further, the moderation effect of attachment anxiety on the
relationship between interaction frequency with AI and need for affiliation had some instability,
as the results weakened when the interactive effect of attachment avoidance was added in the
post-hoc hypothesis test). Thus, we conducted a field experiment with employees in another
industry who use another type of AI system (Dvir et al., 2002; Lee et al., 2012).
Study 2 Method
Sample & Procedure
We collected data in spring 2021 from real estate consultants in an Indonesian property
management company whose primary responsibilities include working with AI systems to
perform customer portfolio matching (i.e., find the perfect match for people looking to buy, rent,
or sell their properties) and property price estimation (Mather, 2019). All 142 consultants were
given a briefing describing the study (including the need for a cohabiting family member, as in
Study 1). The 126 participating employees completed a survey (T1) with measures of attachment
anxiety and avoidance. Participants were then randomly assigned to one of two conditions. For three
consecutive days, we instructed employees to either collaborate with AI systems as much as
possible (AI condition) or not to use AI when performing their job duties (control condition).
After three work-days, we sent a post-manipulation survey (T2) with measures of need
for affiliation and loneliness, interaction frequency and quality with coworkers, and belongingness and relatedness needs. Employees also rated manipulation check items regarding
interaction frequency with AI. At the same time, a coworker assessed the employee’s helping
behavior and a cohabiting family member assessed the focal employee’s after-work alcohol
consumption and insomnia. Overall, 120 consultants (61.7% male) completed the study. Average
age was 30.7 years old (SD = 6.20), average tenure was 3.03 years (SD = 1.61), and average
years using AI systems at work was 1.95 years (SD = 4.03). Most (80.9%) were tertiary educated.
Measures
We followed the same back-translation procedures as in Study 1. All measures, except
attachment anxiety and avoidance, asked employees to respond as appropriate “over the last
three days.” Anchors and wording for all measures are in OSF and reliabilities are given with our
descriptive statistics. Attachment anxiety and avoidance were measured at T1 as in Study 1. At
T2, we used the items from Study 1 to measure interaction frequency with AI, interaction
frequency and quality with coworkers, need for affiliation, loneliness, need for belongingness,
and relatedness needs. We used the items from Study 1 to measure the focal employee’s helping
behavior from a coworker and both alcohol consumption and insomnia from a family member.
Analytic Strategy
A CFA on our primary model (attachment anxiety, need for affiliation, loneliness,
helping, and insomnia—our independent variable was not included as it is an experimental
condition and alcohol consumption was not included as it is a discrete number) indicated
adequate model fit (χ2 = 164.97, df = 125, CFI = .98, RMSEA = .05, SRMR = .05). We thus
proceeded with model testing following the same procedures as in Study 1.
Study 2 Results
Table 3 presents the means, standard deviations, and correlations of our study variables,
and Table 4 provides path-analytic results of our primary model. First, we conducted a one-way
Analysis of Variance (ANOVA) to examine the effect of our manipulation. Responses to the
manipulation check items differed significantly between the AI condition (M = 5.29, SD = 1.29)
and the control condition (M = 3.06, SD = 1.58; t[118] = 8.47, p < .001, d = 1.56). Overall, our manipulation was deemed effective, and thus we proceeded to hypothesis testing.
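As an illustration of this type of manipulation check (a sketch under assumed data and variable names, not the authors' code), the condition comparison and Cohen's d can be computed in R as follows:

```r
# Hypothetical sketch: compare manipulation-check ratings across the AI and
# control conditions and compute Cohen's d from the pooled standard deviation.
check <- d$mc_ai_interaction             # manipulation-check rating (hypothetical name)
cond  <- d$condition                     # factor: "AI" vs. "control" (hypothetical name)

t.test(check ~ cond, var.equal = TRUE)   # pooled-variance t test (df = n1 + n2 - 2)

m <- tapply(check, cond, mean)
s <- tapply(check, cond, sd)
n <- tapply(check, cond, length)
sd_pooled <- sqrt(((n[1] - 1) * s[1]^2 + (n[2] - 1) * s[2]^2) / (sum(n) - 2))
as.numeric(abs(m[1] - m[2]) / sd_pooled) # Cohen's d
```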
Supporting Hypothesis 1, AI condition was positively associated with need for affiliation
(B = 2.24, p < .001). Supporting Hypothesis 2, the indirect effect between AI condition and
helping via need for affiliation was positive and significant. That is, need for affiliation was
positively associated with helping behavior (B = .37, p = .002), and the indirect effect confidence
interval excluded zero (indirect effect = .826, 95% CI [.301, 1.457]). Supporting Hypothesis 3,
AI condition was positively associated with loneliness (B = 2.10, p = .004). Hypothesis 4a predicted that the relationship between AI condition and after-work alcohol consumption is mediated
by loneliness. Loneliness was not significantly associated with after-work alcohol consumption
(B = .27, p = .159), and a confidence interval for the indirect effect included zero (indirect effect
= .558, 95% CI [-.209, 1.380]), so Hypothesis 4a was not supported. Supporting Hypothesis 4b,
the indirect effect between AI condition and after-work insomnia via loneliness was positive
and significant. That is, loneliness was positively associated with after-work insomnia (B = .42, p
< .001), and a confidence interval excluded zero (indirect effect = .886, 95% CI [.404, 1.451]).
Attachment anxiety moderated the relationship between AI interaction frequency and need for
affiliation (B = .83, p < .001; Figure 2B). This relationship was stronger at higher (+1 SD) levels
of attachment anxiety (B = 3.15, p < .001), compared to lower (-1 SD) levels (B = 1.33, p
= .002). The difference between those slopes was also significant (difference = 1.82, p < .001).
On this basis, Hypothesis 5 was supported—the indirect effect was stronger at higher levels of
attachment anxiety (conditional indirect effect = 1.161; 95% CI [.452, 2.001]) compared to lower
levels (conditional indirect effect = .491; 95% CI [.171, 1.013]). A confidence interval for the
difference in these indirect effects excluded zero (95% CI [.106, .639]). Attachment anxiety also
moderated the relationship between AI interaction frequency and loneliness (B = .85, p < .001;
Figure 3A). This relationship was stronger at higher levels of attachment anxiety (B = 3.04, p
< .001), compared to lower levels (B = 1.17, p = .002). The difference between those slopes was
also significant (difference = 1.86, p < .001). As loneliness was not associated with alcohol
consumption, Hypothesis 6a was not supported, so we do not report the confidence intervals.
Hypothesis 6b, however, was supported: the indirect effect of AI
condition on employee’s after-work insomnia via loneliness was stronger at higher levels of
attachment anxiety (conditional indirect effect = 1.278; 95% CI [.603, 2.095]) compared to lower
levels (conditional indirect effect = .493; 95% CI [.188, .988]). A confidence interval for the
difference in these indirect effects excluded zero (95% CI [.147, .679]).
Discussion of Study 2 Findings
Study 2 constructively replicated our findings from Study 1 in a more controlled field
experimental setting with employees working in another industry (real estate) and
another country (Indonesia). The results of this study—combined with that of Study 1—provide
further evidence with regard to (a) the robustness of our findings (by taking various confounding
variables into account—though we note the non-significant link between loneliness and alcohol
consumption); (b) completeness of our hypothesized model (examining two separate
mechanisms by which interacting with AI influences a range of work and non-work outcomes);
(c) the generalizability of our findings to employees working in a different type of job, with a
different type of AI, in a different country; (d) validity of our findings via collecting reports from
three different sources (i.e., focal employees, coworkers, and family members). That said, there
are some remaining concerns that both Studies 1 and 2 are not able to thoroughly address.
First, despite the evidence for generalizability that we have provided from the convergent
findings in two very different samples, a commonality among them is that we obtained a report
of alcohol consumption and insomnia from a family member living with the employee. While
not intentional, our results are thus implicitly constrained only to employees who cohabit with
a family member (and thus it is unclear whether we can generalize to employees who live alone).
Further, although the experimental design of Study 2 allows us to exert more control over the
sample, both Studies 1 and 2 are conducted in the field. There are inherently some idiosyncratic
issues that may arise for a given employee in a given period of time over which we do not have
control. Thus, further evidence for both internal and external validity could be obtained by
replicating our findings in a highly controlled laboratory design, as this would allow us to rule
out any potential idiosyncratic confounds across people and jobs (as well as again show that our
model can be applied to an entirely different form of AI). Another benefit of this approach is that
we can recruit a sample of employees from a broad spectrum of jobs and industries, which would
further highlight the generalizability of our findings (and by not restricting the sample to those
employees who cohabit with a family member, we can address the issue noted previously).
Second, a component of our theorizing rests on the assumption that employees view
AI as a coworker. This perspective is widely held in AI research across a number of different
literatures (e.g., Gerrish, 2018; Kelly III & Hamm, 2013; Kujala & Saariluoma, 2018; Shadbolt
& Hampson, 2019)—organizational scholarship included (D’Haussy, 2018; Kolbjørnsrud et al.,
2016; Nyholm & Smids, 2020). Specifically, as Tang et al. (2022b) recently note, “in the modern
workplace, however, intelligent machines [e.g., AI] are increasingly seen as coworkers of human
employees” and thus AI can be regarded as “both independent and co-dependent interaction
partners at work” (p. 1023). Practitioners acknowledge this as well, as several publications in this
space have noted the increasing trend of AI becoming the "new colleague" of employees (i.e.,
Mittal et al., 2019; Wilson & Daugherty, 2018). With this said, we felt that we should control for
employee attitudes towards AI as a way to further increase the robustness of our findings. Thus,
to address these two limitations, we conducted an online simulation-based study.
Study 3 Method
Sample & Procedure
We collected data in spring 2022 from 214 full-time working adults (32.7% male) in the
United States through Prolific who work in a broad spectrum of industries and job positions (see
Table 5). Overall, 105 participants were randomly assigned to the interaction with AI condition
while 109 were assigned to the control condition. Average age was 33.97 years (SD = 11.91) and
average tenure was 9.71 years (SD = 11.41). Participants, on average, spent 27.81 minutes on the
study. Before beginning the study, participants reported their attachment anxiety and avoidance,
along with their attitude toward AI (a new control in this study). Participants then proceeded to a
business simulation experiment (adapted from Ederer & Manso, 2013; for an example, see Tang
et al., 2023) wherein they were informed that they were going to provide consulting services for
a lemonade stand business. In this study, the simulation involved three different rounds. In each round, participants made recommendations to the client on how to run the lemonade stand in terms of its location, sugar and lemon content, color, and price.
Participants had 3 choices for location, 2 for color, 10 each for sugar and lemon content, and 125
for price (i.e., $0 to $12.50 in increments of $.10).
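For readers who wish to see this decision space concretely, the minimal sketch below (in Python) enumerates the options described above. The option labels and variable names are illustrative assumptions, not the study's actual materials.

# Purely illustrative sketch of the per-round decision space described above;
# option labels are assumptions rather than the study's actual materials.
LOCATIONS = ("park", "school", "business district")      # 3 location options (labels assumed)
COLORS = ("pink", "green")                               # 2 color options (labels assumed)
SUGAR_LEVELS = tuple(range(1, 11))                       # 10 sugar-content levels
LEMON_LEVELS = tuple(range(1, 11))                       # 10 lemon-content levels
PRICES = tuple(round(0.10 * i, 2) for i in range(126))   # $0.00 to $12.50 in $0.10 increments

# One round's recommendation can then be represented as a simple mapping.
recommendation = {
    "location": LOCATIONS[0],
    "color": COLORS[1],
    "sugar": SUGAR_LEVELS[3],
    "lemon": LEMON_LEVELS[5],
    "price": PRICES[20],  # $2.00
}
print(recommendation)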
In the interaction with AI condition, participants interacted with an AI that provided additional
information to help develop their recommendations. The AI interacted conversationally with
participants through an embedded video in the survey powered by an artificial-intelligence-based
text-to-voice service—Amazon Polly. As participants went through the task, the AI assisted by
offering information about different aspects of the lemonade (e.g., the implications of different levels of sugar and lemon content). Once participants made recommendations, the AI combined its
knowledge of the lemonade business with participants’ choices to provide further advice (using
formulas from Ederer & Manso, 2013). Finally, participants submitted their final proposal.
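As an aside for readers interested in the underlying technology, the sketch below illustrates how spoken advice can be generated with Amazon Polly via the boto3 SDK. The advice text and voice are illustrative assumptions; this is not the study's actual implementation, which embedded the resulting audio in a survey video.

# Illustrative sketch only: generating spoken AI advice with Amazon Polly (boto3).
# Requires valid AWS credentials; the advice text and voice are assumptions.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

advice_text = (
    "A higher sugar content tends to appeal to younger customers, "
    "while a higher price is easier to sustain in the business district."
)  # hypothetical example of AI advice, not taken from the study materials

response = polly.synthesize_speech(
    Text=advice_text,
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's standard English voices
)

with open("ai_advice.mp3", "wb") as f:
    f.write(response["AudioStream"].read())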
In the control condition, everything was the same as the AI condition, except participants
completed the task themselves without interacting with AI. Of note, in both conditions,
participants completed the first three rounds wherein they provided advice to their client. At the
end of the third round, they reported their current need for affiliation, loneliness, need for
belongingness (control), relatedness need (control), intention to help colleagues at work,
intention to consume alcohol after work, as well as the likelihood of having difficulty sleeping
after work. Finally, participants rated items pertaining to manipulation checks as well as
psychological realism.
Measures
All measures, except attachment anxiety and avoidance, asked participants to respond as
appropriate “right now.” Anchors and wording for all measures are in OSF, and reliabilities are
given with our descriptive statistics. Attachment anxiety and avoidance were measured as in the
prior studies. At the same time, we measured attitudes toward AI (12 items; Schepman &
Rodway, 2020). After the task, we used the same items from the prior studies to measure need
for affiliation, loneliness, belongingness needs, need for relatedness, and interaction frequency
with AI. Given the experimental nature of this study, in the post-task survey we were only able to assess participants’ intentions to (a) help coworkers in the rest of the workday and (b) consume alcohol after work, as well as (c) the likelihood that participants would have insomnia that evening. We used the same items as in the previous studies to measure these constructs.

7 Following Tang et al. (2022b), we assessed the psychological realism of the simulation by asking participants several questions. Specifically, participants assigned to the AI interaction condition were asked to evaluate the psychological realism of the task with three items from Farh et al. (2017). Approximately 52.4% of participants at least somewhat agreed (i.e., rating 5, 6, or 7; 1 = strongly disagree, 7 = strongly agree) with the item “It is realistic that I might work with the AI in this task” (M = 4.14, SD = 1.80), 55.2% at least somewhat agreed with the item “It is realistic that I might experience similar interactions with the AI that I just experienced in the task” (M = 4.26, SD = 1.65), and 46.7% at least somewhat agreed with the item “At some point during my career, I will probably encounter a situation like I just experienced in the task” (M = 4.02, SD = 1.64).
Analytic Strategy
A CFA on our primary model (identical to Study 2) indicated adequate fit (χ2 = 226.10,
df = 125, CFI = .96, RMSEA = .06, SRMR = .06). We thus tested our model as in prior studies.
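For readers less familiar with these fit indices, the sketch below shows the conventional formulas for RMSEA and CFI computed from model and baseline chi-square values. The baseline-model values in the example are placeholders; the indices we report come from our modeling software.

# Minimal sketch of the conventional RMSEA and CFI formulas. Note that some
# programs use N rather than N - 1 in the RMSEA denominator.
import math

def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_baseline, df_baseline):
    return 1.0 - max(chi2 - df, 0.0) / max(chi2_baseline - df_baseline, chi2 - df, 0.0)

print(round(rmsea(226.10, 125, 214), 2))        # approximately .06, as reported above
print(round(cfi(226.10, 125, 2800.0, 153), 2))  # baseline values are hypothetical; illustrative only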
Study 3 Results
Table 6 presents the means, standard deviations, and correlations of our study variables,
and Table 7 provides path-analytic results of our primary model. First, we conducted a one-way Analysis of Variance (ANOVA) to examine the effect of our manipulation. Responses to the manipulation check items differed significantly between the AI condition (M = 5.37, SD = 1.50) and the control condition (M = 2.14, SD = 1.11; t[212] = 17.97, p < .001, d = 2.46). Overall, our manipulation was deemed effective, and we next proceeded to hypothesis testing.
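For illustration, the logic of this check (with two conditions, a one-way ANOVA is equivalent to an independent-samples t test, which is why a t statistic and Cohen's d are reported) can be sketched as follows. The score vectors below are simulated placeholders rather than our data.

# Sketch of the manipulation check: independent-samples t test plus Cohen's d.
# ai_scores and control_scores are simulated placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ai_scores = rng.normal(5.37, 1.50, 105)
control_scores = rng.normal(2.14, 1.11, 109)

result = stats.ttest_ind(ai_scores, control_scores)

# Cohen's d using the pooled standard deviation
n1, n2 = len(ai_scores), len(control_scores)
pooled_sd = np.sqrt(((n1 - 1) * ai_scores.var(ddof=1) +
                     (n2 - 1) * control_scores.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (ai_scores.mean() - control_scores.mean()) / pooled_sd
print(result.statistic, result.pvalue, cohens_d)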
Supporting Hypothesis 1, AI condition was positively associated with need for affiliation
(B = .82, p < .001). Supporting Hypothesis 2, the indirect effect between AI condition and
helping via need for affiliation was positive and significant. That is, need for affiliation was
positively associated with helping behavior (B = .28, p < .001), and the indirect effect confidence
interval excluded zero (indirect effect = .226, 95% CI [.106, .387]). Supporting Hypothesis 3, AI
condition was positively associated with loneliness (B = .53, p = .001). Supporting Hypothesis
4a, the indirect effect between the AI condition and after-work alcohol consumption via loneliness was positive and significant. That is, loneliness was positively associated with after-work alcohol consumption (B = .40, p < .001), and the confidence interval excluded zero (indirect
effect = .213, 95% CI [.082, .402]). Supporting Hypothesis 4b, the indirect effect between AI
CONSEQUENCES OF WORK INTERACTION WITH AI
29
29
condition and after-work insomnia via loneliness was positive and significant. That is,
loneliness was positively associated with after-work insomnia (B = .33, p < .001), and the
confidence interval excluded zero (indirect effect = .171, 95% CI [.064, .341]).
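The confidence intervals for these indirect effects follow the familiar product-of-coefficients logic. As one common way of obtaining such intervals, the sketch below illustrates a percentile bootstrap with simulated placeholder data; the estimates we report come from the path-analytic models summarized in Table 7.

# Sketch of a percentile-bootstrap CI for an indirect effect (a * b).
# x (condition), m (mediator), and y (outcome) are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 214
x = rng.integers(0, 2, n).astype(float)   # 0 = control, 1 = AI condition
m = 0.8 * x + rng.normal(size=n)          # e.g., need for affiliation
y = 0.3 * m + rng.normal(size=n)          # e.g., helping intentions

def slopes(dv, predictors):
    # OLS coefficients (intercept excluded) via least squares
    design = np.column_stack([np.ones(len(dv)), predictors])
    return np.linalg.lstsq(design, dv, rcond=None)[0][1:]

boot_products = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                                   # resample with replacement
    a = slopes(m[idx], x[idx])[0]                                 # x -> m path
    b = slopes(y[idx], np.column_stack([m[idx], x[idx]]))[0]      # m -> y, controlling for x
    boot_products.append(a * b)

print(np.percentile(boot_products, [2.5, 97.5]))                  # 95% percentile CI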
Attachment anxiety moderated the relationship between AI interaction frequency and need for
affiliation (B = .29, p = .005; Figure 2C). This relationship was stronger at higher (+1 SD) levels
of attachment anxiety (B = 1.18, p < .001), compared to lower (-1 SD) levels (B = .45, p = .015).
The difference between those slopes was also significant (difference = .74, p = .005). On this basis, Hypothesis 5 was supported, as the indirect effect was stronger at higher levels of attachment
anxiety (conditional indirect effect = .328; 95% CI [.153, .570]) compared to lower levels
(conditional indirect effect = .124; 95% CI [.030, .282]). A confidence interval for the difference
in these indirect effects excluded zero (95% CI [.024, .171]).
Attachment anxiety also moderated the relationship between AI interaction frequency and
loneliness (B = .32, p = .008; Figure 3B). This relationship was stronger at higher levels of
attachment anxiety (B = .93, p < .001), compared to lower levels (B = .12, p = .575). The
difference between those slopes was also significant (difference = .814, p = .008). Hypothesis 6a
was supported, as the indirect effect of AI condition on after-work alcohol consumption was
stronger at higher levels of attachment anxiety (conditional indirect effect = .377; 95% CI
[.177, .680]) compared to lower levels (conditional indirect effect = .048; 95% CI [-.116, .240]).
A confidence interval for the difference in these effects excluded zero (95% CI [.037, .268]).
Hypothesis 6b was also supported, as the indirect effect of AI condition on after-work insomnia
via loneliness was stronger at higher levels of attachment anxiety (conditional indirect effect
= .303; 95% CI [.130, .572]) compared to lower levels (conditional indirect effect = .039; 95%
CI [-.091, .200]). A confidence interval for the difference in these effects excluded zero (95% CI
[.028, .226]).
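The simple slopes and conditional indirect effects above follow the standard computations for first-stage moderated mediation; the sketch below illustrates that logic with placeholder coefficients (not our estimates).

# Sketch of simple slopes and conditional indirect effects for first-stage
# moderated mediation: m = a0 + a1*x + a2*w + a3*(x*w), and y = b0 + b*m + ...
# All coefficient values below are placeholders, not our estimates.
a1, a3 = 0.45, 0.29   # x -> m slope at the moderator's mean, and the x*w interaction
b = 0.28              # m -> y path
sd_w = 1.0            # SD of the mean-centered moderator (attachment anxiety)

for label, w in (("-1 SD", -sd_w), ("+1 SD", +sd_w)):
    simple_slope = a1 + a3 * w                 # effect of x on m at this level of w
    conditional_indirect = simple_slope * b    # indirect effect of x on y via m
    print(label, round(simple_slope, 2), round(conditional_indirect, 3))

# The difference between the two conditional indirect effects equals a3 * (2 * sd_w) * b.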
Discussion of Study 3 Findings
Study 3 provided further evidence for our hypotheses in a controlled online sample of working adults in the United States employed in a wide spectrum of jobs and industries (Table 7).
Further, this study added an important control variable—attitudes toward AI—and again used a
different form of AI, thus adding more evidence for the generalizability of our model (along with
not restricting the sample to employees who cohabitate with a family member, which addresses
that lingering concern from Studies 1 and 2). In this study, all hypotheses received support.
However, we want to be cautious with regard to the dependent variables. That is, given the
experimental nature of our study, we could only assess these as intentions (Brutus et al., 2010).
Notably, intentions are one of the most powerful predictors of subsequent behavior (Ajzen &
Fishbein, 1977; Eagly & Chaiken, 1993). Nevertheless, this remains a limitation. For this reason, we
returned to the field to conduct an experiment with employees from different functional units
(i.e., multiple areas of the business who use different forms of AI). As with Study 3, we do not
restrict this sample to those who cohabitate with a family member, and we measure all controls.
Study 4 Method
Sample & Procedure
We collected data in spring 2022 from employees in different business functions (finance, operations, marketing, and accounting) at a Malaysian technology company. These employees’ primary responsibilities include working with AI systems (which, of note, differ across functions) to execute various job duties associated with their respective roles. For example, employees in the finance unit interact with AI systems to forecast financial performance and develop budgets, while employees in the operations unit interact with AI systems to coordinate
resource allocation processes. All 316 employees were given a briefing to describe the study
(which, unlike Study 2, did not require a cohabitating family member). The 294 participating
employees completed a survey (T1) with measures of attachment anxiety and avoidance as well
as attitudes toward AI. As in Study 2, participants were randomly assigned to two conditions and
instructed to either collaborate with AI systems as much as possible (AI condition) or not to use
AI when performing their job duties (control condition) for three consecutive days.
After three work-days, we sent a post-manipulation survey (T2) with measures of need
for affiliation and loneliness, interaction frequency and quality with coworkers, and need for
belongingness and relatedness needs. Employees also rated manipulation check items regarding
interaction frequency with AI, as well as their own after-work alcohol consumption and
insomnia. At the same time, a coworker assessed the employee’s helping behavior. Overall, 294 employees (47.3% male) completed the study. Average age was 37.9 years (SD = 7.77),
average tenure was 3.18 years (SD = 1.85), and average years of using AI systems at work was
1.63 years (SD = .78). Most (62.3%) were tertiary educated.
Measures
We followed the same back-translation procedures as in Study 1. All measures, except
attachment anxiety and avoidance, asked employees to respond as appropriate “over the last
three days.” Anchors and wording for all measures are in OSF and reliabilities are given with our
descriptive statistics. Attachment anxiety and avoidance, as well as attitudes toward AI, were
measured at T1 as in Study 1. At T2, we used the items from Study 1 to measure interaction
frequency with AI, interaction frequency and quality with coworkers, need for affiliation,
loneliness, need for belongingness, and relatedness needs. We also used the items from Study 1 for the coworker-rated assessment of the focal employee’s helping behavior, and employees self-reported their alcohol consumption and insomnia.
Analytic Strategy
A CFA on our primary model (identical to Study 2) indicated adequate fit (χ2 = 375.48,
df = 125, CFI = .95, RMSEA = .08, SRMR = .04). We thus tested our model as in prior studies.
Study 4 Results
Table 8 presents the means, standard deviations, and correlations of our study variables,
and Table 9 provides path-analytic results of our primary model. First, we conducted a one-way Analysis of Variance (ANOVA) to examine the effect of our manipulation. Responses to the manipulation check items differed significantly between the AI condition (M = 5.09, SD = .83) and the control condition (M = 3.52, SD = 1.45; t[292] = 11.46, p < .001, d = 1.34). Overall, our manipulation was deemed effective, and we next proceeded to hypothesis testing.
Supporting Hypothesis 1, AI condition was positively associated with need for affiliation
(B = 1.47, p < .001). Supporting Hypothesis 2, the indirect effect between AI condition and
helping via need for affiliation was positive and significant. That is, need for affiliation was
positively associated with helping behavior (B = .10, p = .010), and the indirect effect confidence
interval excluded zero (indirect effect = .153, 95% CI [.037, .285]). Supporting Hypothesis 3, AI
condition was positively associated with loneliness (B = .70, p < .001). Supporting Hypothesis 4a, the indirect effect between the AI condition and after-work alcohol consumption via loneliness was positive and significant. That is, loneliness was positively associated with after-work alcohol consumption (B = .53, p = .001), and the confidence interval excluded zero (indirect effect = .368, 95% CI [.149, .693]). Supporting Hypothesis 4b, the indirect effect between the AI condition and after-work insomnia via loneliness was positive and significant. That is, loneliness was positively associated with after-work insomnia (B = .39, p < .001), and the confidence interval excluded zero (indirect effect = .273, 95% CI [.146, .441]).
Attachment anxiety moderated the relationship between AI interaction frequency and need for
affiliation (B = .27, p = .013; Figure 2D). This relationship was stronger at higher (+1 SD) levels
of attachment anxiety (B = 1.86, p < .001), compared to lower (-1 SD) levels (B = 1.07, p
< .001). The difference between those slopes was significant (difference = .79, p = .013). On this basis, Hypothesis 5 was supported, as the indirect effect was stronger at higher levels of attachment
anxiety (conditional indirect effect = .194; 95% CI [.051, .359]) compared to lower levels
(conditional indirect effect = .112; 95% CI [.030, .229]). A confidence interval for the difference
in these indirect effects excluded zero (95% CI [.005, .071]). Attachment anxiety also moderated
the relationship between AI interaction frequency and loneliness (B = .29, p = .006; Figure 3C).
This relationship was stronger at higher levels of attachment anxiety (B = 1.12, p < .001),
compared to lower levels (B = .27, p = .208). The difference between those slopes was also
significant (difference = .85, p = .006). Thus, Hypothesis 6a was supported, as the indirect effect of the AI condition on employees’ after-work alcohol consumption was stronger at
higher levels of attachment anxiety (conditional indirect effect = .592; 95% CI [.242, 1.106])
compared to lower levels (conditional indirect effect = .145; 95% CI [-.054, .457]). The
confidence interval for the difference in these effects excluded zero (95% CI [.045, .339]).
Hypothesis 6b was also supported, as the indirect effect of AI condition on after-work insomnia
via loneliness was stronger at higher levels of attachment anxiety (conditional indirect effect
= .438; 95% CI [.249, .634]) compared to lower levels (conditional indirect effect = .107; 95%
CI [-.055, .294]). A confidence interval for the difference in these effects excluded zero (95% CI
[.035, .214]).
Discussion of Study 4 Findings
All study hypotheses were supported in Study 4. Moreover, the design of this study comprehensively addresses many of the issues raised previously (i.e., employees came from multiple units of the company, all used different types of AI systems, and were not selected based on whether they cohabited with a family member). Overall, this study, combined with the three prior studies, provides consistent, robust, and generalizable evidence for our hypotheses.
General Discussion
Anthropologists and evolutionary psychologists have long theorized that people evolved
to require social interactions to survive and thrive in their communities (Horsfall & Arensberg,
1949). These interactions, scholars argue, were critical for individuals to gain social information
about themselves and their place within the group (Levinson, 2006). For this reason, individuals
developed an internal social affiliation process that helps to regulate these interactions (Leary &
Baumeister, 2000). Until recently, the design of modern organizations mirrored those early
communities, as organizations were social systems wherein people came together to collectively
pursue goals (Barnard, 1938). However, as society enters the Fourth Industrial Revolution, digital
transformations in companies worldwide are altering the workplace and the nature of social
interactions among coworkers, as AI systems are increasingly coupled with employees in the
course of their work (Murray et al., 2021). From an organizational point of view, this shift is
critical to increase efficiency and competitiveness in the digital age (Gregory et al., 2021). Yet,
the human element has been somewhat overlooked during this period. That is, peoples’ social
affiliation processes may be sensitive to the increasing frequency of interaction with non-human
AI systems, which may have consequences for employees (O’Connor & Rosenblood, 1996).
Our purpose was to shed light on this phenomenon by testing a model built by integrating
research on social affiliation (Hall, 2017; Leary, 2010; O’Connor & Rosenblood, 1996) and
attachment theory (Simpson, 1990; Simpson et al., 1992; Simpson & Rholes, 2015). Based on
this theoretical perspective, we argued that employees may respond to increasing interaction with
AI at work in an (a) adaptive manner (enacting more affiliative behaviors [i.e., helping] due to
increased needs for affiliation), or (b) a maladaptive manner (enacting behaviors that are more
isolating in nature [alcohol consumption and insomnia after work] due to increased loneliness).
We extend this research further by integrating theory on attachment styles. In so doing, we show
employees with higher levels of anxious attachment are more sensitive to the experience of
interacting with AI, which heightens subsequent feelings of both need for affiliation as well as
loneliness. While we generally find support for our hypotheses across the four studies, it is important to keep these findings in mind when considering their implications for the future of human-AI integration in organizations worldwide (Murray et al., 2021; Tang et al., 2022a; Wilson & Daugherty, 2018), as well as for research on affiliation and attachment.
Overall, the results of the four studies that we have conducted provide robust evidence
for the validity of our hypothesized relationships. That is, we examined our model with
employees who (1) come from a variety of jobs and industries, (2) work with different types of
AI systems, and (3) come from both Eastern and Western cultures. Moreover, our research (4)
employs different methodologies and operationalizations of interactions with AI, and (5) obtains
multiple sources of data (i.e., from focal employees, coworkers, and family members). The
largely consistent findings from the four studies should provide strong confidence in the theory
we develop in this research. We now turn to discuss the implications of our findings.
Theoretical Implications and Avenues for Future Research
First, we broaden the scope of research on the incorporation of AI into organizations by
taking a balanced approach to understanding the variety of outcomes that accrue to employees
based on their interactions with AI at work. That is, while organizational research tends to focus
on either the work-related benefits (e.g., Gregory et al., 2021; Raisch & Krakowski, 2021) or the
aversive consequences (Dietvorst et al., 2018; Yam et al., 2023) of working with AI, the social
affiliation lens that we apply helps us to simultaneously understand both perspectives. In so
doing, we not only identify a positive outcome (increasing helping behavior) that aligns with
research on the benefits accompanying AI-augmented jobs (e.g., von Krogh, 2018), but we also
show several concomitant downsides in the form of impaired well-being.
The second implication of our work is that we stitch together research on social affiliation
that is presently scattered throughout the scholarly literature. Importantly, bringing this research
together revealed that scholars have theorized about, but never simultaneously examined, both
adaptive and maladaptive mechanisms that transmit the effects of social deprivation. Thus, we
contribute to research on social affiliation by juxtaposing these mechanisms and examining their
implications for subsequent coping responses among employees following their interactions with
AI systems. In so doing, we paint a fuller picture of the breadth of outcomes that may be driven
by the social regulatory system suggested by social affiliation scholars (O’Connor & Rosenblood,
1996). Extending this point, we make an additional contribution to research on social affiliation.
Typically, research regarding social affiliation looks at outcomes in a single (non-work) domain. We extend this research by showing not only that its principles can be applied to explain psycho-behavioral consequences of interacting with AI at work, but also that they predict outcomes across domains (i.e., helping at work, and alcohol consumption and insomnia at home).
Third, our integration of attachment theory with the social affiliation research extends
this theory even further. That is, we add specificity to what was previously only hinted at in
terms of the role of individual differences as influencing people’s sensitivity to the absence of
social connection in their daily interactions (Hall, 2017; O’Connor & Rosenblood, 1996). As our
results reveal, attachment-anxious employees may be particularly sensitive to interactions that
are more socially distant (e.g., those with AI). As a result, the affiliation systems of these individuals may react more vigorously. It is also noteworthy that our results consistently reveal
that attachment-avoidance is not associated with outcomes of these types of interactions. This is
an important point of departure for organizational scientists seeking to address calls for further
examinations of theory on attachment styles in applied scholarship (e.g., McClean et al., 2021).
Practical Implications
The Fourth Industrial Revolution has arrived (Davenport, 2018; Makridakis, 2017), and our research thus provides timely insights that can enhance decision-makers’ understanding of how AI impacts employees. Managers must recognize that, while performance is an important metric
(e.g., Raisch & Krakowski, 2021), interactions with AI may be linked (for better or for worse)
with a variety of performance and well-being related employee outcomes. As interactions with
AI are not going away (Wilson & Daugherty, 2018), managers should focus on a holistic set of
employee outcomes. One way to mitigate some of the consequences identified in our model is to try to combat the potential for employees to become lonely. For example, managers could consider the appropriate density of AI systems in the work environment (Graetz & Michaels, 2018; Zhao et
al., 2012), such that employees can maintain desirable levels of social interactions with others as
well. Relatedly, managers can arrange other opportunities for socializing (Chong et al., 2020).
We further offer a word of caution regarding our finding that interactions with AI led to
increased levels of helping. While this is a desirable outcome from a managerial standpoint
(Podsakoff et al., 2009), this behavior is still a reaction to what employees viewed as a socially
deficient work situation. As such, we do not suggest that the way to encourage greater levels of
helping among employees is to otherwise deprive them of social interactions. To this point, it is
important to note that the same experience that led to greater levels of helping also led to greater
levels of alcohol consumption and insomnia after work (which might jeopardize employees’
mental well-being and result in a negative spiral; Berryman et al., 2018; Pereira et al., 2013). In
this way, our point is analogous to research on citizenship pressure (Bolino et al., 2010). While
pressure to engage in citizenship is associated with greater levels of citizenship, it can also harm
well-being and increase deviant behavior (Koopman et al., 2020; Vigoda-Gadot, 2006).
Limitations and Future Research
Importantly, each study has notable weaknesses, though many are offset by the design of
another study (Hekman et al., 2017; Liang et al., 2018). For example, in Study 1 the interaction
effect of attachment anxiety predicting need for affiliation weakened when adding the interaction
of attachment avoidance (despite this added term itself not being significant). Also, the path
between loneliness and alcohol consumption was not significant in Study 2. Both studies also have limitations associated with the generalizability of their findings, as they share the common feature that the assessments of alcohol consumption and insomnia come from a family member living with the employee. Further, although Studies 1 and 2 were conducted in two different organizations, there are likely some inherent job differences associated with the experience of interacting with AI. These issues might call into question the generalizability of
their findings. In order to address these concerns, Study 3 recruited a sample of employees from
a broad spectrum of jobs and industries, whereas Study 4 recruited a sample of employees
coming from different business units who interact with different types of AI systems. Overall,
the four studies, when viewed holistically, should complement the limitations of the others.
Moreover, there are caveats associated with the interpretation of our findings pertaining
to the adaptive pathway of the model and the associated construct—need for affiliation. Research
on psychological needs differentiates between (1) need elicitation and (2) need satisfaction. For
the former, scholars tend to focus on situational or personal factors that instantiate a particular
need. For example, as per de Bloom et al.'s (2020) model, situational factors (e.g., work events or
culture) make needs salient, which then drive subsequent behavior to address those needs. In a
similar vein, other scholars have argued that some work experiences may deprive employees of
certain needs (e.g., Vogel et al., 2020). In contrast, the latter stream of research focuses on how
certain work experiences or interactions may satisfy or fulfill needs. For example, Foulk et al.
(2019) found various forms of motivational striving at work can satisfy basic employee needs.
Similarly, Lin et al. (2021) found that some out-of-work positive events may lead to need
satisfaction at home. Our conceptualization is more aligned with the former research stream, as
our theoretical arguments imply that interactions with AI instantiate a need to socially affiliate
with others at work. However, this leads to another issue we must discuss.
As we highlighted earlier, an anonymous reviewer questioned whether our findings were
indeed driven by the sense of “unmet expectations” about which we theorize, or instead whether
they were driven by a corresponding lack of interactions with human colleagues. While the
former argument derives from the theoretical lens of social affiliation that we apply to this
research question, our studies were not designed to directly test these
competing explanations (note that we did control for both the frequency and quality of
interactions with coworkers in an effort to speak to this point). Although this question does not
threaten the validity of our findings, it has implications for understanding why those findings
were observed. Thus, we think future research could benefit from zooming more closely into the
micro-mechanism linking AI interactions with our two mediators. For example, scholars could
compare our social affiliation framework against an alternative such as social baseline theory
(Coan & Sbarra, 2015). From this competing perspective, interactions with AI at work might be considered a kind of “relationship disruption” phenomenon (Coan & Sbarra, 2015, p. 87), such that they trigger a social-regulatory process of heightened uncertainty and risk—the result
of which would be efforts to maintain social relationships.
There is also a potential limitation associated with our need for affiliation measure. The
original scale is quite long (Hill, 1987), and so we adapted three items for our studies. Doing so
can create concerns over content validity, which we sought to address in two ways. First, we
assessed the convergence of our items (and the original items) with the definition of need for
affiliation. Second, we conducted an additional study that follows a procedure from Rosen et al.
(2019) in which we administered our scale and the original to a sample of working employees to
evaluate the correlation between the two measures. The results from both studies (reported at
OSF) provide reasonable basis from which to conclude that our shortened measure captures the
core essence of this construct and is thus a valid operationalization. However, we recommend that scholars evaluating this construct in the future further examine the items used to measure it, and perhaps undertake additional efforts to refine them.
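To make the logic of the supplemental scale-comparison described above concrete, the brief sketch below correlates scores on a short form with scores on the full scale. The data are simulated placeholders; the actual results are reported at OSF.

# Sketch of the short-form validation logic: correlate short-form scores with
# full-scale scores. The data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(7)
full_scale = rng.normal(4.5, 1.0, 200)                    # placeholder scores on the full Hill (1987) scale
short_form = 0.9 * full_scale + rng.normal(0, 0.4, 200)   # placeholder scores on the 3-item short form

r = np.corrcoef(short_form, full_scale)[0, 1]
print(round(r, 2))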
We also want to highlight the limitation of measuring our dependent variables as intentions in Study 3. Although (as we highlighted earlier) such measures are appropriate and often necessary for experimental studies, we think future research could go a step further with the study design and identify a way to capture a behavioral outcome that can proxy for those
of interest to the current research. For example, participants could be asked to provide written
responses to a scenario in which a colleague requested assistance, which could then be coded for
evidence of prosocial behavior (e.g., Twenge et al., 2007). To assess the likelihood of socially-
isolating behavior, participants could be asked to write a narrative about what they plan to do
over the upcoming weekend—responses to which could again be coded accordingly.
In addition, there are some potential limitations associated with the specific design of each of the studies that warrant further discussion. For example, in Study 1 (i.e., the multi-
wave and multi-source field study), we adopted a one-week interval between each of the surveys.
This design choice follows prior research that also used this time frame as a way to capture the
proximal responses that follow certain work experiences (e.g., Johnson et al., 2015; Lian et al.,
2014; Ong, 2022; Rosen et al., 2014; Yoon et al., 2023). Yet this time interval is not without limitation, as it could be too long a window and thus allow for other unmeasured influences on our downstream variables. Here, Studies 2 and 4 (which use an interval of 3 days) and Study 3 (for which all variables were measured in the experimental setting) should alleviate this concern. With this said, there are other processes that may occur at work that take longer than
a week to develop (for example, turnover processes are theorized to unfold over much longer
time periods; Lee & Mitchell, 1994). Thus, we recommend future research examine models such
as ours over a variety of time intervals—both shorter and longer—to capture the effects of
interacting with AI.
Following from the above, another potential limitation comes from the design of the
simulation-based experiment (Study 3). While participants did tend to see the experience as at least somewhat realistic and as one they may encounter in their own work at some point, there is a need to enhance the quality of this and other experimental tasks (Efendić et al., 2020) that involve interactions with AI. While it may not be surprising that participants who do not currently work with AI cannot imagine how they may in the future, it may behoove researchers to educate participants about the ways in which work is changing so as to improve the realism of the task. Along similar lines, it would be worthwhile to enhance the interaction experience as well. For example, scholars could consider incorporating aspects of OpenAI systems (e.g., ChatGPT; Pavlik, 2023) when developing a collaboration task with AI. In short, we recommend that future research augment the simulation paradigm to increase the realism of the study.
Conclusion
Across millennia, people evolved internal systems to gauge the quality of relationships
with others. These systems have remained effective in a workplace that, just as in primitive tribal communities, has prioritized social interactions with coworkers. Yet the advent of digital, asocial AI systems, and their incorporation into employees’ work, threatens to upend the operation of these systems. We show how employee interactions with their new AI colleagues may lead to an
increased need for affiliation as well as loneliness. The mixed consequences of these states paint
an important, but sobering, picture of the future of AI augmentation efforts. While this future
continues apace, managers must pay heed to outcomes experienced by their human employees.
References
Ackerman, P. L., & Kanfer, R. (2020). Work in the 21st century: New directions for aging and
adult development. American Psychologist, 75(4), 486–498.
https://doi.org/10.1037/amp0000615
Ainsworth, M. D. S. (1985). Attachments across the life span. Bulletin of the New York Academy
of Medicine, 61(9), 792–812.
Ainsworth, M. D. S., Blehar, M. C., Waters, E., & Wall, S. (1978). Patterns of Attachment: A
Psychological Study of the Strange Situation (1st edition). Psychology Press.
Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review
of empirical research. Psychological Bulletin, 84(5), 888–918.
https://doi.org/10.1037/0033-2909.84.5.888
Åkerlind, I., & Hörnquist, J. O. (1992). Loneliness and alcohol abuse: A review of evidences of
an interplay. Social Science & Medicine, 34(4), 405–414. https://doi.org/10.1016/0277-
9536(92)90300-F
Anderson, J., Rainie, L., & Luchsinger, A. (2018). Artificial intelligence and the future of
humans. Pew Research Center, 10, 12.
Bacharach, S. B., Bamberger, P., & Biron, M. (2010). Alcohol consumption and workplace
absenteeism: The moderating effect of social support. The Journal of Applied
Psychology, 95(2), 334–348. https://doi.org/10.1037/a0018018
Baldominos, A., Blanco, I., Moreno, A. J., Iturrarte, R., Bernárdez, Ó., & Afonso, C. (2018).
Identifying real estate opportunities using machine learning. Applied Sciences, 8(11),
2321. https://doi.org/10.3390/app8112321
Banerjee, S., Singh, P. K., & Bajpai, J. (2018). A comparative study on decision-making
capability between human and artificial intelligence. In B. K. Panigrahi, M. N. Hoda, V.
Sharma, & S. Goel (Eds.), Nature inspired computing (pp. 203–210). Springer.
https://doi.org/10.1007/978-981-10-6747-1_23
Barnard, C. I. (1938). The functions of the executive. Massachusetts: Harvard University Press.
Barnes, C. M., Lucianetti, L., Bhave, D. P., & Christian, M. S. (2015). “You wouldn’t like me
when I’m sleepy”: Leaders’ sleep, daily abusive supervision, and work unit engagement.
Academy of Management Journal, 58(5), 1419–1437.
https://doi.org/10.5465/amj.2013.1063
Barnes, C. M., Miller, J. A., & Bostock, S. (2017). Helping employees sleep well: Effects of
cognitive behavioral therapy for insomnia on work outcomes. Journal of Applied
Psychology, 102(1), 104–113. https://doi.org/10.1037/apl0000154
Baruti, R. (2017). Learning Alteryx: A beginner’s guide to using Alteryx for self-service analytics
and business intelligence. Packt Publishing Ltd.
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal
attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–
529. https://doi.org/10.1037/0033-2909.117.3.497
Becker, T. E. (2005). Potential problems in the statistical control of variables in organizational
research: A qualitative analysis with recommendations. Organizational Research
Methods, 8(3), 274–289. https://doi.org/10.1177/1094428105278021
Bell, R. G. (1956). Alcohol and loneliness. Journal of Social Therapy, 2, 171–181.
Berryman, C., Ferguson, C. J., & Negy, C. (2018). Social media use and mental health among
young adults. Psychiatric Quarterly, 89(2), 307–314. https://doi.org/10.1007/s11126-
017-9535-6
Bolino, M. C., Turnley, W. H., Gilstrap, J. B., & Suazo, M. M. (2010). Citizenship under
pressure: What’s a “good soldier” to do? Journal of Organizational Behavior, 31(6),
835–855. https://doi.org/10.1002/job.635
Borges, A. F. S., Laurindo, F. J. B., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2021).
The strategic use of artificial intelligence in the digital era: Systematic literature review
and future research directions. International Journal of Information Management, 57,
102225. https://doi.org/10.1016/j.ijinfomgt.2020.102225
Bowlby, J. (1969). Attachment and loss: Vol. 1. Attachment. The Hogarth Press and the Institute of Psycho-Analysis.
Bowlby, J. (1973). Attachment and loss: Vol. 2. Separation: Anxiety and anger. The Hogarth Press and the Institute of Psycho-Analysis.
Brennan, K. A., Clark, C. L., & Shaver, P. R. (1998). Self-report measurement of adult
attachment: An integrative overview. In Attachment theory and close relationships (pp.
46–76). The Guilford Press.
Brewer, M. B., & Hewstone, M. (2004). Self and social identity (pp. xii, 340). Blackwell
Publishing.
Brislin, R. W. (1980). Cross-cultural research methods. In I. Altman, A. Rapoport, & J. F.
Wohlwill (Eds.), Environment and culture (pp. 47–82). Springer US.
https://doi.org/10.1007/978-1-4899-0451-5_3
Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual
Review of Psychology, 68(1), 627–652. https://doi.org/10.1146/annurev-psych-010416-
043958
Brougham, D., & Haar, J. (2018). Smart Technology, Artificial Intelligence, Robotics, and
Algorithms (STARA): Employees’ perceptions of our future workplace. Journal of
Management & Organization, 24(2), 239–257. https://doi.org/10.1017/jmo.2016.55
Brutus, S., Gill, H., & Duniewicz, K. (2010). State of science in industrial and organizational
psychology: A review of self-reported limitations. Personnel Psychology, 63(4), 907–
936. https://doi.org/10.1111/j.1744-6570.2010.01192.x
Bryan, J. L., Baker, Z. G., & Tou, R. Y. (2017). Prevent the blue, be true to you: Authenticity
buffers the negative impact of loneliness on alcohol-related problems, physical
symptoms, and depressive and anxiety symptoms. Journal of Health Psychology, 22(5),
605–616. https://doi.org/10.1177/1359105315609090
Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, progress, and
prosperity in a time of brilliant technologies. W. W. Norton & Company.
Brynjolfsson, E., & McAfee, A. (2017). Artificial intelligence, for real. Harvard Business
Review.
Cacioppo, J. T., & Patrick, W. (2008). Loneliness: Human nature and the need for social
connection. W. W. Norton & Company.
Campa, R. (2016). The rise of social robots: A review of the recent literature. Journal of Ethics
and Emerging Technologies, 26(1), 106–113.
Campbell, L., Simpson, J. A., Kashy, D. A., & Rholes, W. S. (2001). Attachment orientations,
dependence, and behavior in a stressful situation: An application of the actor-partner
interdependence model. Journal of Social and Personal Relationships, 18(6), 821–843.
https://doi.org/10.1177/0265407501186005
Carrigan, M., & Porpora, D. V. (2021). Post-human futures: Human enhancement, artificial
intelligence and social theory. Routledge.
Cerulo, K. A. (2009). Nonhumans in social interaction. Annual Review of Sociology, 35(1), 531–
552. https://doi.org/10.1146/annurev-soc-070308-120008
Chalmers, D., MacKenzie, N. G., & Carter, S. (2021). Artificial intelligence and
entrepreneurship: Implications for venture creation in the fourth industrial revolution.
Entrepreneurship Theory and Practice, 45(5), 1028–1053.
https://doi.org/10.1177/1042258720934581
Chang, C.-H., Rosen, C. C., & Levy, P. E. (2009). The relationship between perceptions of
organizational politics and employee attitudes, strain, and behavior: A meta-analytic
examination. Academy of Management Journal, 52(4), 779–801.
https://doi.org/10.5465/amj.2009.43670894
Chatman, J. A., & Flynn, F. J. (2005). Full-cycle micro-organizational behavior research.
Organization Science, 16(4), 434–447. https://doi.org/10.1287/orsc.1050.0136
Chitty, N. (2018). Artificial intelligence, communication and human contentment. Journal of
Content, Community and Communication, 7(Year 4), v.
Chong, S., Kim, Y. J., Lee, H. W., Johnson, R. E., & Lin, S.-H. (Joanna). (2020). Mind your own
break! The interactive effect of workday respite activities and mindfulness on employee
outcomes via affective linkages. Organizational Behavior and Human Decision
Processes, 159, 64–77. https://doi.org/10.1016/j.obhdp.2019.11.001
Coan, J. A., & Sbarra, D. A. (2015). Social baseline theory: The social regulation of risk and
effort. Current Opinion in Psychology, 1, 87–91.
Crisp, R. J., & Turner, R. N. (2014). Essential Social Psychology. SAGE.
Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to
work. MIT Press.
Davenport, T. H., & Kirby, J. (2016). Just how smart are smart machines? MIT Sloan
Management Review, 57(3), 21–25.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard
Business Review, 96(1), 108–116.
de Bloom, J., Vaziri, H., Tay, L., & Kujanpää, M. (2020). An identity-based integrative needs
model of crafting: Crafting within and across life domains. Journal of Applied
Psychology, 105(12), 1423–1446. https://doi.org/10.1037/apl0000495
De Cremer, D. (2020). Leadership by algorithm: Who leads and who follows in the AI era?
Harriman House Limited.
Mittal, N., Kuder, D., & Hans, S. (2019, January 16). AI-fueled organizations. Reaching AI's full
potential in the enterprise. Deloitte. https://www2.deloitte.com/us/en/insights/focus/tech-
trends/2019/driving-ai-potential-organizations.html
Den Hartog, D. N., De Hoogh, A. H. B., & Keegan, A. E. (2007). The interactive effects of
belongingness and charisma on helping and compliance. Journal of Applied Psychology,
92(4), 1131–1139. https://doi.org/10.1037/0021-9010.92.4.1131
D’Haussy, C. (2018). Welcoming an Artificial Intelligence robot as a colleague. In The
WealthTech Book (pp. 23–27). John Wiley & Sons, Ltd.
https://doi.org/10.1002/9781119444510.ch5
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People
will use imperfect algorithms if they can (even slightly) modify them. Management
Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
Dodds, T. J., Mohler, B. J., & Bülthoff, H. H. (2011). Talk to the virtual hands: Self-animated
avatars improve communication in head-mounted display virtual environments. PLOS
ONE, 6(10), e25759. https://doi.org/10.1371/journal.pone.0025759
Donald, M. (2019). Leading and managing change in the age of disruption and artificial
intelligence. Emerald Group Publishing.
Dvir, T., Eden, D., Avolio, B. J., & Shamir, B. (2002). Impact of transformational leadership on
follower development and performance: A field experiment. Academy of Management
Journal, 45(4), 735–744. https://doi.org/10.5465/3069307
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining
the Unified Theory of Acceptance and Use of Technology (UTAUT): Towards a revised
theoretical model. Information Systems Frontiers, 21(3), 719–734.
https://doi.org/10.1007/s10796-017-9774-y
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Harcourt Brace Jovanovich
College Publishers.
Ederer, F., & Manso, G. (2013). Is pay for performance detrimental to innovation? Management
Science, 59(7), 1496–1513.
Edwards, J. R. (1992). A cybernetic theory of stress, coping, and well-being in organizations.
Academy of Management Review, 17(2), 238–274.
https://doi.org/10.5465/amr.1992.4279536
Efendić, E., Van de Calseyde, P. P. F. M., & Evans, A. M. (2020). Slow response times
undermine trust in algorithmic (but not human) predictions. Organizational Behavior and
Human Decision Processes, 157, 103–114. https://doi.org/10.1016/j.obhdp.2020.01.008
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of
anthropomorphism. Psychological Review, 114(4), 864–886.
https://doi.org/10.1037/0033-295X.114.4.864
Foulk, T. A., Lanaj, K., & Krishnan, S. (2019). The virtuous cycle of daily motivation: Effects of
daily strivings on work behaviors, need satisfaction, and next-day strivings. Journal of
Applied Psychology, 104(6), 755–775. https://doi.org/10.1037/apl0000385
Frieder, R. E., Wang, G., & Oh, I.-S. (2018). Linking job-relevant personality traits,
transformational leadership, and job performance via perceived meaningfulness at work:
A moderated mediation model. Journal of Applied Psychology, 103(3), 324–333.
https://doi.org/10.1037/apl0000274
Gabriel, A. S., Lanaj, K., & Jennings, R. E. (2020). Is one the loneliest number? A within-person
examination of the adaptive and maladaptive consequences of leader loneliness at work.
Journal of Applied Psychology. https://doi.org/10.1037/apl0000838
Gerrish, S. (2018). How smart machines think. MIT Press.
Gest, S. D., Welsh, J. A., & Domitrovich, C. E. (2005). Behavioral predictors of changes in
social relatedness and liking school in elementary school. Journal of School Psychology,
43(4), 281–301. https://doi.org/10.1016/j.jsp.2005.06.002
Glaser, V. (2014). Enchanted algorithms: How organizations use algorithms to automate
decision-making routines. Academy of Management Proceedings, 2014(1), 12938.
https://doi.org/10.5465/ambpp.2014.12938abstract
Graetz, G., & Michaels, G. (2018). Robots at work. The Review of Economics and Statistics,
100(5), 753–768. https://doi.org/10.1162/rest_a_00754
Graham, G. H., Unruh, J., & Jennings, P. (1991). The impact of nonverbal communication in
organizations: A survey of perceptions. The Journal of Business Communication (1973),
28(1), 45–62. https://doi.org/10.1177/002194369102800104
Graham, K., & Schmidt, G. (1999). Alcohol use and psychosocial well-being among older
adults. Journal of Studies on Alcohol, 60(3), 345–351.
https://doi.org/10.15288/jsa.1999.60.345
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the
uncanny valley. Cognition, 125(1), 125–130.
https://doi.org/10.1016/j.cognition.2012.06.007
Greenberg, J. (2006). Losing sleep over organizational injustice: Attenuating insomniac reactions
to underpayment inequity with supervisory training in interactional justice. Journal of
Applied Psychology, 91(1), 58–69. https://doi.org/10.1037/0021-9010.91.1.58
Gregory, R. W., Henfridsson, O., Kaganer, E., & Kyriakou, S. H. (2021). The role of artificial
intelligence and data network effects for creating user value. Academy of Management
Review, 46(3), 534–551. https://doi.org/10.5465/amr.2019.0178
Grover, P., Kar, A. K., & Dwivedi, Y. K. (2022). Understanding artificial intelligence adoption
in operations management: Insights from the review of academic literature and social
media discussions. Annals of Operations Research, 308(1), 177–213.
https://doi.org/10.1007/s10479-020-03683-9
Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–
machine communication research agenda. New Media & Society, 22(1), 70–86.
https://doi.org/10.1177/1461444819858691
Halbesleben, J. R. B., & Wheeler, A. R. (2011). I owe you one: Coworker reciprocity as a
moderator of the day-level exhaustion–performance relationship. Journal of
Organizational Behavior, 32(4), 608–626. https://doi.org/10.1002/job.748
Hall, J. A. (2017). The regulation of social interaction in everyday life: A replication and
extension of O’Connor and Rosenblood (1996). Journal of Social and Personal
Relationships, 34(5), 699–716. https://doi.org/10.1177/0265407516654580
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman,
R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human
Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
Hawkley, L. C., Hughes, M. E., Waite, L. J., Masi, C. M., Thisted, R. A., & Cacioppo, J. T.
(2008). From social structural factors to perceptions of relationship quality and
loneliness: The Chicago health, aging, and social relations study. The Journals of
Gerontology: Series B, 63(6), S375–S384. https://doi.org/10.1093/geronb/63.6.S375
Hekman, D. R., Johnson, S. K., Foo, M.-D., & Yang, W. (2017). Does diversity-valuing
behavior result in diminished performance ratings for non-white and female leaders?
Academy of Management Journal, 60(2), 771–797.
https://doi.org/10.5465/amj.2014.0538
Hill, C. A. (1987). Affiliation motivation: People who need people… but in different ways.
Journal of Personality and Social Psychology, 52(5), 1008–1018.
https://doi.org/10.1037/0022-3514.52.5.1008
Hilpisch, Y. (2020). Artificial intelligence in finance. O’Reilly Media.
Hofmann, D. A. (1997). An overview of the logic and rationale of hierarchical linear models.
Journal of Management, 23(6), 723–744. https://doi.org/10.1016/S0149-2063(97)90026-
X
Horsfall, A. B., & Arensberg, C. M. (1949). Teamwork and productivity in a shoe factory.
Human Organization, 8(1), 13–25.
https://doi.org/10.17730/humo.8.1.y7223260872v5251
Howe, M., Chang, C.-H. (D.), & Johnson, R. E. (2013). Understanding affect, stress, and well-
being within a self-regulation framework. In P. L. Perrewé, C. C. Rosen, & J. R. B.
Halbesleben (Eds.), The role of emotion and emotion regulation in job stress and well
being (pp. 1–34). Emerald Group Publishing. https://doi.org/10.1108/S1479-
3555(2013)0000011005
Huang, M.-H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in
marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.
https://doi.org/10.1007/s11747-020-00749-9
Jago, A. S. (2019). Algorithms and authenticity. Academy of Management Discoveries, 5(1), 38–
56. https://doi.org/10.5465/amd.2017.0002
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making. Business Horizons, 61(4), 577–586.
https://doi.org/10.1016/j.bushor.2018.03.007
Johnson, R. E., Chang, C.-H., & Lord, R. G. (2006). Moving from cognition to behavior: What
the research says. Psychological Bulletin, 132(3), 381–415. https://doi.org/10.1037/0033-
2909.132.3.381
Johnson, R. E., Rosen, C. C., Chang, C.-H. (Daisy), & Lin, S.-H. (Joanna). (2015). Getting to the
core of locus of control: Is it an evaluation of the self or the environment? Journal of
Applied Psychology, 100(5), 1568–1578. https://doi.org/10.1037/apl0000011
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects.
Science, 349(6245), 255–260. https://doi.org/10.1126/science.aaa8415
Kacmar, K. M., Witt, L. A., Zivnuska, S., & Gully, S. M. (2003). The interactive effect of leader-
member exchange and communication frequency on performance ratings. Journal of
Applied Psychology, 88(4), 764–772. https://doi.org/10.1037/0021-9010.88.4.764
Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities
of artificial intelligence. Business Horizons, 63(1), 37–50.
https://doi.org/10.1016/j.bushor.2019.09.003
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested
terrain of control. Academy of Management Annals, 14(1), 366–410.
https://doi.org/10.5465/annals.2018.0174
Kelly III, J. E., & Hamm, S. (2013). Smart machines: IBM’s Watson and the era of cognitive
computing. Columbia University Press.
Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2016). How artificial intelligence will redefine
management. Harvard Business Review, 2(1), 3–10.
Koopman, J., Lanaj, K., & Scott, B. A. (2016). Integrating the bright and dark sides of OCB: A daily investigation of the benefits and costs of helping others. Academy of Management Journal, 59, 414–435.
Koopman, J., Rosen, C. C., Gabriel, A. S., Puranik, H., Johnson, R. E., & Ferris, D. L. (2020).
Why and for whom does the pressure to help hurt others? Affective and cognitive
mechanisms linking helping pressure to workplace deviance. Personnel Psychology,
73(2), 333–362. https://doi.org/10.1111/peps.12354
Koopman, J., Lanaj, K., Lee, Y. E., Alterman, V., Bradley, C., & Stoverink, A. S. (2023). Walking on eggshells: A self-control perspective on workplace political correctness. Journal of Applied Psychology, 108, 425–445.
Kozma, R., Alippi, C., Choe, Y., & Morabito, F. C. (2018). Artificial intelligence in the age of
neural networks and brain computing. Academic Press.
Kujala, T., & Saariluoma, P. (2018). Cognitive mimetics for designing intelligent technologies.
Advances in Human-Computer Interaction, 2018.
La Guardia, J. G., Ryan, R. M., Couchman, C. E., & Deci, E. L. (2000). Within-person variation
in security of attachment: A self-determination theory perspective on attachment, need
fulfillment, and well-being. Journal of Personality and Social Psychology, 79(3), 367.
Lawless, W. F., Mittu, R., Sofge, D., & Russell, S. (2017). Autonomy and Artificial Intelligence:
A Threat or Savior? Springer.
Lazarus, R. S. (1999). Stress and emotion: A new synthesis (pp. xiv, 342). Springer Publishing
Co.
Lazega, E. (2021). Perplexity logs: On routinized certainty work and social consequences of
seeking advice from an artificial intelligence. In Post-Human Futures. Routledge.
Leary, M. R. (2000). Affect, cognition, and the social emotions. In Feeling and thinking: The
role of affect in social cognition (pp. 331–356). Cambridge University Press.
Leary, M. R. (2010). Affiliation, acceptance, and belonging: The pursuit of interpersonal
connection. In Handbook of social psychology, Vol. 2, 5th ed (pp. 864–897). John Wiley
& Sons, Inc. https://doi.org/10.1002/9780470561119.socpsy002024
Leary, M. R., & Baumeister, R. F. (2000). The nature and function of self-esteem: Sociometer
theory. In Advances in Experimental Social Psychology (Vol. 32, pp. 1–62). Academic
Press. https://doi.org/10.1016/S0065-2601(00)80003-9
Lee, M. K., Forlizzi, J., Kiesler, S., Rybski, P., Antanitis, J., & Savetsila, S. (2012).
Personalization in HRI: A longitudinal field experiment. 2012 7th ACM/IEEE
International Conference on Human-Robot Interaction (HRI), 319–326.
Lee, S. G., & Sathikh, P. (2013). A framework for effective human-to-machine communication
for artificial interactive systems. DS 75-7: Proceedings of the 19th International
Conference on Engineering Design (ICED13), Design for Harmonies, Vol. 7: Human
Behaviour in Design, Seoul, Korea, 19-22.08. 2013, 367–376.
Levinson, D. F. (2006). The genetics of depression: A review. Biological Psychiatry, 60(2), 84–
92. https://doi.org/10.1016/j.biopsych.2005.08.024
Li, D., & Du, Y. (2016). Artificial intelligence with uncertainty (2nd ed.). CRC Press.
https://doi.org/10.1201/9781315366951
Lian, H., Brown, D. J., Ferris, D. L., Liang, L. H., Keeping, L. M., & Morrison, R. (2014).
Abusive supervision and retaliation: A self-control framework. Academy of Management
Journal, 57(1), 116–139. https://doi.org/10.5465/amj.2011.0977
Liang, L. H., Brown, D. J., Ferris, D. L., Hanig, S., Lian, H., & Keeping, L. M. (2018). The
dimensions and mechanisms of mindfulness in regulating aggressive behaviors. Journal
of Applied Psychology, 103(3), 281–299. https://doi.org/10.1037/apl0000283
Lin, S.-H. (Joanna), Chang, C.-H. (Daisy), Lee, H. W., & Johnson, R. E. (2021). Positive family
events facilitate effective leader behaviors at work: A within-individual investigation of
family-work enrichment. Journal of Applied Psychology, 106(9), 1412–1434.
https://doi.org/10.1037/apl0000827
Liu, J., Hui, C., Lee, C., & Chen, Z. X. (2013). Why do I feel valued and why do I contribute? A
relational approach to employee’s organization-based self-esteem and job performance.
Journal of Management Studies, 50(6), 1018–1040. https://doi.org/10.1111/joms.12037
Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on
society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006
Mallett, R. K., Wilson, T. D., & Gilbert, D. T. (2008). Expect the unexpected: Failure to
anticipate similarities leads to an intergroup forecasting error. Journal of Personality and
Social Psychology, 94(2), 265. https://doi.org/10.1037/0022-3514.94.2.265
Malone, G. P., Pillow, D. R., & Osman, A. (2012). The General Belongingness Scale (GBS):
Assessing achieved belongingness. Personality and Individual Differences, 52(3), 311–
316. https://doi.org/10.1016/j.paid.2011.10.027
Mather, B. (2019). Artificial intelligence in real estate investing: How artificial intelligence and
Machine Learning technology will cause a transformation in real estate business,
marketing and finance for everyone. Abiprod Pty Ltd.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal
cooperation in organizations. Academy of Management Journal, 38(1), 24–59.
https://doi.org/10.5465/256727
McClean, S. T., Yim, J., Courtright, S. H., & Dunford, B. B. (2021). Transformed by the family:
An episodic, attachment theory perspective on family–work enrichment and
transformational leadership. Journal of Applied Psychology. Advance online publication.
https://doi.org/10.1037/apl0000869
Meister, J. (2019). Ten HR trends in the age of artificial intelligence. Forbes.
Mikulincer, M. (1990). Joint influence of prior beliefs and current situational information on
stable and unstable attributions. The Journal of Social Psychology, 130(6), 739–753.
https://doi.org/10.1080/00224545.1990.9924626
Mikulincer, M., & Shaver, P. R. (2003). The attachment behavioral system in adulthood:
activation, psychodynamics, and interpersonal processes. In Advances in experimental
social psychology, Vol. 35 (pp. 53–152). Elsevier Academic Press.
https://doi.org/10.1016/S0065-2601(03)01002-5
Mikulincer, M., & Shaver, P. R. (2007). Attachment in adulthood: Structure, dynamics, and
change. Guilford Press.
Miller, A. (2019). The intrinsically linked future for human and Artificial Intelligence
interaction. Journal of Big Data, 6(1), 38. https://doi.org/10.1186/s40537-019-0202-7
Moss, J. (2018). Helping remote workers avoid loneliness and burnout. Harvard Business
Review on Health.
Mühlhoff, R. (2020). Human-aided artificial intelligence: Or, how to run large computations in
human brains? Toward a media sociology of machine learning. New Media & Society,
22(10), 1868–1884. https://doi.org/10.1177/1461444819885334
Murray, A., Rhymer, J., & Sirmon, D. G. (2021). Humans and Technology: Forms of Conjoined
Agency in Organizations. Academy of Management Review, 46(3), 552–571.
https://doi.org/10.5465/amr.2019.0186
Muthén, L. K., & Muthén, B. O. (2015). Mplus Version 7.4 Software. Los Angeles, CA: Muthén
& Muthén.
Muthén, B. O., & Satorra, A. (1995). Complex Sample Data in Structural Equation Modeling.
Sociological Methodology, 25, 267–316. https://doi.org/10.2307/271070
Newell, A., & Card, S. K. (1985). The Prospects for Psychological Science in Human-Computer
Interaction. Human–Computer Interaction, 1(3), 209–242.
https://doi.org/10.1207/s15327051hci0103_1
Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair:
Algorithmic reductionism and procedural justice in human resource decisions.
Organizational Behavior and Human Decision Processes, 160, 149–167.
https://doi.org/10.1016/j.obhdp.2020.03.008
Nomura, T., Kanda, T., & Suzuki, T. (2006). Experimental investigation into influence of
negative attitudes toward robots on human–robot interaction. AI & SOCIETY, 20(2), 138–
150. https://doi.org/10.1007/s00146-005-0012-7
Nyholm, S., & Smids, J. (2020). Can a robot be a good colleague? Science and Engineering
Ethics, 26(4), 2169–2188. https://doi.org/10.1007/s11948-019-00172-6
O’Connor, S. C., & Rosenblood, L. K. (1996). Affiliation motivation in everyday experience: A
theoretical comparison. Journal of Personality and Social Psychology, 70(3), 513.
Ong, W. J. (2022). Gender-contingent effects of leadership on loneliness. Journal of Applied
Psychology, 107(7), 1180–1202. https://doi.org/10.1037/apl0000907
Oracle (2019, November 21). From fear to enthusiasm: Artificial intelligence is winning more
hearts and minds in the workplace.
https://www.oracle.com/a/ocom/docs/applications/hcm/ai-at-work-ebook.pdf
Organ, D. W. (1988). Organizational citizenship behavior: The good soldier syndrome (pp. xiii,
132). Lexington Books/D. C. Heath and Com.
Overall, N. C., Simpson, J. A., & Struthers, H. (2013). Buffering attachment-related avoidance:
Softening emotional and behavioral defenses during conflict discussions. Journal of
Personality and Social Psychology, 104(5), 854–871. https://doi.org/10.1037/a0031798
Ozcelik, H., & Barsade, S. G. (2018). No Employee an Island: Workplace Loneliness and Job
Performance. Academy of Management Journal, 61(6), 2343–2366.
https://doi.org/10.5465/amj.2015.1066
Paetzold, R. L. (2015). Attachment theory in organizational settings. In Attachment theory and
research: New directions and emerging themes (pp. 261–286). The Guilford Press.
Park, Y., Fritz, C., & Jex, S. M. (2018). Daily cyber incivility and distress: The moderating roles
of resources at work and home. Journal of Management, 44(7), 2535–2557.
https://doi.org/10.1177/0149206315576796
Pavey, L., Greitemeyer, T., & Sparks, P. (2011). Highlighting Relatedness Promotes Prosocial
Motives and Behavior. Personality and Social Psychology Bulletin, 37(7), 905–917.
https://doi.org/10.1177/0146167211405994
Pavlik, J. V. (2023). Collaborating With ChatGPT: Considering the Implications of Generative
Artificial Intelligence for Journalism and Media Education. Journalism & Mass
Communication Educator, 10776958221149577.
Peplau, L. A., & Perlman, D. (1979). Blueprint for a social psychological theory of loneliness.
Love and Attraction: An Interpersonal Conference, 101–110.
Pereira, G., Wood, L., Foster, S., & Haggar, F. (2013). Access to alcohol outlets, alcohol
consumption and mental health. PLOS ONE, 8(1), e53461.
https://doi.org/10.1371/journal.pone.0053461
Perlman, D., & Peplau, L. A. (1981). Toward a social psychology of loneliness. Personal
Relationships, 3, 31–56.
Phutela, D. (2015). The importance of non-verbal communication. IUP Journal of Soft Skills,
9(4), 43–49.
Piazolo, D., & Dogan, U. (2021). Macroeconomic structure of the German real estate market
(SSRN Scholarly Paper No. 3802553). Social Science Research Network.
https://papers.ssrn.com/abstract=3802553
Podsakoff, N. P., Whiting, S. W., Podsakoff, P. M., & Blume, B. D. (2009). Individual- and
organizational-level consequences of organizational citizenship behaviors: A meta-
analysis. Journal of Applied Psychology, 94(1), 122. https://doi.org/10.1037/a0013079
Powell, L. J. (2022). Adopted utility calculus: Origins of a concept of social affiliation.
Perspectives on Psychological Science, 17456916211048488.
https://doi.org/10.1177/17456916211048487
Preacher, K. J., Zyphur, M. J., & Zhang, Z. (2010). A general multilevel SEM framework for
assessing multilevel mediation. Psychological Methods, 15(3), 209–233.
https://doi.org/10.1037/a0020141
Puranik, H., Koopman, J., & Vough, H. C. (2021). Excuse me, do you have a minute? An
exploration of the dark- and bright-side effects of daily work interruptions for employee
well-being. Journal of Applied Psychology. Advance online publication.
https://doi.org/10.1037/apl0000875
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–
augmentation paradox. Academy of Management Review, 46(1), 192–210.
https://doi.org/10.5465/amr.2018.0072
Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial
intelligence: Closing the gap between ambition and action. MIT Sloan Management
Review, 59(1).
Reeves, M. (2015). Algorithms can make your organization self-tuning. Harvard Business
Review.
Reissmann, A., Stollberg, E., Hauser, J., Kaunzinger, I., & Lange, K. W. (2021). The role of state
feelings of loneliness in the situational regulation of social affiliative behavior: Exploring
the regulatory relations within a multilevel framework. PLOS ONE, 16(6), e0252775.
https://doi.org/10.1371/journal.pone.0252775
Revenson, T. A., & Lepore, S. J. (2012). Coping in social context. In Handbook of health
psychology, 2nd ed (pp. 193–217). Psychology Press.
https://doi.org/10.4324/9781410600073
Richards, D. A., & Schat, A. C. H. (2011). Attachment at (not to) work: Applying attachment
theory to explain individual behavior in organizations. Journal of Applied Psychology,
96(1), 169–182. https://doi.org/10.1037/a0020372
Rokach, A., & Brock, H. (1996). The causes of loneliness. Psychology: A Journal of Human
Behavior, 33(3), 1–11.
Rosen, C. C., Ferris, D. L., Brown, D. J., Chen, Y., & Yan, M. (2014). Perceptions of
Organizational Politics: A Need Satisfaction Paradigm. Organization Science, 25(4),
1026–1055. https://doi.org/10.1287/orsc.2013.0857
Rosen, C. C., Simon, L. S., Gajendran, R. S., Johnson, R. E., Lee, H. W., & Lin, S.-H. J. (2019).
Boxed in by your inbox: Implications of daily e-mail demands for managers’ leadership
behaviors. Journal of Applied Psychology, 104(1), 19.
https://doi.org/10.1037/apl0000343
Russell, D., Peplau, L. A., & Cutrona, C. E. (1980). The revised UCLA Loneliness Scale:
Concurrent and discriminant validity evidence. Journal of Personality and Social
Psychology, 39(3), 472–480. https://doi.org/10.1037/0022-3514.39.3.472
Schepman, A., & Rodway, P. (2020). Initial validation of the general attitudes towards Artificial
Intelligence Scale. Computers in Human Behavior Reports, 1, 100014.
https://doi.org/10.1016/j.chbr.2020.100014
Segal, B. M. (1987). Drinking motivation and the causes of alcoholism: An overview of the
problem and a multidisciplinary model. Alcohol and Alcoholism, 22(3), 301–311.
Selig, J. P., & Preacher, K. J. (2008). Monte Carlo method for assessing mediation: An
interactive tool for creating confidence intervals for indirect effects [Computer software].
Sermat, V. (1978). Sources of loneliness. Essence: Issues in the study of ageing, dying, and
death, 2(4), 271–276.
Shadbolt, N., & Hampson, R. (2019). The digital ape: How to live (in peace) with smart
machines. Oxford University Press.
Shi, J., Johnson, R. E., Liu, Y., & Wang, M. (2013). Linking subordinate political skill to
supervisor dependence and reward recommendations: A moderated mediation model.
Journal of Applied Psychology, 98(2), 374–384. https://doi.org/10.1037/a0031129
Simpson, J. A. (1990). Influence of attachment styles on romantic relationships. Journal of
Personality and Social Psychology, 59(5), 971–980. https://doi.org/10.1037/0022-
3514.59.5.971
Simpson, J. A., & Beckes, L. (2010). Evolutionary perspectives on prosocial behavior. In
Prosocial motives, emotions, and behavior: The better angels of our nature (pp. 35–53).
American Psychological Association. https://doi.org/10.1037/12061-002
Simpson, J. A., & Rholes, W. S. (1994). Stress and secure base relationships in adulthood. In
Attachment processes in adulthood (pp. 181–204). Jessica Kingsley Publishers.
Simpson, J. A., & Rholes, W. S. (2010). Attachment and relationships: Milestones and future
directions. Journal of Social and Personal Relationships, 27(2), 173–180.
https://doi.org/10.1177/0265407509360909
Simpson, J. A., & Rholes, W. S. (2015). Attachment theory and research: New directions and
emerging themes. Guilford Publications.
Simpson, J. A., Rholes, W. S., & Nelligan, J. S. (1992). Support seeking and support giving
within couples in an anxiety-provoking situation: The role of attachment styles. Journal
of Personality and Social Psychology, 62(3), 434. https://doi.org/10.1037/0022-
3514.62.3.434
Simpson, J. A., Rholes, W. S., & Phillips, D. (1996). Conflict in close relationships: An
attachment perspective. Journal of Personality and Social Psychology, 71(5), 899–914.
https://doi.org/10.1037/0022-3514.71.5.899
Sonnentag, S., Cheng, B. H., & Parker, S. L. (2022). Recovery from work: Advancing the field
toward the future. Annual Review of Organizational Psychology and Organizational
Behavior, 9(1), 33–60. https://doi.org/10.1146/annurev-orgpsych-012420-091355
Starr, C. W., Saginor, J., & Worzala, E. (2020). The rise of PropTech: Emerging industrial
technologies and their impact on real estate. Journal of Property Investment & Finance,
39(2), 157–169. https://doi.org/10.1108/JPIF-08-2020-0090
Sussman, R. W., Garber, P. A., & Cheverud, J. M. (2005). Importance of cooperation and
affiliation in the evolution of primate sociality. American Journal of Physical
Anthropology, 128(1), 84–97. https://doi.org/10.1002/ajpa.20196
Tanaka, T., & Kobayashi, K. (2015). Developing a dividual model using a modular neural
network for human-robot interaction. Journal of Robotics, Networking and Artificial Life,
2(1), 34–39. https://doi.org/10.2991/jrnal.2015.2.1.9
Tang, P. M., Yam, K. C., & Koopman, J. (2020). Feeling proud but guilty? Unpacking the
paradoxical nature of unethical pro-organizational behavior. Organizational Behavior
and Human Decision Processes, 160, 68–86.
Tang, P. M., Koopman, J., Elfenbein, H. A., Zhang, J. H., De Cremer, D., Li, C. H., & Chan,
E. T. (2022a). Using robots at work during the COVID-19 crisis evokes passion decay:
Evidence from field and experimental studies. Applied Psychology, 71(3), 881–911.
Tang, P. M., Koopman, J., McClean, S. T., Zhang, J. H., Li, C. H., De Cremer, D., Lu, Y., & Ng,
C. T. S. (2022b). When conscientious employees meet intelligent machines: An
integrative approach inspired by complementarity theory and role theory. Academy of
Management Journal, 65(3), 1019–1054. https://doi.org/10.5465/amj.2020.1516
Tang, P. M., Yam, K. C., Koopman, J., & Ilies, R. (2022c). Admired and disgusted? Third
parties' paradoxical emotional reactions and behavioral consequences towards
others' unethical pro-organizational behavior. Personnel Psychology, 75, 33–67.
Tang, P. M., Koopman, J., Yam, K. C., De Cremer, D., Zhang, J. H., & Reynders, P. (2023). The
self-regulatory consequences of dependence on intelligent machines at
work: Evidence from field and experimental studies. Human Resource Management.
The Economist (2018). For artificial intelligence to thrive, it must explain itself – if it cannot,
who will trust it?. The Economist, February. https://www.economist.com/science-and-
technology/2018/02/15/for-artificial-intelligence-to-thrive-it-must-explain-itself
Tizhoosh, H. R., & Pantanowitz, L. (2018). Artificial intelligence and digital pathology:
Challenges and opportunities. Journal of Pathology Informatics, 9, 38.
https://doi.org/10.4103/jpi.jpi_53_18
Trougakos, J. P., Beal, D. J., Cheng, B. H., Hideg, I., & Zweig, D. (2015). Too drained to help:
A resource depletion perspective on daily interpersonal citizenship behaviors. Journal of
Applied Psychology, 100(1), 227. https://doi.org/10.1037/a0038082
Twenge, J. M., Baumeister, R. F., DeWall, C. N., Ciarocco, N. J., & Bartels, J. M. (2007). Social
exclusion decreases prosocial behavior. Journal of Personality and Social Psychology,
92(1), 56.
Van Dyne, L., & LePine, J. A. (1998). Helping and voice extra-role behaviors: Evidence of
construct and predictive validity. Academy of Management Journal, 41(1), 108–119.
https://doi.org/10.5465/256902
Vigoda-Gadot, E. (2006). Compulsory citizenship behavior: Theorizing some dark sides of the
good soldier syndrome in organizations. Journal for the Theory of Social Behaviour,
36(1), 77–93. https://doi.org/10.1111/j.1468-5914.2006.00297.x
Vogel, R. M., Rodell, J. B., & Sabey, T. B. (2020). Meaningfulness misfit: Consequences of
daily meaningful work needs–supplies incongruence for daily engagement. Journal of
Applied Psychology, 105(7), 760–770. https://doi.org/10.1037/apl0000464
von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for
phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404–409.
https://doi.org/10.5465/amd.2018.0084
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism
increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52,
113–117. https://doi.org/10.1016/j.jesp.2014.01.005
Webster, C., & Ivanov, S. (2020). Robotics, artificial intelligence, and the evolving nature of
work. In B. George & J. Paul (Eds.), Digital transformation in business and society:
theory and cases (pp. 127–143). Springer International Publishing.
https://doi.org/10.1007/978-3-030-08277-2_8
Weick, K. E. (1979). The social psychology of organizing. Addison-Wesley, Reading, MA.
Weiss, R. S. (1973). Loneliness: The experience of emotional and social isolation (pp. xxii, 236).
The MIT Press.
Whetten, D. A. (1989). What constitutes a theoretical contribution? The Academy of
Management Review, 14(4), 490–495. https://doi.org/10.2307/258554
Wiesenfeld, B. M., Raghuram, S., & Garud, R. (2001). Organizational identification among
virtual workers: The role of need for affiliation and perceived work-based social support.
Journal of Management, 27(2), 213–229. https://doi.org/10.1177/014920630102700205
Williams, K. D. (1997). Social Ostracism. In R. M. Kowalski (Ed.), Aversive Interpersonal
Behaviors (pp. 133–170). Springer US. https://doi.org/10.1007/978-1-4757-9354-3_7
Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining
forces. Harvard Business Review, 96(4), 114–123.
Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—
Applications and challenges. International Journal of Public Administration, 42(7), 596–
615. https://doi.org/10.1080/01900692.2018.1498103
Yam, K. C., Christian, M. S., Wei, W., Liao, Z., & Nai, J. (2017). The mixed blessing of leader
sense of humor: Examining costs and benefits. Academy of Management Journal, 61(1),
348–369. https://doi.org/10.5465/amj.2015.1088
Yam, K. C., Tang, P. M., Jackson, J. C., Su, R., & Gray, K. (2023). The rise of robots increases
job insecurity and maladaptive workplace behaviors: Multimethod evidence. Journal of
Applied Psychology. Online Advance Version.
Yip, J., Ehrhardt, K., Black, H., & Walker, D. O. (2018). Attachment theory at work: A review
and directions for future research. Journal of Organizational Behavior, 39(2), 185–198.
https://doi.org/10.1002/job.2204
Yoon, S., McClean, S. T., Chawla, N., Kim, J. K., Koopman, J., Rosen, C. C., Trougakos, J.
P., & McCarthy, J. M. (2021). Working through an 'infodemic': The impact of
COVID-19 news consumption on employee uncertainty and work behaviors.
Journal of Applied Psychology, 106, 501–517.
Yoon, S., Koopman, J., Dimotakis, N., Simon, L. S., Liang, L. H., Ni, D., Zheng, X., Fu, S.,
Lee, Y. E., Tang, P., Ng, C. T. S., Bush, J., Darden, T., Forrester, J., Tepper, B. J., &
Brown, D. J. (2023). Journal of Applied Psychology.
Yu, S., Indurthi, S. R., Back, S., & Lee, H. (2018). A multi-stage memory augmented neural
network for machine reading comprehension. Proceedings of the Workshop on Machine
Reading for Question Answering, 21–30. https://doi.org/10.18653/v1/W18-2603
Yue, Y., Wang, K. L., & Groth, M. (2017). Feeling bad and doing good: The effect of customer
mistreatment on service employee’s daily display of helping behaviors. Personnel
Psychology, 70(4), 769–808. https://doi.org/10.1111/peps.12208
Zhao, J., Yang, X., Xiao, R., Zhang, X., Aguilera, D., & Zhao, J. (2012). Belief system,
meaningfulness, and psychopathology associated with suicidality among Chinese college
students: A cross-sectional survey. BMC Public Health, 12, 668.
https://doi.org/10.1186/1471-2458-12-668
Zhu, H. (2020). Big data and artificial intelligence modeling for drug discovery. Annual Review
of Pharmacology and Toxicology, 60(1), 573–589. https://doi.org/10.1146/annurev-
pharmtox-010919-023324
Zwerling, I. (1959). Psychiatric findings in an interdisciplinary study of forty-six alcoholic
patients. Quarterly Journal of Studies on Alcohol, 20, 543–554.
https://doi.org/10.15288/qjsa.1959.20.543
Table 1. Descriptive Statistics and Correlations among Study Variables (Study 1)

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
1. Interaction frequency with AI | 4.59 | 1.18 | (.93)
2. Interaction frequency with coworkers (control) | 5.16 | 1.31 | -.03 | (.89)
3. Interaction quality with coworkers (control) | 4.78 | 1.32 | -.14 | .66* | (.82)
4. Need for affiliation | 4.71 | 1.10 | .32* | .37* | .25* | (.82)
5. Loneliness | 3.29 | 1.40 | .24* | -.09 | -.16* | .03 | (.84)
6. Need for belongingness (control) | 5.82 | .74 | .27* | .21* | .15 | .29* | .00 | (.84)
7. Relatedness need (control) | 5.03 | 1.24 | .09 | .10 | -.03 | .22* | .11 | .30* | (.87)
8. Helping behavior | 4.40 | 1.38 | .10 | .29* | .41 | .29* | -.07 | .12 | .02 | (.88)
9. Alcohol consumption | 4.22 | 8.48 | .19* | -.02 | -.01 | .04 | .25* | -.05 | -.07 | .03 | -
10. Insomnia | 3.97 | 1.40 | .20* | .02 | -.02 | .16* | .37* | .09 | -.02 | .01 | .10 | (.86)
11. Attachment anxiety | 3.60 | 1.46 | .11 | -.29* | -.29* | -.25* | .09 | -.17* | -.01 | -.25* | .05 | .07 | (.90)
12. Attachment avoidance (control) | 3.99 | 1.78 | .04 | -.39* | -.38* | -.35* | .08 | -.05 | .00 | -.33* | -.02 | -.07 | .41* | (.96)
13. Age (years) | 34.30 | 5.75 | -.10 | .042 | -.05 | -.10 | -.01 | -.05 | -.08 | .02 | -.10 | -.07 | -.05 | .00 | -
14. Gender (1 = male; 0 = female) | .53 | .50 | .11 | .042 | .03 | .09 | .03 | -.01 | -.03 | -.05 | .06 | .08 | -.11 | -.09 | .03
15. Tenure (years) | 2.96 | 1.62 | .19* | .093 | .10 | .08 | .16* | .06 | -.02 | .05 | .04 | .22* | .02 | -.02 | .30* | .08
Note. N = 166. Scale reliabilities are reported along the diagonal in parentheses.
* p < .05
Table 2. Path Analysis (Primary Hypotheses; Study 1)

Variables | Need for affiliation B (SE) | Loneliness B (SE) | Helping B (SE) | Alcohol consumption B (SE) | Insomnia B (SE)
Intercept | 4.69* (.07) | 3.27* (.11) | 3.00* (.59) | .23 (3.27) | 2.11* (.57)
Independent variable
Interaction frequency with AI (IF) | .34* (.07) | .28* (.10) | .03 (.13) | 1.00* (.47) | .10 (.10)
Mediators
Need for affiliation | -- | -- | .35* (.09) | -.04 (.75) | .16 (.10)
Loneliness | -- | -- | -.08 (.10) | 1.28* (.60) | .34* (.07)
Moderator
Attachment anxiety (AAN) | -.22 (.04) | .06 (.10) | -- | -- | --
Interaction (AAN × IF) | .07* (.03) | .06 (.07) | -- | -- | --
Note. N = 166. * p < .05
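Readers who wish to gauge the indirect effects implied by these estimates can do so with the Monte Carlo approach of Selig and Preacher (2008), cited above. The short Python sketch below is purely illustrative and is not the authors' analysis script; it plugs in the Table 2 estimates for the path from interaction frequency with AI to loneliness (B = .28, SE = .10) and from loneliness to insomnia (B = .34, SE = .07), and it assumes the two sampling distributions are independent and approximately normal.

```python
# Minimal sketch (not the authors' script): Monte Carlo confidence interval for an
# indirect effect in the style of Selig and Preacher (2008), illustrated with the
# Study 1 estimates in Table 2 (AI interaction frequency -> loneliness -> insomnia).
import numpy as np

rng = np.random.default_rng(42)

a, se_a = 0.28, 0.10   # path: interaction frequency with AI -> loneliness (Table 2)
b, se_b = 0.34, 0.07   # path: loneliness -> insomnia (Table 2)

# Draw each path coefficient from its approximate sampling distribution and form the product.
draws = rng.normal(a, se_a, 20_000) * rng.normal(b, se_b, 20_000)

point_estimate = a * b
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])
print(f"indirect effect = {point_estimate:.3f}, 95% MC CI = [{ci_low:.3f}, {ci_high:.3f}]")
```

The same two-line simulation can be repeated for any pair of first- and second-stage estimates reported in Tables 2, 4, 7, and 9.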
Table 3. Descriptive Statistics and Correlations among Study Variables (Study 2)

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
1. Condition (1 or 0) | .50 | .50 | -
2. Interaction frequency with coworkers (control) | 5.24 | 1.37 | .10 | (.88)
3. Interaction quality with coworkers (control) | 5.28 | 1.12 | .09 | .56* | (.91)
4. Need for affiliation | 4.17 | 1.78 | .63* | .11 | .12 | (.94)
5. Loneliness | 3.96 | 1.72 | .64* | .11 | .03 | .68* | (.94)
6. Need for belongingness (control) | 4.60 | 1.27 | .24* | .39* | .52* | .25* | .25* | (.94)
7. Relatedness need (control) | 3.15 | 1.43 | .10 | -.20* | .40* | -.08 | -.06 | -.57* | (.93)
8. Helping behavior | 4.90 | 1.66 | .12 | .20* | .26* | .26* | .08 | .25* | -.12 | (.95)
9. Alcohol consumption | 2.67 | 2.58 | .32* | .13 | .02 | .27* | .32* | .09 | -.01 | .02 | -
10. Insomnia | 3.36 | 1.72 | .38* | .10 | .04 | .35* | .49* | -.02 | .09 | -.09 | .17 | (.87)
11. Attachment anxiety | 3.00 | 1.10 | .17 | .09 | -.06 | .09 | .22* | -.15 | .24* | .08 | .08 | .27* | (.92)
12. Attachment avoidance (control) | 4.22 | .99 | .07 | -.16 | -.27* | -.15 | .01 | -.30* | .32* | -.16 | -.09 | .09 | .35* | (.74)
13. Age (years) | 30.68 | 6.20 | .22* | -.04 | -.06 | .17* | .06 | .05 | .12 | .15 | .14 | -.01 | -.05 | .07 | -
14. Gender (1 = male; 0 = female) | .62 | .49 | .27* | .17 | .19* | .16* | .21* | .10 | .00 | .09 | .09 | .12 | .26* | .21* | .06 | -
15. Tenure (years) | 3.03 | 1.61 | .11 | .17 | .21* | .09 | .12* | .18 | -.12 | .03 | -.06 | .05 | -.01 | -.05 | .21* | .07
Note. N = 120. 1 = AI condition; 0 = Control condition. Scale reliabilities are reported along the diagonal in parentheses.
* p < .05
Table 4. Path Analysis (Primary Hypotheses; Study 2)

Variables | Need for affiliation B (SE) | Loneliness B (SE) | Helping B (SE) | Alcohol consumption B (SE) | Insomnia B (SE)
Intercept | 2.97* (.17) | 2.83* (.16) | 4.09* (.42) | .95 (.64) | 1.53* (.40)
Independent variable
AI Condition (AI) | 2.24* (.24) | 2.10* (.23) | -.05 (.40) | .95 (.61) | .38 (.38)
Mediators
Need for affiliation | -- | -- | .37* (.12) | .05 (.18) | -.01 (.11)
Loneliness | -- | -- | -.18 (.12) | .27 (.19) | .42* (.12)
Moderator
Attachment anxiety (AAN) | -.43* (.15) | -.23 (.14) | -- | -- | --
Interaction (AAN × AI) | .83* (.22) | .85* (.21) | -- | -- | --
Note. N = 120. * p < .05
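To see what the significant AI Condition × Attachment anxiety interaction implies in practical terms, the conditional (simple) effects can be recovered directly from the reported coefficients. The sketch below is an illustration rather than the authors' code; it combines the Table 4 estimates with the attachment anxiety SD from Table 3 (1.10) and assumes the moderator was mean-centered, so that "low" and "high" correspond to one SD below and above the mean.

```python
# Minimal sketch (illustration only, not the authors' code): conditional effects of the
# AI condition on need for affiliation at +/-1 SD of attachment anxiety (Study 2).
b_condition   = 2.24   # effect of the AI condition at the moderator's mean (Table 4)
b_interaction = 0.83   # AI condition x attachment anxiety interaction (Table 4)
sd_anxiety    = 1.10   # SD of attachment anxiety (Table 3)

for label, w in (("low (-1 SD)", -sd_anxiety), ("high (+1 SD)", sd_anxiety)):
    simple_effect = b_condition + b_interaction * w
    print(f"Effect of AI condition at {label} attachment anxiety: {simple_effect:.2f}")
```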
Table 5. Industries and example job titles of participants (Study 3)

Industry | Percentage of participants (N = 214) | Example job titles
Educational services | 13.1% | Teaching assistant, science teacher, senior lecturer
Finance and insurance | 9.3% | Finance manager, financial analyst, accountant, broker
Health care and social assistance | 9.3% | Nurse, massage therapist, pharmacist
Information | 9.3% | Software developer, digital marketing, web designer
Retail and service | 6.5% | Retail assistant, customer service manager, cashier
Accommodation and food services | 4.7% | Bartender, waitress, baker
Arts, entertainment, and recreation | 4.7% | Graphic designer, blog writer, producer
Professional, scientific, and technical services | 4.7% | Lawyer, electrical engineer, consultant
Management | 3.7% | Operation manager, project manager, assistant manager
Public administration | 3.7% | Civil servant, procurement manager, human resources
Administration | 2.8% | Administrator, proofreader
Transportation | 2.8% | Crew scheduler, logistics and transport specialist
Construction | 2.3% | Architect, senior site manager
Others | 23.1% | Dog trainer, homemaker, research fellow
Note. The categorization of industries was based on the 2017 North American Industry Classification System (NAICS):
https://www.census.gov/naics/
Table 6. Descriptive Statistics and Correlations among Study Variables (Study 3)

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
1. Condition (1 or 0) | 0.49 | 0.50 | -
2. Need for affiliation | 5.11 | 1.04 | .35* | (.91)
3. Loneliness | 3.73 | 1.25 | .29* | .07 | (.81)
4. Need for belongingness (control) | 4.82 | 1.05 | .00 | .17* | -.27* | (.86)
5. Relatedness need (control) | 4.80 | 1.11 | .05 | .27* | -.24* | .46* | (.71)
6. Helping behavior | 5.61 | 1.14 | -.06 | .21* | -.25* | .50* | .36* | (.90)
7. Alcohol consumption | 2.85 | 1.74 | -.19* | -.07 | .21* | -.07 | -.08 | .01 | -
8. Insomnia | 3.29 | 1.62 | -.03 | -.14* | .22* | -.15* | -.20* | -.00 | .21* | (.87)
9. Attachment anxiety | 2.77 | 1.28 | .21* | -.11 | .42* | -.32* | -.26* | -.28* | .11 | .17* | (.91)
10. Attachment avoidance (control) | 3.84 | 1.55 | -.52* | -.37* | .08 | -.27* | -.18* | -.14* | .22* | .22* | .18* | (.85)
11. Attitudes towards AI (control) | 4.44 | 1.13 | -.28* | -.00 | -.08 | .06 | .05 | .06 | .17* | -.11 | -.03 | .16* | (.94)
12. Age (years) | 33.97 | 11.91 | .32* | .12 | -.06 | .11 | .06 | .20* | -.20* | -.01 | -.08 | -.23* | -.20* | -
13. Gender (1 = male; 0 = female) | .67 | .47 | .19* | .12 | .06 | .13 | .09 | .09 | -.05 | .08 | .06 | -.11 | -.26* | .04 | -
14. Tenure (years) | 9.71 | 11.41 | .24* | .09 | .01 | .09 | .01 | .15* | -.11 | .07 | -.11 | -.18* | -.09 | .73* | -.10 | -
15. Race | 1.49 | 1.19 | -.17* | -.06 | -.12 | .06 | -.06 | -.02 | .14* | .01 | -.11 | .09 | .15* | -.20* | -.07 | -.14*
Note. N = 214. 1 = AI condition; 0 = Control condition. Scale reliabilities are reported along the diagonal in parentheses. Race is coded as 1 = White; 2 = Black American; 3 = American Indian; 4 = Asian; 5 = Native Hawaiian or other Pacific Islander; 6 = other.
* p < .05
Table 7. Path Analysis (Primary Hypotheses; Study 3)

Variables | Need for affiliation B (SE) | Loneliness B (SE) | Helping B (SE) | Alcohol consumption B (SE) | Insomnia B (SE)
Intercept | 4.67* (.09) | 3.42* (.11) | 5.12* (.43) | 1.72* (.66) | 3.24* (.63)
Independent variable
AI Condition (AI) | .82* (.13) | .53* (.15) | -.18 (.16) | -.96* (.25) | -.19 (.24)
Mediators
Need for affiliation | -- | -- | .28* (.08) | .02 (.12) | -.21 (.11)
Loneliness | -- | -- | -.22* (.06) | .40* (.09) | .33* (.09)
Moderator
Attachment anxiety (AAN) | -.32* (.08) | .19* (.09) | -- | -- | --
Interaction (AAN × AI) | .29* (.10) | .32* (.12) | -- | -- | --
Note. N = 214. * p < .05
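Because attachment anxiety enters only the equations predicting the two mediators, the conditional indirect effects take the familiar first-stage moderated mediation form, indirect(W) = (a1 + a3 × W) × b. The snippet below is a hedged illustration, not the authors' script; it uses the Study 3 estimates in Table 7 together with the attachment anxiety SD from Table 6 (1.28) and again assumes a mean-centered moderator.

```python
# Minimal sketch (illustration only): conditional indirect effect of the AI condition on
# insomnia through loneliness at +/-1 SD of attachment anxiety (Study 3).
a1 = 0.53          # AI condition -> loneliness (Table 7)
a3 = 0.32          # AI condition x attachment anxiety -> loneliness (Table 7)
b = 0.33           # loneliness -> insomnia (Table 7)
sd_anxiety = 1.28  # SD of attachment anxiety (Table 6)

for label, w in (("-1 SD", -sd_anxiety), ("+1 SD", sd_anxiety)):
    conditional_indirect = (a1 + a3 * w) * b
    print(f"Conditional indirect effect at {label} attachment anxiety: {conditional_indirect:.3f}")
```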
Table 8. Descriptive Statistics and Correlations among Study Variables (Study 4)

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
1. Condition (1 or 0) | 0.50 | 0.50 | -
2. Interaction frequency with coworkers (control) | 5.00 | 1.21 | .00 | (.81)
3. Interaction quality with coworkers (control) | 5.11 | 1.06 | -.06 | .64* | (.88)
4. Need for affiliation | 5.03 | 1.58 | .47* | .01 | .08 | (.98)
5. Loneliness | 3.17 | 1.48 | .23* | -.28* | -.32* | .11 | (.93)
6. Need for belongingness (control) | 4.45 | 1.19 | -.27* | .42* | .48* | -.04 | -.34* | (.88)
7. Relatedness need (control) | 4.65 | 1.11 | .09 | .28* | .29* | .11 | -.35* | .32* | (.71)
8. Helping behavior | 5.45 | 1.03 | -.08 | .42* | .43* | .09 | -.32* | .37* | .29* | (.86)
9. Alcohol consumption | 2.53 | 3.93 | -.00 | .06 | .01 | .01 | .19* | .04 | -.04 | .08 | -
10. Insomnia | 3.51 | 1.66 | .04 | -.15* | -.17* | -.12* | .34* | -.14* | -.30* | -.16* | .08 | (.90)
11. Attachment anxiety | 4.33 | 1.47 | -.03 | -.08 | -.05 | -.18* | .37* | -.19* | -.32* | -.08 | .09 | .29* | (.91)
12. Attachment avoidance (control) | 4.65 | 1.34 | -.05 | -.14* | -.20* | .04 | .22* | -.22* | -.31* | -.12* | .04 | .14* | .14* | (.83)
13. Attitudes towards AI (control) | 4.80 | 1.04 | -.08 | .08 | .11 | -.04 | -.04 | .07 | .01 | .09 | .03 | .04 | .05 | .16* | (.92)
14. Age (years) | 37.92 | 7.77 | -.05 | -.00 | -.01 | -.02 | -.04 | -.02 | -.08 | -.03 | .00 | -.11 | .08 | -.00 | -.02 | -
15. Gender (1 = male; 0 = female) | .47 | .50 | -.05 | -.01 | -.03 | .12* | .02 | -.01 | .04 | -.04 | .08 | -.05 | .02 | -.01 | -.01 | -.05 | -
16. Tenure (years) | 3.18 | 1.85 | -.07 | .09 | .02 | .01 | -.08 | -.00 | .05 | -.06 | .06 | -.04 | .01 | -.03 | -.12* | .21* | .08
Note. N = 294. 1 = AI condition; 0 = Control condition. Scale reliabilities are reported along the diagonal in parentheses.
* p < .05
Table 9. Path Analysis (Primary Hypotheses; Study 4)

Variables | Need for affiliation B (SE) | Loneliness B (SE) | Helping B (SE) | Alcohol consumption B (SE) | Insomnia B (SE)
Intercept | 4.30* (.11) | 2.82* (.11) | 5.71* (.22) | .91 (.88) | 3.14* (.35)
Independent variable
AI Condition (AI) | 1.47* (.16) | .70* (.15) | -.17 (.13) | -.44 (.52) | .16 (.21)
Mediators
Need for affiliation | -- | -- | .10* (.04) | .03 (.16) | -.19* (.06)
Loneliness | -- | -- | -.22* (.04) | .53* (.16) | .39* (.06)
Moderator
Attachment anxiety (AAN) | -.30* (.07) | .24* (.07) | -- | -- | --
Interaction (AAN × AI) | .27* (.11) | .29* (.11) | -- | -- | --
Note. N = 294. * p < .05
Figure 1. Hypothesized Model
Figure 2. Attachment Anxiety Moderates the Relationship between Interaction Frequency with AI and Need for Affiliation
Note. Figure 2A – Study 1 | Figure 2B – Study 2 | Figure 2C – Study 3 | Figure 2D – Study 4
Figure 3. Attachment Anxiety Moderates the Relationship between Interaction Frequency with AI and Loneliness
Note. Figure 3A – Study 2 | Figure 3B – Study 3 | Figure 3C – Study 4