The Dark Side of AI-powered Service Interactions: Exploring the Process of Co-destruction from the Customer Perspective
Abstract
Artificial intelligence (AI)-powered chatbots are changing the service interface from being
human-driven to technology-dominant. As a result, customers are expected to resolve issues
themselves before reaching out to customer service representatives, ultimately becoming a
central element of service production as co-creators of value. However, AI-powered
interactions can also fail, potentially leading to anger, confusion, and customer dissatisfaction.
We draw on the value co-creation literature to investigate the process of co-destruction in AI-
powered service interactions. We adopt an exploratory approach based on in-depth interviews
with 27 customers who have interacted with AI-powered chatbots in customer service settings.
We find five antecedents of failed interactions between customers and chatbots: authenticity
issues, cognition challenges, affective issues, functionality issues, and integration conflicts. We
observe that although customers do accept part of the responsibility for co-destruction, they
largely attribute the problems they experience to resource misintegration by service providers.
Our findings contribute a better understanding of value co-destruction in AI-powered service
settings and provide a richer conceptualization of the link between customer resource loss,
attributions of resource loss, and subsequent customer coping strategies. Our findings also offer
service managers insights into how to avoid and mitigate value co-destruction in AI service
settings.
Practical contribution to policy making in support of progress towards UN Sustainable
Development Goal 8: promoting inclusive and sustainable economic growth, employment,
and decent work for all.
Our study demonstrates the disruptive effects of AI-powered technologies for end-users and
companies deploying such technologies to their customers. Our findings point to the need for
policy makers to educate businesses and entrepreneurs on the proper use of AI, particularly on
the protection of user privacy, avoidance of malicious use, and the provision of transparency so
as to avoid public and consumer loss of confidence in such technologies. In this way, the
negative effects of AI-powered technologies can be mitigated, and AI deployment can result in
both customer satisfaction and cost optimization, leading to economic growth.
Keywords
Value Co-destruction; Customer Resource Loss; Artificial Intelligence; Automated Service
Interactions; Chatbots; Service Robots; Value Co-creation
Word Count
9,048 words (excluding References)
Introduction
Artificial intelligence (AI) is rapidly transforming service encounters, as frontline
employees (FLEs) are increasingly becoming supported or even replaced by AI technology.
Indeed, automated customer service agents attracted the highest share of AI investment
(USD 4.5 billion worldwide) in 2019 (IDC, 2019). Such technologies, which include
conversational agents (or chatbots) and voice-controlled digital assistants (e.g., Alexa), are
fundamentally changing the nature of the service interface from one that is human-driven to
one that is predominantly autonomous and technology-dominant (Larivière et al., 2017).
By interacting with AI technologies to self-serve, a customer becomes a central element of
service production, a partial employee and a co-creator of value (Bitner et al., 1997; Vargo &
Lusch, 2004). Active customer participation during service encounters yields several benefits.
For service providers, it results in an improved ability to understand and react to customer needs
(Etgar, 2008), while customers value the convenience and the cost savings afforded by such
technologies (Ho & Ko, 2008). AI technologies can also provide more convenient, accessible
services compared to the more traditional services they substitute, generally enabling
customers to accomplish specific tasks more easily.
However, positive value creation is not the only outcome that can arise when customers
interact with AI-powered technologies. In the same way that value is collaboratively co-created,
it can be collaboratively co-destroyed during the process of interaction (Echeverri & Skålén,
The autonomy of AI may produce suboptimal outcomes if the technology adapts in
unexpected ways or if the wrong data is acted on by FLEs or customers (Bock et al., 2020).
Furthermore, AI technologies rely on customer participation, which increases service
complexity and, eventually, the likelihood of service failure (Hilton & Hughes, 2013). As
customers invest higher levels of effort and time into an interaction, they might feel annoyed
and frustrated when the co-created service fails to meet their expectations (Grönroos & Voima,
2013; Harrison & Waite, 2015). Indeed, these instances represent the loss of valuable resources,
such as time and patience, for the customer (Harrison & Waite, 2015).
Academic literature to date has offered an incomplete understanding of the antecedents of
co-destruction and resource loss in particular (Järvi et al., 2018; Smith, 2013). Further research
on value co-destruction is required to obtain a more complete and refined appreciation of the
processes involved (Ostrom et al., 2015). Such an understanding is especially important in light
of the pervasiveness of AI technologies in service, particularly regarding conditions and drivers
that determine how AI may lead to diminished value creation (Bock et al., 2020).
We aim to understand the process of co-destruction in AI-powered service interactions and
argue that the extant literature does not offer sufficient insights into this topic. This gap is
potentially the result of three research limitations.
First, whereas value co-creation has been given considerable research attention (e.g., Cova
& Dalli, 2009; Morosan & DeFranco, 2016; Ramaswamy & Ozcan, 2018; Verleye, 2015), co-
destruction has been largely overlooked (Ostrom et al., 2015). The few studies that address
co-destruction have generally focused on conceptual discussions of co-destruction and the
associated resource loss (e.g., Echeverri & Skålén, 2011; Plé & Chumpitaz Cáceres, 2010),
with few empirically examining resource loss and its causes
(e.g., Smith, 2013). Customers are increasingly playing a more significant role in service
delivery, as service encounters are becoming increasingly infused with technology and
automation. We argue that, as a result of this trend, a better understanding of the antecedents of
value co-destruction, particularly customer resource loss, is required. Such an understanding
is vital for avoiding co-destruction in service encounters that are specifically designed to
foster value co-creation (Plé, 2017).
Second, what we do know about co-destruction comes from traditional service settings,
such as insurance (Blut et al., 2019). There have been calls for further research on value co-
destruction in a diversity of industries and contexts (Prior & Marcos-Cuevas, 2016), especially
in technology-driven service environments (Quach & Thaichon, 2017). Ostrom et al. (2015)
propose that rapid changes in service experience and delivery that are being brought about by
technology necessitate novel service-related knowledge. Evolved service encounters create an
opportunity to evaluate how AI affects core service areas, such as co-creation and co-
destruction (Robinson et al., 2020).
Third, a consumer-centric understanding of value co-destruction remains limited
(Camilleri & Neuhofer, 2017; Yin et al., 2019). Most co-destruction studies examine provider-customer relationships in Business-to-Business (B2B) settings (Echeverri & Skålén, 2011; Järvi
et al., 2018; Vafeas et al., 2016), whereas few studies focus on the customer perspective (Kim
et al., 2019; Smith, 2013). Service-Dominant (S-D) logic proposes that value is 'uniquely and
phenomenologically determined by the beneficiary' (Vargo & Lusch, 2008, p. 7). However,
research examining how consumers individually experience value creation and destruction
remains scarce (Kelleher & Peppard, 2011). We argue that it is vital to understand the
antecedents of customer resource loss from the customer point of view, especially in AI-
powered self-service settings, where the customer lies at the very core of service delivery.
Our study seeks to help close these gaps by investigating the process of co-destruction in
AI-powered service settings. We aim to address two distinct objectives: first, to understand the
transformational effects of AI on co-destruction, and second, to analyze the process of co-
destruction from the customer perspective. We draw on an in-depth empirical study based on
27 interviews with customers who have already interacted with AI-powered chatbots. We
propose a conceptualization of the co-destruction process through AI technology to
demonstrate the link between customer resource loss, attributions of resource loss, and
customer coping strategies following such loss.
In the following sections, we first discuss how AI is transforming the service industry. We
then explore the theoretical concept of value co-destruction through an S-D logic lens.
Next, we describe the research method before presenting our findings and discussing the
proposed conceptualization of co-destruction in AI service settings. We offer theoretical
contributions for value co-destruction research, as well as managerial implications for the
service industry based on our findings.
Theoretical Background
AI in Service Encounters
The extant literature generally describes AI in terms of human intelligence, or the mimicking
of intelligent human behavior, involving a number of cognitive functions, such as rational
thinking, problem-solving, and learning (Huang & Rust, 2018; Syam & Sharma, 2018;
Tussyadiah, 2020). Distinct abilities of AI have been proposed in relation to the human skills
that can be reproduced. Huang and Rust (2018) discuss four different types of intelligence
needed for service tasks: mechanical, analytical, intuitive, and empathetic. Mechanical
intelligence is related to repetitive mundane tasks, which are exemplified by those performed
by call center agents. Analytical intelligence concerns the ability to process information for
problem-solving and learning. It depends largely on machine learning (ML), a subset of AI
that allows systems to automatically learn and improve from past experiences without being
explicitly programmed to do so. Intuitive intelligence is associated with creative thinking and
problem-solving, such as that required by marketing managers and doctors. Empathetic
intelligence refers to one’s ability to identify and comprehend others’ emotions and respond
accordingly. Empathetic intelligence is central for those whose occupations require interpersonal
and people skills, such as psychologists.
Chatbots, or conversational agents, are examples of AI applications that are being deployed
in service settings for tasks of a mechanical or analytical nature. A conversational agent is a
virtual, autonomous, technological object that can engage in proactive or reactive behavior
(Holz et al., 2009). Chatbots are a type of disembodied conversational agent, as they do not
have a physical appearance; they allow user interactions through only voice or text interfaces
(Araujo, 2018; Keyser et al., 2019). However, not all chatbots are AI-driven. For instance, rule-
based chatbots are scripted with pre-programmed logic and follow a predetermined path of
questions and answers, exhibiting minimal intelligence (Tuzovic & Paluch, 2018). Such entry-
level chatbots can be implemented to answer Frequently Asked Questions (FAQs), such as
delivery and shipping-related questions (Buhalis & Cheng, 2020). By contrast, AI-driven
chatbots are capable of understanding and communicating via human language through natural
language processing (NLP) (Griol et al., 2013). Additionally, ML allows chatbots to
continuously learn and evolve as they obtain access to increased amounts of data (Kumar et al.,
2016).
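To make this distinction concrete, the following minimal Python sketch (our own illustration, not drawn from the cited sources) contrasts a scripted, rule-based FAQ bot with an AI-driven one. The keyword rules, intent vocabularies, and replies are hypothetical, and the toy word-overlap scorer merely stands in for a trained NLP/ML intent model.

```python
# Rule-based: pre-programmed logic following a predetermined path (minimal intelligence).
FAQ_RULES = {
    "delivery": "Standard delivery takes 3-5 working days.",
    "shipping": "We ship to all EU countries.",
}

def rule_based_reply(message: str) -> str:
    """Scripted keyword lookup; no learning, no language understanding."""
    for keyword, answer in FAQ_RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I can only answer delivery and shipping questions."

# AI-driven: a toy intent classifier stands in for an NLP/ML model trained
# on past conversations; real systems would use learned representations.
INTENT_EXAMPLES = {
    "track_order": {"where", "order", "parcel", "received", "tracking"},
    "refund_request": {"refund", "money", "back", "return"},
}

def classify_intent(message: str) -> str:
    """Pick the intent whose example vocabulary overlaps the message most."""
    words = set(message.lower().split())
    scores = {intent: len(words & vocab) for intent, vocab in INTENT_EXAMPLES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def ai_driven_reply(message: str) -> str:
    intent = classify_intent(message)
    if intent == "track_order":
        return "I can look that up. What is your order number?"
    if intent == "refund_request":
        return "I'll start a refund request for you."
    return "Let me connect you with a human representative."

print(rule_based_reply("Where is my delivery?"))      # scripted answer
print(ai_driven_reply("I never received my parcel"))  # inferred intent: track_order
```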
Examples of AI-driven chatbots include virtual assistants Alexa (Amazon), Siri (Apple),
and Edward (made available by Edwardian Hotels). Edward can communicate in natural,
conversational language to guide tourists throughout their entire travel journey, and it learns
from every interaction (Tussyadiah, 2020). In 2019, Edward managed 69% of all guest queries,
resulting in increased efficiency and the re-assignment of staff from repetitive tasks to more
important queries (Oram, 2019).
The information-rich nature of the service industry is a possible reason for the widespread
adoption of chatbots by companies (Kumar et al., 2016), as companies are constantly striving
to streamline their operations and achieve cost savings (Ukpabi et al., 2019). In service settings,
chatbots can create a prompt, interactive, convenient, and cost-effective channel for
communicating with customers throughout their entire journey (Belanche et al., 2020; Chung
et al., 2018; Gnewuch et al., 2018).
Despite companies’ increased enthusiasm for chatbot deployment, a number of important
questions have emerged, including the potential for chatbots to significantly affect relationships
between customers and service providers at the service frontline (van Doorn et al., 2017).
Chatbots have the ability to either augment or substitute frontline service employees (Davenport
et al., 2019). The literature suggests that AI technologies can assist FLEs by helping them
perform their roles better, or they can completely replace and automate employees active
involvement in service encounters (Keyser et al., 2019; Marinova et al., 2017). Several studies
have challenged the classic idea that augmentation and substitution are mutually exclusive, as
both effects can emerge simultaneously during the adoption of a technology (e.g., Ivanov &
Webster, 2019). Thus, whether through augmentation or substitution, AI technology will have
an impact on a significant portion of service encounters.
There is growing evidence showing that, while AI-powered chatbots can enrich the
customer experience by learning from previous customer conversations and continuously
adapting their responses from such learning (Xu et al., 2017), they can also cause discomfort
(Mende et al., 2017). Studies of human-computer interaction have reported that when chatbots
are designed to be more complex and animated, exhibiting high levels of anthropomorphism,
customers experience the negative feelings of eeriness and unease (Ciechanowski et al., 2019).
Negative emotions can lead to negative attitudes towards a service provider, with resultant,
and often irreversible, reduced purchase intentions (Demoulin & Willems, 2019). Thus,
identifying the conditions under which chatbots can undermine the customer experience is an
urgent objective.
Value Co-creation and Co-destruction in Technology-Driven Service Encounters
When interacting with technology to self-serve, customers adopt a critical role in service
production. In this role as partial employees and active co-creators of value, customers become
fully engaged in solving problems and delivering the required service (Bitner et al., 1997). Here,
we will draw on the value co-creation literature to understand the antecedents to value loss in
AI-powered service encounters.
Value co-creation implies that value and experiences can no longer be merely delivered to
customers; rather, the service provider can only present value propositions (Vargo & Lusch,
2008). Once customers accept such propositions and successfully integrate their operant and
operand resources, value is co-created collaboratively and interactively (Chandler & Lusch,
2015; Hilton & Hughes, 2013). In an AI-driven service encounter, the service provider can only
create a value proposition that includes the availability of the chatbot, together with a number
of modules, such as the user interface, the knowledge base, and the NLP interpreter module, in
order to be able to read, understand, and derive meaning from human language (Buhalis &
Cheng, 2020). It is the customer who, through the integration of operand and operant resources
(e.g., skills, time, and access to a mobile phone and the Internet), seeks to collaborate (i.e.,
interact) with the chatbot and, as a result, determines value creation. For example, instead of
engaging in face-to-face interactions with a receptionist at a hotel, customers can get in touch
with an AI-powered chatbot that is able to answer queries at any time of the day, irrespective
of the customer’s location or language. Such individualized, contextualized experiences based
on instant dynamic engagement between the customer and the service provider are examples of
real-time value co-creation (Buhalis & Sinarta, 2019).
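The following sketch (ours, with purely hypothetical names) restates this logic in code: the provider can only assemble the chatbot's modules as a value proposition, and value is realized only when the customer integrates his or her own operant and operand resources by interacting with it.

```python
from dataclasses import dataclass, field

@dataclass
class ChatbotValueProposition:
    """What the provider offers: the chatbot and its modules (illustrative)."""
    user_interface: str = "web chat widget"
    knowledge_base: dict = field(default_factory=dict)   # provider's operand resource
    nlp_interpreter: str = "intent + entity extraction"  # derives meaning from language

@dataclass
class CustomerResources:
    """What the customer brings: operant and operand resources."""
    skills: list
    time_available: bool
    has_internet_access: bool

def value_cocreated(offer: ChatbotValueProposition, customer: CustomerResources) -> bool:
    """Value emerges only if the customer can and does integrate resources."""
    return customer.has_internet_access and customer.time_available and bool(offer.knowledge_base)

offer = ChatbotValueProposition(knowledge_base={"check-in": "From 3 pm."})
guest = CustomerResources(skills=["clear written queries"], time_available=True,
                          has_internet_access=True)
print(value_cocreated(offer, guest))  # True only when both sides' resources integrate
```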
The premise of a collaborative process of co-creation between a service provider and a
customer has recently attracted criticism, since it inherently implies that interactions between
the two actors tend to result in value co-creation (i.e., positive valence). Recent studies have
drawn attention to the possibility that interactions between a service provider and a customer
can also result in negative outcomes, where at least one of the actors experiences a decline in
value from the interaction with the other actor (Plé & Chumpitaz Cáceres, 2010). This negative
outcome has been conceptualized as co-destruction, defined as 'an interactional process
between service systems that results in a decline in at least one of the systems' well-being
(which, given the nature of a service system, can be individual or organizational)' (Plé &
Chumpitaz Cáceres, 2010, p. 431). More specifically, value co-destruction can be experienced
by any or all of the actors involved in an interaction and can be either intentional or accidental
(Plé & Chumpitaz Cáceres, 2010). Value co-destruction implies that when an actor (for
example, the customer) integrates a resource with another resource of another actor (for
example, the service provider), the well-being of any one or both of these actors diminishes
(Plé & Chumpitaz Cáceres, 2010). This decline in well-being stems from a discrepancy between
the actors’ expectations regarding actual or perceived resource integration (Plé, 2017).
The extant literature generally encapsulates the factors that lead to a decline in well-being,
and therefore co-destruction, under the term 'unexpected resource loss' (Smith, 2013, p. 1903).
For example, a customer may interact with Alexa, Google Home, or Siri, expecting the
experience to be frictionless. Instead they may find the actual experience to be frustrating and
ineffective (Kaplan & Haenlein, 2018). In this case, the customer experiences a decline in well-
being (frustration) and loses resources (time), thus experiencing value co-destruction by losing
more than what was gained. Although the customer and the service provider can both cause
resource loss, the extant literature has focused predominantly on resource loss that stems from
the service provider’s failure to fulfill its value proposition (Järvi et al., 2018). In this respect,
the literature identifies a number of factors that can act as antecedents of resource loss and co-
destruction, including absence of information (Järvi et al., 2018; Robertson et al., 2014),
mistakes (Järvi et al., 2018), indifference, and technological failure (Zhang et al., 2018).
Several studies have identified resources that customers often lose when they interact with
service providers (e.g., Plé, 2016; Smith, 2013). Although resource loss classifications differ
among these studies, there is general consensus about the types of resources that can be lost
during service interactions. A comprehensive resource framework is provided by Plé (2016),
who identifies a number of resources that customers can lose as they interact with service
providers, including economic, social, informational, emotional, temporal, and relational
resources, as well as resources related to the customer’s role, such as role clarity. The
customer’s role during service interactions has only recently been examined in the service
literature, as studies have highlighted the importance of considering the detrimental effects of
customer participation. Chowdhury et al. (2016) draw attention to the negative aspects resulting
from co-creation and identify role conflicts and ambiguity, both of which can lead to tension.
Blut et al. (2019) reveal that active customer participation can lead to role stress, including role
conflict, role overload, and role ambiguity, and that such stress increases based on the task
scope and the beneficiary participation.
Recent studies have also demonstrated a link between resource loss and resource deficiency
and misintegration (e.g., Smith, 2013). Resource deficiency occurs when one or more of the
actors do not possess the required operant resources (e.g., knowledge) to be used during the
interaction. As a result, resource deficiency can have a compounded effect on how other
resources are utilized during the interaction. If the customer is deficient in a particular resource
(e.g., trust), a negative influence on the delivery of another resource (e.g., information) by the
service provider may result (Vafeas et al., 2016). The actors involved in the encounter can also
intentionally or unintentionally misintegrate their own resources or the resources of other actors
during the interaction (Plé & Chumpitaz Cáceres, 2010). Resources are misintegrated when any
of the actors fail to integrate their operant and operand resources 'in an appropriate or expected
manner' from the other actor's perspective (Plé & Chumpitaz Cáceres, 2010, p. 432). Consistent
with this view, Laud et al. (2019) propose an extensive typology of resource misintegration
manifestations, which include deceptive integration of resources (deliberate concealment of
resource integration), misunderstanding of how to integrate resources (failure to understand
how to correctly integrate resources), negligent integration of resources (deliberate inattention
in the integration of resources) and unwillingness to integrate resources (deliberate withholding
or withdrawal of resources). As resource misintegration can manifest itself in different ways,
uncovering the distinct antecedents of resource loss makes it possible to obtain early warning
signs of co-destruction (Laud et al., 2019).
The negative impact of co-destruction can be so substantial that customers involved in a
failed interaction may refuse to collaborate again in subsequent interactions (Prior & Marcos-
Cuevas, 2016). Customers can also publicly manifest their feelings regarding the failed
interaction with the service provider through negative word of mouth on social media, which
can harm the service provider’s image and reputation (Balaji et al., 2016).
A significant limitation of previous co-destruction studies is the research context. Whereas
numerous studies have recently investigated co-destruction and its associated resource loss,
most studies have focused on traditional, human-to-human interaction service settings (Smith,
2013), such as travel insurance (Blut et al., 2019) and B2B settings (Chowdhury et al., 2016;
Vafeas et al., 2016). By contrast, little empirical research has explored co-destruction in relation
to AI-powered technologies, which are permeating many service settings. For instance, while
co-destruction has been explored in the context of physical (embodied) service robots in elderly
care settings (Čaić et al., 2018), there has been no exploration of co-destruction in the context of
virtual (disembodied) service robots (Wirtz et al., 2018), such as AI-powered chatbots.
Virtual AI-driven service settings present a distinct context whereby (1) the FLE is replaced
by a virtual and disembodied conversational agent that is trained to understand and, importantly,
mimic human behavior (Holz et al., 2009), and (2) the customer is expected to be more involved
and to perform some of the tasks that were previously performed by the FLE (Kaartemo &
Helkkula, 2018).
AI technologies are introduced in service settings 'to support the co-creation of value
between a service provider and customer at the organizational frontline' (Keyser et al., 2019,
p. 158). However, co-creation is not the only outcome of frontline interactions, and co-
destruction remains an important possibility. It is therefore important to understand the reasons
for, and the situations that lead to, co-destruction (Echeverri & Skålén, 2011; Plé & Chumpitaz
Cáceres, 2010). Our study seeks to bridge this knowledge gap by exploring the
antecedents of co-destruction and the resulting resource loss in AI-powered service
environments.
Research Method
Since there is a dearth of empirical studies on this topic, we utilized an exploratory research
design involving a qualitative research method to gain a rich understanding of how customers
behave and interact with AI technologies, as well as to understand the process of co-destruction
(Edmondson & McManus, 2007). We conducted in-depth semi-structured interviews with
customers who had interacted with AI chatbots in a customer service context in the past, as
these participants were considered to have the required expertise in this area. The
implementation of AI-powered applications, such as chatbots, is predominantly concentrated
in customer service environments in service-heavy industries, such as financial services,
telecoms, retail, and travel (Kannan & Bernoff, 2019). A customer service setting therefore
lent itself well to exploration in our study.
Data Collection
We developed an interview guide based on a review of earlier literature. We included
questions related to the resources required during the interaction (Grönroos & Voima, 2013;
Hilton & Hughes, 2013), sources of frustration (Echeverri & Skålén, 2011), resource loss (Plé,
2016; Smith, 2013), and resource deficiencies and misintegration (Plé & Chumpitaz Cáceres,
2010; Smith, 2013).
Research and current reports show that service robot and chatbot usage is strongest among
the younger demographic (Ivanov et al., 2018; SmartAction, 2018; Tuzovic & Paluch, 2018).
This implies that not all demographics were equally relevant to our study. Thus, typical
case sampling, a type of purposive sampling strategy, was used to select those cases considered
'most typical, normal or representative of the group of cases under consideration' (Teddlie &
Tashakkori, 2009, p. 176). Purposive sampling enabled the selection of specific information-
rich cases that were closely related to the study’s aim (Patton, 2002), which is consistent with
past qualitative co-destruction studies that sought to draw on rich data from informants with
comprehensive experience in particular practices (Echeverri & Skålén, 2011; Quach &
Thaichon, 2017). Younger customers who had experience interacting with chatbots were
considered most suitable for the exploration of co-destruction, because they represent a
demographic that relies heavily on technology for their day-to-day interactions (Buhalis et al.,
2019), and because this allowed the collection of in-depth data on co-destruction episodes from
real, authentic, past experiences.
We employed a recruitment screener to qualify participants so as to ensure that the
participants had experienced interactions with an AI-powered chatbot in a customer service
context at least once in the 12 months preceding the study. Participants were asked to provide
details about their past chatbot interactions, such as the name of the service provider offering
the chatbot and the name of the chatbot (where relevant), to verify that an interaction with an
AI chatbot (and not a human representative) in a customer service context did indeed occur.
We conducted 27 face-to-face interviews with voluntary participants residing in Malta, a
multicultural setting that provides access to a diversity of otherwise difficult-to-reach
participants. Data was collected between June and September 2019. The data collection process
was concluded when additional interview data showed that theoretical saturation was reached
(Glaser & Strauss, 2017). On average, each interview lasted around 41 minutes. After obtaining
consent from the interviewees, we recorded and transcribed all the interviews.
The participants’ ages ranged from 21 to 46 years, fitting the sought demographic profile.
The participants had obtained a relatively high level of education (undergraduate degree or
higher) and occupied middle to upper management positions. The participants generally
enjoyed using modern technology daily, and perceived novel applications as having high utility,
which is characteristic of their generation (Roberts, 2018; Stewart et al., 2017). Indeed, the
interviewees reported using AI-powered chatbots in various contexts, the most common being
Fintech (financial technologies), an industry in which technology is comprehensively used to
improve and automate the delivery of financial services (Belanche et al., 2019). The
interviewees mentioned that they generally used Fintech applications for convenience and
control in managing their accounts. A full list of interviewee characteristics is set out in Table
1.
[Insert Table 1 near here]
Data Analysis
We uploaded our transcripts into an NVivo 12 project for coding and followed the
systematic approach outlined by Gioia et al. (2013), an approach that has been employed in
previous co-destruction research (Järvi et al., 2018; Vafeas et al., 2016). This process required
organization of the data into first-order codes, which were closely linked to existing terms
offered by the interviewees so as to preserve the authenticity of their expressions. The next
stage involved establishing connections between the first-order codes, leading to the emergence
of second-order themes. This procedure involved several readings of verbatim transcripts and
establishing connections between the emerging themes and the data set. Once a viable set of
themes was established, the second-order themes were condensed further into aggregate
dimensions. This process ensured a strong foundation for building a data structure,
while enriching the qualitative rigor of the study by clearly showing the development of the
process, from the primary data to the theoretical constructs (Gioia et al., 2013).
Following Bazeley and Jackson (2019), we identified a rich transcript as being particularly
suitable to start the coding process and to generate a preliminary list of first-order codes. When
the preliminary coding was concluded, we looked for links between the second-order themes
to classify them into more developed, aggregate dimensions. For example, 'poor chat progress'
and 'incorrect interpretation' were combined into the aggregate dimension 'cognition
challenges'. We adopted this process for the rest of the data.
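The resulting coding hierarchy can be pictured as a simple nested structure. The sketch below (our illustration) uses the 'cognition challenges' example above; the first-order codes shown are paraphrases of interview excerpts reported in the Findings and are included purely for illustration.

```python
# Gioia-style hierarchy: first-order codes -> second-order themes -> aggregate dimension.
coding_structure = {
    "cognition challenges": {                         # aggregate dimension
        "poor chat progress": [                       # second-order theme
            "kept asking me 'which is your account?'",   # first-order codes,
            "gave me the same answer every time",        # close to interviewees' words
        ],
        "incorrect interpretation": [
            "the reply had nothing to do with my question",
        ],
    },
}

def themes_for(dimension: str) -> list[str]:
    """List the second-order themes condensed into an aggregate dimension."""
    return list(coding_structure.get(dimension, {}))

print(themes_for("cognition challenges"))
# ['poor chat progress', 'incorrect interpretation']
```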
This analytical process resulted in the emergence of five distinct aggregate dimensions,
which are shown in Figure 1 and set out in more detail in the next section.
[Insert Figure 1 near here]
Findings
Our data analysis revealed that co-destruction in AI service settings emerges from five
antecedents. These antecedents, in the order in which they occur during the interaction process,
are authenticity issues, cognition challenges, affective issues, functionality issues, and
integration conflicts. Any of these antecedents can lead to the perception of a failed service
interaction and result in resource loss for customers and thus a decline in their well-being.
Customers attribute responsibility for resource losses either to themselves or to the service
provider and adopt avoidance or confrontative measures intended to regain control over failed
interactions. Table 2 sets out the five antecedents with their constituent concepts, the perceived
customer resource loss, and the attributions for such resource loss.
[Insert Table 2 near here]
Authenticity Issues
Authenticity issues stem from the shrouded presence of a chatbot during an interaction,
such as when a customer perceives that the service provider is intentionally hiding or covering
up the identity of the chatbot. Participants mentioned that when the identity of the chatbot is
not clear, they depend on a number of cues to determine whether they are interacting with a
chatbot or a human agent. These cues include receiving instant replies to questions, noticing
perfect punctuation and spelling, or repeatedly obtaining the same answers to different queries.
Customers felt deceived when they were made to believe they were interacting with a human
agent but found out that they were actually interacting with a chatbot.
This uncertainty affected customers' role clarity, that is, the extent to which they
understood the role they must fulfill in the interaction. Customers were uncertain whether to
change their style of communication, to write in a keyword-like manner, or to write in a
different way. They did not know whether to be curt or friendly, and what stance to adopt to
demonstrate that their issue was important.
I expect that I am told that it’s a chatbot because then I would have to change the
way I pose the question. So even your English, how you write a question, it has to
be quite clear for the chatbot to understand what you’re actually asking. [I7]
Customers held themselves and the service provider equally accountable for the lack of
role clarity that arises due to authenticity issues. Customers realized that they might not possess
the skills that allow them to recognize that they are communicating with a chatbot. For example,
participants remarked that although they are generally alert and attentive during their
conversations, they might lack the ability to discern nuances that are specific to chatbot
behavior. Additionally, participants discussed the service provider’s deceptive integration of
resources through the deliberate misrepresentation of a chatbot presence; for example, when
the service provider intentionally opted not to disclose the chatbot identity or if the chatbot was
purposely assigned human characteristics, such as being made to write imperfect text, this was
labeled unfair and dishonest. Furthermore, service providers occasionally attribute human
names to chatbots for increased levels of anthropomorphism (Araujo, 2018); thus, unless this
is indicated explicitly during the chat, customers can easily mistake the chatbot for a human
representative.
Cognition Challenges
Our findings demonstrate that customers perceive a number of cognitive challenges when
using chatbots. Chatbots exhibit a lack of understanding when the chat progress is poor, such
as when they ask an excessive number of questions to comprehend an issue or they consistently
provide the same answer to different customer questions. Cognitive challenges were also
observed through instances of incorrect interpretation, such as when chatbots misunderstand a
query or problem and provide an unrelated reply. One respondent complained that 'the reply
that I received had nothing to do with my question, it was irrelevant, so it was evident that
the chatbot did not understand my question' [I20].
The data showed that incomprehension breeds feelings of frustration and anger in
customers. Although customers may not expect the chatbot to fully resolve their problems or
issues, they do expect that, at a minimum, it is able to understand the context of their question
and to provide adequate guidance. When this is not the casefor example, when the chat results
in a deadlockcustomers feel agitated and upset because they feel that they have lost control
of the interaction. If customers are already upset and frustrated pre-interaction, their anger and
frustration will only be exacerbated in cases characterized by a chatbot’s lack of cognition.
Interestingly, participants were observed to attribute such emotional resource losses to both
the service provider and themselves. The service provider was assigned responsibility for
deliberate inattention during the creation and evolution of the chatbot; for instance, if the
chatbot was not trained properly or not provided with the correct inputs. One participant
remarked that functionality issues led him to believe that the chatbot 'needs more development
at this point in time' [I19]. However, participants admitted that they also have a part to play in
ensuring that the chatbot understands their request; they pointed to their own failure to
understand how to correctly integrate resources when they did not adapt their communication
skills (e.g., their writing style and use of keywords) to be better understood by the chatbot:
The chatbot kept asking me, ‘which is your account?’, and I retyped the problem,
and it gave me the same answer. And then at the end, I realized that my wording
might not have been correct. As soon as I changed the wording, the chatbot
immediately pointed me to the right answer. [I14]
Affective Issues
The data analysis showed that customers expect chatbots to exhibit a degree of empathy.
In customer support situations, customers perceive chatbots as substituting human employees
and, as a result, expect an element of sympathy and personalization within the interaction. One
participant articulated that 'rather than just understanding what I'm typing, I want the chatbot
to understand, to feel, what I'm feeling' [I05]. However, the participants generally believed that
regardless of their level of technological advancement, chatbots are unable to adapt easily to
customers' emotional state and to defuse customers' feelings of anger, frustration, stress, and
concern.
Consequently, customers experience relational resource losses when they determine that
interactions with chatbots are devoid of affective understanding. They perceive regular
interactions with human support agents as a way to develop a deeper bond with the service
provider, which often involves building a relationship with particular employees who, in turn,
may acknowledge the customers' repeat interactions. In contrast, chatbot interactions were
described as 'clinical' [I16], leaving customers feeling unvalued and experiencing feelings of
detachment from the service provider.
I thought it [the chatbot] was rather impersonal... It made me feel distant from the
company. I didn’t like that. If it were my company, I would want my customers
to feel comfortable and close to the company, not distant. [I04]
Participants attributed affective issues to the service provider’s deliberate withholding of
resources and information. Participants reasoned that they felt let down by the service provider
when they were forced to interact with a chatbot rather than be given human support during
critical, urgent situations; for example, when a payment fails upon hotel checkout, the
customer may miss a flight while waiting for the payment transfer to be put into
effect. In such cases, which are characterized by a high degree of anxiety and strong feelings
of anger, customers expect to be met with a high level of empathy and reassurance. When
customers are required to interact with a chatbot instead, attribution judgments towards the
service provider are heightened. The service provider is perceived as withdrawing human
resources (human FLEs) and, as a result, is blamed for its unwillingness to integrate resources.
Customers also assume a minimal degree of responsibility for resource losses that occur
as a result of affective issues. When the service provider adopts a 'chatbot first, agent second'
policy, customers generally do have the option to speak to a human representative; however,
they have to pass through the chatbot first and instruct the chatbot to transfer the conversation
to a human. Yet not all customers are aware of this functionality or of how to use it.
Participants remarked that, during urgent, emotional situations, they might lack the presence of
mind to check for or notice the option to transfer the conversation to a human representative.
Functionality Issues
Our findings revealed that customers perceive chatbots to be limited in their functionality.
Despite being powered by AI, chatbots can only offer limited assistance during chat
interactions. For example, customers perceive chatbots as a suitable replacement for humans
when answering simple, straightforward questions; however, chatbots cannot be relied on to
solve more complex queries, as the scope of their abilities is generally narrow. The limited
functionality of chatbots was also reflected in their inability to process non-textual inputs, such
as pictures or emojis, which customers may prefer to use to express their feelings or preferences.
Despite expecting chatbots to be limited, customers do not expect significant resource
losses as a result of such functional limitations. Yet, participants reported the loss of significant
time resources when they needed to repeat a request or restart the entire conversation with a
human. Such temporal losses are further accentuated, as they contrast heavily with customers'
prior expectations of chatbot speed and agility.
I’m not going to use the [company name] chatbot ever again, because it’s a waste
of time. Because if you use it with the intent to have an immediate reply, and then
it turns out to be more complicated, then I would be more frustrated. [I04]
Participants attributed functionality issues completely to the service provider. Functionality
issues are perceived to result from the unavailability of specific chatbot features, which is the
responsibility of the service provider. More precisely, such attributions of misintegration were
observed when the chatbot failed to set a customer's expectations, for example, by neglecting
to inform the customer of limitations in its functionality or to specify the degree of accuracy
of the given answers.
Integration Conflicts
Our analysis identified integration conflicts as another antecedent of resource loss. These
conflicts arise from disconnects between the chatbot and other customer support channels,
generally manned by human representatives. This can occur when, for example, information
collected by the chatbot in the initial stages of the conversation is not conveyed to a human
representative, or when the chatbot does not automatically transfer a complex or deadlocked
conversation to a human representative.
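To illustrate how such disconnects could be avoided, the sketch below (a hypothetical design, not any studied provider's system) has the chatbot detect a deadlocked conversation and hand it to a human agent together with the transcript and the facts already collected, so the customer does not have to start from scratch. All names and the threshold are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    customer_id: str
    transcript: list = field(default_factory=list)    # full audit trail of turns
    collected_facts: dict = field(default_factory=dict)  # e.g., account, order number
    failed_turns: int = 0                              # consecutive misunderstood turns

DEADLOCK_THRESHOLD = 2  # illustrative cut-off for "the chat is looping"

def record_turn(conv: Conversation, user_msg: str, bot_msg: str, understood: bool) -> None:
    """Log every exchange and track consecutive failures to understand."""
    conv.transcript.append((user_msg, bot_msg))
    conv.failed_turns = 0 if understood else conv.failed_turns + 1

def maybe_escalate(conv: Conversation):
    """On deadlock, transfer to a human with context instead of discarding it."""
    if conv.failed_turns >= DEADLOCK_THRESHOLD:
        return {
            "customer_id": conv.customer_id,
            "facts": conv.collected_facts,   # the agent does not re-ask these
            "transcript": conv.transcript,   # the agent reads before replying
        }
    return None

conv = Conversation(customer_id="C-1042")
record_turn(conv, "My payment failed", "Which is your account?", understood=False)
record_turn(conv, "The payment failed at checkout", "Which is your account?", understood=False)
print(maybe_escalate(conv) is not None)  # True: hand off with full context
```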
Integration conflicts do not only result in temporal and emotional resource losses (time
wastage and frustration); such conflicts also cause customers to experience informational
losses. Customers perceive that chatbot conversations are not stored; as a result, they feel that
they have lost their frame of reference relating to a specific problem or issue. In such cases,
customers perceive that they have lost their ability to request an audit trail, including details of
who they have spoken to and about what.
In my case, the chatbot definitely did not keep any data, and I had to repeat it and
start from scratch every time I logged in when I was speaking to a new chatbot,
because this was over a period of months…there was zero traceability. [I08]
Participants blamed the service provider for informational losses occurring through such
conflicts, although in some cases, the participants also assumed a degree of responsibility.
Customers deem the service provider responsible in cases of poor organizational procedures,
which inhibit a human representative from following up on previous conversations with
chatbots. The service provider was also blamed for not including a clear exit option during
chatbot interactions that would enable customers to shift the interaction to a human.
Participants felt that this reflected deliberate inattention on the service provider's part.
Customers also pointed to their own limitations during chatbot conversations when they did not
notice specific options, such as the possibility of saving the chat conversation (to counter
informational losses), or the possibility of transferring the conversation to a human
representative.
Customer Reactions to Resource Losses
We also observed that resource loss influences customer emotions and subsequent
behaviors to different degrees. Milder reactions may involve an immediate call for human
support through other channels, such as phone or email, or refusal to reuse the chatbot,
especially if the negative interaction was one in a series of negative experiences.
Situations in which participants experienced deeper emotional resource losses led to
harsher reactions. A common reaction involved terminating a service with the service provider
or moving to a competitor, especially in cases of a new service where the customer had not yet
invested significant time and effort in developing a relationship with the service provider.
I was like, you know what? I’m okay, I don’t have the problem anymore, because
I’m stopping the service straight away. I’m fed up. [I25]
The harshest reactions involved propagating negative word of mouth on social media.
Although negative word of mouth was not as common as the other reactions identified, we
observed it to be spurred by continuous negative service from the chatbot, such as looping or
failing to understand the customer’s query.
Basically, it [the chatbot] got me nowhere…it was extremely frustrating. I think
in all it took me a good three months to get it sorted out. I had to actually get it to
Twitter, complaining live on Twitter to get someone to speak to me. [I08]
Resource loss activates customers’ desire to take control of a situation by hurting or getting
even with the service provider. This behavior may imply a coping strategy, whereby customers
select a specific, and generally negative, course of action in an attempt to restore their own
well-being (Mick & Fournier, 1998). Avoidance and confrontative coping behaviors were
evident when customers attempted to restore their well-being in AI-powered service
interactions. Attempts to call for human support and refusal to reuse the chatbot are examples
of avoidance coping strategies, where customers attempt to distance themselves from new
technology. On the other hand, termination of service, switching to a competitor, and engaging
in negative word of mouth demonstrate confrontative coping strategies.
Discussion and Theoretical Contributions
We investigated the process of co-destruction during customer interactions with AI-
powered chatbots. In doing so, we aimed to address two distinct goals: first, to understand the
transformational effects of AI on co-destruction, and second, to analyze the process of co-
destruction from the customer perspective. The findings from this empirical study contribute to
the current understanding of co-destruction in three ways.
First, we demonstrated that co-destruction is a process that is set into motion by a number
of factors (antecedents), resulting in a decline in well-being for at least one of the actorsin
this case, the customer. A decline in well-being comprises resource loss. In order to counter
such resource loss, customers undergo an exercise of responsibility attribution to determine
who was responsible for the resource loss and decide what action to take. This process allows
customers to perceive that they have regained a degree of control over the encounter. A
conceptualization of co-destruction based on this process is set out in Figure 2.
[Insert Figure 2 near here]
Whereas previous studies have contributed models of co-creation from the customer
perspective (Etgar, 2008; Füller, 2010), our study addresses the gap in the literature by
proposing a model of co-destruction from the customer’s viewpoint. Our model provides a
richer conceptualization of the link between customer resource loss, attributions of resource
loss, and customer coping strategies following such loss. As previous co-destruction studies
tended to investigate resource loss, attributions of resource loss, and coping strategies
separately, our study makes a noteworthy contribution to the extant understanding of value co-
destruction as a complete process, which has been largely overlooked in previous literature
(Ostrom et al., 2015).
Second, our study empirically demonstrated the transformational role of AI in value co-
destruction. Our conceptualization showed that while a number of antecedents of resource loss
are common to a multitude of service settings, AI-powered service settings are affected by a
number of specific antecedents, necessitating their own investigation.
Three of the identified antecedents (cognition challenges, functionality issues, and
integration conflicts) are consistent with previous studies that examined co-destruction
through interaction with physical robots or in human-to-human B2B settings (Čaić et al., 2018;
Järvi et al., 2018; Vafeas et al., 2016). Our findings suggest that cognition challenges stem from
the lack of understanding of AI technology and echo previous studies suggesting a lack of
understanding as an antecedent of resource loss (Čaić et al., 2018; Järvi et al., 2018; Vafeas et
al., 2016). The identification of functionality issues as a reason for co-destruction corroborates
previous studies that outline the 'inability to serve' as a key antecedent of co-destruction (Järvi
et al., 2018). This notion is congruent with the idea that co-destruction occurs when an operant
resource by one of the actors (the AI technology offered by the service provider) is deemed to
be inadequate and cannot meet customer requests satisfactorily (Echeverri & Skålén, 2011).
Integration conflicts are related to inadequate coordination, which was previously proposed as
an antecedent of co-destruction, albeit in B2B settings (Vafeas et al., 2016). However, whereas
previous studies consider inadequate coordination to arise from the lack of alignment between
two actors or systems (Vafeas et al., 2016), our findings suggest that poor organizational
configurations of one actor (the service provider) cause such misalignment. The infusion of AI
into service thus renders value creation a more complex process whereby coordinated,
harmonized resource inputs assume even greater importance.
Two of the identified reasons for resource loss, affective and authenticity issues, are
specific to AI-powered environments. Neither of these, however, has been identified in the
extant co-destruction literature.
Chatbots, even if powered by AI, are unlikely to be autonomous and to express genuine
emotions (Robinson et al., 2020). They may be trained to mimic human responses and express
basic emotions, which may be suitable or even ideal in low-involvement, mundane chatbot
encounters (e.g., in questions regarding package delivery status). However, in high-
involvement encounters (e.g., refund problems), when customers expect empathy and
understanding (Rafaeli et al., 2017), they may be disappointed with chatbot-expressed emotions,
especially if they interpret such emotions as superficial or insincere. In such instances, the
opportunity for successful collaboration between the customers and the service provider (co-
creation) is not only lost, but it also becomes counterproductive, as customers perceive a decline
in their relational well-being as a result of the interaction. It is also evident that in these cases,
co-destruction emerges from the disconnect between chatbot ability and task type. As opposed
to rule-based chatbots, AI-powered chatbots have the ability to learn from past interactions,
which may lead to their premature deployment in situations that require not only mechanical or
analytical intelligence, but also intuitive and empathetic intelligence (Huang & Rust, 2018).
These co-destruction possibilities are further exacerbated when chatbot use is imposed or when
there is a lack of clear communication on how to converse with a human representative, as the
customer perceives a loss of control and freedom over the situation.
As advances in NLP and ML lead to more humanlike chatbot conversations (Wirtz et al.,
2018), it is becoming increasingly common for customers to be unaware that they are
interacting with a chatbot. Recent literature classifies instances where one actor is unaware that
the other actor is not human as 'counterfeit service encounters' (Robinson et al., 2020, p. 367).
Our findings show how, similar to counterfeit goods (Eisend & Schuchert-Güler, 2006),
counterfeit service encounters initiate a process of co-destruction, as customers feel deceived
and are unable to retain full control over the conversation. These results imply that the process
of co-destruction is rendered more complex in service environments characterized by AI due to
the perception of multiple FLE identities (human vs. counterfeit human).
While lack of clarity about the identity of the FLE causes a customer to lose role clarity, it
can also spur further resource losses for not only the customer, but also the service provider.
Consistent with Smith (2013), our study shows the existence of 'loss cycles' or 'downward
spirals' pertaining to secondary resource losses that occur when customers attempt to regain
lost resources but instead incur additional resource losses. However, we supplement Smith's
(2013) findings by proposing that, in the context of AI-powered service interactions, these
'downward spirals' or 'loss cycles' can be extended to include resource losses incurred by the
service provider. When customers lose role clarity, they are not aware of the new role that they
need to assume, that of speaking to the chatbot in a keyword-like, systematic manner. This
behavior results in resource losses for the service provider, as it limits the extent to which the
chatbot can learn from customer interactions. In AI-powered service environments, which
necessitate correct and consistent data inputs for systematic and autonomous system learning,
the lack of such data input means that the AI application is not able to learn effectively, and the
interaction degenerates over time.
Third, we offer a significant contribution to the extant, yet limited, literature on the
customer perspective of value co-destruction. While the benefits of AI technology adoption are
clear for service providers, it is important to obtain a customer-centric view on the
implementation of AI technologies at the frontline and to understand which factors might lead
the customer to experience co-destruction, and when. This understanding is especially important
given the crucial role of customers in AI-powered service interactions, when the customer is at
the very core of the service delivery (Kaartemo & Helkkula, 2018).
Customers do not expect reduced well-being to be an outcome when they engage in chatbot
conversations. When resource loss occurs, customers resolve to resume control of the situation
by first attributing responsibility for the resource loss and then adopting a coping strategy. Our
findings suggest that customers largely attribute their resource losses to resource misintegration
by the service provider (Table 2). Resource misintegration is perceived as an intentional action
by service providers to maximize benefits for themselves.
Customers attribute resource deficiency or resource misintegration to themselves in only a limited number of cases: when they do not believe they have the capabilities required to recognize chatbot presence (self-efficacy), and when they fail to adapt their communication style to one that the chatbot can more easily understand.
This tendency of customers to attribute resource loss to the service provider can be explained
by the self-serving bias, which states that in the case of a service failure, customers are more
likely to ascribe failure to third parties (Bendapudi & Leone, 2003). Previous research on the
self-serving bias suggests that this bias can be reduced when customers are given a choice of whether or not to participate in service production (Bendapudi & Leone, 2003). It follows that cases of imposed chatbot use, which do not grant the customer any choice, exacerbate the
self-serving bias and result in a higher level of resource loss attributed to the service provider.
Indeed, customers clearly interpret instances of resource loss that stem from imposed chatbot use as a deliberate unwillingness by the service provider to integrate resources.
Our study demonstrates that for the customer, the co-destruction process extends beyond
immediate resource loss and evolves into making attributions for that loss and taking
corresponding avoidance or confrontative measures aimed at the service provider. This finding
validates the importance of extending the co-destruction literature to the field of attribution of
responsibility in order to expand the limited knowledge on the consequences of co-destruction
and the array of emotional coping strategies that consumers may display in response to resource
loss (Tsarenko et al., 2019).
Practical Implications
Our findings suggest a number of strategic implications for managers and practitioners in
service sectors.
AI technologies on the frontline have been created specifically to encourage co-creation
between the service provider and the customer. However, our observations demonstrate that
co-destruction is also possible. Co-destruction emerges when the co-created service fails to
meet customers’ expectations. It is important for managers to realize that when AI applications,
such as chatbots, are introduced to the frontline, customers view such applications as a
substitute for human FLEs. As a result, customers hold similar, if not identical, expectations regarding service levels. In light of this, it is important for service providers to help customers understand any limitations inherent in the AI application and to ensure that the
chatbot explains what the customer should do when such a limitation is reached, so as to avoid customer resource loss. More precisely, it is important for service providers to understand that customer
queries can vary significantly in their degree of complexity and involvement. While AI chatbots
can easily tackle simpler queries, problems arise when they face more complex questions.
Service providers should ensure that the complexity of a query is identified as soon as possible and, if the query is determined to be complex, offer a clear and seamless chat transfer to a human support representative as early in the process as possible.
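A minimal routing sketch of this recommendation is shown below (our illustration; the complexity markers, turn threshold, and function names are hypothetical assumptions rather than a description of any deployed system):

```python
# Illustrative triage-and-handoff sketch; markers and threshold are assumed.

COMPLEX_MARKERS = {"complaint", "legal", "cancel my contract"}

def is_complex(query: str, turns_so_far: int) -> bool:
    """Flag a query as complex from its wording or from a stalled dialogue."""
    text = query.lower()
    if any(marker in text for marker in COMPLEX_MARKERS):
        return True
    return turns_so_far >= 3  # several turns without resolution

def route(query: str, turns_so_far: int) -> str:
    if is_complex(query, turns_so_far):
        # Hand off early, with the conversation history attached, rather
        # than forcing further chatbot turns on a frustrated customer.
        return "TRANSFER_TO_HUMAN"
    return "HANDLE_WITH_BOT"

print(route("Where is my parcel?", turns_so_far=1))           # HANDLE_WITH_BOT
print(route("I want to cancel my contract", turns_so_far=1))  # TRANSFER_TO_HUMAN
```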
Although chatbot disclosure at the start of an interaction may prejudice customers against the chatbot's effectiveness (Luo et al., 2019), our findings convey the negative impact of perceived deception by service providers. Customers are also becoming
increasingly vigilant in determining the identity of the FLE, and may erroneously judge a
human FLE to be a chatbot, or vice versa, when the FLE identity is not disclosed. Service
providers are therefore encouraged to clearly advise customers of the identity of the FLE to
avoid feelings of deception and distrust. Such a notification can be offered at the start of the
interaction or even at the end (Belanche et al., 2020). Furthermore, disclosing the presence of
a chatbot may be regulated or considered standard practice in the near future, in light of ethical
concerns (Robinson et al., 2020).
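One possible implementation of such up-front disclosure is sketched below; the wording is our hypothetical suggestion, as neither our data nor the cited studies prescribe specific copy. Note that it discloses the FLE's identity and the route to a human representative, addressing both perceived deception and perceived loss of control:

```python
# Hypothetical opening message; the copy and function name are assumptions.

def open_session(customer_name: str) -> str:
    return (
        f"Hi {customer_name}, I am an automated assistant (a chatbot). "
        "I can help with common questions, and you can ask to be "
        "connected to a human representative at any time."
    )

print(open_session("Alex"))
```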
Furthermore, we demonstrated that when customer interactions with AI chatbots are negative, resulting in customer resource loss, such experiences can have serious negative ramifications for service providers, as customers can opt for more costly customer support channels, such as the phone channel. In such cases, an investment in AI technology intended to deliver cost savings might backfire and place a heavier load on other support channels.
When customer resource loss is significant, customers may opt for harsher action, such as
terminating the service, switching to a competitor, or complaining on social media. The
possibility of such behaviors also has a number of implications in terms of how the success of
chatbot applications is measured. Besides cost savings and insight gains, it is important for service providers to adopt a customer-centric view and obtain a clear understanding of what chatbot success looks like from the customer perspective.
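As a simple illustration of what such customer-centric measurement might involve, the sketch below computes three indicative metrics from a hypothetical session log (the field names and the log itself are assumptions for illustration only, not measures validated in our study):

```python
# Hedged sketch: customer-centric chatbot KPIs alongside the usual cost view.
# The session log and field names are hypothetical.

sessions = [
    {"resolved_by_bot": True,  "escalated_to_phone": False, "churned": False},
    {"resolved_by_bot": False, "escalated_to_phone": True,  "churned": False},
    {"resolved_by_bot": False, "escalated_to_phone": True,  "churned": True},
]

n = len(sessions)
containment = sum(s["resolved_by_bot"] for s in sessions) / n
phone_spillover = sum(s["escalated_to_phone"] for s in sessions) / n
churn_after_bot = sum(s["churned"] for s in sessions) / n

# A chatbot that "saves costs" while pushing customers onto the phone
# channel, or out the door, fails on the customer-centric measures.
print(f"containment={containment:.0%}, phone_spillover={phone_spillover:.0%}, "
      f"churn_after_bot={churn_after_bot:.0%}")
```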
Limitations and Future Research
Utilizing a qualitative approach enabled us to undertake an in-depth exploration of a
distinct phenomenon, value co-destruction, in a novel context, AI-powered service interactions.
As with other qualitative studies, this approach limits the generalizability of our findings, which cannot be projected onto an entire population of human-AI interactions in a statistical sense. More importantly, although our study is set in a growing economy that is becoming increasingly reliant on technology and involves an increasingly multicultural workforce (and population), we believe that our findings convey an emerging phenomenon that cannot yet be generalized to similar service industries in other contexts across the globe.
Our study did not evaluate factors relating to consumers’ cultural behavior. Culture can
influence customer attitudes and behaviors in service settings (Chan et al., 2010); thus, this
would be a fruitful area for further work. Future research could analyze whether perceptions of
value loss, loss attributions, and coping strategies differ based on customers’ cultural value
orientations. These insights can contribute to a more developed understanding of the process of
co-destruction in AI-driven service encounters.
Quantifying the strength of the customers’ feelings and the prevalence of the perceived
resource loss was outside the scope of this study. Future research could build on our study to
develop a scale that can measure and quantify the extent of the identified resource losses. Such research could link measurement indicators to the concept of resource loss and provide a systematic way to measure and evaluate resource loss linked to co-destruction. A
focus on the different ways that resource loss could be countered, or at least minimized, would
also be useful to advance knowledge on the subject and especially to provide practical
managerial guidelines in terms of remedial or recovery strategies that can be adopted in cases
of value co-destruction.
Further work is also required to validate the identified resource losses in additional AI settings, for example, those provided by voice-controlled digital assistants such as Alexa.
Additional empirical approaches, such as lab or field experiments, could be adopted to allow
for a more in-depth exploration of this topic.
Lastly, additional research is needed to ascertain the impact of resource misintegration and
deficiency in AI settings. AI applications, such as chatbots, are smart and have the ability to
learn from every interaction they have (Kumar et al., 2016). Thus, it is important to examine
how resource misintegration and deficiency by the customer or service provider at the start of
the interaction could have a compounded effect and cause further resource loss by the end of
the interaction.
Acknowledgements
The authors gratefully acknowledge the interviewees who participated in this study and the
anonymous reviewers of the AIRSI 2019 workshop: Artificial Intelligence and Robotics in
Service Interactions.
Declaration of Interest
No potential conflict of interest was reported by the authors.
References
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189.
Balaji, M. S., Khong, K. W., & Chong, A. Y. L. (2016). Determinants of negative word-of-mouth communication using social networking sites. Information and Management, 53(4), 528–540.
Bazeley, P., & Jackson, K. (2019). Qualitative data analysis with NVivo (3rd ed.). SAGE Publications Ltd.
Belanche, D., Casaló, L. V., & Flavián, C. (2019). Artificial intelligence in FinTech: Understanding robo-advisors adoption among customers. Industrial Management and Data Systems, 119(7), 1411–1430.
Belanche, D., Casaló, L. V., Flavián, C., & Schepers, J. (2020). Service robot implementation: A theoretical framework and research agenda. The Service Industries Journal, 40(3–4), 203–225.
Bendapudi, N., & Leone, R. P. (2003). Psychological implications of customer participation in co-production. Journal of Marketing, 67(1), 14–28.
Bitner, M. J., Faranda, W. T., Hubbert, A. R., & Zeithaml, V. A. (1997). Customer contributions and roles in service delivery. International Journal of Service Industry Management, 8(3), 193–205.
Blut, M., Heirati, N., & Schoefer, K. (2019). The dark side of customer participation: When customer participation in service co-development leads to role stress. Journal of Service Research.
Bock, D. E., Wolter, J. S., & Ferrell, O. C. (2020). Artificial intelligence: Disrupting what we know about services. Journal of Services Marketing, ahead-of-print.
Buhalis, D., & Cheng, E. S. Y. (2020). Exploring the use of chatbots in hotels: Technology providers' perspective. In Information and Communication Technologies in Tourism 2020 (pp. 231–242). Springer International Publishing.
Buhalis, D., Harwood, T., Bogicevic, V., Viglia, G., Beldona, S., & Hofacker, C. (2019). Technological disruptions in services: Lessons from tourism and hospitality. Journal of Service Management, 30(4), 484–506.
Buhalis, D., & Sinarta, Y. (2019). Real-time co-creation and nowness service: Lessons from tourism and hospitality. Journal of Travel and Tourism Marketing, 36(5), 563–582.
Čaić, M., Odekerken-Schröder, G., & Mahr, D. (2018). Service robots: Value co-creation and co-destruction in elderly care networks. Journal of Service Management, 29(2), 178–205.
Camilleri, J., & Neuhofer, B. (2017). Value co-creation and co-destruction in the Airbnb sharing economy. International Journal of Contemporary Hospitality Management, 29(9), 2322–2340.
Chan, K. W., Yim, C. K. (Bennett), & Lam, S. S. K. (2010). Is customer participation in value creation a double-edged sword? Evidence from professional financial services across cultures. Journal of Marketing, 74(3), 48–64.
Chandler, J. D., & Lusch, R. F. (2015). Service systems: A broadened framework and research agenda on value propositions, engagement, and service experience. Journal of Service Research, 18(1), 6–22.
Chowdhury, I. N., Gruber, T., & Zolkiewski, J. (2016). Every cloud has a silver lining: Exploring the dark side of value co-creation in B2B service networks. Industrial Marketing Management, 55, 97–109.
Chung, M., Ko, E., Joung, H., & Kim, S. J. (2018). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research, October, 1–9.
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548.
Cova, B., & Dalli, D. (2009). Working consumers: The next step in marketing theory? Marketing Theory, 9(3), 315–339.
Davenport, T. H., Guha, A., Grewal, D., & Bressgott, T. (2019). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science.
Demoulin, N., & Willems, K. (2019). Servicescape irritants and customer satisfaction: The moderating role of shopping motives and involvement. Journal of Business Research, 104, 295–306.
Echeverri, P., & Skålén, P. (2011). Co-creation and co-destruction: A practice-theory based study of interactive value formation. Marketing Theory, 11(3), 351–373.
Edmondson, A., & McManus, S. (2007). Methodological fit in field research. Academy of Management Review, 32(4), 1155–1179.
Eisend, M., & Schuchert-Güler, P. (2006). Explaining counterfeit purchases: A review and preview. Academy of Marketing Science Review, 2006(12).
Etgar, M. (2008). A descriptive model of the consumer co-production process. Journal of the Academy of Marketing Science, 36(1), 97–108.
Füller, J. (2010). Refining virtual co-creation from a consumer perspective. California Management Review, 52(2), 98–122.
Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16(1), 15–31.
Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory: Strategies for qualitative research. Routledge.
Gnewuch, U., Morana, S., & Maedche, A. (2018). Towards designing cooperative and social conversational agents for customer service. ICIS 2017: Transforming Society with Digital Innovation, 0–13.
Griol, D., Carbó, J., & Molina, J. M. (2013). An automatic dialog simulation technique to develop and evaluate interactive conversational agents. Applied Artificial Intelligence, 27(9), 759–780.
Grönroos, C., & Voima, P. (2013). Critical service logic: Making sense of value creation and co-creation. Journal of the Academy of Marketing Science, 41(2), 133–150.
Harrison, T., & Waite, K. (2015). Impact of co-production on consumer perception of empowerment. The Service Industries Journal, 35(10), 502–520.
Hilton, T., & Hughes, T. (2013). Co-production and self-service: The application of service-dominant logic. Journal of Marketing Management, 29(7–8), 861–881.
Ho, S.-H., & Ko, Y.-Y. (2008). Effects of self-service technology on customer value and customer readiness: The case of internet banking. Internet Research, 18(4), 427–446.
Holz, T., Dragone, M., & O'Hare, G. M. P. (2009). Where robots and virtual agents meet: A survey of social interaction research across Milgram's reality-virtuality continuum. International Journal of Social Robotics, 1(1), 83–93.
Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
IDC. (2019). Worldwide spending on artificial intelligence systems will grow to nearly $35.8 billion in 2019, according to new IDC spending guide. International Data Corporation. https://www.idc.com/getdoc.jsp?containerId=prUS44911419
Ivanov, S., & Webster, C. (2019). Economic fundamentals of the use of robots, artificial intelligence, and service automation in travel, tourism, and hospitality. In Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality (pp. 39–55).
Ivanov, S., Webster, C., & Garenko, A. (2018). Young Russian adults' attitudes towards the potential use of robots in hotels. Technology in Society, 55, 24–32.
Järvi, H., Kähkönen, A. K., & Torvinen, H. (2018). When value co-creation fails: Reasons that lead to value co-destruction. Scandinavian Journal of Management, 34(1), 63–77.
Kaartemo, V., & Helkkula, A. (2018). A systematic review of artificial intelligence and robots in value co-creation: Current status and future research avenues. Journal of Creating Value, 4(2), 1–18.
Kannan, P. V., & Bernoff, J. (2019). Does your company really need a chatbot? Harvard Business Review. https://hbr.org/2019/05/does-your-company-really-need-a-chatbot
Kaplan, A., & Haenlein, M. (2018). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25.
Kelleher, C., & Peppard, J. (2011). Consumer experience of value creation: A phenomenological perspective. In A. Bradshaw, C. Hackley, & P. Maclaran (Eds.), European Advances in Consumer Research (Vol. 9, pp. 325–332). Duluth, MN: Association for Consumer Research.
De Keyser, A., Köcher, S., Alkire, L., Verbeeck, C., & Kandampully, J. (2019). Frontline service technology infusion: Conceptual archetypes and future research directions. Journal of Service Management, 30(1), 156–183.
Kim, K., Byon, K., & Baek, W. (2019). Customer-to-customer value co-creation and co-destruction in sporting events. The Service Industries Journal, 0(0), 1–23.
Kumar, V., Dixit, A., Javalgi, R. (Raj) G., & Dass, M. (2016). Research framework, strategies, and applications of intelligent agent technologies (IATs) in marketing. Journal of the Academy of Marketing Science, 44(1), 24–45.
Larivière, B., Bowen, D., Andreassen, T. W., Kunz, W., Sirianni, N. J., Voss, C., Wünderlich, N. V., & De Keyser, A. (2017). "Service Encounter 2.0": An investigation into the roles of technology, employees and customers. Journal of Business Research, 79, 238–246.
Laud, G., Bove, L., Ranaweera, C., Leo, W. W. C., Sweeney, J., & Smith, S. (2019). Value co-destruction: A typology of resource misintegration manifestations. Journal of Services Marketing, January.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, December.
Marinova, D., de Ruyter, K., Huang, M.-H., Meuter, M. L., & Challagalla, G. (2017). Getting smart: Learning from technology-empowered frontline interactions. Journal of Service Research, 20(1), 29–42.
Mende, M., Scott, M. L., Bitner, M. J., & Ostrom, A. L. (2017). Activating consumers for better service coproduction outcomes through eustress: The interplay of firm-assigned workload, service literacy, and organizational support. Journal of Public Policy and Marketing, 36(1), 137–155.
Mick, D. G., & Fournier, S. (1998). Paradoxes of technology: Consumer cognizance, emotions, and coping strategies. Journal of Consumer Research, 25(2), 123–143.
Morosan, C., & DeFranco, A. (2016). Co-creating value in hotels using mobile devices: A conceptual model with empirical validation. International Journal of Hospitality Management, 52, 131–142.
Oram, R. (2019). Meeting Edward: Chatbots and the changing face of the hotel guest experience. Oracle Hospitality Check-In. https://blogs.oracle.com/hospitality/chatbots-and-the-changing-the-face-of-the-hotel-guest-experience
Ostrom, A. L., Parasuraman, A., Bowen, D. E., Patrício, L., & Voss, C. A. (2015). Service research priorities in a rapidly changing context. Journal of Service Research, 18(2), 127–159.
Patton, M. Q. (2002). Qualitative research and evaluation methods: Integrating theory and practice. Thousand Oaks, CA: SAGE Publications.
Plé, L. (2016). Studying customers' resource integration by service employees in interactional value co-creation. Journal of Services Marketing, 30(2), 152–164.
Plé, L. (2017). Why do we need research on value co-destruction? Journal of Creating Value, 3(2), 162–169.
Plé, L., & Chumpitaz Cáceres, R. (2010). Not always co-creation: Introducing interactional co-destruction of value in service-dominant logic. Journal of Services Marketing, 24(6), 430–437.
Prior, D. D., & Marcos-Cuevas, J. (2016). Value co-destruction in interfirm relationships: The impact of actor engagement styles. Marketing Theory, 16(4), 533–552.
Quach, S., & Thaichon, P. (2017). From connoisseur luxury to mass luxury: Value co-creation and co-destruction in the online environment. Journal of Business Research, 81(May), 163–172.
Rafaeli, A., Altman, D., Gremler, D. D., Huang, M.-H., Grewal, D., Iyer, B., Parasuraman, A., & de Ruyter, K. (2017). The future of frontline research. Journal of Service Research, 20(1), 91–99.
Ramaswamy, V., & Ozcan, K. (2018). What is co-creation? An interactional creation framework and its implications for value creation. Journal of Business Research, 84, 196–205.
Roberts, D. (2018). Why generational attitudes toward technology matter. EY. https://www.ey.com/en_gl/health/why-generational-attitudes-toward-technology-matter
Robertson, N., Polonsky, M., & McQuilken, L. (2014). Are my symptoms serious Dr Google? A resource-based typology of value co-destruction in online self-diagnosis. Australasian Marketing Journal, 22(3), 246–256.
Robinson, S. G., Orsingher, C., Alkire, L., De Keyser, A., Giebelhausen, M., Papamichail, K. N., Shams, P., & Sobhy, M. (2020). Frontline encounters of the AI kind: An evolved service encounter framework. Journal of Business Research, 116, 366–376.
SmartAction. (2018). How demographics affect chatbot usage. https://www.smartaction.ai/blog/demographics-affect-chatbot-adoption-use/
Smith, A. M. (2013). The value co-destruction process: A customer resource perspective. European Journal of Marketing, 47(11/12), 1889–1909.
Stewart, J. S., Goad, E., & Cravens, K. S. (2017). Managing millennials: Embracing generational differences. Business Horizons, 60(1), 45–54.
Syam, N., & Sharma, A. (2018). Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and practice. Industrial Marketing Management, 69(January), 135–146.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. SAGE.
Tsarenko, Y., Strizhakova, Y., & Otnes, C. C. (2019). Reclaiming the future: Understanding customer forgiveness of service transgressions. Journal of Service Research, 22(2).
Tussyadiah, I. P. (2020). A review of research into automation in tourism: Launching the Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics in Tourism. Annals of Tourism Research, 81(February), 102883.
Tuzovic, S., & Paluch, S. (2018). Conversational commerce – A new era for service business development. In M. Bruhn & K. Hadwich (Eds.), Service Business Development (pp. 82–101). Springer Gabler.
Ukpabi, D. C., Aslam, B., & Karjaluoto, H. (2019). Chatbot adoption in tourism services: A conceptual exploration. In Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality (pp. 105–121).
Vafeas, M., Hughes, T., & Hilton, T. (2016). Antecedents to value diminution: A dyadic perspective. Marketing Theory, 16(4), 469–491.
van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., & Petersen, J. A. (2017). Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers' service experiences. Journal of Service Research, 20(1), 43–58.
Vargo, S. L., & Lusch, R. F. (2004). Evolving to a new dominant logic for marketing. Journal of Marketing, 68(1), 1–17.
Vargo, S. L., & Lusch, R. F. (2008). Service-dominant logic: Continuing the evolution. Journal of the Academy of Marketing Science, 36(1), 1–10.
Verleye, K. (2015). The co-creation experience from the customer perspective: Its measurement and determinants. Journal of Service Management, 26(2), 321–342.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931.
Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A new chatbot for customer service on social media. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3506–3510.
Yin, J., Qian, L., & Shen, J. (2019). From value co-creation to value co-destruction? The case of dockless bike sharing in China. Transportation Research Part D: Transport and Environment, 71(June 2018), 169–185.
Zhang, T., Lu, C., Torres, E., & Chen, P.-J. (2018). Engaging customers in value co-creation or co-destruction online. Journal of Services Marketing, 32(1), 57–69.