The Dark Side of AI-powered Service Interactions: Exploring the Process of Co-
destruction from the Customer Perspective
Abstract
Artificial intelligence (AI)-powered chatbots are changing the service interface from being
human-driven to technology-dominant. As a result, customers are expected to resolve issues
themselves before reaching out to customer service representatives, ultimately becoming a
central element of service production as co-creators of value. However, AI-powered
interactions can also fail, potentially leading to anger, confusion, and customer dissatisfaction.
We draw on the value co-creation literature to investigate the process of co-destruction in AI-
powered service interactions. We adopt an exploratory approach based on in-depth interviews
with 27 customers who have interacted with AI-powered chatbots in customer service settings.
We find five antecedents of failed interactions between customers and chatbots: authenticity
issues, cognition challenges, affective issues, functionality issues, and integration conflicts. We
observe that although customers do accept part of the responsibility for co-destruction, they
largely attribute the problems they experience to resource misintegration by service providers.
Our findings contribute to a better understanding of value co-destruction in AI-powered service
settings and provide a richer conceptualization of the link between customer resource loss,
attributions of resource loss, and subsequent customer coping strategies. Our findings also offer
service managers insights into how to avoid and mitigate value co-destruction in AI service
settings.
Practical contribution to policy making in support of progress towards UN Sustainable
Development Goal 8: promoting inclusive and sustainable economic growth, employment,
and decent work for all.
Our study demonstrates the disruptive effects of AI-powered technologies on end-users and on
companies deploying such technologies to their customers. Our findings point to the need for
policy makers to educate businesses and entrepreneurs on the proper use of AI, particularly on
the protection of user privacy, avoidance of malicious use, and the provision of transparency so
as to avoid public and consumer loss of confidence in such technologies. In this way, the
negative effects of AI-powered technologies can be mitigated, and AI deployment can result in
both customer satisfaction and cost optimization, leading to economic growth.
Keywords
Value Co-destruction; Customer Resource loss; Artificial Intelligence; Automated service
interactions; Chatbots; Service Robots; Value Co-creation
Introduction
Artificial intelligence (AI) is rapidly transforming service encounters, as frontline
employees (FLEs) are increasingly becoming supported or even replaced by AI technology.
Indeed, automated customer service agents attracted the highest share of AI investment
(USD 4.5 billion worldwide) in 2019 (IDC, 2019). Such technologies, which include
conversational agents (or chatbots) and voice-controlled digital assistants (e.g., Alexa), are
fundamentally changing the nature of the service interface from one that is human-driven to
one that is predominantly autonomous and technology-dominant (Larivière et al., 2017).
By interacting with AI technologies to self-serve, a customer becomes a central element of
service production, a ‘partial employee’ and a co-creator of value (Bitner et al., 1997; Vargo &
Lusch, 2004). Active customer participation during service encounters yields several benefits.
For service providers, it results in an improved ability to understand and react to customer needs
(Etgar, 2008), while customers value the convenience and the cost savings afforded by such
technologies (Ho & Ko, 2008). AI technologies can also provide more convenient, accessible
services compared to the more ‘traditional’ services they substitute, generally enabling
customers to accomplish specific tasks more easily.
However, positive value creation is not the only outcome that can arise when customers
interact with AI-powered technologies. In the same way that value is collaboratively co-created,
it can be collaboratively co-destroyed during the process of interaction (Echeverri & Skålén,
2011). The autonomy of AI may signify suboptimal outcomes if the technology adapts in
unexpected ways or the wrong data is acted on by FLEs or customers (Bock et al., 2020).
Furthermore, AI technologies rely on customer participation, which increases service
complexity and, eventually, the likelihood of service failure (Hilton & Hughes, 2013). As
customers invest higher levels of effort and time into an interaction, they might feel annoyed
and frustrated when the co-created service fails to meet their expectations (Grönroos & Voima,
2013; Harrison & Waite, 2015). Indeed, these instances represent the loss of valuable resources,
such as time and patience, for the customer (Harrison & Waite, 2015).
Academic literature to date has offered an incomplete understanding of the antecedents of
co-destruction and resource loss in particular (Järvi et al., 2018; Smith, 2013). Further research
on value co-destruction is required to obtain a more complete and refined appreciation of the
processes involved (Ostrom et al., 2015). Such an understanding is especially important in light
of the pervasiveness of AI technologies in service, particularly regarding conditions and drivers
that determine how AI may lead to diminished value creation (Bock et al., 2020).
We aim to understand the process of co-destruction in AI-powered service interactions and
argue that the extant literature does not offer sufficient insight into this topic, a shortcoming
that stems from three research limitations.
First, whereas value co-creation has been given considerable research attention (e.g., Cova
& Dalli, 2009; Morosan & DeFranco, 2016; Ramaswamy & Ozcan, 2018; Verleye, 2015), co-
destruction has been largely overlooked (Ostrom et al., 2015). The few studies addressing
co-destruction have generally focused on conceptual discussions of the notions of
co-destruction and the associated resource loss (e.g., Echeverri & Skålén, 2011; Plé &
Chumpitaz Cáceres, 2010), with only a handful empirically examining resource loss and its causes
(e.g., Smith, 2013). Customers are playing an increasingly significant role in service
delivery, as service encounters become ever more infused with technology and
automation. We argue that, as a result of this trend, a better understanding of the antecedents of
value co-destruction, especially customer resource loss, is required. This understanding is
particularly important for avoiding co-destruction in service encounters that are specifically
designed to foster value co-creation (Plé, 2017).
Second, what we do know about co-destruction comes from traditional service settings,
such as insurance (Blut et al., 2019). There have been calls for further research on value co-
destruction in a diversity of industries and contexts (Prior & Marcos-Cuevas, 2016), especially
in technology-driven service environments (Quach & Thaichon, 2017). Ostrom et al. (2015)
propose that rapid changes in service experience and delivery that are being brought about by
technology necessitate novel service-related knowledge. Evolved service encounters create an
opportunity to evaluate how AI affects core service areas, such as co-creation and co-
destruction (Robinson et al., 2020).
Third, a consumer-centric understanding of value co-destruction remains limited
(Camilleri & Neuhofer, 2017; Yin et al., 2019). Most co-destruction studies examine provider–
customer relationships in Business-to-Business (B2B) settings (Echeverri & Skålén, 2011; Järvi
et al., 2018; Vafeas et al., 2016), whereas few studies focus on the customer perspective (Kim
et al., 2019; Smith, 2013). Service-Dominant (S-D) logic proposes that ‘value is uniquely and
phenomenologically determined by the beneficiary’ (Vargo & Lusch, 2008, p. 7). However,
research examining how consumers individually experience value creation and destruction
remains scarce (Kelleher & Peppard, 2011). We argue that it is vital to understand the
antecedents of customer resource loss from the customer point of view, especially in AI-
powered self-service settings, where the customer lies at the very core of service delivery.
Our study seeks to help close these gaps by investigating the process of co-destruction in
AI-powered service settings. We aim to address two distinct objectives: first, to understand the
transformational effects of AI on co-destruction, and second, to analyze the process of co-
destruction from the customer perspective. We draw on an in-depth empirical study based on
27 interviews with customers who have already interacted with AI-powered chatbots. We
propose a conceptualization of the co-destruction process through AI technology to
demonstrate the link between customer resource loss, attributions of resource loss, and
customer coping strategies following such loss.
In the following sections, we first discuss how AI is transforming the service industry. We
then explore the theoretical concept of value co-destruction through an S-D logic lens.
Next, we describe the research method before presenting our findings and discussing the
proposed conceptualization of co-destruction in AI service settings. We offer theoretical
contributions for value co-destruction research, as well as managerial implications for the
service industry based on our findings.
Theoretical Background
AI in Service Encounters
The extant literature generally describes AI in terms of human intelligence or mimicking
intelligent human behavior, and involving a number of cognitive functions, such as rational
thinking, problem-solving, and learning (Huang & Rust, 2018; Syam & Sharma, 2018;
Tussyadiah, 2020). Distinct abilities of AI have been proposed in relation to the human skills
that can be reproduced. Huang and Rust (2018) discuss four different types of intelligence
needed for service tasks: mechanical, analytical, intuitive, and empathetic. Mechanical
intelligence is related to repetitive mundane tasks, which are exemplified by those performed
by call center agents. Analytical intelligence concerns the ability to process information for
problem-solving and learning. It depends largely on machine learning (ML)—a subset of AI
that allows systems to automatically learn and improve from past experiences without being
explicitly programmed to do so. Intuitive intelligence is associated with creative thinking and
problem-solving, such as that required by marketing managers and doctors. Empathetic
intelligence refers to one’s ability to identify and comprehend others’ emotions and respond
accordingly. Empathetic intelligence is central for those whose occupations require interpersonal
and people skills, such as psychologists.
Chatbots, or conversational agents, are examples of AI applications deployed for service
tasks of a mechanical or analytical nature. A conversational agent is a
virtual, autonomous, technological object that can engage in proactive or reactive behavior
(Holz et al., 2009). Chatbots are a type of disembodied conversational agent, as they do not
have a physical appearance; they allow user interactions through only voice or text interfaces
(Araujo, 2018; Keyser et al., 2019). However, not all chatbots are AI-driven. For instance, rule-
based chatbots are scripted with pre-programmed logic and follow a predetermined path of
questions and answers, exhibiting minimal intelligence (Tuzovic & Paluch, 2018). Such entry-
level chatbots can be implemented to answer Frequently Asked Questions (FAQs), such as
delivery and shipping-related questions (Buhalis & Cheng, 2020). By contrast, AI-driven
chatbots are capable of understanding and communicating via human language through natural
language processing (NLP) (Griol et al., 2013). Additionally, ML allows chatbots to
continuously learn and evolve as they obtain access to increased amounts of data (Kumar et al.,
2016).
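To make this distinction concrete, the sketch below contrasts the two designs in minimal Python. It is purely illustrative: all rules, intents, keywords, and answers are hypothetical examples rather than features of any specific chatbot platform, and a naive keyword-overlap scorer stands in for the trained NLP model an AI-driven chatbot would actually use.

```python
# Illustrative sketch: rule-based vs. AI-driven chatbot logic (all data hypothetical).

# A rule-based chatbot follows a pre-programmed script: only an exact,
# anticipated phrasing produces an answer; anything else hits a fallback.
FAQ_RULES = {
    "where is my order?": "You can track your order on the 'My orders' page.",
    "what is your returns policy?": "Items can be returned within 30 days.",
}

def rule_based_reply(message: str) -> str:
    return FAQ_RULES.get(message.lower().strip(), "Sorry, I did not understand.")

# An AI-driven chatbot instead infers the *intent* behind free-form text.
# A trained NLP classifier would normally do this; keyword overlap stands
# in for such a model to keep the sketch self-contained.
INTENT_KEYWORDS = {
    "track_order": {"order", "track", "delivery", "shipping", "parcel"},
    "returns": {"return", "refund", "exchange", "policy"},
}
ANSWERS = {
    "track_order": "You can track your order on the 'My orders' page.",
    "returns": "Items can be returned within 30 days.",
}

def intent_based_reply(message: str) -> str:
    words = set(message.lower().replace("?", "").split())
    # Pick the intent whose keyword set overlaps most with the message.
    intent, score = max(
        ((i, len(words & kws)) for i, kws in INTENT_KEYWORDS.items()),
        key=lambda pair: pair[1],
    )
    return ANSWERS[intent] if score > 0 else "Let me connect you to a human agent."

# The same unanticipated phrasing defeats the script but not the intent scorer:
print(rule_based_reply("Can I track the delivery of my parcel?"))    # fallback
print(intent_based_reply("Can I track the delivery of my parcel?"))  # order answer
```

In an ML-driven system, the intent model would additionally be retrained as interaction data accumulates, which is precisely what allows such chatbots to learn and evolve over time.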
Examples of AI-driven chatbots include virtual assistants Alexa (Amazon), Siri (Apple),
and Edward (made available by Edwardian Hotels). Edward can communicate in natural,
conversational language to guide tourists throughout their entire travel journey, and it learns
from every interaction (Tussyadiah, 2020). In 2019, Edward managed 69% of all guest queries,
resulting in increased efficiency and the re-assignment of staff from repetitive tasks to more
important queries (Oram, 2019).
The information-rich nature of the service industry is a possible reason for the widespread
adoption of chatbots by companies (Kumar et al., 2016), as companies are constantly striving
to streamline their operations and achieve cost savings (Ukpabi et al., 2019). In service settings,
chatbots can create a prompt, interactive, convenient, and cost-effective channel for
communicating with customers throughout their entire journey (Belanche et al., 2020; Chung
et al., 2018; Gnewuch et al., 2018).
Despite companies’ increased enthusiasm for chatbot deployment, a number of important
questions have emerged, including the potential for chatbots to significantly affect relationships
between customers and service providers at the service frontline (van Doorn et al., 2017).
Chatbots have the ability to either augment or substitute for frontline service employees (Davenport
et al., 2019). The literature suggests that AI technologies can assist FLEs by helping them
perform their roles better, or they can completely replace and automate employees’ active
involvement in service encounters (Keyser et al., 2019; Marinova et al., 2017). Several studies
have challenged the classic idea that augmentation and substitution are mutually exclusive, as
both effects can emerge simultaneously during the adoption of a technology (e.g., Ivanov &
Webster, 2019). Either way, AI technology is set to affect a significant portion of
service encounters.
There is growing evidence showing that, while AI-powered chatbots can enrich the
customer experience by learning from previous customer conversations and continuously
adapting their responses from such learning (Xu et al., 2017), they can also cause discomfort
(Mende et al., 2017). Studies of human–computer interaction have reported that when chatbots
are designed to be more complex and animated, exhibiting high levels of anthropomorphism,
customers experience the negative feelings of eeriness and unease (Ciechanowski et al., 2019).
Negative emotions can lead to negative attitudes towards a service provider and, in turn, to
often irreversible reductions in purchase intentions (Demoulin & Willems, 2019). Thus,
identifying the conditions under which chatbots can undermine the customer experience is an
urgent objective.
Value Co-creation and Co-destruction in Technology-Driven Service Encounters
When interacting with technology to self-serve, customers adopt a critical role in service
production. In this role as partial employees and active co-creators of value, customers become
fully engaged in solving problems and delivering the required service (Bitner et al., 1997). Here,
we will draw on the value co-creation literature to understand the antecedents to value loss in
AI-powered service encounters.
Value co-creation implies that value and experiences can no longer be merely delivered to
customers; rather, the service provider can only present value propositions (Vargo & Lusch,
2008). Once customers accept such propositions and successfully integrate their operant and
operand resources, value is co-created collaboratively and interactively (Chandler & Lusch,
2015; Hilton & Hughes, 2013). In an AI-driven service encounter, the service provider can only
create a value proposition that includes the availability of the chatbot, together with a number
of modules, such as the user interface, the knowledge base, and the NLP interpreter module, in
order to be able to read, understand, and derive meaning from human language (Buhalis &
Cheng, 2020). It is the customer, through the integration of operand and operant resources (e.g.,
skills, time, and access to mobile phone and the Internet), who seeks to collaborate (i.e.,
interact) with the chatbot and, as a result, determine value creation. For example, instead of
engaging in face-to-face interactions with a receptionist at a hotel, customers can get in touch
with an AI-powered chatbot that is able to answer queries at any time of the day, irrespective
of the customer’s location or language. Such individualized, contextualized experiences based
on instant dynamic engagement between the customer and the service provider are examples of
real-time value co-creation (Buhalis & Sinarta, 2019).
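As a rough illustration of this division of resources, the sketch below (a simplification under our own assumptions, not an architecture drawn from the cited works) separates the provider's value proposition, comprising the modules mentioned above, from the customer resources whose integration actually realizes value.

```python
# Hypothetical sketch: the provider offers modules (a value proposition);
# value is only (co-)created when the customer integrates their own resources.
from dataclasses import dataclass

@dataclass
class ProviderProposition:
    knowledge_base: dict  # provider's operand resource: encoded answers

    def nlp_interpret(self, utterance: str) -> str:
        # Stand-in for the NLP interpreter module that derives meaning
        # from human language (here, a naive substring match).
        for topic in self.knowledge_base:
            if topic in utterance.lower():
                return topic
        return "unknown"

    def reply(self, utterance: str) -> str:
        # User-interface module: turns the interpreted topic into a response.
        return self.knowledge_base.get(
            self.nlp_interpret(utterance), "Could you rephrase that?")

@dataclass
class CustomerResources:
    query: str            # operant resources: a need, language skills
    has_connection: bool  # operand resources: device and Internet access

def interact(bot: ProviderProposition, customer: CustomerResources) -> str:
    # No resource integration, no value: the proposition alone creates nothing.
    if not customer.has_connection:
        return "no interaction, no value co-created"
    return bot.reply(customer.query)

bot = ProviderProposition({"check-in": "Check-in opens at 3 pm.",
                           "parking": "On-site parking is free for guests."})
print(interact(bot, CustomerResources("What time is check-in?", True)))
```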
The premise of a collaborative process of co-creation between a service provider and a
customer has recently attracted criticism, since it inherently implies that interactions between
the two actors tend to result in value co-creation (i.e., positive valence). Recent studies have
drawn attention to the possibility that interactions between a service provider and a customer
can also result in negative outcomes, where at least one of the actors experiences a decline in
value from the interaction with the other actor (Plé & Chumpitaz Cáceres, 2010). This negative
outcome has been conceptualized as co-destruction, defined as ‘an interactional process
between service systems that results in a decline in at least one of the systems’ well-being
(which, given the nature of a service system, can be individual or organizational)’ (Plé &
Chumpitaz Cáceres, 2010, p. 431). More specifically, value co-destruction can be experienced
by any or all of the actors involved in an interaction and can be either intentional or accidental
(Plé & Chumpitaz Cáceres, 2010). Value co-destruction implies that when an actor (for
example, the customer) integrates a resource with another resource of another actor (for
example, the service provider), the well-being of any one or both of these actors diminishes
(Plé & Chumpitaz Cáceres, 2010). This decline in well-being stems from a discrepancy between
the actors’ expectations regarding actual or perceived resource integration (Plé, 2017).
The extant literature generally encapsulates the factors that lead to a decline in well-being,
and therefore co-destruction, under the term ‘unexpected resource loss’ (Smith, 2013, p. 1903).
For example, a customer may interact with Alexa, Google Home, or Siri, expecting the
experience to be frictionless. Instead, they may find the actual experience frustrating and
ineffective (Kaplan & Haenlein, 2018). In this case, the customer experiences a decline in well-
being (frustration) and loses resources (time), thus experiencing value co-destruction by losing
more than what was gained. Although the customer and the service provider can both cause
resource loss, the extant literature has focused predominantly on resource loss that stems from
the service provider’s failure to fulfill its value proposition (Järvi et al., 2018). In this respect,
the literature identifies a number of factors that can act as antecedents of resource loss and co-
destruction, including absence of information (Järvi et al., 2018; Robertson et al., 2014),
mistakes (Järvi et al., 2018), indifference, and technological failure (Zhang et al., 2018).
Several studies have identified resources that customers often lose when they interact with
service providers (e.g., Plé, 2016; Smith, 2013). Although resource loss classifications differ
among these studies, there is general consensus about the types of resources that can be lost
during service interactions. A comprehensive resource framework is provided by Plé (2016),
who identifies a number of resources that customers can lose as they interact with service
providers, including economic, social, informational, emotional, temporal, and relational
resources, as well as resources related to the customer’s role, such as role clarity. The
customer’s role during service interactions has only recently been examined in the service
literature, as studies have highlighted the importance of considering the detrimental effects of
customer participation. Chowdhury et al. (2016) draw attention to the negative aspects resulting
from co-creation and identify role conflicts and ambiguity, both of which can lead to tension.
Blut et al. (2019) reveal that active customer participation can lead to role stress, including role
conflict, role overload, and role ambiguity, and that such stress increases based on the task
scope and the beneficiary participation.
Recent studies have also demonstrated a link between resource loss and resource deficiency
and misintegration (e.g., Smith, 2013). Resource deficiency occurs when one or more of the
actors do not possess the required operant resources (e.g., knowledge) to be used during the
interaction. As a result, resource deficiency can have a compounded effect on how other
resources are utilized during the interaction. If the customer is deficient in a particular resource
(e.g., trust), a negative influence on the delivery of another resource (e.g., information) by the
service provider may result (Vafeas et al., 2016). The actors involved in the encounter can also
intentionally or unintentionally misintegrate their own resources or the resources of other actors
during the interaction (Plé & Chumpitaz Cáceres, 2010). Resources are misintegrated when any
of the actors fail to integrate their operant and operand resources in an ‘appropriate or expected’
manner from the other actor’s perspective (Plé & Chumpitaz Cáceres, 2010, p. 432). Consistent
with this view, Laud et al. (2019) propose an extensive typology of resource misintegration
manifestations, which include deceptive integration of resources (deliberate concealment of
resource integration), misunderstanding of how to integrate resources (failure to understand
how to correctly integrate resources), negligent integration of resources (deliberate inattention
in the integration of resources), and unwillingness to integrate resources (deliberate withholding
or withdrawal of resources). As resource misintegration can manifest itself in different ways,
uncovering the distinct antecedents of resource loss makes it possible to obtain early warning
signs of co-destruction (Laud et al., 2019).
The negative impact of co-destruction can be so substantial that customers involved in a
failed interaction may refuse to collaborate again in subsequent interactions (Prior & Marcos-
Cuevas, 2016). Customers can also publicly manifest their feelings regarding the failed
interaction with the service provider through negative word of mouth on social media, which
can harm the service provider’s image and reputation (Balaji et al., 2016).
A significant limitation of previous co-destruction studies is the research context. Whereas
numerous studies have recently investigated co-destruction and its associated resource loss,
most studies have focused on traditional, human-to-human interaction service settings (Smith,
2013), such as travel insurance (Blut et al., 2019) and B2B settings (Chowdhury et al., 2016;
Vafeas et al., 2016). By contrast, little empirical research has explored co-destruction in relation
to AI-powered technologies, which are permeating many service settings. For instance, while
co-destruction has been explored in the context of physical (embodied) service robots in elderly
settings (Čaić et al., 2018), there has been no exploration of co-destruction in the context of
virtual (disembodied) service robots (Wirtz et al., 2018), such as AI-powered chatbots.
Virtual AI-driven service settings present a distinct context whereby (1) the FLE is replaced
by a virtual and disembodied conversational agent that is trained to understand and, importantly,
mimic human behavior (Holz et al., 2009), and (2) the customer is expected to be more involved
and to perform some of the tasks that were previously performed by the FLE (Kaartemo &
Helkkula, 2018).
AI technologies are introduced in service settings to support ‘the co-creation of value
between a service provider and customer at the organizational frontline’ (Keyser et al., 2019,
p. 158). However, co-creation is not the only outcome of frontline interactions, and co-
destruction remains an important possibility. It is therefore important to understand the reasons
for, and the situations that lead to, co-destruction (Echeverri & Skålén, 2011; Plé & Chumpitaz
Cáceres, 2010). Our study seeks to bridge this gap in the knowledge by exploring the
antecedents of co-destruction and the resulting resource loss in AI-powered service
environments.
Research Method
Since there is a dearth of empirical studies on this topic, we utilized an exploratory research
design involving a qualitative research method to gain a rich understanding of how customers
behave and interact with AI technologies, as well as to understand the process of co-destruction
(Edmondson & McManus, 2007). We conducted in-depth semi-structured interviews with
customers who had interacted with AI chatbots in a customer service context in the past, as
these participants were considered to have the required expertise in this area. The
implementation of AI-powered applications, such as chatbots, is predominantly concentrated
in customer service environments in service-heavy industries, such as financial services,
telecoms, retail, and travel (Kannan & Bernoff, 2019). A customer service setting, therefore,
lent itself well to exploration in our study.
Data Collection
We developed an interview guide based on a review of earlier literature. We included
questions related to the resources required during the interaction (Grönroos & Voima, 2013;
Hilton & Hughes, 2013), sources of frustration (Echeverri & Skålén, 2011), resource loss (Plé,
2016; Smith, 2013), and resource deficiencies and misintegration (Plé & Chumpitaz Cáceres,
2010; Smith, 2013).
Research and current reports show that service robot and chatbot usage is strongest among
the younger demographic (Ivanov et al., 2018; SmartAction, 2018; Tuzovic & Paluch, 2018).
This implies that not all demographics could contribute equally to our study. Thus, typical
case sampling, a type of purposive sampling strategy, was used to select those cases considered
‘most typical, normal or representative of the group of cases under consideration’ (Teddlie &
Tashakkori, 2009, p. 176). Purposive sampling enabled the selection of specific information-
rich cases that were closely related to the study’s aim (Patton, 2002), which is consistent with
past qualitative co-destruction studies that sought to draw on rich data from informants with
comprehensive experience in particular practices (Echeverri & Skålén, 2011; Quach &
Thaichon, 2017). Younger customers who had experience interacting with chatbots were
considered most suitable for the exploration of co-destruction, both because they represent a
demographic that relies heavily on technology for their day-to-day interactions (Buhalis et al.,
2019) and because their experience allowed the collection of in-depth data on co-destruction
episodes from real, authentic, past experiences.
We employed a recruitment screener to qualify participants so as to ensure that the
participants had experienced interactions with an AI-powered chatbot in a customer service
context at least once in the 12 months preceding the study. Participants were asked to provide
details about their past chatbot interactions, such as the name of the service provider offering
the chatbot and the name of the chatbot (where relevant), to verify that an interaction with an
AI chatbot (and not a human representative) in a customer service context did indeed occur.
We conducted 27 face-to-face interviews with voluntary participants residing in Malta, a
multicultural setting that provides access to a diversity of otherwise difficult-to-reach
participants. Data was collected between June and September 2019. The data collection process
was concluded when additional interview data showed that theoretical saturation was reached
(Glaser & Strauss, 2017). On average, each interview lasted around 41 minutes. After obtaining
consent from the interviewees, we recorded and transcribed all the interviews.
The participants’ ages ranged from 21 to 46 years, fitting the targeted demographic profile.
The participants had obtained a relatively high level of education (undergraduate degree or
higher) and occupied middle to upper management positions. The participants generally
enjoyed using modern technology daily, and perceived novel applications as having high utility,
which is characteristic of their generation (Roberts, 2018; Stewart et al., 2017). Indeed, the
interviewees reported using AI-powered chatbots in various contexts, the most common being
Fintech (financial technologies), an industry in which technology is comprehensively used to
improve and automate the delivery of financial services (Belanche et al., 2019). The
interviewees mentioned that they generally used Fintech applications for convenience and
control in managing their accounts. A full list of interviewee characteristics is set out in Table
1.
[Insert Table 1 near here]
Data Analysis
We uploaded our transcripts into an NVivo 12 project for coding and followed the
systematic approach outlined by Gioia et al. (2013)—an approach that has been employed in
previous co-destruction research (Järvi et al., 2018; Vafeas et al., 2016). This process required
organization of the data into first-order codes, which were closely linked to existing terms
offered by the interviewees so as to preserve the authenticity of their expressions. The next
stage involved establishing connections between the first-order codes, leading to the emergence
of second-order themes. This procedure involved several readings of verbatim transcripts and
establishing connections between the emerging themes and the data set. Once a viable set of
themes was established, the second-order themes were condensed further into
aggregate dimensions. This process ensured a strong foundation for building a data structure,
while enriching the qualitative rigor of the study by clearly showing the development of the
process, from the primary data to the theoretical constructs (Gioia et al., 2013).
Following Bazeley and Jackson (2019), we identified a rich transcript as being particularly
suitable to start the coding process and to generate a preliminary list of first-order codes. When
the preliminary coding was concluded, we looked for links between the second-order themes
to classify them into more developed, aggregate dimensions. For example, ‘poor chat progress’
and ‘incorrect interpretation’ were combined into the aggregate dimension ‘cognition
challenges.’ We adopted this process for the rest of the data.
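For concreteness, the fragment below sketches this first-order-to-aggregate hierarchy as a simple data structure. The 'cognition challenges' branch reflects the example just given; the wording of the first-order codes is paraphrased from the interview material reported later and is illustrative only, not a verbatim codebook.

```python
# Sketch of the Gioia-style coding hierarchy used in the analysis
# (illustrative; codes paraphrased, not a verbatim codebook).
coding_structure = {
    # aggregate dimension -> {second-order theme: [first-order codes]}
    "cognition challenges": {
        "poor chat progress": [
            "chatbot asked an excessive number of questions",
            "same answer given to different questions",
        ],
        "incorrect interpretation": [
            "reply was unrelated to the question asked",
        ],
    },
    # The four remaining aggregate dimensions (authenticity issues, affective
    # issues, functionality issues, integration conflicts) follow the same
    # first-order -> second-order -> aggregate pattern.
}

def first_order_codes(structure: dict) -> list:
    """Flatten the hierarchy back to the interview-level codes."""
    return [code
            for themes in structure.values()
            for codes in themes.values()
            for code in codes]

print(first_order_codes(coding_structure))
```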
This analytical process resulted in the emergence of five distinct aggregate dimensions,
which are shown in Figure 1 and set out in more detail in the next section.
[Insert Figure 1 near here]
Findings
Our data analysis revealed that co-destruction in AI service settings emerges from five
antecedents. These antecedents, in the order in which they occur during the interaction process,
are authenticity issues, cognition challenges, affective issues, functionality issues, and
integration conflicts. Any of these antecedents can lead to the perception of a failed service
interaction and result in resource loss for customers and thus a decline in their well-being.
Customers attribute responsibility for resource losses either to themselves or to the service
provider and adopt avoidance or confrontative measures intended to regain control over failed
interactions. Table 2 sets out the five antecedents with their constituent concepts, the perceived
customer resource loss, and the attributions for such resource loss.
[Insert Table 2 near here]
Authenticity Issues
Authenticity issues stem from the shrouded presence of a chatbot during an interaction,
such as when a customer perceives that the service provider is intentionally hiding or covering
up the identity of the chatbot. Participants mentioned that when the identity of the chatbot is
not clear, they depend on a number of cues to determine whether they are interacting with a
chatbot or a human agent. These cues include receiving instant replies to questions, noticing
perfect punctuation and spelling, or repeatedly obtaining the same answers to different queries.
Customers felt deceived when they were made to believe they were interacting with a human
agent but found out that they were actually interacting with a chatbot.
This uncertainty affected customers’ role clarity—that is, the extent to which they
understood the role they must fulfill in the interaction. Customers were uncertain whether to
change their style of communication, to write in a keyword-like manner, or to write in a
different way. They did not know whether to be curt or friendly, and what stance to adopt to
demonstrate that their issue was important.
I expect that I am told that it’s a chatbot because then I would have to change the
way I pose the question. So even your English, how you write a question, it has to
be quite clear for the chatbot to understand what you’re actually asking. [I7]
Customers held themselves and the service provider equally accountable for the lack of
role clarity that arises due to authenticity issues. Customers realized that they might not possess
the skills that allow them to recognize that they are communicating with a chatbot. For example,
participants remarked that although they are generally alert and attentive during their
conversations, they might lack the ability to discern nuances that are specific to chatbot
behavior. Additionally, participants discussed the service provider’s deceptive integration of
resources through the deliberate misrepresentation of a chatbot presence; for example, when
the service provider intentionally opted not to disclose the chatbot identity or if the chatbot was
purposely assigned human characteristics, such as being made to write imperfect text, this was
labeled ‘unfair’ and ‘dishonest.’ Furthermore, service providers occasionally attribute human
names to chatbots for increased levels of anthropomorphism (Araujo, 2018); thus, unless this
is indicated explicitly during the chat, customers can easily mistake the chatbot for a human
representative.
Cognition Challenges
Our findings demonstrate that customers perceive a number of cognitive challenges when
using chatbots. Chatbots exhibit a lack of understanding when the chat progress is poor, such
as when they ask an excessive number of questions to comprehend an issue or they consistently
provide the same answer to different customer questions. Cognitive challenges were also
observed through instances of incorrect interpretation, such as when chatbots misunderstand a
query or problem and provide an unrelated reply. One respondent complained that ‘the reply
that I received had nothing to do with my question—it was irrelevant—so it was evident that
the chatbot did not understand my question’ [I20].
The data showed that incomprehension breeds feelings of frustration and anger in
customers. Although customers may not expect the chatbot to fully resolve their problems or
issues, they do expect that, at a minimum, it is able to understand the context of their question
and to provide adequate guidance. When this is not the case—for example, when the chat results
in a deadlock—customers feel agitated and upset because they feel that they have lost control
of the interaction. If customers are already upset and frustrated pre-interaction, their anger and
frustration will only be exacerbated in cases characterized by a chatbot’s lack of cognition.
Interestingly, participants were observed to attribute such emotional resource losses to both
the service provider and themselves. The service provider was assigned responsibility for
deliberate inattention during the creation and evolution of the chatbot—for instance, if the
chatbot was not trained properly or not provided with the correct inputs. One participant
remarked that such issues led him to believe that the chatbot ‘needs more development
at this point in time’ [I19]. However, participants admitted that they also have a part to play in
ensuring that the chatbot understands their request; they pointed to their own failure to
understand how to correctly integrate resources when they did not adapt their communication
skills (e.g., their writing style and use of keywords) to be better understood by the chatbot:
The chatbot kept asking me, ‘which is your account?’, and I retyped the problem,
and it gave me the same answer. And then at the end, I realized that my wording
might not have been correct. As soon as I changed the wording, the chatbot
immediately pointed me to the right answer. [I14]
Affective Issues
The data analysis showed that customers expect chatbots to exhibit a degree of empathy.
In customer support situations, customers perceive chatbots as substituting human employees
and, as a result, expect an element of sympathy and personalization within the interaction. One
participant articulated that ‘rather than just understanding what I’m typing, I want the chatbot
to understand, to feel, what I’m feeling’ [I05]. However, the participants generally believed that
regardless of their level of technological advancement, chatbots are unable to adapt easily to
customers’ emotional state and to defuse customers’ feelings of anger, frustration, stress, and
concern.
Consequently, customers experience relational resource losses when they determine that
interactions with chatbots are devoid of affective understanding. They perceive regular
interactions with human support agents as a way to develop a deeper bond with the service
provider, which often involves building a relationship with particular employees who, in turn,
may acknowledge the customer’s repeat interactions. In contrast, chatbot interactions were
described as ‘clinical’ [I16], leaving customers feeling unvalued and experiencing feelings of
detachment from the service provider.
I thought it [the chatbot] was rather impersonal... It made me feel distant from the
company. I didn’t like that. If it were my company, I would want my customers
to feel comfortable and close to the company, not distant. [I04]
Participants attributed affective issues to the service provider’s deliberate withholding of
resources and information. Participants reasoned that they felt let down by the service provider
when they were forced to interact with a chatbot rather than be given human support during
critical, urgent situations—for example, when payment fails upon a hotel checkout, resulting in
the customer potentially missing a flight while waiting for the payment transfer to be put into
effect. In such cases, which are characterized by a high degree of anxiety and strong feelings
of anger, customers expect to be met with a high level of empathy and reassurance. When
customers are required to interact with a chatbot instead, attribution judgments towards the
service provider are heightened. The service provider is perceived as withdrawing human
resources (human FLEs) and, as a result, blamed for their unwillingness to integrate resources.
Customers also assume a minimal degree of responsibility for resource losses that occur
as a result of affective issues. When the service provider adopts a ‘chatbot first, agent second’
policy, customers generally do have the option to speak to a human representative; however,
they have to pass through the chatbot first and instruct the chatbot to transfer the conversation
on to a human. Yet not all customers are aware of this functionality or how to use it.
Participants remarked how during urgent, emotional situations, they might lack the presence of
mind to check or notice the option to transfer the conversation to a human representative.
Functionality Issues
Our findings revealed that customers perceive chatbots to be limited in their functionality.
Despite being powered by AI, chatbots can only offer limited assistance during chat
interactions. For example, customers perceive chatbots as a suitable replacement for humans
when answering simple, straightforward questions; however, they cannot be relied on to solve
more complex queries, as the scope of their abilities is generally narrow. The limited
functionality of chatbots was also reflected in their inability to process non-textual inputs, such
as pictures or emojis, which customers may prefer to use to express their feelings or preferences.
Despite expecting chatbots to be limited, customers do not expect significant resource
losses as a result of such functional limitations. Yet, participants reported the loss of significant
time resources when they needed to repeat a request or restart the entire conversation with a
human. Such temporal losses are further attenuated, as they contrast heavily with customers’
prior expectations of chatbot speed and agility.
I’m not going to use the [company name] chatbot ever again, because it’s a waste
of time. Because if you use it with the intent to have an immediate reply, and then
it turns out to be more complicated, then I would be more frustrated. [I04]
Participants attributed functionality issues completely to the service provider. Functionality
issues are perceived to result from the unavailability of specific chatbot features, which is the
responsibility of the service provider. More precisely, such attributions of misintegration were
observed when the chatbot failed to set a customer’s expectations, for example, by neglecting
to inform the customer of limitations in its functionality or to specify the degree of accuracy
of the answers given.
Integration Conflicts
Our analysis identified integration conflicts as another antecedent of resource loss. These
conflicts arise from disconnects between the chatbot and other customer support channels,
generally manned by human representatives. This can occur when, for example, information
collected by the chatbot in the initial stages of the conversation is not conveyed to a human
representative, or when the chatbot does not automatically transfer a complex or deadlocked
conversation to a human representative.
Integration conflicts result not only in temporal and emotional resource losses (time
wastage and frustration); such conflicts also cause customers to experience informational
losses. Customers perceive that chatbot conversations are not stored; as a result, they feel that
they have lost their frame of reference relating to a specific problem or issue. In such cases,
customers perceive that they have lost their ability to request an audit trail, including details of
who they have spoken to and about what.
In my case, the chatbot definitely did not keep any data, and I had to repeat it and
start from scratch every time I logged in when I was speaking to a new chatbot,
because this was over a period of months…there was zero traceability. [I08]
Participants blamed the service provider for informational losses occurring through such
conflicts, although in some cases, the participants also assumed a degree of responsibility.
Customers deem the service provider responsible in cases of poor organizational procedures,
which inhibit a human representative from following up on previous conversations with
chatbots. The service provider was also blamed for not including a clear exit option during
chatbot interactions that would enable customers to shift the conversation to a human.
Participants felt that this reflected deliberate inattention on the service provider’s part.
Customers also pointed at their own limitations during chatbot conversations when they did not
notice specific options, such as the possibility of saving the chat conversation (to counter
informational losses), or the possibility of transferring the conversation to a human
representative.
Customer Reactions to Resource Losses
We also observed that resource loss influences customer emotions and subsequent
behaviors to different degrees. Milder reactions may involve an immediate call for human
support through other channels, such as phone or email, or refusal to reuse the chatbot,
especially if the negative interaction was one in a series of negative experiences.
Situations in which participants experienced deeper emotional resource losses led to
harsher reactions. A common reaction involved terminating a service with the service provider
or moving to a competitor, especially in cases of a new service where the customer had not yet
invested significant time and effort in developing a relationship with the service provider.
I was like, you know what? I’m okay, I don’t have the problem anymore, because
I’m stopping the service straight away. I’m fed up. [I25]
The harshest reactions involved propagating negative word of mouth on social media.
Although negative word of mouth was not as common as the other reactions identified, we
observed it to be spurred by continuous negative service from the chatbot, such as looping or
failing to understand the customer’s query.
Basically, it [the chatbot] got me nowhere…it was extremely frustrating. I think
in all it took me a good three months to get it sorted out. I had to actually get it to
Twitter, complaining live on Twitter to get someone to speak to me. [I08]
Resource loss activates customers’ desire to take control of a situation by hurting or getting
even with the service provider. This behavior reflects a coping strategy, whereby customers
select a specific, and generally negative, course of action in an attempt to restore their own
well-being (Mick & Fournier, 1998). Avoidance and confrontative coping behaviors were
evident when customers attempted to restore their well-being in AI-powered service
interactions. Attempts to call for human support and refusal to reuse the chatbot are examples
of ‘avoidance’ coping strategies, where customers attempt to distance themselves from new
technology. On the other hand, termination of service, switching to a competitor, and engaging
in negative word of mouth demonstrate ‘confrontative’ coping strategies.
Discussion and Theoretical Contributions
We investigated the process of co-destruction during customer interactions with AI-
powered chatbots. In doing so, we aimed to address two distinct goals: first, to understand the
transformational effects of AI on co-destruction, and second, to analyze the process of co-
destruction from the customer perspective. The findings from this empirical study contribute to
the current understanding of co-destruction in three ways.
First, we demonstrated that co-destruction is a process that is set into motion by a number
of factors (antecedents), resulting in a decline in well-being for at least one of the actors—in
this case, the customer. This decline in well-being takes the form of resource loss. In order to counter
such resource loss, customers undergo an exercise of responsibility attribution to determine
who was responsible for the resource loss and decide what action to take. This process allows
customers to perceive that they have regained a degree of control over the encounter. A
conceptualization of co-destruction based on this process is set out in Figure 2.
[Insert Figure 2 near here]
Whereas previous studies have contributed models of co-creation from the customer
perspective (Etgar, 2008; Füller, 2010), our study addresses the gap in the literature by
proposing a model of co-destruction from the customer’s viewpoint. Our model provides a
richer conceptualization of the link between customer resource loss, attributions of resource
loss, and customer coping strategies following such loss. As previous co-destruction studies
tended to investigate resource loss, attributions of resource loss, and coping strategies
separately, our study makes a noteworthy contribution to the extant understanding of value co-
destruction as a complete process, which has been largely overlooked in previous literature
(Ostrom et al., 2015).
Second, our study empirically demonstrated the transformational role of AI in value co-
destruction. Our conceptualization showed that while a number of antecedents of resource loss
are common to a multitude of service settings, AI-powered service settings are affected by a
number of specific antecedents, necessitating their own investigation.
Three of the identified antecedents—cognition challenges, functionality issues, and
integration conflicts—are consistent with previous studies that examined co-destruction
through interaction with physical robots or in human-to-human B2B settings (Čaić et al., 2018;
Järvi et al., 2018; Vafeas et al., 2016). Our findings suggest that cognition challenges stem from
the AI technology’s lack of understanding and echo previous studies suggesting a lack of
understanding as an antecedent of resource loss (Čaić et al., 2018; Järvi et al., 2018; Vafeas et
al., 2016). The identification of functionality issues as a reason for co-destruction corroborates
previous studies that outline the inability to serve as a key antecedent of co-destruction (Järvi
et al., 2018). This notion is congruent with the idea that co-destruction occurs when an operant
resource by one of the actors (the AI technology offered by the service provider) is deemed to
be inadequate and cannot meet customer requests satisfactorily (Echeverri & Skålén, 2011).
Integration conflicts are related to inadequate coordination, which was previously proposed as
an antecedent of co-destruction, albeit in B2B settings (Vafeas et al., 2016). However, whereas
previous studies consider inadequate coordination to arise from the lack of alignment between
two actors or systems (Vafeas et al., 2016), our findings suggest that poor organizational
configurations of one actor (the service provider) cause such misalignment. The infusion of AI
into service thus renders value creation a more complex process whereby coordinated,
harmonized resource inputs assume even greater importance.
Two of the identified reasons for resource loss, affective and authenticity issues, are
specific to AI-powered environments; neither has previously been identified in the extant
co-destruction literature.
Chatbots, even if powered by AI, are unlikely to be autonomous and to express genuine
emotions (Robinson et al., 2020). They may be trained to mimic human responses and express
basic emotions, which may be suitable or even ideal in low-involvement, mundane chatbot
encounters (e.g., in questions regarding package delivery status). However, in high-
involvement encounters (e.g., refund problems), when customers expect empathy and
understanding (Rafaeli et al., 2017), they may be disappointed with chatbot-expressed emotions,
especially if they interpret such emotions as superficial or insincere. In such instances, the
opportunity for successful collaboration between the customers and the service provider (co-
creation) is not only lost, but it also becomes counterproductive, as customers perceive a decline
in their relational well-being as a result of the interaction. It is also evident that in these cases,
co-destruction emerges from the disconnect between chatbot ability and task type. As opposed
to rule-based chatbots, AI-powered chatbots have the ability to learn from past interactions,
which may lead to their premature deployment in situations that require not only mechanical or
analytical intelligence, but also intuitive and empathetic intelligence (Huang & Rust, 2018).
These co-destruction possibilities are further exacerbated when chatbot use is imposed or when
there is a lack of clear communication on how to converse with a human representative, as the
customer perceives a loss of control and freedom over the situation.
As advances in NLP and ML lead to more humanlike chatbot conversations (Wirtz et al.,
2018), it is becoming increasingly common for customers to be unaware that they are
interacting with a chatbot. Recent literature classifies instances where one actor is unaware that
the other actor is not human as ‘counterfeit service encounters’ (Robinson et al., 2020, p. 367).
Our findings show how, similar to counterfeit goods (Eisend & Schuchert-Güler, 2006),
counterfeit service encounters initiate a process of co-destruction, as customers feel deceived
and are unable to retain full control over the conversation. These results imply that the process
of co-destruction is rendered more complex in service environments characterized by AI due to
the perception of multiple FLE identities (human vs. ‘counterfeit human’).
While lack of clarity about the identity of the FLE causes a customer to lose role clarity, it
can also spur further resource losses for not only the customer, but also the service provider.
Consistent with Smith (2013), our study shows the existence of ‘loss cycles’ or ‘downward
spirals’ pertaining to secondary resource losses that occur when customers attempt to regain
lost resources but instead incur additional resource losses. However, we supplement Smith’s
(2013) findings by proposing that, in the context of AI-powered service interactions, these
downward spirals or loss cycles can be extended to include resource losses incurred by the
service provider. When customers lose role clarity, they are not aware of the new role that they
need to assume, that of speaking to the chatbot in a ‘keyword-like,’ systematic manner. This
behavior results in resource losses for the service provider, as it limits the extent to which the
chatbot can learn from customer interactions. In AI-powered service environments, which
necessitate correct and consistent data inputs for systematic and autonomous system learning,
the lack of such data input means that the AI application is not able to learn effectively, and the
interaction degenerates over time.
Third, we offer a significant contribution to the extant, yet limited, literature on the
customer perspective of value co-destruction. While the benefits of AI technology adoption are
clear for service providers, it is important to obtain a customer-centric view on the
implementation of AI technologies at the frontline and to understand the factors that might lead
the customer to experience co-destruction and the circumstances in which this occurs. This understanding is especially important
given the crucial role of customers in AI-powered service interactions, when the customer is at
the very core of the service delivery (Kaartemo & Helkkula, 2018).
Customers do not expect reduced well-being to be an outcome when they engage in chatbot
conversations. When resource loss occurs, customers resolve to resume control of the situation
by first attributing responsibility for the resource loss and then adopting a coping strategy. Our
findings suggest that customers largely attribute their resource losses to resource misintegration
by the service provider (Table 2). Resource misintegration is perceived as an intentional action
by service providers to maximize benefits for themselves.
Customers attribute resource deficiency or resource misintegration to themselves in only a
limited number of cases. Such attributions occur when customers do not believe they
have the required capabilities to recognize chatbot presence (self-efficacy), and when they do
not adapt their communication style to one that can be more easily understood by the chatbot.
This tendency of customers to attribute resource loss to the service provider can be explained
by the self-serving bias, which states that in the case of a service failure, customers are more
likely to ascribe failure to third parties (Bendapudi & Leone, 2003). Previous research on the
self-serving bias suggests that this bias can be reduced when customers are given a choice of
whether to participate in service production or otherwise (Bendapudi & Leone, 2003). It follows
that cases of imposed chatbot use, which do not grant the customer any choice, exacerbate the
self-serving bias and result in a higher level of resource loss attributed to the service provider.
Indeed, customers clearly demarcate instances of resource loss that stem from imposed chatbot
use as deliberate unwillingness to integrate resources by the service provider.
Our study demonstrates that for the customer, the co-destruction process extends beyond
immediate resource loss and evolves into making attributions for that loss and taking
corresponding avoidance or confrontative measures aimed at the service provider. This finding
validates the importance of extending the co-destruction literature to the field of attribution of
responsibility in order to expand the limited knowledge on the consequences of co-destruction
and the array of emotional coping strategies that consumers may display in response to resource
loss (Tsarenko et al., 2019).
Practical Implications
Our findings suggest a number of strategic implications for managers and practitioners in
service sectors.
AI technologies on the frontline have been created specifically to encourage co-creation
between the service provider and the customer. However, our observations demonstrate that
co-destruction is also possible. Co-destruction emerges when the co-created service fails to
meet customers’ expectations. It is important for managers to realize that when AI applications,
such as chatbots, are introduced to the frontline, customers view such applications as a
substitute for human FLEs. As a result, customers hold similar, if not completely identical,
expectations regarding service levels. In light of this, service providers should help customers understand any limitations inherent in the AI application and ensure that the chatbot explains what customers should do when those limitations are encountered, so as to avoid customer resource loss. More precisely, service providers should recognize that customer queries vary significantly in their degree of complexity and involvement. While AI chatbots can easily tackle simpler queries, problems arise when they face more complex questions. Service providers should therefore ensure that a query's complexity is identified as early as possible and, if the query is complex, offer a clear and seamless transfer to a human support representative early in the process.
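One possible form of such a triage rule is sketched below; the threshold, the complexity proxy, and all names are hypothetical assumptions of ours, not a tested design:

```python
# A minimal sketch of an early-triage rule: estimate query complexity first
# and hand complex queries to a human agent as soon as possible.

COMPLEXITY_THRESHOLD = 0.5  # assumed cut-off; would be tuned per deployment


def estimate_complexity(query: str) -> float:
    """Toy proxy for complexity: multi-issue, contrastive, or long queries
    score higher. A production system would use a trained classifier."""
    signals = ["but", "however", "already tried", "still not", "and also"]
    hits = sum(phrase in query.lower() for phrase in signals)
    return min(1.0, 0.25 * hits + len(query.split()) / 60)


def route(query: str) -> str:
    if estimate_complexity(query) >= COMPLEXITY_THRESHOLD:
        # Early, seamless transfer avoids the failed loops that our
        # interviewees associated with resource loss.
        return "escalate_to_human"
    return "handle_with_chatbot"


print(route("What are your opening hours?"))              # handle_with_chatbot
print(route("I already tried resetting my password but "  # escalate_to_human
            "it is still not working, and also my email changed"))
```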
Although chatbot disclosure at the start of an interaction may prejudice customers against the effectiveness of such chatbots (Luo et al., 2019), our findings convey the
negative impact of perceived deception by service providers. Customers are also becoming
increasingly vigilant in determining the identity of the FLE, and may erroneously judge a
human FLE to be a chatbot, or vice versa, when the FLE identity is not disclosed. Service
providers are therefore encouraged to clearly advise customers of the identity of the FLE to
avoid feelings of deception and distrust. Such a notification can be offered at the start of the
interaction or even at the end (Belanche et al., 2020). Furthermore, disclosing the presence of
a chatbot may be regulated or considered standard practice in the near future, in light of ethical
concerns (Robinson et al., 2020).
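As a simple illustration, a disclosure-first greeting might look as follows; the wording, function name, and parameters are our own hypothetical choices rather than any provider's actual design:

```python
# A minimal sketch of upfront identity disclosure. The greeting text,
# function name, and parameters are illustrative assumptions.

def greeting(agent_is_bot: bool, agent_name: str) -> str:
    if agent_is_bot:
        # Explicit disclosure pre-empts the perceived deception reported
        # by our interviewees when FLE identity was left ambiguous.
        return (f"Hi, I'm {agent_name}, a virtual assistant. I can answer "
                "common questions or connect you to a human colleague.")
    return f"Hi, I'm {agent_name}, a customer service representative."


print(greeting(True, "Ava"))
```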
Furthermore, we demonstrated that when customer interactions with AI chatbots are negative and result in customer resource loss, such experiences can have serious ramifications for service providers, as customers may turn to more costly support channels, such as the phone. In such cases, an investment in AI technology intended to deliver cost savings might backfire and place a heavier load on other support channels.
When customer resource loss is significant, customers may take harsher action, such as terminating the service, switching to a competitor, or complaining on social media. The possibility of such behaviors also has implications for how the success of chatbot applications is measured. Beyond cost savings and insight gains, service providers should adopt a customer-centric view and develop a clear understanding of what chatbot success looks like from the customer's perspective.
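By way of illustration, the sketch below contrasts cost-centered and customer-centered indicators; the session fields, sample values, and metric choices are our assumptions, not real data:

```python
# A minimal sketch of customer-centric chatbot success metrics alongside
# the usual cost-centered containment rate. All data are illustrative.

sessions = [
    {"resolved_by_bot": True,  "escalated": False, "csat": 5},
    {"resolved_by_bot": False, "escalated": True,  "csat": 2},
    {"resolved_by_bot": False, "escalated": False, "csat": 1},  # abandoned
]

n = len(sessions)
containment = sum(s["resolved_by_bot"] for s in sessions) / n  # cost view
escalation = sum(s["escalated"] for s in sessions) / n         # load shifted
abandonment = sum(
    not (s["resolved_by_bot"] or s["escalated"]) for s in sessions) / n
avg_csat = sum(s["csat"] for s in sessions) / n                # customer view

# A dashboard tracking containment alone would miss abandoned sessions,
# which our findings suggest signal customer resource loss.
print(f"containment={containment:.2f} escalation={escalation:.2f} "
      f"abandonment={abandonment:.2f} csat={avg_csat:.1f}")
```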
Limitations and Future Research
Utilizing a qualitative approach enabled us to undertake an in-depth exploration of a
distinct phenomenon, value co-destruction, in a novel context, AI-powered service interactions.
Like other qualitative studies, however, this approach limits the generalizability of our findings, which cannot be projected onto the entire population of human-AI interactions in a statistical sense. Moreover, although our study was conducted in a growing economy that is increasingly reliant on technology and has an increasingly multicultural workforce (and population), we believe our findings capture an emerging phenomenon that cannot yet be generalized to similar service industries elsewhere in the world.
Our study did not evaluate factors relating to consumers’ cultural behavior. Culture can
influence customer attitudes and behaviors in service settings (Chan et al., 2010); thus, this
would be a fruitful area for further work. Future research could analyze whether perceptions of
value loss, loss attributions, and coping strategies differ based on customers’ cultural value
orientations. These insights can contribute to a more developed understanding of the process of
co-destruction in AI-driven service encounters.
Quantifying the strength of the customers’ feelings and the prevalence of the perceived
resource loss was outside the scope of this study. Future research could build on our study to
develop a scale that can measure and quantify the extent of identified resource losses. Such
research could link measurement indicators to the concept of resource loss and provide a systematic way to measure and evaluate resource loss linked to co-destruction. A
focus on the different ways that resource loss could be countered, or at least minimized, would
also be useful to advance knowledge on the subject and especially to provide practical
managerial guidelines in terms of remedial or recovery strategies that can be adopted in cases
of value co-destruction.
Further work is also required to validate the identified resource losses in additional AI
settings—for example, those provided by voice-controlled digital assistants, such as Alexa.
Additional empirical approaches, such as lab or field experiments, could be adopted to allow
for a more in-depth exploration of this topic.
Lastly, additional research is needed to ascertain the impact of resource misintegration and
deficiency in AI settings. AI applications, such as chatbots, are smart and have the ability to learn from every interaction (Kumar et al., 2016). Thus, it is important to examine
how resource misintegration and deficiency by the customer or service provider at the start of
the interaction could have a compounded effect and cause further resource loss by the end of
the interaction.
Acknowledgements
The authors gratefully acknowledge the interviewees who participated in this study and the
anonymous reviewers of the AIRSI 2019 workshop: Artificial Intelligence and Robotics in
Service Interactions.
Declaration of Interest
No potential conflict of interest was reported by the authors.
References
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design
cues and communicative agency framing on conversational agent and company
perceptions. Computers in Human Behavior, 85, 183–189.
Balaji, M. S., Khong, K. W., & Chong, A. Y. L. (2016). Determinants of negative word-of-
mouth communication using social networking sites. Information and Management,
53(4), 528–540.
Bazeley, P., & Jackson, K. (2019). Qualitative data analysis with NVivo (3rd ed.). SAGE
Publications Ltd.
Belanche, D., Casaló, L. V., & Flavián, C. (2019). Artificial Intelligence in FinTech:
Understanding robo-advisors adoption among customers. Industrial Management and
Data Systems, 119(7), 1411–1430.
Belanche, D., Casaló, L. V., Flavián, C., & Schepers, J. (2020). Service robot
implementation: A theoretical framework and research agenda. The Service Industries
Journal, 40(3–4), 203–225.
Bendapudi, N., & Leone, R. P. (2003). Psychological Implications of Customer Participation
in Co-production. Journal of Marketing, 67(1), 14–28.
Bitner, M. J., Faranda, W. T., Hubbert, A. R., & Zeithaml, V. A. (1997). Customer
contributions and roles in service delivery. International Journal of Service Industry
Management, 8(3), 193–205.
Blut, M., Heirati, N., & Schoefer, K. (2019). The Dark Side of Customer Participation: When Customer Participation in Service Co-Development Leads to Role Stress. Journal of Service Research.
Bock, D. E., Wolter, J. S., & Ferrell, O. C. (2020). Artificial intelligence: disrupting what we
know about services. Journal of Services Marketing, ahead-of-print.
Buhalis, D., & Cheng, E. S. Y. (2020). Exploring the Use of Chatbots in Hotels: Technology Providers’ Perspective. In Information and Communication Technologies in Tourism
2020 (pp. 231–242). Springer International Publishing.
Buhalis, D., Harwood, T., Bogicevic, V., Viglia, G., Beldona, S., & Hofacker, C. (2019).
Technological disruptions in services: lessons from tourism and hospitality. Journal of
Service Management, 30(4), 484–506.
Buhalis, D., & Sinarta, Y. (2019). Real-time co-creation and nowness service: lessons from
tourism and hospitality. Journal of Travel and Tourism Marketing, 36(5), 563–582.
Čaić, M., Odekerken-Schröder, G., & Mahr, D. (2018). Service robots: Value co-creation and
co-destruction in elderly care networks. Journal of Service Management, 29(2), 178–
205.
Camilleri, J., & Neuhofer, B. (2017). Value co-creation and co-destruction in the Airbnb
sharing economy. International Journal of Contemporary Hospitality Management,
29(9), 2322–2340.
Chan, K. W., Yim, C. K. (Bennett), & Lam, S. S. K. (2010). Is Customer Participation in Value
Creation a Double-Edged Sword? Evidence from Professional Financial Services Across
Cultures. Journal of Marketing, 74(3), 48–64.
Chandler, J. D., & Lusch, R. F. (2015). Service systems: A broadened framework and
research agenda on value propositions, engagement, and service experience. Journal of
Service Research, 18(1), 6–22.
Chowdhury, I. N., Gruber, T., & Zolkiewski, J. (2016). Every cloud has a silver lining -
Exploring the dark side of value co-creation in B2B service networks. Industrial
Marketing Management, 55, 97–109.
Chung, M., Ko, E., Joung, H., & Kim, S. J. (2018). Chatbot e-service and customer
satisfaction regarding luxury brands. Journal of Business Research, ahead-of-print.
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the
uncanny valley: An experimental study of human–chatbot interaction. Future
Generation Computer Systems, 92, 539–548.
Cova, B., & Dalli, D. (2009). Working consumers: The next step in marketing theory?
Marketing Theory, 9(3), 315–339.
Davenport, T. H., Guha, A., Grewal, D., & Bressgott, T. (2019). How artificial intelligence
will change the future of marketing. Journal of the Academy of Marketing Science.
Demoulin, N., & Willems, K. (2019). Servicescape irritants and customer satisfaction: The
moderating role of shopping motives and involvement. Journal of Business Research,
104, 295–306.
Echeverri, P., & Skålén, P. (2011). Co-creation and co-destruction: A practice-theory based
study of interactive value formation. Marketing Theory, 11(3), 351–373.
Edmondson, A., & McManus, S. (2007). Methodological fit in field research. Academy of
Management Review, 32(4), 1155–1179.
Eisend, M., & Schuchert-Güler, P. (2006). Explaining Counterfeit Purchases: A Review and Preview. Academy of Marketing Science Review, 2006(12).
Etgar, M. (2008). A descriptive model of the consumer co-production process. Journal of the
Academy of Marketing Science, 36(1), 97–108.
Füller, J. (2010). Refining Virtual Co-Creation from a Consumer Perspective. California
Management Review, 52(2), 98–122.
Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive
research: Notes on the Gioia methodology. Organizational Research Methods, 16(1),
15–31.
Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory: Strategies for
qualitative research. Routledge.
Gnewuch, U., Morana, S., & Maedche, A. (2018). Towards Designing Cooperative and Social
Conversational Agents for Customer Service. ICIS 2017: Transforming Society with
Digital Innovation, 0–13.
Griol, D., Carbó, J., & Molina, J. M. (2013). An automatic dialog simulation technique to
develop and evaluate interactive conversational agents. Applied Artificial Intelligence,
27(9), 759–780.
Grönroos, C., & Voima, P. (2013). Critical service logic: Making sense of value creation and
co-creation. Journal of the Academy of Marketing Science, 41(2), 133–150.
Harrison, T., & Waite, K. (2015). Impact of co-production on consumer perception of
empowerment. The Service Industries Journal, 35(10), 502–520.
Hilton, T., & Hughes, T. (2013). Co-production and self-service: The application of Service-
Dominant Logic. Journal of Marketing Management, 29(7–8), 861–881.
Ho, S.-H., & Ko, Y.-Y. (2008). Effects of self-service technology on customer value and
customer readiness: The case of internet banking. Internet Research, 18(4), 427–446.
Holz, T., Dragone, M., & O’Hare, G. M. P. (2009). Where robots and virtual agents meet: A
survey of social interaction research across milgram’s reality-virtuality continuum.
International Journal of Social Robotics, 1(1), 83–93.
Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service
Research, 21(2), 155–172.
IDC. (2019). Worldwide Spending on Artificial Intelligence Systems Will Grow to Nearly
$35.8 Billion in 2019, According to New IDC Spending Guide. International Data
Corporation. https://www.idc.com/getdoc.jsp?containerId=prUS44911419
Ivanov, S., & Webster, C. (2019). Economic Fundamentals of the Use of Robots, Artificial
Intelligence, and Service Automation in Travel, Tourism, and Hospitality. In Robots,
Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality (pp.
39–55).
Ivanov, S., Webster, C., & Garenko, A. (2018). Young Russian adults’ attitudes towards the
potential use of robots in hotels. Technology in Society, 55, 24–32.
Järvi, H., Kähkönen, A. K., & Torvinen, H. (2018). When value co-creation fails: Reasons
that lead to value co-destruction. Scandinavian Journal of Management, 34(1), 63–77.
Kaartemo, V., & Helkkula, A. (2018). A systematic review of Artificial Intelligence and
robots in value co-creation: Current status and future research avenues. Journal of
Creating Value, 4(2), 1–18.
Kannan, P. V., & Bernoff, J. (2019). Does your company really need a chatbot? Harvard
Business Review. https://hbr.org/2019/05/does-your-company-really-need-a-chatbot
Kaplan, A., & Haenlein, M. (2018). Siri, Siri, in my hand: Who’s the fairest in the land? On
the interpretations, illustrations, and implications of artificial intelligence. Business
Horizons, 62(1), 15–25.
Kelleher, C., & Peppard, J. (2011). Consumer Experience of Value Creation - a
Phenomenological Perspective. In A. Bradshaw, C. Hackley, P. Maclaran, & M. Duluth
(Eds.), European Advances in Consumer Research (Vol. 9, pp. 325–332). Association
for Consumer Research.
Keyser, A. De, Köcher, S., Alkire, L., Verbeeck, C., & Kandampully, J. (2019). Frontline
service technology infusion: Conceptual archetypes and future research directions.
Journal of Service Management, 30(1), 156–183.
Kim, K., Byon, K., & Baek, W. (2019). Customer-to-customer value co-creation and co-
destruction in sporting events. The Service Industries Journal, ahead-of-print.
Kumar, V., Dixit, A., Javalgi, R. (Raj) G., & Dass, M. (2016). Research framework,
strategies, and applications of Intelligent Agent Technologies (IATs) in Marketing.
Journal of the Academy of Marketing Science, 44(1), 24–45.
Larivière, B., Bowen, D., Andreassen, T. W., Kunz, W., Sirianni, N. J., Voss, C., Wünderlich,
N. V., & De Keyser, A. (2017). “Service Encounter 2.0”: An investigation into the roles
of technology, employees and customers. Journal of Business Research, 79, 238–246.
Laud, G., Bove, L., Ranaweera, C., Leo, W. W. C., Sweeney, J., & Smith, S. (2019). Value
co-destruction: a typology of resource misintegration manifestations. Journal of Services
Marketing, ahead-of-print.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. Humans: The Impact of
Artificial Intelligence Chatbot Disclosure on Customer Purchases. Marketing Science,
ahead-of-print.
Marinova, D., de Ruyter, K., Huang, M.-H., Meuter, M. L., & Challagalla, G. (2017). Getting
smart: Learning from technology-empowered frontline interactions. Journal of Service
Research, 20(1), 29–42.
Mende, M., Scott, M. L., Bitner, M. J., & Ostrom, A. L. (2017). Activating consumers for
better service coproduction outcomes through eustress: The interplay of firm-assigned
workload, service literacy, and organizational support. Journal of Public Policy and
Marketing, 36(1), 137–155.
Mick, D. G., & Fournier, S. (1998). Paradoxes of technology: Consumer cognizance,
emotions, and coping strategies. Journal of Consumer Research, 25(2), 123–143.
Morosan, C., & DeFranco, A. (2016). Co-creating value in hotels using mobile devices: A
conceptual model with empirical validation. International Journal of Hospitality
Management, 52, 131–142.
Oram, R. (2019). Meeting Edward: Chatbots and the Changing Face of the Hotel Guest
Experience. Oracle Hospitality Check-In. https://blogs.oracle.com/hospitality/chatbots-
and-the-changing-the-face-of-the-hotel-guest-experience
Ostrom, A. L., Parasuraman, A., Bowen, D. E., Patrício, L., & Voss, C. A. (2015). Service
research priorities in a rapidly changing context. Journal of Service Research, 18(2),
127–159.
Patton, M. Q. (2002). Qualitative research and evaluation methods: Integrating theory and practice. Thousand Oaks, CA: SAGE.
Plé, L. (2016). Studying customers’ resource integration by service employees in interactional
value co-creation. Journal of Services Marketing, 30(2), 152–164.
Plé, L. (2017). Why do we need research on value co-destruction? Journal of Creating Value,
3(2), 162–169.
Plé, L., & Chumpitaz Cáceres, R. (2010). Not always co-creation: Introducing interactional
co-destruction of value in Service-dominant Logic. Journal of Services Marketing, 24(6),
430–437.
Prior, D. D., & Marcos-Cuevas, J. (2016). Value co-destruction in interfirm relationships: The
impact of actor engagement styles. Marketing Theory, 16(4), 533–552.
Quach, S., & Thaichon, P. (2017). From connoisseur luxury to mass luxury: Value co-
creation and co-destruction in the online environment. Journal of Business Research,
81, 163–172.
Rafaeli, A., Altman, D., Gremler, D. D., Huang, M.-H., Grewal, D., Iyer, B., Parasuraman,
A., & de Ruyter, K. (2017). The Future of Frontline Research. Journal of Service
Research, 20(1), 91–99.
Ramaswamy, V., & Ozcan, K. (2018). What is co-creation? An interactional creation
framework and its implications for value creation. Journal of Business Research,
84, 196–205.
Roberts, D. (2018). Why generational attitudes toward technology matter | EY - Global. EY.
https://www.ey.com/en_gl/health/why-generational-attitudes-toward-technology-matter
Robertson, N., Polonsky, M., & McQuilken, L. (2014). Are my symptoms serious Dr Google?
A resource-based typology of value co-destruction in online self-diagnosis. Australasian
Marketing Journal, 22(3), 246–256.
Robinson, S. G., Orsingher, C., Alkire, L., Keyser, A. De, Giebelhausen, M., Papamichail, K.
N., Shams, P., & Sobhy, M. (2020). Frontline encounters of the AI kind: An evolved
service encounter framework. Journal of Business Research, 116, 366–376.
SmartAction. (2018). How demographics affect chatbot usage.
https://www.smartaction.ai/blog/demographics-affect-chatbot-adoption-use/
Smith, A. M. (2013). The value co-destruction process: A customer resource perspective.
European Journal of Marketing, 47(11/12), 1889–1909.
Stewart, J. S., Goad, E., & Cravens, K. S. (2017). Managing millennials: Embracing
generational differences. Business Horizons, 60(1), 45–54.
Syam, N., & Sharma, A. (2018). Waiting for a sales renaissance in the fourth industrial
revolution: Machine learning and artificial intelligence in sales research and practice.
Industrial Marketing Management, 69, 135–146.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. SAGE.
Tsarenko, Y., Strizhakova, Y., & Otnes, C. C. (2019). Reclaiming the Future: Understanding
Customer Forgiveness of Service Transgressions. Journal of Service Research, 22(2).
Tussyadiah, I. P. (2020). A review of research into automation in tourism: Launching the
Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics
in Tourism. Annals of Tourism Research, 81, 102883.
Tuzovic, S., & Paluch, S. (2018). Conversational Commerce – A New Era for Service
Business Development. In M. Bruhn & K. Hadwich (Eds.), Service Business Development (pp.
82–101). Springer Gabler.
Ukpabi, D. C., Aslam, B., & Karjaluoto, H. (2019). Chatbot Adoption in Tourism Services: A
Conceptual Exploration. In Robots, Artificial Intelligence, and Service Automation in
Travel, Tourism and Hospitality (pp. 105–121).
Vafeas, M., Hughes, T., & Hilton, T. (2016). Antecedents to value diminution: A dyadic
perspective. Marketing Theory, 16(4), 469–491.
van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., & Petersen,
J. A. (2017). Domo arigato Mr. Roboto: Emergence of automated social presence in
organizational frontlines and customers’ service experiences. Journal of Service
Research, 20(1), 43–58.
Vargo, S. L., & Lusch, R. F. (2004). Evolving to a new dominant logic for Marketing.
Journal of Marketing, 68(1), 1–17.
Vargo, S. L., & Lusch, R. F. (2008). Service-dominant logic: Continuing the evolution.
Journal of the Academy of Marketing Science, 36(1), 1–10.
Verleye, K. (2015). The co-creation experience from the customer perspective: Its
measurement and determinants. Journal of Service Management, 26(2), 321–342.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A.
(2018). Brave new world: Service robots in the frontline. Journal of Service
Management, 29(5), 907–931.
Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A New Chatbot for Customer
Service on Social Media. Proceedings of the 2017 CHI Conference on Human Factors in
Computing Systems, 3506–3510.
Yin, J., Qian, L., & Shen, J. (2019). From value co-creation to value co-destruction? The case
of dockless bike sharing in China. Transportation Research Part D: Transport and
Environment, 71, 169–185.
Zhang, T., Lu, C., Torres, E., & Chen, P.-J. (2018). Engaging customers in value co-creation
or co-destruction online. Journal of Services Marketing, 32(1), 57–69.