Transformation through Provocation?
Designing a ‘Bot of Conviction’ to Challenge Conceptions and Evoke Critical
Reflection
Maria Roussou
National and Kapodistrian University
of Athens
Athens, Greece
mroussou@di.uoa.gr
Sara Perry
University of York
York, UK
sara.perry@york.ac.uk
Akrivi Katifori
Athena Research & Innovation Center
Athens, Greece
vivi@di.uoa.gr
Stavros Vassos
Helvia Technologies
Athens, Greece
stavros@helvia.io
Angeliki Tzouganatou
University of Hamburg
Hamburg, Germany
angeliki.tzouganatou@uni-hamburg.de
Sierra McKinney
University of York
York, UK
slm589@york.ac.uk
ABSTRACT
Can a chatbot enable us to change our conceptions, to be critically reflective? To what extent can interaction with a technologically "minimal" medium such as a chatbot evoke emotional engagement in ways that can challenge us to act on the world? In this paper, we discuss the design of a provocative bot, a "bot of conviction", aimed at triggering conversations on complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, soliciting specific actions from the user it converses with. We instantiate our design with a use case in the cultural sector, specifically a Neolithic archaeological site that acts as a stage of conversation on such hard themes. Our larger contributions include an interaction framework for bots of conviction, insights gained from an iterative process of participatory design and evaluation, and a vision for bot interaction mechanisms that can apply to the HCI community more widely.
CCS CONCEPTS
• Human-centered computing → Interaction design theory, concepts and paradigms; Interaction techniques.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI 2019, May 4–9, 2019, Glasgow, Scotland UK
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-5970-2/19/05...$15.00
https://doi.org/10.1145/3290605.3300857
KEYWORDS
Chatbots; conversational agents; UX design; provocative interaction; emotional engagement; cultural informatics
ACM Reference Format:
Maria Roussou, Sara Perry, Akrivi Katifori, Stavros Vassos, Angeliki Tzouganatou, and Sierra McKinney. 2019. Transformation through Provocation?: Designing a 'Bot of Conviction' to Challenge Conceptions and Evoke Critical Reflection. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3290605.3300857
1 INTRODUCTION
The study and design of concepts, metaphors, practices, and evaluation methods in User Experience (UX) has been the steady endeavor of researchers and practitioners working in the field of human computer interaction (HCI) for a number of years now. An increasing emphasis in UX is given to the affective dimension, for example the design of emotive, hedonic [31], enchanting [29], empathic [54] or critically reflective [39] [7] [3] [26] approaches to interaction between humans and the digital world. Within this landscape, we have witnessed a surge of different interactive systems in various fields (cultural heritage, tourism, education, e-commerce, etc.) that rely on detecting the human user's emotional state and responding to it appropriately.
This 'turn' to affect [52] in the design of experiences, interfaces, and interaction methods has, however, been primarily manifested in systems that attempt to capture users' emotional states and offer, in return, a relevant response. Rarely is the user's digitally mediated emotional engagement with the content regarded as an opportunity to trigger a deeper connection, to critically reflect on the issues at stake, to challenge and provoke a call to action.
CHI 2019 Paper
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK
Paper 627
Page 1
Provoking this kind of "conversation" wherein the human participant can be challenged into thinking about what their principles or assumptions actually mean and, subsequently, act on them to transform their experience, is at the core of the work we propose in this paper. Based on an affective practices model of emotional engagement [53] and inspired by both Graham's [13] call for digital media that are able "to move us, to inspire us, to challenge us," and his reference to Sample's concept of "bots of conviction" [36], we engaged in designing a conversational agent (CA), or chatbot. Its aim is to evoke its user's emotional engagement with complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, solicit specific actions from the user it converses with.
We chose to explore the design of a chatbot because it is a "minimal" digital medium, it is direct and simple to use, and it is playful. But how can we design conversational interaction with a chatbot in ways that can trigger critical reflection? To what extent can interaction with such a technologically "minimal" medium bring out deeper emotions that can challenge us to act on the world?
In this paper, we introduce an interactional pattern that, we argue, can ignite a dialogue between a participant and a bot, aiming ultimately to transform the participant's conceptions. We start by defining key concepts related to our goals of emotional engagement, provocation and transformation. We then review the variety of chatbots used today, with particular emphasis on chatbots used in the cultural sector, as this is where our use case is situated. Next, we describe the iterative process of designing a Bot of Conviction (BoC), which follows a carefully planned procedure of content and interaction design and development, executed through formative evaluation. Section 5 demonstrates how we apply our pattern to the design of a chatbot for a specific archaeological site. Finally, the paper concludes by discussing our pattern, its limitations and its potential to fulfill the goal of igniting users' transformation through a call for action.
2 BACKGROUND
Chatbots and the Post-app Internet
The literature on conversational agents, intelligent virtual humans, virtual assistants, and chatbots is extensive. Whilst not exactly the same [51], these terms are often used interchangeably to denote systems that engage the user, to a greater or lesser degree, in natural language-like conversation (spoken or written) with a digital entity.
Chatbots have been touted as advantageous tools that can facilitate communication, provide easier access to information, and combat digital divides [9]. They offer novel, immediate engagement mechanisms and, in light of the popularity of texting, they can attract a younger demographic in multimodal ways [49]. In addition, chatbots can operate on both a browser and a mobile phone, offering a solution to the challenge of app installation and overload [9].
Chatbots serve a broad range of purposes [32], with the most common application being that of a first-level help desk or service chatbot that can recommend responses to low-level customer queries. These chatbots lower the threshold for people to ask for information, work well on simple issues, and provide a more amiable and personable style of information delivery. As customer service chatbots become commonplace, the field is now turning to advancing the creation of agents that are able to build relationships with their human conversational partners [28] as well as virtual humans that can converse with the user in more emotive, persuasive, and provocative ways [38] [40]. At the same time, conceptual and ethical issues are informing the design of guidelines for bots [21] [37].
Despite the aforementioned attempts, in the majority of conversational agents, the typical form of interaction is an independent single-turn exchange: the user asks, the chatbot responds, and this usually completes the interaction for the particular question/topic.
Chatbots in the Cultural Sector
Conversational interfaces are increasingly espoused by cultural organizations within their digital strategies as means to attract new audiences and extend the museums' physical location. They are regularly proclaimed to offer novel engagement mechanisms that can empower visitors of museums and broaden the ways that cultural content is perceived. Many current cultural heritage chatbot initiatives operate within a site's physical space, allowing for varying levels of interactivity. The chatbots' most usual in situ purpose is serving as exhibition guides [20] [50] [6] and helping visitors in organizing their visit [15]. These bots resemble customer service bots, as their primary aim is to offer information to the visitor.
In more sophisticated examples, visitors input a keyword, color or even an emotion, and the chatbot will respond with a selection of related artworks [1]; or interact with embodied virtual agents in the informal education space (e.g., Ada and Grace [46], Max [19], Coach Mike [22], Alan Turing's Avatar [12]), either by spoken natural language or via typed text. Some embed gamification elements into their touring functionality [19] [48], challenging users with exploratory clues or quizzes that manifest in rewards, including virtual currency that has cash value in museum gift shops.
However, despite these examples, the use of conversational agents by museums and the heritage sector is still quite limited. Most are purely info-delivery oriented and object- or exhibit-centered, providing little opportunity for meaningful interactivity, creative expression, or critical engagement. In response to these limitations, we seek to extend
the "traditional" canon of the museum/heritage bot into a challenging, provocative engine of social commentary and self-reflection.
3 DEFINING EMOTIONAL ENGAGEMENT,
PROVOCATION, AND TRANSFORMATION
Contemporary definitions of emotion and affect (e.g. see [52] and [53]) increasingly aim to depart from the psychobiological "basic emotions" approaches, which reduce affect to simplistic innate human universals and do not account for the multiplicity of factors that mix in any given individual's affective practices. Rather, in recent and more complex conceptualizations of the term, emotion is framed as "embodied meaning-making" [52, p.4], and focus is put on the actions that are generated through such embodied work (actions that may be small or large, personally-oriented or externally-oriented, visible or invisible, etc.). Recognizing that emotion has action embedded into it allows us to attend to the affective practices that characterize meaning-making: the actions that feed into and flow out from it. Therefore, rather than try to crudely measure emotion as biological response, we turn our attention instead to the acts (or lack thereof) that are generated through people's practices with our Bot of Conviction.
The importance of such a flexible and act-centered understanding of emotion cannot be overstated. It permits us to operate in cross-cultural contexts (as our concern with analyzing resulting actions means that we do not need to rely on typical English-language emotion descriptors to define affective experiences) and to embrace the true complexity of emotive experiences. It also appreciates the intentionality and control, but also the historical motivations and personal relationships, that can be at the core of such experiences. In line with this conception of emotion, we look for repetitions, apparent inconsistencies and unique occurrences in actions (e.g. spoken or written words, non-discursive oral expressions, bodily movements and gestural reactions, interactions with human and non-human things, other proxemics, drawings or other visual inscriptions, etc.) that emerge in people's social practices. In terms of our BoC, this means emotional engagement is demonstrated via interaction with the bot itself and is inherent in the very act of chatting to it. Rather than designing the bot to trigger simplistic "basic emotions", we create conditions inside the chats with the purpose of soliciting specific intended actions from participants.
As we see it, to respond at all to the bot is to affectively engage with it (as a user could easily just walk away). Such basic response actions suggest the efficacy of the bot's conditions in provoking a reply. Provocation, here, is defined in simple terms: acting on others to elicit a particular reciprocal action. At the most superficial level, the bot acts on the user, engaging them sufficiently to complete a full chat. Preferably, however, this provocation works more deeply, evidenced through analysis of the types of inputs generated by users. Here, deeper provocation entails users reconsidering their points of view, demonstrating forms of conscious reflection or alternative perspective-taking in their chats. Moreover, at its deepest level, as we define it, provocation leads to transformation: users take action beyond the chat itself, for instance telling others about their reflections, or integrating ideas generated through engagement with the bot into their own everyday meaning-making practices. Here transformation is loosely aligned with Hennes' [14, p.114] concern that "The difference between the activity of the beginning and that of the end is a kind of transformational growth that affects experience in the future". In this way, our definition goes further than some in the heritage sector [43, p.104] who see transformation as "simply instances when visitors' sense of self and community [a]re destabilised". Rather, we are interested in effecting genuine change in individuals which is evidenced, following Soren's model of transformational museum experiences [44, p.248], in behaviours which are "more inclusive, discriminating, emotionally capable of change, and reflective".
In creating provocation, it is necessary to consider the ethical implications of the work on users' wellbeing. However, drawing inspiration from Katrikh [17] and Gargett [11], it is our position that, in order to develop a transformative experience as outlined above, efforts should not lie in minimizing discomfort but rather in generating the opportunity to "promote dialogue, process emotion, and ultimately to allow visitors to reach a place of equilibrium" [17, p.7].
4 DESIGNING BOTS OF CONVICTION
We have turned to the concept of Bots of Conviction to explore the potential for more open conversational agents that focus on asking (not necessarily answering) questions, and provoking critical thought. In particular, our motivation lies in exploring "hard" themes that are emotive and controversial in nature, such as life and death, power, wealth, social structure and hierarchy (or lack thereof), gender equality, etc. Critically, these topics are relevant across time and space, meaning they shaped people's lives thousands of years ago in myriad ways and they continue to evolve diachronically, remaining relevant to humans today. Cultural heritage sits at the center of debates about identity, politics, sociality and economics, regularly appropriated by interested parties in the present to justify past actions and to lay claims to the future. Not often, however, do heritage sites foster environments where such debates are purposefully and constructively facilitated, such that the resulting dialogue leads to positive social change [11] [33]. The chatbot presents what is arguably the perfect opportunity to experiment with discussion-based models of affective engagement. This, then, is the "space" our work aspires to occupy, the nexus being to enable genuinely critical reflection, respect, care, and ethics in dialogue.
Seeking inspiration, we looked towards Sample's definition of bots of conviction [36]. Otherwise known as "protest bots", these computer programs work to reveal "the injustice and inequality of the world and imagin[e] alternatives"; they ask questions about "how, when, who and why"; and they are typified by five key traits: topicality, uncanniness, accumulation, oppositionality and groundedness in data. Unlike the "typical" BoC, however, which is usually Twitter-based, generative and broadcast oriented (in the sense that it is not intended to foster a two-way conversational flow), we sought to develop something more amenable to the usual cultural heritage context. Indeed, by Sample's logic, BoCs are completely 'automatic' in nature and "do not offer solutions. Instead they create messy moments..." [36]. Yet the museums sector is bound by ethical codes which demand a basic level of accountability to and responsibility for their audiences. The evidence also indicates that people may purposefully visit museums to change their minds, and that such change derives from more than one-way information delivery [33]. Moreover, this is a sector wherein practitioners are often underfunded, understaffed, with variable digital expertise and sometimes little capacity to maintain or manage the fallout of Artificial Intelligence or uncontrolled generativity. So, while we borrow elements from Sample's original definition of the BoC (specifically the concepts of uncanniness and oppositionality), we intentionally modify it to account for the cultural heritage context and its associated duties of care.
The Context
This work is situated within a larger project set to explore the potential emotive connections of visitors to museums and archaeological sites, and how digital tools can enhance these sites' relevance to people's lives today [16] [35]. The archaeological site of Çatalhöyük, a 9000-year-old Neolithic settlement in Turkey, has been chosen as an ideal use case for applying the Bot of Conviction.
More than 1000 specialists from around the world have been excavating Çatalhöyük for 60 years, yet only a small fraction of the settlement (7%) has been unearthed. Since its inscription as a UNESCO World Heritage site in 2012, there has been an increase in visitor numbers despite its remote location in the center of Turkey. What visitors encounter at the site, however, is essentially an excavation, where features are difficult to see and significance is hard to understand or relate to. The interpretation of the archaeological record remains limited on-site, especially if the audience lacks archaeological literacy. Nevertheless, interest in the site is substantial; nearly 10,000 Facebook users, most of whom will never visit the actual physical site, follow the Çatalhöyük excavation research project.
What makes Çatalhöyük unique and relevant to our application is that, according to evidence, it was occupied by up to 8000 people at once without obvious hierarchy (i.e., egalitarian socio-economic organization). No houses with distinctive features (belonging to royalty or religious hierarchy, for example) have yet been found. There is also no evidence of social distinction based on gender, with men and women seeming to have equal social status. Residents repeatedly built and rebuilt their homes on the same spot, creating a mound more than 21 meters high over 1000 years. Exquisite sculptural art and wall paintings, street-less neighborhoods, and burials of the dead beneath floors of homes are further reasons to choose Çatalhöyük as the stage to explore digital forms of interaction with topics that can provoke the people of today.
The first step towards the realization of our BoC was the design and development of a "traditional" infobot, based on key themes and topics underlying the cultural site of interest. This bot would serve as a baseline for understanding the added value of the BoC and involved a lengthy design process. It entailed: i) selecting and curating content to construct the chatbot's knowledge base; ii) designing the bot's form and interaction mechanisms, and programming its level of "pickiness" when responding to user input; and iii) designing the conversational aspects of the bot in a way that could encourage the kind of action we sought. We present each of these steps in the design process below, elaborating on (iii).
Curating the Content
The content (themes and topics) of conversation is of utmost importance in a chatbot that aspires to provoke its audience. We followed an inclusive approach to content selection and curation that sought the involvement of both content domain experts and end-users. We recruited domain experts and held live chat sessions with end-users to first create basic content for the bot. From there we elaborated the bot with more complex reflective and emotional components, to tie the topics to the deeper underlying message or overarching themes.
User-led content curation. As our first use case, presented in detail in Section 5, pertained to the aforementioned Stone Age archaeological site of Çatalhöyük, and as we aimed to create a user-centered experience, it was critical to begin by reviewing the Facebook page of the Çatalhöyük Research Project, from its profile creation in 2010 to April 2017 (251 posts). By researching Facebook followers' reactions, comments and interests, we had the opportunity to create relevant content tailored to the 'needs' of the user. Thus, we researched the types of posts and the types of comments on posts. This thematic analysis focused on grouping and selecting the topics that people seemed to be more engaged with, as evidenced by their posing questions below posts or by liking, commenting or sharing [49]. In other words, the selection process focused on relevance. People mostly offered comments regarding the following topics: burials, wall paintings, archaeological process, the site's landscape and importance, chronology, plastering and figurines. This first phase of selection led to an early design of the chatbot in terms of the topics it would be conversing about.
Live chat sessions. To further develop the bot's content, this early selection of content was augmented, refined and tested through a series of live chat sessions with the public. The sessions were held on the topics that were identified by the thematic analysis and then validated by domain experts, allowing us to construct the bot's factual database. Specifically, five live chat sessions were conducted on the site's Facebook page, between June and October of 2017. Domain experts on each topic were recruited and assigned to a session, with sessions covering the topics of i) burials, ii) coprolites and latrines (poop and toilets!), iii) the archaeological process and excavations, iv) wall paintings, and v) wall reliefs and plastering. The sessions were advertised through the site's social media channels as well as disseminated to the public via marketing-like posts through the contact lists of our project team.
Each session began with a post at the appointed starting time announcing its topic. Followers were then asked to pose their question below the post or to send a private message. An average of 17 users connected actively to the live events across all sessions. Although it was stated in the advertisements that the live chat sessions would last one hour, users continued to post questions up to ten hours after some of the sessions.
Designing Interaction
This next step involved integrating the content from the thematic analysis and the live chat sessions into the bot, and embellishing it with richer media and a more evocative form. This process was, at its core, a design process, both in terms of the design of conversation and the creation of the interaction mechanisms and visual elements that make up the chatbot's "character". It revealed a set of design challenges in different aspects of the bot and highlighted the need for the identification of guidelines and best practices in the field. Design decisions that had to be made included:
Language style. Informality is a defining trait of chatbot personality, but how chatty, witty or funny should a bot be, especially if it is intended as a Bot of Conviction, considering the diverse public it targets?
Casual vs non-casual content. Should the bot content be mostly non-casual, i.e. mostly information that experts have prepared about the topic at hand? Or more casual, including responses to everyday questions, e.g. about the weather or the user's mood that day?
Canned reply controls. What is the right balance between buttons and quick replies ("canned reply controls") and free text input? In particular, should users click on predefined answers on buttons to make selections or be able to ask free-form questions?
Alternative responses. How many alternative responses to the same questions are sufficient, to allow the bot to reply in slightly different ways, so that responses would not seem formulaic should the user repeat the chat?
Picky vs non-picky. How picky should the chatbot's response matching be? When the chatbot is "picky", it only matches a response when there is high confidence in similarity, so it answers "I do not know" in most cases when it gets a question it cannot recognize. A non-picky bot also matches a response when there is low confidence in similarity, so when asked something it will return an answer even though the matching score is very low.
Use of multimedia. What is the best use of images, links and emojis within the bot?
Making the bot personable. How can we develop a few key elements that will make the bot personable and the experience personalized and natural for the user? E.g., the bot addressing the users with their (Facebook) names when chatting.
Following the study described below, the infobot parameter configuration was fixed to the most effective variant (witty / casual / with buttons for user replies but free text as well / alternative bot responses / non-picky / use of images / personal with referring to user's first name).
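The "pickiness" parameter can be thought of as a confidence threshold on the bot's response-matching score. The following Python sketch illustrates this idea only; the knowledge-base entries, similarity measure and threshold values are our illustrative assumptions, not the authors' implementation:

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base: canned answers keyed by known questions.
KNOWLEDGE_BASE = {
    "where did people bury their dead": "Beneath the floors of their homes.",
    "what are the wall paintings about": "Hunting scenes, geometric motifs and animals.",
}

def match_response(user_input: str, picky: bool) -> str:
    """Return the best-matching answer, or a fallback if confidence is too low.

    A 'picky' bot demands a high similarity score before answering;
    a 'non-picky' bot answers even on weak matches.
    """
    threshold = 0.75 if picky else 0.3  # illustrative threshold values
    best_score, best_answer = 0.0, None
    for question, answer in KNOWLEDGE_BASE.items():
        score = SequenceMatcher(None, user_input.lower(), question).ratio()
        if score > best_score:
            best_score, best_answer = score, answer
    if best_answer is not None and best_score >= threshold:
        return best_answer
    return "I do not know."

# A picky bot rejects a loosely related question; a non-picky one answers anyway.
print(match_response("burials?", picky=True))
print(match_response("where did they bury the dead", picky=False))
```

The same matching machinery serves both variants; only the threshold changes, which is why pickiness could be tuned as a single configuration parameter in the formative study.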
Formative evaluation. To test our design decisions as well as the early prototype of the bot, we conducted a study with 27 participants (14 men, 13 women, aged between 21 and 57), located in different countries and with different backgrounds (in terms of expertise in relation to the content). Specifically, we recruited researchers from the Çatalhöyük Research Project team (2 users: 1 man, 1 woman), followers of the Çatalhöyük Research Project Facebook page (5 users: 2 men, 3 women), and people who had no prior knowledge about the site and hence were completely unfamiliar with its stories and significance (20 users: 11 men, 9 women). Some participants were involved in the live sessions but, otherwise, had not interacted with the chatbot before.
The evaluation sessions were carried out with each person separately, either face-to-face or remotely. After completing a consent form, participants were instructed to use Messenger to interact with the chatbot in order to learn more about the UNESCO World Heritage archaeological site of Çatalhöyük. They were advised to converse freely with the bot for approximately 10 minutes and then asked to undertake specific tasks, namely respond to questions that the bot had answers to (e.g. "Where did the people of Çatalhöyük bury their dead?") and did not have answers to (e.g. "Did the people of Çatalhöyük play?"). The session was followed by a semi-structured interview (conducted via Skype in the cases of remote participants) and an online questionnaire. All interactions between bot and users were logged and timestamped.
The results of this formative study are briefly outlined here, in relation to our objectives. Firstly, with regards to users' engagement with the bot, participants reported having found it interesting and spent an average of 16.08 minutes chatting with it. The majority of users considered the images very helpful and enjoyed the anthropomorphic persona of the chatbot. This is consistent with the findings of other researchers who note that playful interactions are a key aspect of the adoption of CAs [23] [24] and that people react socially to virtual characters, even if they know that they are conversing with a machine [51].
However, as soon as the bot was faced with questions that it could not understand and thus could not reply to appropriately, users reported losing interest. It seems that users expect what Cassell refers to as "interactional intelligence" [4], the "social smarts" that would enable engagement [5] [24]. Instead, the bot offered mostly "propositional intelligence", i.e. informational content upon request, like conversing with a knowledge domain expert. However, chatting is generally associated not just with information exchange, but also the exchange of perspectives and opinions. Therefore, the next natural step was to equip the bot with the possibility to hold a meaningful dialogue with its users, challenging them to approach the presented topic through a completely new perspective.
Designing for Provocation
The results from the formative evaluation of the chatbot's content and form informed our next step, which was to experiment with the insertion of patterns of provocation that play with the idea of the Bot of Conviction. To create our BoC we developed a conversational pattern that enables the bot to initiate a kind of "Socratic dialogue", where the chatbot embarks on a soft interrogation, asking questions to find out more about the other person's beliefs and ideas, while still maintaining control over the structure and direction of the conversation.
This pattern resembles a figure-8 (Figure 1). It entails the bot making a declaration designed to commit the user to a point of view that they may or may not agree with. It begins with the bot either asking a question or making a bold statement. This prompts the user to respond, either positively or negatively or neither, and continue further into the conversation. In other words, a dialogue between the user and the bot plays out based on one of three types of response (yes, no, ambiguous). After a few exchanges, the users will be questioned about their response to the topic, the center of the formation, before entering the second section, which concludes with a summarizing statement. This statement is one of intent/conviction by the bot, meant to affirm and transform the point of view of the user, thereafter pushing them back out into the traditional/standard experience.

Figure 1: A figure-8 design pattern for a Bot of Conviction.

The structure engages the user by reversing the roles of the traditional infobot, with the bot asking questions first, thus provoking users to generate the answers, all while maintaining a guided and controlled exchange. According to the pattern, the user's answers place them on distinct paths. The bot's responses are designed to be sensible for a variety of user responses, and consistently incorporate questions to provoke a user response (Figure 2).
The pattern in a nutshell:
CHI 2019 Paper
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK
Paper 627
Page 6
(1) Bot makes a declarative value judgment - a provocation.
(2) User responds positively (variations of yes), negatively (variations of no), or ambiguously (everything else).
(3) Exchange of ideas: 2-3 interactions based on whether the user is categorized as positive, negative or ambiguous. In all cases, the user's response to the bot's question should be one of these three categories.
(4) Assessment points. Partway through the dialogue, the bot tests the user's conviction.
(5) Final statement of intent/conviction - culmination of the provocation.
Figure 2: The BoC design pattern “algorithm”.
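To make the five steps concrete, they can be sketched as a minimal rule-based loop. The sketch below is illustrative only: the function names and keyword lists are hypothetical stand-ins, not our actual implementation.

```python
# Illustrative sketch of the figure-8 pattern as a rule-based loop.
# Names and keyword lists are hypothetical, not the actual ChatCat code.

def classify(reply):
    """Naively map any free-text reply onto the three pattern categories.
    Everything that is not a recognizable yes/no variant is 'ambiguous'."""
    text = reply.lower()
    if any(w in text for w in ("no", "never", "not", "n't")):
        return "negative"
    if any(w in text for w in ("yes", "sure", "i have", "i would")):
        return "positive"
    return "ambiguous"

def run_episode(script, ask):
    """Play one figure-8 episode. `script` holds the per-category turns;
    `ask` shows a bot utterance and returns the user's reply."""
    reply = ask(script["provocation"])        # (1) declarative value judgment
    category = classify(reply)                # (2) positive / negative / ambiguous
    for bot_turn in script[category]:         # (3) 2-3 guided exchanges
        reply = ask(bot_turn)
    ask(script["assessment"])                 # (4) bot tests the user's conviction
    return script["finale"]                   # (5) statement of conviction
```

Because every reply falls into one of the three categories, the dialogue cannot leave the pattern, which is what keeps the exchange guided and controlled.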
Entryway into the BoC. The BoC pattern is blended into the more
traditional informational bot rather than being a separate program.
Users enter into the more challenging/self-reflective dialogue of the
BoC via one of the following theme-oriented means:
• In relation to specific trigger words linked to themes (e.g., burial, goddess). In other words, the user enters one of the words and the conversation based on the pattern is triggered.
• After a number of interactions from the user on a particular thematic topic (where interaction is a single instance of user-bot exchange) or typing/selecting a button with the word "Intrigue me".
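Sketched in code, the entryway is a single check per user turn; the trigger words, threshold and function name below are illustrative assumptions, not our production values.

```python
# Hypothetical entryway check: when should the infobot hand over
# to the BoC pattern? Trigger words and threshold are illustrative.

TRIGGER_WORDS = {"burial", "goddess"}   # theme-linked trigger words
INTERACTION_THRESHOLD = 5               # exchanges on a theme before the BoC kicks in

def should_enter_boc(message, theme_interactions):
    """Return True if this user turn should switch the informational
    bot into the figure-8 BoC dialogue."""
    text = message.lower()
    if "intrigue me" in text:           # button press or typed phrase
        return True
    if any(word in text.split() for word in TRIGGER_WORDS):
        return True
    # sustained interest in one thematic topic
    return theme_interactions >= INTERACTION_THRESHOLD
```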
5 THE DEVELOPMENT OF CHATÇAT
In the previous sections we outlined why we consider provo-
cation to be an important underlying approach to design-
ing interaction with conversational agents. We went on to
present the iterative design of such provocative agents, or
Bots of Conviction, which encompasses a pattern of provo-
cation that culminates in a statement of conviction. In this
section we illustrate our design method through an instanti-
ation, a Facebook Messenger-based conversational interface
for the archaeological site of Çatalhöyük, named ChatÇat
(Figure 3), that aims to inform about the site and, more im-
portantly, to compel critical reflection about the past and
action in the present amongst its users.
The rationale behind designing, creating and developing
a chatbot for the particular site of Çatalhöyük is twofold:
to address the challenges that the site faces by offering a
digital experience of it; and to leverage the many threads
of conversation that the site has to offer and that can be
developed around it.
Applying the Design Pattern
From a multitude of topics, we selected four themes to apply
the BoC pattern to. In keeping with our pattern philosophy,
each “episode” begins with an opening question conceived
to provoke the user’s reaction:
• Death: Would you bury someone you care about under your bed? Or: Surely, you have people buried under your floors?
• Wealth: Do you live in a community where there are a few people with lots of money and lots of people with little or no money?
• Equality: Does it surprise you that the evidence from Çatalhöyük suggests men and women lived very similar lives and things were more or less equal between them?
• Privacy: Çatalhöyük's homes had no windows, just one main room, and an entrance from the roof! It's perfect, don't you think?
In the example below, we apply the BoC pattern and
present the conversation episode on the first theme, which is
linked to the interpretation of the evidence of burials found at
Çatalhöyük. Figure 4 depicts an excerpt of an actual conversa-
tion, using a variation of the bot’s questions and statements.
The tone of the bot is kept informal, incorporating also a bit
of witty ChatÇat personality, images and emojis.
Figure 3: The branding of the ChatÇat bot.
• Intro - Provocation (A):
Bot: Surely, you have people buried under your floors?
User: [yes, no, never, I have, I haven't, I would, I wouldn't, not, no way, sure, surely not, huh?, etc.]
• If user responds positively:
Bot (Yes-1): I thought so! Do you have lots of people buried in your house?
User: [Any response, e.g., No, Yes, Just the one, OMG thousands...]
Bot (Yes-2): Do you plan on being buried in the house?
User: [Any response]
Bot (Testing conviction): I'd like to be kept in a house. I think it shows that people cared about you. Don't you think so?
[continue to Finale]
• If user responds negatively:
Bot (No-1): Well, where do you bury them then?
User: [cemetery, graveyard, cremated, cremated at home... If users mention any element of home, house, etc. the thread continues with the final positive response, in this case Yes-2]
Bot (No-2): Why would you put them so far away? Don't you want them close to you? Where you can be connected?
User: [Any response]
[continue to Finale]
• If user responds ambiguously:
Bot: Seriously, don't you bury people in your houses?
User: [Positive (see positive stream), Negative (see negative stream), Ambiguous]
• If user remains ambiguous:
Bot: I don't get what's so confusing. We buried people in our houses to show we cared. [continue to Finale]
• Finale (Statement of conviction):
Bot: It is easy to forget when burial places seem so far away but people live and work above the dead every day. At Çatalhöyük we buried our loved ones in places where they could remain a part of our daily lives. It is through our close relationship with the dead that we stayed connected to our past.
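Such an episode can be held entirely in data, which is what makes a BoC authorable with little more than a text editor. The encoding below is illustrative, not our actual ChatÇat source; the one rule beyond the three streams is the reroute of a negative reply that mentions the home.

```python
# The death episode as plain data plus one reroute rule (illustrative
# encoding; utterances abridged from the episode above).

DEATH_EPISODE = {
    "provocation": "Surely, you have people buried under your floors?",
    "positive": ["I thought so! Do you have lots of people buried in your house?",
                 "Do you plan on being buried in the house?"],
    "negative": ["Well, where do you bury them then?",
                 "Why would you put them so far away?"],
    "ambiguous": ["Seriously, don't you bury people in your houses?"],
    "assessment": "I'd like to be kept in a house. Don't you think so?",
    "finale": "At Çatalhöyük we buried our loved ones in places where "
              "they could remain a part of our daily lives.",
}

def reroute_negative(reply):
    """If a 'negative' reply nonetheless mentions the home, continue
    with the final positive question (Yes-2) instead."""
    if any(w in reply.lower() for w in ("home", "house")):
        return DEATH_EPISODE["positive"][-1]
    return None
```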
6 IMPLEMENTATION
Since our goal was not to advance bot technology but to
explore and design a form of interaction that provokes users
to step out of their comfort zone, we deliberately decided
against developing our BoC with sophisticated AI technology.
Instead, we chose to implement a rule-based chatbot. Our
primary reason was the need to have control over the UX,
to construct a guided conversational approach. A rule-based
system was deemed sufficient to test if such an approach can
actually evoke an emotional reaction.
We chose Facebook's Messenger platform for two main reasons.
Firstly, it is the largest and fastest growing messaging platform,
with a wide user base [45]; secondly, it is easy to author due to
the broader set of tools available to developers.
Each theme, as implemented, requires an interaction of
approximately 3 to 4 minutes with the bot. Nevertheless,
the pattern can be extended to include multiple figure-8
interactions in sequence in a conversational episode, if longer
engagements are desired.
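Sequencing works by running episodes back to back; in this sketch `run_one` stands for any function that plays a single figure-8 and returns its statement of conviction (names illustrative, not our actual code):

```python
# Sketch of chaining multiple figure-8 interactions in sequence.
# `run_one` plays one figure-8 and returns its statement of conviction.

def run_sequence(scripts, run_one, ask):
    """Play each themed script in turn, delivering every statement
    of conviction before moving on to the next figure-8."""
    for script in scripts:
        conviction = run_one(script, ask)
        ask(conviction)
```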
7 REVISITING THE BOT OF CONVICTION
Six months after the first formative studies that were carried
out to refine the design of the interaction with the earlier,
primarily informational bot, we contacted almost all of the
27 participants to see whether they were willing and able to
spend some time testing the BoC. Nine (2 men and 7 women)
of them responded positively and were either sent the link
to ChatÇat or were observed using it in person.
Participants were encouraged to chat freely with the bot
in order to refresh their memory, but only for a few minutes,
since they were presumably already familiar with it. They
were then instructed to click on the button or type “Intrigue
me”, for the BoC to kick in. After their experience, they were
asked to respond to a questionnaire. We also asked users if
they would be willing to be interviewed about their experi-
ence so that we could follow up directly with self-selected
individuals whose logs we studied and whose interactions
surprised or otherwise interested us. We extended the
questionnaire used in our previous formative evaluation with five
new questions focusing on the nature of the dialogue and
larger conceptual issues regarding perspective-taking and
challenging of users’ assumptions.
Figure 4: Excerpt from a conversation with ChatÇat.
To date, we have managed to obtain a full set of data (logs,
questionnaire and interview responses) from 5 participants.
In terms of usability, no major conversational breakdown
occurred, confirming that the pattern is sound and allowing
us to focus on responses related to perspective-taking.
The initial, direct “provocation” of the bot seemed to ef-
fectively engage the attention of the users and subsequently
reinforce the point with follow-up questions. “I got a mini-
shock, surprised I’d say, with this question coming out of the
blue ‘How would you feel if your grandmother was buried un-
der your bed?’” (Steve). Users were pleasantly surprised and
seemed to genuinely reflect on how the "radical" opinions of
the bot actually revealed current preconceptions about the
past while also demonstrating that the same basic human
needs and emotions continue to drive our own beliefs and
practices today.
The bot as a conversation partner seems to take the ini-
tiative and challenge the user, trying to promote its point of
view and to make the user reflect, through a series of
questions: "I liked that the bot was asking me questions that were
a bit provocative and caught my interest” (Vicky). Having
the bot asking the questions in this way appears to work
very much in favor of promoting the illusion that the user is
talking to an actual intelligent agent, a somewhat strongly
opinionated one perhaps, but still intelligent: “I really liked
it, I didn’t expect to like it because I’m usually too cynical,
like ‘Oh it’s just a machine’. But I think we became friends
with Chatcat” (Irene). The user is asked to respond and the
responses are seemingly taken into account, but the
chatbot, as a true conversational partner, at times will seem to
care more about expressing its own opinion than listening
to the opinion of the user. Thus, the bot transforms from
a mere neutral information provider to a rather stubborn
conversation partner. This subtle transfer of control of the
dialogue from the user to the chatbot seemingly works to
foster respect on the part of the user towards the bot, thus
promoting a deeper mental and emotional engagement in
the dialogue.
8 DISCUSSION
From our perspective, the approach of many chatbots used today is
not necessarily productive for facilitating conversation, let alone
genuine and extended dialogue [10]. In the context in which we are
working (cultural heritage), without mechanisms to foster reflective
debate and action, bots are little more than simplistic customer
service lines or relentless information providers with the potential
to worsen cultural divides and reinforce problematic, but prevalent,
contemporary practices of nationalistic appropriation, racial bias
[37], reactionary populism, and imperialism. Substantial audience
research in the cultural sector clearly demonstrates that heritage
sites are places where people often come purposefully to change their
minds and are open to transformation [25], suggesting the potential
for chatbots to provoke such action. However, beyond heritage, the
same concerns for fostering respectful dialogue and argumentation
leading to constructive social change in the world today are of
increasing urgency [41].
Our intention is not to design an affective bot that recognizes
users' emotional states and displays or responds directly to these.
Rather we aim to define simple means by which a bot can provoke
processes of critical reflection and action. In this sense, the work
described here focuses on designing, testing and refining
conversational patterns for interested practitioners to create
socially beneficial change through dialogue around topics of broad
public concern, thus contributing to making critical design in HCI
more approachable. Our work subscribes to the perspective-changing,
dialogical framing of the Bardzells' (re)definition of "critical
design" [2] and is informed by Bardzell et al.'s reflections on
designing for provocativeness [3]. In addition, our approach to
'critical reflection' loosely relates to the model for historical
empathy [8], where users are guided through activities designed to
facilitate historical contextualization, perspective-taking and
affective connection.
One of the positive aspects of our approach is that it guides the
conversation in such a way that it is not easy to derail. Detecting
the variations of a user's yes and no responses is relatively
straightforward, while classifying all other responses as ambiguous
makes the pattern error-proof to a great extent. The simplicity of
the implementation thus becomes one of its main strengths, especially
for the fields of heritage and education where many institutions do
not have the funding to acquire expensive solutions. Understanding
how the pattern works naturally leads to the design of a BoC, which
requires nothing more than access to a text editor to implement.
A conversation ‘episode’ comprises a user’s participation
in a chat with the BoC. Our intended actions are multi-tiered,
starting with the most basic: the user responds to the bot, and
ideally responds to the whole sequence of the chat, leading
to its conclusion, and ultimately to a new chat or continued
interaction with the bot's online offerings. Such actions, if
performed by the user, would suggest the efficacy of the bot's
conditions in provoking a reply.
The next tier entails the user demonstrating evidence of a
reconsideration of their original point of view through their
chat responses. Such actions, if evidenced in the inputted
text and through associated evaluation (e.g., interviews,
questionnaires) would suggest the efficacy of the bot's conditions
in provoking reflection or alternative perspective-taking.
From here, the next tier entails the user taking some form
of action beyond the episode itself, suggesting the efficacy
of the bot’s conditions in provoking transformation. In other
words, the user’s interactions with the bot lead to change
in their future ways of thinking, conversing with others or
acting on the world.
Evidently, each tier requires a different assessment frame, as the
most basic tier can be counted or quantified: 'yes - user responded -
chat continued'; 'no - user left chat'. The second tier requires
discursive analysis of chat text and associated qualitative data
collection (as we have done through interviews and surveys), wherein
patterns of responses can potentially be deduced with a focus on
looking for change in the user's point of view. The third tier
requires a more longitudinal evaluation approach, e.g. follow-up via
user report (survey, interview), which again may be analyzed for
patterns in behavior, and which must appreciate that tying human
behaviors directly back to the influences of the bot will be
challenging and necessarily open to interpretation, as with all
affective practice.
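The first tier, for instance, can be computed directly from the chat logs; the sketch below assumes hypothetical log fields (`replied`, `reached_finale`) rather than our actual log format.

```python
# Hypothetical first-tier metric over chat logs: did users reply,
# and did they follow the chat through to the finale?

def tier_one_metrics(logs):
    """`logs` is a list of per-episode records such as
    {"replied": True, "reached_finale": False}."""
    total = len(logs)
    replied = sum(1 for entry in logs if entry["replied"])
    completed = sum(1 for entry in logs if entry["reached_finale"])
    return {
        "response_rate": replied / total if total else 0.0,
        "completion_rate": completed / total if total else 0.0,
    }
```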
In its first incarnation, our BoC appears to confirm the
promise for rule-based chat patterns to provoke perspective-
taking and the challenging of user assumptions. Understand-
ing of its potential for transformational change now depends
on wider development and evaluation.
Limitations and Further Work
A number of limitations were encountered in the course
of this work, pointing to several possible future directions
that could be pursued on an empirical, methodological, and
practical level. Most importantly, in terms of evaluation, the
study relating to the BoC was limited with respect to the
number of participants and requires further attention to its
effectiveness in natural settings, longitudinally, and in terms
of the questions posed at the outset. Furthermore, the evalu-
ation instruments of post-experience self-reporting, through
questionnaires and semi-structured interviews, present limitations
[18]. Other researchers [47], including for other technologies
[27][42], have identified the limitations presented by
traditional methods when attempting to capture users’ inter-
action in unfolding, in-the-moment activities. We are already
exploring ways of embedding evaluation in the experience it-
self, e.g. by weaving it into Messenger in a relatively seamless
manner. Nevertheless, as the issues pertaining to evaluation
methods range beyond the scope of this paper, we have kept
them out of the discussion and plan to address them in sub-
sequent studies. Beyond evaluation, the BoC has potential as
a dialogue facilitator in multi-user contexts. We are already
exploring such potential in related work in informal educa-
tion contexts, with preliminary results showing the fostering
of historical empathy among middle school-aged users [30].
Contributions
UX and interaction designers increasingly have to wrestle
with the complex problem of designing digital encounters
that have relevance for users. In this space, we believe that
our contributions are threefold:
• Adding to the corpus of theoretical design considerations [37]
concerning interactions with conversational agents. If critical
design and reflection should be a core technology design outcome of
HCI [34][2][39], a key goal of this work is to contribute towards
creating the conditions in a user-to-bot interaction episode which
have the intention of soliciting specific intended actions from
participants.
• Proposing a design methodology that adopts a user-centered
participatory approach, to make chatbots and conversational agents
more relevant to their users.
• Ultimately contributing to the design of digital systems that
evoke meaningful interaction, envisioning a world where HCI can
become the impetus for personal transformation and social change.
We have presented the design of a human-bot conver-
sational experience that relies on a relatively simple set of
strategies to facilitate meaning making. Although instantiated in a
very specific cultural heritage setting, we believe that our
framework is replicable in other contexts and responds to an urgency
for fostering critical dialogue in the world today.
ACKNOWLEDGMENTS
This work is part of the EMOTIVE project, which has re-
ceived funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement
No. 727188. The authors would like to thank the EMOTIVE
team and Dr Vasilis Vlachokyriakos for their comments and
insightful conversations on drafts of this paper. We also wish
to thank the users that participated in our formative studies,
and our anonymous reviewers.
REFERENCES
[1]
Auckland Art Gallery. 2018. Auckland Art Gallery's new chatbot:
artificial intelligence. http://www.scoop.co.nz/stories/CU1805/S00203/
auckland-art-gallerys-new-chatbot-artificial-intelligence.htm Last
accessed 31 December 2018.
[2]
Jeffrey Bardzell and Shaowen Bardzell. 2013. What is "critical" about
critical design?. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems - CHI ’13. ACM Press, New York, New
York, USA, 3297. https://doi.org/10.1145/2470654.2466451
[3]
Shaowen Bardzell, Jeffrey Bardzell, Jodi Forlizzi, John Zimmerman, and
John Antanitis. 2012. Critical design and critical theory: the challenge
of designing for provocation. In Proceedings of the Designing Interactive
Systems Conference on - DIS ’12. ACM Press, New York, New York, USA,
288. https://doi.org/10.1145/2317956.2318001
[4]
Justine Cassell. 2001. Embodied conversational agents: representation
and intelligence in user interfaces. AI Magazine 22, 4 (2001), 67–83.
https://doi.org/10.1609/aimag.v22i4.1593
[5]
Justine Cassell, Tim Bickmore, Lee Campbell, Hannes Vihjalmsson, and
Hao Yan. 2000. Human conversation as a system framework: designing
embodied conversational agents. In Embodied conversational agents.
MIT Press, Cambridge, MA, USA, Chapter 2, 29–62.
[6]
Dot - Akron Art Museum. 2018. Dot - Akron Art Mu-
seum guide. https://akronartmuseum.org/calendar/
connect-with-dot-launch-party/12829 Last accessed 31 December
2018.
[7]
Anthony Dunne and Fiona Raby. 2001. Design Noir: The Secret Life of
Electronic Objects. Birkhäuser, Basel, Switzerland.
[8] Jason Endacott and Sarah Brooks. 2013. An Updated Theoretical and
Practical Model for Promoting Historical Empathy. Social Studies
Research and Practice 8, 1 (2013), 41–58. http://www.socstrpr.org/
wp-content/uploads/2013/04/MS{_}06482{_}no3.pdf
[9]
Asbjørn Følstad and Petter Bae Brandtzæg. 2017. Chatbots and the
new world of HCI. interactions 24, 4 (jun 2017), 38–42. https://doi.
org/10.1145/3085558
[10]
Smithsonian Institution & Museweb Foundation. 2016. Storytelling
Toolkit - Facilitated Dialogue. Technical Report. 21 pages. https://
museumonmainstreet.org/sites/default/files/facilitated_dialogue.pdf
[11]
Katrina Gargett. 2018. Re-thinking the guided tour: co-creation, dialogue
and practices of facilitation at York Minster. MA Thesis. University of
York.
[12]
Avelino J. Gonzalez, James R. Hollister, Ronald F. DeMara, Jason Leigh,
Brandan Lanman, Sang-Yoon Lee, Shane Parker, Christopher Walls,
Jeanne Parker, Josiah Wong, Clayton Barham, and Bryan Wilder. 2017.
AI in Informal Science Education: Bringing Turing Back to Life to
Perform the Turing Test. International Journal of Artificial
Intelligence in Education 27, 2 (jun 2017), 353–384.
https://doi.org/10.1007/s40593-017-0144-1
[13]
Shawn Graham. 2017. An Introduction to Twitter Bots with Tracery.
https://programminghistorian.org/en/lessons/intro-to-twitterbots
[14]
Tom Hennes. 2002. Rethinking the Visitor Experience: Transforming
Obstacle into Purpose. Curator: The Museum Journal 45, 2 (apr 2002),
105–117. https://doi.org/10.1111/j.2151-6952.2002.tb01185.x
[15]
Anne Frank House. 2017. Anne Frank House bot for Messenger
launch. https://www.annefrank.org/en/about-us/news-and-press/
news/2017/3/21/anne-frank-house-launches-bot-messenger/ Last ac-
cessed 31 December 2018.
[16]
Akrivi Katifori, Maria Roussou, Sara Perry, George Drettakis, Sebastian
Vizcay, and Julien Philip. 2018. The EMOTIVE Project - Emotive virtual
cultural experiences through personalized storytelling. In EuroMed
2018, International Conference on Cultural Heritage. Lemessos, Cyprus.
[17]
Mark Katrikh. 2018. Creating Safe(r) Spaces for Visitors and Staff in
Museum Programs. Journal of Museum Education 43, 1 (jan 2018), 7–15.
https://doi.org/10.1080/10598650.2017.1410673
[18]
A. Baki Kocaballi, Liliana Laranjo, and Enrico Coiera. 2018. Mea-
suring User Experience in Conversational Interfaces: A Compari-
son of Six Questionnaires. In Proceedings of the 32Nd International
BCS Human Computer Interaction Conference (HCI ’18). BCS Learn-
ing & Development Ltd., Swindon, UK, Article 21, 12 pages. https:
//doi.org/10.14236/ewic/HCI2018.21
[19]
Stefan Kopp, Christian Becker, and Ipke Wachsmuth. 2006. The Virtual
Human Max - Modeling Embodied Conversation. In KI 2006 - Demo
Presentation, Extended Abstracts. 19–22.
[20]
Stefan Kopp, Lars Gesellensetter, Nicole C. Krämer, and Ipke
Wachsmuth. 2005. A Conversational Agent as Museum Guide –
Design and Evaluation of a Real-World Application. In Intelligent
Virtual Agents. IVA 2005. Lecture Notes in Computer Science, vol 3661,
T. Panayiotopoulos, J. Gratch, Ruth Aylett, Ballin Dan, Olivier Patrick,
and T. Rist (Eds.). Springer Berlin Heidelberg, 329–343. https:
//doi.org/10.1007/11550617_28
[21]
Peter M. Krafft, Michael Macy, and Alex "Sandy" Pentland. 2017. Bots
as Virtual Confederates. In Proceedings of the 2017 ACM Conference on
Computer Supported Cooperative Work and Social Computing - CSCW
’17. ACM Press, New York, New York, USA, 183–190. https://doi.org/
10.1145/2998181.2998354
[22]
H. Chad Lane, Clara Cahill, Susan Foutz, Daniel Auerbach, Dan Noren,
Catherine Lussenhop, and William Swartout. 2013. The Effects of a
Pedagogical Agent for Informal Science Education on Learner Behaviors
and Self-efficacy. In Artificial Intelligence in Education. AIED 2013.
Lecture Notes in Computer Science, vol 7926, H. Chad Lane, Kalina Yacef,
Jack Mostow, and P.Pavlik (Eds.). Springer Berlin Heidelberg, Memphis,
TN, USA, 309–318. https://doi.org/10.1007/978-3-642-39112-5_32
[23]
Q. Vera Liao, Werner Geyer, Muhammed Mas-ud Hussain, Praveen
Chandar, Matthew Davis, Yasaman Khazaeni, Marco Patricio Crasso,
Dakuo Wang, Michael Muller, and N. Sadat Shami. 2018. All Work
and no Play? Conversations with a Question-and-Answer Chatbot in
the Wild. In Proceedings of the 2018 CHI Conference on Human Factors
in Computing Systems - CHI ’18. ACM, New York, NY, USA, 1–13.
https://doi.org/10.1145/3173574.3173577
[24]
Ewa Luger and Abigail Sellen. 2016. "Like Having a Really Bad PA":
The Gulf between User Expectation and Experience of Conversational
Agents. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems - CHI ’16. ACM Press, New York, New York, USA,
5286–5297. https://doi.org/10.1145/2858036.2858288
[25]
Bernadette Lynch. 2013. Reflective debate, radical transparency and
trust in museums. Museum Management and Curatorship 28, 1 (2013),
1–13.
[26]
Matt Malpass. 2013. Between Wit and Reason: Defining Associative,
Speculative, and Critical Design in Practice. Design
and Culture 5, 3 (nov 2013), 333–356. https://doi.org/10.2752/
175470813X13705953612200
[27]
Timothy Marsh, Peter Wright, and Shamus P. Smith. 2001. Evalua-
tion for the design of experience in virtual environments: modeling
breakdown of interaction and illusion. Cyberpsychology & behavior: the
impact of the Internet, multimedia and virtual reality on behavior and so-
ciety 4, 2 (2001), 225–238. https://doi.org/10.1089/109493101300117910
[28]
Nikita Mattar and Ipke Wachsmuth. 2014. Let’s Get Personal. In
Human-Computer Interaction. Advanced Interaction Modalities and
Techniques. HCI 2014. Lecture Notes in Computer Science, vol 8511.
Springer, Cham, 450–461. https://doi.org/10.1007/978-3-319-07230-2_
43
[29]
John McCarthy, Peter Wright, Jayne Wallace, and Andy Dearden.
2006. The experience of enchantment in human-computer inter-
action. Personal and Ubiquitous Computing 10, 6 (2006), 369–378.
https://doi.org/10.1007/s00779-005-0055-2
[30]
Sierra McKinney. 2018. Generating pre-historical empathy in classrooms.
Master’s thesis. University of York.
[31]
Michael Minge and Manfred Thüring. 2018. Hedonic and pragmatic
halo effects at early stages of User Experience. International Journal
of Human-Computer Studies 109 (jan 2018), 13–25. https://doi.org/10.
1016/j.ijhcs.2017.07.007
[32]
Elahe Paikari and André van der Hoek. 2018. A framework for un-
derstanding chatbots and their future. In Proceedings of the 11th In-
ternational Workshop on Cooperative and Human Aspects of Software
Engineering - CHASE ’18. ACM Press, New York, New York, USA, 13–16.
https://doi.org/10.1145/3195836.3195859
[33]
Sara Perry. 2018. The Enchantment of the Archaeological Record.
In 24th Annual Meeting of the European Association of Archaeologists.
European Association of Archaeologists, Barcelona, Spain.
[34]
James Pierce, Phoebe Sengers, Tad Hirsch, Tom Jenkins, William Gaver,
and Carl DiSalvo. 2015. Expanding and Rening Design and Criticality
in HCI. In Proceedings of the 33rd Annual ACM Conference on Human
Factors in Computing Systems - CHI ’15. ACM Press, New York, New
York, USA, 2083–2092. https://doi.org/10.1145/2702123.2702438
[35]
Maria Roussou and Akrivi Katifori. 2018. Flow, Staging, Wayfinding,
Personalization: Evaluating User Experience with Mobile Museum
Narratives. Multimodal Technologies and Interaction 2, 2 (jun 2018), 32.
https://doi.org/10.3390/MTI2020032
[36]
Mark Sample. 2014. A protest bot is a bot so specific you can't mistake
it for bullshit: A Call for Bots of Conviction. http://bit.ly/2F3fYGO
[37]
Ari Schlesinger, Kenton P. O’Hara, and Alex S. Taylor. 2018. Let’s
Talk About Race: Identity, Chatbots, and AI. In Proceedings of the 2018
CHI Conference on Human Factors in Computing Systems - CHI ’18.
ACM Press, New York, New York, USA, 1–14. https://doi.org/10.1145/
3173574.3173889
[38]
M. Schroder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M.
ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller,
E. de Sevin, M. Valstar, and M. Wollmer. 2012. Building Autonomous
Sensitive Artificial Listeners. IEEE Transactions on Affective Computing
3, 2 (apr 2012), 165–183. https://doi.org/10.1109/T-AFFC.2011.34
[39]
Phoebe Sengers, Kirsten Boehner, Shay David, and Joseph ’Josh’ Kaye.
2005. Reflective design. In Proceedings of the 4th decennial conference
on Critical computing between sense and sensibility - CC ’05. ACM
Press, New York, New York, USA, 49. https://doi.org/10.1145/1094562.
1094569
[40]
Samira Shaikh. 2017. A persuasive virtual chat agent based on
sociolinguistic theories of influence. AI Matters 3, 2 (jul 2017), 26–27.
https://doi.org/10.1145/3098888.3098899
[41]
Walter Sinnott-Armstrong. 2018. Think Again: How to Reason and
Argue. Penguin, London, UK.
[42]
Mel Slater. 2004. How Colorful Was Your Day? Why Question-
naires Cannot Assess Presence in Virtual Environments. Presence:
Teleoperators and Virtual Environments 13, 4 (aug 2004), 484–493.
https://doi.org/10.1162/1054746041944849
[43]
Laurajane Smith. 2016. Changing views? Emotional intelligence, reg-
isters of engagement, and the museum visit. In Museums as Sites of
Historical Consciousness: Perspectives on museum theory and practice
in Canada, Vivienne Gosselin and Phaedra Livingstone (Eds.). UBC
Press, Vancouver, Canada, Chapter 6, 101–121.
[44]
Barbara J. Soren. 2009. Museum experiences that change visitors.
Museum Management and Curatorship 24, 3 (sep 2009), 233–251. https:
//doi.org/10.1080/09647770903073060
[45]
Statista. 2018. Number of monthly active Facebook Messenger users
from April 2014 to September 2017 (in millions). https://www.statista.
com/statistics/417295/facebook-messenger-monthly-active-users/
Last accessed 31 December 2018.
[46]
William Swartout, David Traum, Ron Artstein, Dan Noren, Paul De-
bevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth
Narayanan, Diane Piepol, Chad Lane, Jacquelyn Morie, Priti Aggarwal,
Matt Liewer, Jen-Yuan Chiang, Jillian Gerten, Selina Chu, and Kyle
White. 2010. Ada and Grace: Toward Realistic and Engaging Virtual
Museum Guides. In IVA 2010, J. Allbeck (Ed.). Springer-Verlag Berlin
Heidelberg, 286–300. http://ict.usc.edu/pubs/adaandgrace.pdf
[47]
Ella Tallyn, Hector Fried, Rory Gianni, Amy Isard, and Chris Speed.
2018. The Ethnobot: Gathering Ethnographies in the Age of IoT. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing
Systems - CHI ’18. ACM Press, New York, New York, USA, 1–13. https:
//doi.org/10.1145/3173574.3174178
[48]
The House Museums of Milan. 2016. Di Casa in casa adventour. https:
//www.facebook.com/dicasaincasagame/ Last accessed 31 December
2018.
[49]
Angeliki Tzouganatou. 2017. Chatbot Experience for ÇATALHÖYÜK.
Master’s thesis. University of York.
[50]
Stavros Vassos, Eirini Malliaraki, Federica dal Falco, Jessica Di Mag-
gio, Manlio Massimetti, Maria Giulia Nocentini, and Angela Testa.
2016. Art-Bots: Toward Chat-Based Conversational Experiences in
Museums. In Interactive Storytelling. 9th International Conference
on Interactive Digital Storytelling, ICIDS 2016, Frank Nack and An-
drew S. Gordon (Eds.). Los Angeles, CA, USA, 433–437. https:
//doi.org/10.1007/978-3-319-48279-8_43
[51]
Astrid M. von der Pütten, Nicole C. Krämer, Jonathan Gratch, and
Sin-Hwa Kang. 2010. “It doesn’t matter what you are!” Explaining
social effects of agents and avatars. Computers in Human Behavior 26,
6 (nov 2010), 1641–1650. https://doi.org/10.1016/j.chb.2010.06.012
[52]
Margaret Wetherell. 2012. Affect and Emotion (1st ed.). Sage
Publications Ltd, London, UK. 192 pages.
[53]
Margaret Wetherell, Laurajane Smith, and Gary Campbell. 2018.
Introduction: Affective heritage practices. In Emotion, Affective Practices,
and the Past in the Present, Laurajane Smith, Margaret Wetherell, and
Gary Campbell (Eds.). Routledge, London, 1–21.
[54]
Peter Wright and John McCarthy. 2008. Empathy and experience in
HCI. In Proceeding of the twenty-sixth annual CHI conference on Human
factors in computing systems - CHI ’08. ACM Press, New York, New
York, USA, 637. https://doi.org/10.1145/1357054.1357156