Transformation through Provocation?
Designing a ‘Bot of Conviction’ to Challenge Conceptions and Evoke Critical Reflection
Maria Roussou
National and Kapodistrian University
of Athens
Athens, Greece
Sara Perry
University of York
York, UK
Akrivi Katifori
Athena Research & Innovation Center
Athens, Greece
Stavros Vassos
Helvia Technologies
Athens, Greece
Angeliki Tzouganatou
University of Hamburg
Hamburg, Germany
Sierra McKinney
University of York
York, UK
Can a chatbot enable us to change our conceptions, to be critically reflective? To what extent can interaction with a technologically “minimal” medium such as a chatbot evoke emotional engagement in ways that can challenge us to act on the world? In this paper, we discuss the design of a provocative bot, a “bot of conviction”, aimed at triggering conversations on complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, soliciting specific actions from the user it converses with. We instantiate our design with a use case in the cultural sector, specifically a Neolithic archaeological site that acts as a stage of conversation on such hard themes. Our larger contributions include an interaction framework for bots of conviction, insights gained from an iterative process of participatory design and evaluation, and a vision for bot interaction mechanisms that can apply to the HCI community more widely.
CCS Concepts: Human-centered computing; Interaction design theory, concepts and paradigms; Interaction techniques.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from
CHI 2019, May 4–9, 2019, Glasgow, Scotland UK
©2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-5970-2/19/05…$15.00
Keywords: Chatbots; conversational agents; UX design; provocative interaction; emotional engagement; cultural informatics
ACM Reference Format:
Maria Roussou, Sara Perry, Akrivi Katifori, Stavros Vassos, Angeliki Tzouganatou, and Sierra McKinney. 2019. Transformation through Provocation?: Designing a ‘Bot of Conviction’ to Challenge Conceptions and Evoke Critical Reflection. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 13 pages.
The study and design of concepts, metaphors, practices, and evaluation methods in User Experience (UX) has been the steady endeavor of researchers and practitioners working in the field of human computer interaction (HCI) for a number of years now. An increasing emphasis in UX is given to the affective dimension, for example the design of emotive, hedonic [ ], enchanting [ ], empathic [ ] or critically reflective [ ] [ ] [ ] [ ] approaches to interaction between humans and the digital world. Within this landscape, we have witnessed a surge of different interactive systems in various fields (cultural heritage, tourism, education, e-commerce, etc.) that rely on detecting the human user’s emotional state and responding to it appropriately.
This ‘turn’ to affect [ ] in the design of experiences, interfaces, and interaction methods has, however, been primarily manifested in systems that attempt to capture users’ emotional states and offer, in return, a relevant response. Rarely is the user’s digitally mediated emotional engagement with the content regarded as an opportunity to trigger a deeper connection, to critically reflect on the issues at stake, to challenge and provoke a call to action.
CHI 2019 Paper
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK
Paper 627
Page 1
Provoking this kind of “conversation” wherein the human participant can be challenged into thinking about what their principles or assumptions actually mean and, subsequently, act on them to transform their experience, is at the core of the work we propose in this paper. Based on an affective practices model of emotional engagement [ ] and inspired by both Graham’s [ ] call for digital media that are able “to move us, to inspire us, to challenge us,” and his reference to Sample’s concept of “bots of conviction” [ ], we engaged in designing a conversational agent (CA), or chatbot. Its aim is to evoke its user’s emotional engagement with complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, solicit specific actions from the user it converses with.
We chose to explore the design of a chatbot because it is a “minimal” digital medium, it is direct and simple to use, and it is playful. But how can we design conversational interaction with a chatbot in ways that can trigger critical reflection? To what extent can interaction with such a technologically “minimal” medium bring out deeper emotions that can challenge us to act on the world?
In this paper, we introduce an interactional pattern that, we argue, can ignite a dialogue between a participant and a bot, aiming ultimately to transform the participant’s conceptions. We start by defining key concepts related to our goals of emotional engagement, provocation and transformation. We then review the variety of chatbots used today, with particular emphasis on chatbots used in the cultural sector, as this is where our use case is situated. Next, we describe the iterative process of designing a Bot of Conviction (BoC), which follows a carefully planned and executed procedure of content and interaction design and development, shaped through formative evaluation. Section 5 demonstrates how we apply our pattern to the design of a chatbot for a specific archaeological site. Finally, the paper concludes by discussing our pattern, its limitations and its potential to fulfill the goal of igniting users’ transformation through a call for action.
Chatbots and the Post-app Internet
The literature on conversational agents, intelligent virtual humans, virtual assistants, and chatbots is extensive. Whilst not exactly the same [ ], these terms are often used interchangeably to denote systems that engage the user, to a greater or lesser degree, in natural language-like conversation (spoken or written) with a digital entity.
Chatbots have been touted as advantageous tools that can facilitate communication, provide easier access to information, and combat digital divides [ ]. They offer novel, immediate engagement mechanisms and, in light of the popularity of texting, they can attract a younger demographic in multimodal ways [ ]. In addition, chatbots can operate on both a browser and a mobile phone, offering a solution to the challenge of app installation and overload [9].
Chatbots serve a broad range of purposes [ ], with the most common application being that of a first-level help desk or service chatbot that can recommend responses to low-level customer queries. These chatbots lower the threshold for people to ask for information, work well on simple issues, and provide a more amiable and personable style of information delivery. As customer service chatbots become commonplace, the field is now turning to advancing the creation of agents that are able to build relationships with their human conversational partners [ ] as well as virtual humans that can converse with the user in more emotive, persuasive, and provocative ways [ ] [ ]. At the same time, conceptual and ethical issues are informing the design of guidelines for bots [21] [37].
Despite the aforementioned attempts, in the majority of conversational agents, the typical form of interaction is an independent single-turn exchange: the user asks, the chatbot responds, and this usually completes the interaction for the particular question/topic.
Chatbots in the Cultural Sector
Conversational interfaces are increasingly espoused by cultural organizations within their digital strategies as means to attract new audiences and extend the museums’ physical location. They are regularly proclaimed to offer novel engagement mechanisms that can empower visitors of museums and broaden the ways that cultural content is perceived. Many current cultural heritage chatbot initiatives operate within a site’s physical space, allowing for varying levels of interactivity. The chatbots’ most usual in situ purpose is serving as exhibition guides [ ] [ ] [ ] and helping visitors in organizing their visit [ ]. These bots resemble customer service bots, as their primary aim is to offer information to the visitor.
In more sophisticated examples, visitors input a keyword, color or even an emotion, and the chatbot will respond with a selection of related artworks [ ]; or interact with embodied virtual agents in the informal education space (e.g., Ada and Grace [ ], Max [ ], Coach Mike [ ], Alan Turing’s Avatar [ ]), either by spoken natural language or via typed text. Some embed gamification elements into their touring functionality [ ] [ ], challenging users with exploratory clues or quizzes that manifest in rewards, including virtual currency that has cash value in museum gift shops.
However, despite these examples, the use of conversational agents by museums and the heritage sector is still quite limited. Most are purely info-delivery oriented and object- or exhibit-centered, providing little opportunity for meaningful interactivity, creative expression, or critical engagement. In response to these limitations, we seek to extend
the “traditional” canon of the museum/heritage bot into a
challenging, provocative engine of social commentary and
Contemporary denitions of emotion and aect (e.g. see [
and [
]) increasingly aim to depart from the psychobio-
logical “basic emotions” approaches, which reduce aect to
simplistic innate human universals and do not account for
the multiplicity of factors that mix in any given individual’s
aective practices. Rather, in recent and more complex con-
ceptualizations of the term, emotion is framed as “embodied
meaning-making” [
, p.4], and focus is put on the actions
that are generated through such embodied work (actions
that may be small or large, personally-oriented or externally-
oriented, visible or invisible, etc.). Recognizing that emotion
has action embedded into it allows us to attend to the aec-
tive practices that characterize meaning-making–the actions
that feed into and ow out from it. Therefore, rather than
try to crudely measure emotion as biological response, we
turn our attention instead to the acts (or lack thereof) that
are generated through people’s practices with our Bot of
The importance of such a flexible and act-centered understanding of emotion cannot be overstated. It permits us to operate in cross-cultural contexts (as our concern with analyzing resulting actions means that we do not need to rely on typical English-language emotion descriptors to define affective experiences) and to embrace the true complexity of emotive experiences. It also appreciates the intentionality and control–but also the historical motivations and personal relationships–that can be at the core of such experiences. In line with this conception of emotion, we look for repetitions, apparent inconsistencies and unique occurrences in actions (e.g. spoken or written words, non-discursive oral expressions, bodily movements and gestural reactions, interactions with human and non-human things, other proxemics, drawings or other visual inscriptions, etc.) that emerge in people’s social practices. In terms of our BoC, this means emotional engagement is demonstrated via interaction with the bot itself and is inherent in the very act of chatting to it. Rather than designing the bot to trigger simplistic “basic emotions”, we create conditions inside the chats with the purpose of soliciting specific intended actions from participants.
As we see it, to respond at all to the bot is to affectively engage with it (as a user could easily just walk away). Such basic response actions suggest the efficacy of the bot’s conditions in provoking a reply. Provocation, here, is defined in simple terms: acting on others to elicit a particular reciprocal action. At the most superficial level, the bot acts on the user, engaging them sufficiently to complete a full chat. Preferably, however, this provocation works more deeply, evidenced through analysis of the types of inputs generated by users. Here, deeper provocation entails users reconsidering their points of view, demonstrating forms of conscious reflection or alternative perspective-taking in their chats. Moreover, at its deepest level, as we define it, provocation leads to transformation: users take action beyond the chat itself, for instance telling others about their reflections, or integrating ideas generated through engagement with the bot into their own everyday meaning-making practices. Here transformation is loosely aligned with Hennes’ [ , p.114] concern that “The difference between the activity of the beginning and that of the end is a kind of transformational growth that affects experience in the future”. In this way, our definition goes further than some in the heritage sector [ , p.104] who see transformation as “simply instances when visitors’ sense of self and community [a]re destabilised”. Rather, we are interested in effecting genuine change in individuals which is evidenced, following Soren’s model of transformational museum experiences [ , p.248], in behaviours which are “more inclusive, discriminating, emotionally capable of change, and reflective”.
In creating provocation, it is necessary to consider the ethical implications of the work on users’ wellbeing. However, drawing inspiration from Katrikh [ ] and Gargett [ ], it is our position that, in order to develop a transformative experience as outlined above, efforts should not lie in minimizing discomfort but rather in generating the opportunity to “promote dialogue, process emotion, and ultimately to allow visitors to reach a place of equilibrium” [17, p.7].
We have turned to the concept of Bots of Conviction to explore the potential for more open conversational agents that focus on asking (not necessarily answering) questions, and provoking critical thought. In particular, our motivation lies in exploring “hard” themes that are emotive and controversial in nature, such as life and death, power, wealth, social structure and hierarchy (or lack thereof), gender equality, etc. Critically, these topics are relevant across time and space, meaning they shaped people’s lives thousands of years ago in myriad ways and they continue to evolve diachronically, remaining relevant to humans today. Cultural heritage sits at the center of debates about identity, politics, sociality and economics, regularly appropriated by interested parties in the present to justify past actions and to lay claims to the future. Not often, however, do heritage sites foster environments where such debates are purposefully and constructively facilitated, such that the resulting dialogue leads to positive social change [ ] [ ]. The chatbot presents what is arguably the perfect opportunity to experiment with discussion-based models of affective engagement. This, then,
is the “space” our work aspires to occupy, the nexus being to enable genuinely critical reflection, respect, care, and ethics in dialogue.
Seeking inspiration, we looked towards Sample’s definition of bots of conviction [ ]. Otherwise known as “protest bots”, these computer programs work to reveal “the injustice and inequality of the world and imagin[e] alternatives”; they ask questions about “how, when, who and why”; and they are typified by five key traits: topicality, uncanniness, accumulation, oppositionality and groundedness in data. Unlike the “typical” BoC, however, which is usually Twitter-based, generative and broadcast oriented (in the sense that it is not intended to foster a two-way conversational flow), we sought to develop something more amenable to the usual cultural heritage context. Indeed, by Sample’s logic, BoCs are completely ‘automatic’ in nature and “do not offer solutions. Instead they create messy moments...” [ ]. Yet the museums sector is bound by ethical codes which demand a basic level of accountability to and responsibility for their audiences. The evidence also indicates that people may purposefully visit museums to change their minds, and that such change derives from more than one-way information delivery [ ]. Moreover, this is a sector wherein practitioners are often underfunded, understaffed, with variable digital expertise and sometimes little capacity to maintain or manage the fallout of Artificial Intelligence or uncontrolled generativity. So, while we borrow elements from Sample’s original definition of the BoC (specifically the concepts of uncanniness and oppositionality), we intentionally modify it to account for the cultural heritage context and its associated duties of care.
The Context
This work is situated within a larger project set to explore the potential emotive connections of visitors to museums and archaeological sites, and how digital tools can enhance these sites’ relevance to people’s lives today [ ] [ ]. The archaeological site of Çatalhöyük, a 9000-year-old Neolithic settlement in Turkey, has been chosen as an ideal use case for applying the Bot of Conviction.
More than 1000 specialists from around the world have been excavating Çatalhöyük for 60 years, yet only a small fraction of the settlement (7%) has been unearthed. Since its inscription as a UNESCO World Heritage site in 2012, there has been an increase in visitor numbers despite its remote location in the center of Turkey. What visitors encounter at the site, however, is essentially an excavation, where features are difficult to see and significance is hard to understand or relate to. The interpretation of the archaeological record remains limited on-site, especially if the audience lacks archaeological literacy. Nevertheless, interest in the site is large; nearly 10,000 Facebook users, most of whom will never visit the actual physical site, follow the Çatalhöyük excavation research project.
What makes Çatalhöyük unique and relevant to our application is that, according to evidence, it was occupied by up to 8000 people at once without obvious hierarchy (i.e., egalitarian socio-economic organization). No houses with distinctive features (belonging to royalty or religious hierarchy, for example) have yet been found. There is also no evidence of social distinction based on gender, with men and women seeming to have equal social status. Residents repeatedly built and rebuilt their homes on the same spot, creating a mound more than 21 meters high over 1000 years. Exquisite sculptural art and wall paintings, street-less neighborhoods, and burials of the dead beneath the floors of homes are further reasons to choose Çatalhöyük as the stage to explore digital forms of interaction with topics that can provoke the people of today.
The rst step towards the realization of our BoC was the
design and development of a “traditional” infobot, based on
key themes and topics underlying the cultural site of inter-
est. This bot would serve as a baseline for understanding
the added value of the BoC and consisted of a lengthy de-
sign process. It entailed: i) selecting and curating content
to construct the chatbot’s knowledge base; ii) designing the
bot’s form and interaction mechanisms, and programming
its level of “pickiness” when responding to user input; and iii)
designing the conversational aspects of the bot in a way that
could encourage the kind of action we sought. We present
each of these steps in the design process below, elaborating
on (iii).
Curating the Content
The content (themes and topics) of conversation is of utmost importance in a chatbot that aspires to provoke its audience. We followed an inclusive approach to content selection and curation that sought the involvement of both content domain experts and end-users. We recruited domain experts and held live chat sessions with end-users to first create basic content for the bot. From there we elaborated the bot with more complex reflective and emotional components, to tie the topics to the deeper underlying message or overarching theme.
User-led content curation. As our first use case, presented in detail in Section 5, pertained to the aforementioned Stone Age archaeological site of Çatalhöyük, and as we aimed to create a user-centered experience, it was critical to begin by reviewing the Facebook page of the Çatalhöyük Research Project, from its profile creation in 2010 to April 2017 (251 posts). By researching Facebook followers’ reactions, comments and interests, we had the opportunity to create relevant content tailored to the ‘needs’ of the user. Thus, we
researched the types of posts and the types of comments on posts. This thematic analysis focused on grouping and selecting the topics that people seemed to be more engaged with, as evidenced by their posing questions below posts or by liking, commenting or sharing [ ]. In other words, the selection process focused on relevance. People mostly offered comments regarding the following topics: burials, wall paintings, archaeological process, the site’s landscape and importance, chronology, plastering and figurines. This first phase of selection led to an early design of the chatbot in terms of the topics it would be conversing about.
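To illustrate, the grouping-and-ranking step of such a thematic analysis can be sketched as a frequency count over hand-coded engagement events. This is a minimal sketch under our own assumptions; the sample events and the `kind` field are illustrative, not the project’s actual coding scheme:

```python
from collections import Counter

# Hypothetical hand-coded engagement events: each question, like, comment
# or share under a post, tagged with the topic it concerns.
events = [
    {"topic": "burials", "kind": "question"},
    {"topic": "burials", "kind": "like"},
    {"topic": "wall paintings", "kind": "comment"},
    {"topic": "figurines", "kind": "share"},
    {"topic": "burials", "kind": "comment"},
    {"topic": "chronology", "kind": "like"},
]

def rank_topics(events):
    """Rank topics by how many engagement events they attracted."""
    counts = Counter(e["topic"] for e in events)
    return counts.most_common()

print(rank_topics(events))  # most-engaged topic first
```

The ranking then feeds topic selection: topics at the top of the list become candidate conversation themes for the bot.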
Live chat sessions. To further develop the bot’s content, this early selection of content was augmented, refined and tested through a series of live chat sessions with the public. The sessions were held on the topics that were identified by the thematic analysis and then validated by domain experts, allowing us to construct the bot’s factual database. Specifically, five live chat sessions were conducted on the site’s Facebook page, between June and October of 2017. Domain experts on each topic were recruited and assigned to a session, with sessions covering the topics of i) burials, ii) coprolites and latrines (poop and toilets!), iii) the archaeological process and excavations, iv) wall paintings, and v) wall reliefs and plastering. The sessions were advertised through the site’s social media channels as well as disseminated to the public via marketing-like posts through the contact lists of our project team.
Each session began with a post at the appointed starting time announcing its topic. Followers were then asked to pose their questions below the post or to send a private message. An average of 17 users connected actively to the live events across all sessions. Although it was stated in the advertisements that the live chat sessions would last one hour, users continued to post questions up to ten hours after some of the sessions.
Designing Interaction
This next step involved integrating the content from the thematic analysis and the live chat sessions into the bot, and embellishing it with richer media and a more evocative form. This process was, at its core, a design process, both in terms of the design of conversation and the creation of the interaction mechanisms and visual elements that make up the chatbot’s “character”. It revealed a set of design challenges in different aspects of the bot and highlighted the need for the identification of guidelines and best practices in the field. Design decisions that had to be made included:
Language style. Informality is a defining trait of chatbot personality, but how chatty, witty or funny should a bot be, especially if it is intended as a Bot of Conviction, considering the diverse public it targets?
Casual vs non-casual content. Should the bot content be mostly non-casual, i.e. mostly information that experts have prepared about the topic at hand? Or more casual, including responses to everyday questions, e.g. about the weather or the user’s mood that day?
Canned reply controls. What is the right balance between buttons and quick replies (“canned reply controls”) and free text input? In particular, should users click on predefined answers on buttons to make selections or be able to ask free-form questions?
Alternative responses. How many alternative responses to the same question are sufficient to allow the bot to reply in slightly different ways, so that responses do not seem formulaic should the user repeat the chat?
Picky vs non-picky. How picky should the chatbot’s response matching be? When the chatbot is “picky”, it only matches a response when there is high confidence in the similarity, so it answers “I do not know” in most cases when it gets a question it cannot recognize. A non-picky bot matches a response even when confidence in the similarity is low, so when asked something it will return an answer even though the matching score is very low.
Use of multimedia. What is the best use of images, links and emojis within the bot?
Making the bot personable. How can we develop a few key elements that will make the bot personable and the experience personalized and natural for the user? E.g., the bot addressing users by their (Facebook) first names when conversing.
Following the study described below, the infobot parameter configuration was fixed to the most effective variant (witty / casual / with buttons for user replies but free text as well / alternative bot responses / non-picky / use of images / personal, referring to the user’s first name).
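The pickiness decision above amounts to a single confidence threshold in front of the bot’s answer matcher. The following is a minimal sketch under our own assumptions: `similarity()` is a toy word-overlap stand-in for whatever scorer a bot platform provides, and the threshold values are illustrative placeholders:

```python
def similarity(a, b):
    """Toy Jaccard (word-overlap) similarity as a placeholder scorer."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def respond(user_input, knowledge_base, picky=True):
    """Answer only if the best match clears the pickiness threshold.

    knowledge_base maps known questions to canned answers. A picky bot
    demands a high matching score; a non-picky bot accepts weak matches.
    """
    threshold = 0.8 if picky else 0.2  # illustrative cut-offs
    best_question, best_score = None, 0.0
    for question in knowledge_base:
        score = similarity(user_input, question)
        if score > best_score:
            best_question, best_score = question, score
    if best_score >= threshold:
        return knowledge_base[best_question]
    return "I do not know."
```

With `picky=True`, a marginally related question falls back to “I do not know”; with `picky=False` the same input returns the weakly matching answer, which is the behavior the final configuration adopted.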
Formative evaluation. To test our design decisions as well as the early prototype of the bot, we conducted a study with 27 participants (14 men, 13 women, aged between 21 and 57), located in different countries and with different backgrounds (in terms of expertise in relation to the content). Specifically, we recruited researchers from the Çatalhöyük Research Project team (2 users: 1 man, 1 woman), followers of the Çatalhöyük Research Project Facebook page (5 users: 2 men, 3 women), and people who had no prior knowledge about the site and hence were completely unfamiliar with its stories and significance (20 users: 11 men, 9 women). Some participants had been involved in the live sessions but, otherwise, had not interacted with the chatbot before.
The evaluation sessions were carried out with each person separately, either face-to-face or remotely. After completing a consent form, participants were instructed to use Messenger to interact with the chatbot in order to learn more about
the UNESCO World Heritage archaeological site of Çatalhöyük. They were advised to converse freely with the bot for approximately 10 minutes and then asked to undertake specific tasks, namely to pose questions that the bot had answers to (e.g. “Where did the people of Çatalhöyük bury their dead?”) and questions that it did not (e.g. “Did the people of Çatalhöyük play?”). The session was followed by a semi-structured interview (conducted via Skype in the cases of remote participants) and an online questionnaire. All interactions between bot and users were logged and timestamped.
The results of this formative study are briefly outlined here, in relation to our objectives. Firstly, with regards to users’ engagement with the bot, participants reported having found it interesting and spent an average of 16.08 minutes chatting with it. The majority of users considered the images very helpful and enjoyed the anthropomorphic persona of the chatbot. This is consistent with the findings of other researchers who note that playful interactions are a key aspect of the adoption of CAs [ ] [ ] and that people react socially to virtual characters–even if they know that they are conversing with a machine [51].
However, as soon as the bot was faced with questions that it could not understand and thus could not reply to appropriately, users reported losing interest. It seems that users expect what Cassell refers to as “interactional intelligence” [ ], the “social smarts” that would enable engagement [ ]. Instead, the bot offered mostly “propositional intelligence”, i.e. informational content upon request, like conversing with a knowledge domain expert. However, chatting is generally associated not just with information exchange, but also the exchange of perspectives and opinions. Therefore, the next natural step was to equip the bot with the possibility to hold a meaningful dialogue with its users, challenging them to approach the presented topic through a completely new perspective.
Designing for Provocation
The results from the formative evaluation of the chatbot’s
content and form informed our next step which was to exper-
iment with the insertion of patterns of provocation that play
with the idea of the Bot of Conviction. To create our BoC
we developed a conversational pattern that enables the bot
to initiate a kind of “Socratic dialogue”, where the chatbot
embarks on a soft interrogation, asking questions to nd out
more about the other person’s beliefs and ideas, while still
maintaining control over the structure and direction of the
This pattern resembles a gure-8 (Figure 1). It entails the
bot making a declaration designed to commit the user to
a point of view that they may or may not agree with. It
begins with the bot either asking a question or making a
bold statement. This prompts the user to respond, either
Figure 1: A gure-8 design pattern for a Bot of Conviction.
positively or negatively or neither, and continue further into
the conversation. In other words, a dialogue between the user
and the bot plays out based on one of three types of response
(yes, no, ambiguous). After a few exchanges, the users will
be questioned about their response to the topic, the center
of the formation, before entering the second section, which
concludes with a summarizing statement. This statement
is one of intent/conviction by the bot, meant to arm and
transform the point of view of the user, thereafter pushing
them back out into the traditional/standard experience.
The structure engages the user by reversing the roles of the traditional infobot, with the bot asking questions first, thus provoking users to generate the answers, all while maintaining a guided and controlled exchange. According to the
pattern, the user’s answers place them on distinct paths. The
bot’s responses are designed to be sensible for a variety of
user responses, and consistently incorporate questions to
provoke a user response (Figure 2).
The pattern in a nutshell:
CHI 2019 Paper
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK
Paper 627
Page 6
Bot makes a declarative value judgment - a provocation.
User responds positively (variations of yes), negatively
(variations of no), or ambiguously (everything else).
Exchange of ideas: 2-3 interactions based on whether the user is categorized as positive, negative or ambiguous. In all cases, the user’s response to the bot’s question should be one of these three categories.
Assessment points. Partway through the dialogue, the
bot tests the user’s conviction.
Final statement of intent/conviction - culmination of
the provocation.
Figure 2: The BoC design pattern “algorithm”.
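The five-step pattern summarized above can be sketched as a scripted driver. The following Python sketch is our own illustrative reconstruction, not the authors' implementation: the `script` layout, function names, and the toy classifier are all assumptions.

```python
# Sketch of the figure-8 pattern as a scripted driver. The script layout,
# function names and the toy classifier are illustrative assumptions.

def run_episode(script, classify, get_reply):
    """Provocation -> branched exchange -> conviction test -> finale."""
    log = [("bot", script["provocation"])]
    branch = classify(get_reply())             # positive / negative / ambiguous
    for prompt in script["exchange"][branch]:  # 2-3 turns on the chosen path
        log.append(("bot", prompt))
        get_reply()                            # reply noted; path stays fixed
    log.append(("bot", script["test"]))        # partway conviction check
    get_reply()
    log.append(("bot", script["finale"]))      # closing statement of conviction
    return log

# A toy episode and a deliberately crude classifier to exercise the driver.
toy = {
    "provocation": "Surely, you have people buried under your floors?",
    "exchange": {
        "positive": ["Do you have lots of people buried in your house?"],
        "negative": ["Well, where do you bury them then?"],
        "ambiguous": ["Seriously, don't you bury people in your houses?"],
    },
    "test": "I'd like to be kept in a house. Don't you think so?",
    "finale": "At Catalhoyuk we buried our loved ones close to us.",
}
crude = lambda r: ("positive" if "yes" in r.lower()
                   else "negative" if "no" in r.lower() else "ambiguous")
replies = iter(["no way", "a cemetery", "maybe"])
log = run_episode(toy, crude, lambda: next(replies))
```

Because the branch is fixed by the first reply, authoring a new episode amounts to filling in the script fields, which matches the pattern's emphasis on a guided, controlled exchange.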
Entryway into the BoC. The BoC pattern is blended into the more traditional informational bot rather than being a separate program. Users enter into the more challenging/self-reflective dialogue of the BoC via one of the following theme-oriented means:
In relation to specic trigger words linked to themes
(e.g., burial, goddess). In other words, the user enters
one of the words and the conversation based on the
pattern is triggered.
After a number of interactions from the user on a particular thematic topic (where interaction is a single instance of user-bot exchange), or typing/selecting a button with the phrase “Intrigue me”.
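The two entryways above can be combined into a single gate function. A minimal sketch in Python, where the trigger-word list, the threshold value, and the return convention are our own illustrative choices:

```python
# Sketch of the BoC entry logic: a theme trigger word, an explicit
# "Intrigue me", or sustained interaction on one theme. The word list
# and threshold are illustrative assumptions.

TRIGGERS = {"burial": "death", "buried": "death", "goddess": "equality"}
THRESHOLD = 5  # exchanges on one theme before the BoC kicks in

def boc_entry(message, theme_counts):
    """Return the theme whose BoC episode should start, or None."""
    msg = message.lower().strip(" .!?")
    if msg == "intrigue me":
        return "any"                    # explicit opt-in via button or text
    for word, theme in TRIGGERS.items():
        if word in msg:
            return theme                # trigger word linked to a theme
    for theme, count in theme_counts.items():
        if count >= THRESHOLD:
            return theme                # sustained interest in one theme
    return None                         # stay with the informational bot
```

The gate runs on every incoming message, so the provocative pattern stays dormant inside the informational bot until one of the conditions fires.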
In the previous sections we outlined why we consider provo-
cation to be an important underlying approach to design-
ing interaction with conversational agents. We went on to
present the iterative design of such provocative agents, or
Bots of Conviction, which encompasses a pattern of provo-
cation that culminates in a statement of conviction. In this
section we illustrate our design method through an instanti-
ation, a Facebook Messenger-based conversational interface
for the archaeological site of Çatalhöyük, named ChatÇat
(Figure 3), that aims to inform about the site and, more im-
portantly, to compel critical reection about the past and
action in the present amongst its users.
The rationale behind designing, creating and developing
a chatbot for the particular site of Çatalhöyük is twofold:
to address the challenges that the site faces by offering a digital experience of it; and to leverage the many threads of conversation that the site has to offer and that can be developed around it.
Applying the Design Paern
From a multitude of topics, we selected four themes to apply
the BoC pattern to. In keeping with our pattern philosophy,
each “episode” begins with an opening question conceived
to provoke the user’s reaction:
Death: Would you bury someone you care about under your bed? Or: Surely, you have people buried under your floors?
Wealth: Do you live in a community where there are a
few people with lots of money and lots of people with
little or no money?
Equality: Does it surprise you that the evidence from Çatalhöyük suggests men and women lived very similar lives and things were more or less equal between them?
Privacy: Çatalhöyük’s homes had no windows, just
one main room, and an entrance from the roof! It’s
perfect, don’t you think?
In the example below, we apply the BoC pattern and present the conversation episode on the first theme, which is linked to the interpretation of the evidence of burials found at Çatalhöyük. Figure 4 depicts an excerpt of an actual conversation, using a variation of the bot’s questions and statements. The tone of the bot is kept informal, also incorporating a bit
Figure 3: The branding of the ChatÇat bot.
of witty ChatÇat personality, images and emojis.
Intro - Provocation (A):
Bot: Surely, you have people buried under your floors?
User: [yes, no, never, I have, I haven’t, I would, I wouldn’t, not, no way, sure, surely not, huh?, etc.]
If user responds positively:
Bot (Yes-1): I thought so! Do you have lots of people buried in your house?
User: [Any response, e.g., No, Yes, Just the one, OMG]
Bot (Yes-2): Do you plan on being buried in the house?
User: [Any response]
Bot (Testing conviction): I’d like to be kept in a house. I think it shows that people cared about you. Don’t you think so?
[continue to Finale]
If user responds negatively:
Bot (No-1): Well, where do you bury them then?
User: [cemetery, graveyard, cremated, cremated at home... If users mention any element of home, house, etc. the thread continues with the final positive response, in this case Yes-2]
Bot (No-2): Why would you put them so far away? Don’t you want them close to you? Where you can be
User: [Any response]
[continue to Finale]
If user responds ambiguously:
Bot: Seriously, don’t you bury people in your houses?
User: [Positive (see positive stream), Negative (see negative stream), Ambiguous]
If user remains ambiguous:
Bot: I don’t get what’s so confusing. We buried people in our houses to show we cared. [continue to Finale]
Finale (Statement of conviction):
Bot: It is easy to forget when burial places seem so far away but people live and work above the dead every day. At Çatalhöyük we buried our loved ones in places where they could remain a part of our daily lives. It is through our close relationship with the dead that we stayed connected to our past.
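The one non-trivial branch in the negative stream (a reply mentioning home burial rejoins the positive path at Yes-2) reduces to a keyword check. A small sketch, where the function name and the keyword set are hypothetical, not taken from the system:

```python
# Sketch of the Death episode's negative-path reroute: after No-1, a reply
# that mentions any element of home rejoins the positive stream at Yes-2.
# Function name and keyword set are hypothetical.

HOME_WORDS = {"home", "house", "houses", "floor", "floors", "bed"}

def after_no_1(user_reply):
    words = set(user_reply.lower().strip(".!?").split())
    if words & HOME_WORDS:
        return "Yes-2"   # continue with the final positive response
    return "No-2"        # stay on the negative path

print(after_no_1("cremated at home"))  # Yes-2
print(after_no_1("in a cemetery"))     # No-2
```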
Since our goal was not to advance bot technology but to
explore and design a form of interaction that provokes users
to step out of their comfort zone, we deliberately decided
against developing our BoC with sophisticated AI technology.
Instead, we chose to implement a rule-based chatbot. Our primary reason was the need to have control over the UX, to construct a guided conversational approach. A rule-based system was deemed sufficient to test if such an approach can actually evoke an emotional reaction.
We chose Facebook’s Messenger platform for two main reasons. Firstly, it is the largest and fastest growing messaging platform, with a wide user base [ ]; secondly, it is easy to author due to the broader set of tools available to developers.
approximately 3 to 4 minutes with the bot. Nevertheless,
the pattern can be extended to include multiple figure-8
interactions in sequence in a conversational episode, if longer
engagements are desired.
Six months after the first formative studies that were carried out to refine the design of the interaction with the earlier, primarily informational bot, we contacted almost all of the
27 participants to see whether they were willing and able to
spend some time testing the BoC. Nine (2 men and 7 women)
of them responded positively and were either sent the link
to ChatÇat or were observed using it in person.
Participants were encouraged to chat freely with the bot
in order to refresh their memory, but only for a few minutes,
since they were presumably already familiar with it. They
were then instructed to click on the button or type “Intrigue
me”, for the BoC to kick in. After their experience, they were
asked to respond to a questionnaire. We also asked users if
they would be willing to be interviewed about their experi-
ence so that we could follow up directly with self-selected
individuals whose logs we studied and whose interactions
surprised or otherwise interested us. We extended the questionnaire used in our previous formative evaluation with five new questions focusing on the nature of the dialogue and
larger conceptual issues regarding perspective-taking and
challenging of users’ assumptions.
Figure 4: Excerpt from a conversation with ChatÇat.
To date, we have managed to obtain a full set of data (logs,
questionnaire and interview responses) from 5 participants.
In terms of usability, no major conversational breakdown occurred, confirming that the pattern is sound and allowing us to focus on responses related to perspective-taking.
The initial, direct “provocation” of the bot seemed to effectively engage the attention of the users and subsequently reinforce the point with follow-up questions. “I got a mini-shock, surprised I’d say, with this question coming out of the blue ‘How would you feel if your grandmother was buried under your bed?’” (Steve). Users were pleasantly surprised and seemed to genuinely reflect on how the “radical” opinions of the bot actually revealed current preconceptions about the past while also demonstrating that the same basic human needs and emotions continue to drive our own beliefs and practices today.
The bot as a conversation partner seems to take the initiative and challenge the user, trying to promote its point of view and to make the user reflect, through a series of questions: “I liked that the bot was asking me questions that were a bit provocative and caught my interest” (Vicky). Having the bot ask the questions in this way appears to work very much in favor of promoting the illusion that the user is
talking to an actual intelligent agent, a somewhat strongly
opinionated one perhaps, but still intelligent: “I really liked
it, I didn’t expect to like it because I’m usually too cynical,
like ‘Oh it’s just a machine’. But I think we became friends
with Chatcat” (Irene). The user is asked to respond and the responses are seemingly taken into account, but the chatbot, like a true conversational partner, at times will seem to care more about expressing its own opinion than listening to the opinion of the user. Thus, the bot transforms from
a mere neutral information provider to a rather stubborn
conversation partner. This subtle transfer of control of the
dialogue from the user to the chatbot seemingly works to
foster respect on the part of the user towards the bot, thus
promoting a deeper mental and emotional engagement in
the dialogue.
From our perspective, the approach of many chatbots used today is not necessarily productive for facilitating conversation, let alone genuine and extended dialogue [ ]. In the context in which we are working (cultural heritage), without mechanisms to foster reflective debate and action, bots are little more than simplistic customer service lines or relentless information providers with the potential to worsen cultural divides and reinforce problematic, but prevalent, contemporary practices of nationalistic appropriation, racial bias [ ], reactionary populism, and imperialism. Substantial audience research in the cultural sector clearly demonstrates that heritage sites are places where people often come purposefully to change their minds and are open to transformation [ ], suggesting the potential for chatbots to provoke such action. However, beyond heritage, the same concerns for fostering respectful dialogue and argumentation leading to constructive social change in the world today are of increasing urgency [41].
Our intention is not to design an affective bot that recognizes users’ emotional states and displays or responds directly to these. Rather we aim to define simple means by
which a bot can provoke processes of critical reflection and action. In this sense, the work described here focuses on designing, testing and refining conversational patterns for interested practitioners to create socially beneficial change through dialogue around topics of broad public concern, thus contributing to making critical design in HCI more approachable. Our work subscribes to the perspective-changing, dialogical framing of the Bardzells’ (re)definition of “critical design” [ ] and is informed by Bardzell et al.’s reflections on designing for provocativeness [ ]. In addition, our approach to ‘critical reflection’ loosely relates to the model for historical empathy [ ], where users are guided through activities designed to facilitate historical contextualization, perspective-taking and affective connection.
One of the positive aspects of our approach is that it guides the conversation in such a way that it is not easy to derail. Detecting the variations of a user’s yes and no responses is relatively straightforward, while classifying all other responses as ambiguous makes the pattern error-proof to a great extent. The simplicity of the implementation thus becomes one of its main strengths, especially for the fields of heritage and education where many institutions do not have the funding to acquire expensive solutions. Understanding how the pattern works naturally leads to the design of a BoC, which requires nothing more than access to a text editor to implement.
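The yes/no/ambiguous bucketing just described can be sketched in a few lines. In this illustration, checking negations before affirmations keeps replies like "surely not" on the negative path; the phrase lists are our own guesses, not the deployed vocabulary:

```python
# Sketch of the yes/no/ambiguous response bucketing. Checking negative
# phrases first keeps replies like "surely not" on the negative path;
# anything unmatched falls through to ambiguous. Phrase lists are
# illustrative assumptions.

NEGATIVE = ("no way", "surely not", "i haven't", "i wouldn't",
            "no", "not", "never", "nope")
POSITIVE = ("i have", "i would", "yes", "yeah", "sure", "surely")

def bucket(reply):
    # Normalize: lowercase, drop ?/! and collapse whitespace.
    text = " ".join(reply.lower().replace("?", " ").replace("!", " ").split())
    padded = f" {text} "          # pad so phrase matches are word-aligned
    if any(f" {p} " in padded for p in NEGATIVE):
        return "negative"
    if any(f" {p} " in padded for p in POSITIVE):
        return "positive"
    return "ambiguous"
```

Because every reply lands in one of three buckets, there is no input for which the pattern has no next move, which is what makes the rule-based design robust.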
A conversation ‘episode’ comprises a user’s participation in a chat with the BoC. Our intended actions are multi-tiered, starting with the most basic: the user responds to the bot, and ideally responds to the whole sequence of the chat, leading to its conclusion, and ultimately to a new chat or continued interaction with the bot’s online offerings. Such actions, if performed by the user, would suggest the efficacy of the bot’s conditions in provoking a reply.
The next tier entails the user demonstrating evidence of a
reconsideration of their original point of view through their
chat responses. Such actions, if evidenced in the inputted text and through associated evaluation (e.g., interviews, questionnaires), would suggest the efficacy of the bot’s conditions in provoking reflection or alternative perspective-taking.
From here, the next tier entails the user taking some form of action beyond the episode itself, suggesting the efficacy of the bot’s conditions in provoking transformation. In other
words, the user’s interactions with the bot lead to change
in their future ways of thinking, conversing with others or
acting on the world.
Evidently, each tier requires a different assessment frame, as the most basic tier can be counted or quantified: ‘yes: user responded, chat continued’; ‘no: user left chat’. The second tier requires discursive analysis of chat text and associated qualitative data collection (as we have done through interviews and surveys), wherein patterns of responses can potentially be deduced with a focus on looking for change in the user’s point of view. The third tier requires a more longitudinal evaluation approach, e.g. follow-up via user report (survey, interview), which again may be analyzed for patterns in behavior, and which must appreciate that tying human behaviors directly back to the influences of the bot will be challenging and necessarily open to interpretation, as with all affective practice.
In its rst incarnation, our BoC appears to conrm the
promise for rule-based chat patterns to provoke perspective-
taking and the challenging of user assumptions. Understand-
ing of its potential for transformational change now depends
on wider development and evaluation.
Limitations and Further Work
A number of limitations were encountered in the course
of this work, pointing to several possible future directions
that could be pursued on an empirical, methodological, and
practical level. Most importantly, in terms of evaluation, the study relating to the BoC was limited with respect to the number of participants, and requires further attention to its effectiveness in natural settings, longitudinally, and in terms of the questions posed at the outset. Furthermore, the evaluation instruments of post-experience self-reporting, through questionnaires and semi-structured interviews, present limitations [ ]. Other researchers [ ], and for other technologies [ ] [ ], have identified the limitations presented by traditional methods when attempting to capture users’ interaction in unfolding, in-the-moment activities. We are already exploring ways of embedding evaluation in the experience itself, e.g. by weaving it into Messenger in a relatively seamless manner. Nevertheless, as the issues pertaining to evaluation methods range beyond the scope of this paper, we have kept them out of the discussion and plan to address them in subsequent studies. Beyond evaluation, the BoC has potential as a dialogue facilitator in multi-user contexts. We are already exploring such potential in related work in informal education contexts, with preliminary results showing the fostering of historical empathy among middle school-aged users [ ].
UX and interaction designers increasingly have to wrestle
with the complex problem of designing digital encounters
that have relevance for users. In this space, we believe that
our contributions are threefold:
Adding to the corpus of theoretical design considerations [ ] concerning interactions with conversational agents. If critical design and reflection should be a core technology design outcome of HCI [ ] [ ] [ ], a key goal of this work is to contribute towards creating the conditions in a user-to-bot interaction episode which have the intention of soliciting specific intended actions from participants.
Proposing a design methodology that adopts a user-centered participatory approach, to make chatbots and conversational agents more relevant to their users.
Ultimately contributing to the design of digital sys-
tems that evoke meaningful interaction, envisioning a
world where HCI can become the impetus for personal
transformation and social change.
We have presented the design of a human-bot conversational experience that relies on a relatively simple set of strategies to facilitate meaning making. Although instantiated in a very specific cultural heritage setting, we believe that our framework is replicable in other contexts and responds to an urgency for fostering critical dialogue in the world today.
This work is part of the EMOTIVE project, which has re-
ceived funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement
No. 727188. The authors would like to thank the EMOTIVE
team and Dr Vasilis Vlachokyriakos for their comments and
insightful conversations on drafts of this paper. We also wish
to thank the users that participated in our formative studies,
and our anonymous reviewers.
Auckland Art Gallery. 2018. Auckland Art Gallery’s new chatbot: artificial intelligence. auckland-art-gallerys-new-chatbot-artificial-intelligence.htm Last accessed 31 December 2018.
Jerey Bardzell and Shaowen Bardzell. 2013. What is "critical" about
critical design?. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems - CHI ’13. ACM Press, New York, New
York, USA, 3297.
Shaowen Bardzell, Jerey Bardzell, Jodi Forlizzi, John Zimmerman, and
John Antanitis. 2012. Critical design and critical theory: the challenge
of designing for provocation. In Proceedings of the Designing Interactive
Systems Conference on - DIS ’12. ACM Press, New York, New York, USA,
Justine Cassell. 2001. Embodied conversational agents: representation
and intelligence in user interfaces. AI Magazine 22, 4 (2001), 67–83.
Justine Cassell, Tim Bickmore, Lee Campbell, Hannes Vihjalmsson, and
Hao Yan. 2000. Human conversation as a system framework: designing
embodied conversational agents. In Embodied conversational agents.
MIT Press, Cambridge, MA, USA, Chapter 2, 29–62.
Dot - Akron Art Museum. 2018. Dot - Akron Art Museum guide. connect-with-dot-launch-party/12829 Last accessed 31 December 2018.
Anthony Dunne and Fiona Raby. 2001. Design Noir: The Secret Life of
Electronic Objects. Birkhäuser, Basel, Switzerland.
[8] Jason Endacott and Sarah Brooks. 2013. An Updated Theoretical and
Practical Model for Promoting Historical Empathy. Social Studies
Research and Practice 8, 1 (2013), 41–58.
Asbjørn Følstad and Petter Bae Brandtzæg. 2017. Chatbots and the
new world of HCI. interactions 24, 4 (jun 2017), 38–42. https://doi.
Smithsonian Institution & Museweb Foundation. 2016. Storytelling
Toolkit - Facilitated Dialogue. Technical Report. 21 pages. https://les/facilitated_dialogue.pdf
Katrina Gargett. 2018. Re-thinking the guided tour: co-creation, dialogue and practices of facilitation at York Minster. MA Thesis. University of York.
Avelino J. Gonzalez, James R. Hollister, Ronald F. DeMara, Jason Leigh,
Brandan Lanman, Sang-Yoon Lee, Shane Parker, Christopher Walls,
Jeanne Parker, Josiah Wong, Clayton Barham, and Bryan Wilder. 2017.
AI in Informal Science Education: Bringing Turing Back to Life to Perform the Turing Test. International Journal of Artificial Intelligence in Education 27, 2 (jun 2017), 353–384.
Shawn Graham. 2017. An Introduction to Twitter Bots with Tracery.
Tom Hennes. 2002. Rethinking the Visitor Experience: Transforming
Obstacle into Purpose. Curator: The Museum Journal 45, 2 (apr 2002),
Anne Frank House. 2017. Anne Frank House bot for Messenger
news/2017/3/21/anne-frank-house-launches-bot- messenger/ Last ac-
cessed 31 December 2018.
Akrivi Katifori, Maria Roussou, Sara Perry, George Drettakis, Sebastian
Vizcay, and Julien Philip. 2018. The EMOTIVE Project - Emotive virtual
cultural experiences through personalized storytelling. In EuroMed
2018, International Conference on Cultural Heritage. Lemessos, Cyprus.
Mark Katrikh. 2018. Creating Safe(r) Spaces for Visitors and Staff in
Museum Programs. Journal of Museum Education 43, 1 (jan 2018), 7–15.
A. Baki Kocaballi, Liliana Laranjo, and Enrico Coiera. 2018. Mea-
suring User Experience in Conversational Interfaces: A Compari-
son of Six Questionnaires. In Proceedings of the 32Nd International
BCS Human Computer Interaction Conference (HCI ’18). BCS Learn-
ing & Development Ltd., Swindon, UK, Article 21, 12 pages. https:
Stefan Kopp, Christian Becker, and Ipke Wachsmuth. 2006. The Virtual
Human Max - Modeling Embodied Conversation. In KI 2006 - Demo
Presentation, Extended Abstracts. 19–22.
Stefan Kopp, Lars Gesellensetter, Nicole C. Krämer, and Ipke Wachsmuth. 2005. A Conversational Agent as Museum Guide - Design and Evaluation of a Real-World Application. In Intelligent
Virtual Agents. IVA 2005. Lecture Notes in Computer Science, vol 3661,
T. Panayiotopoulos, J. Gratch, Ruth Aylett, Ballin Dan, Olivier Patrick,
and T. Rist (Eds.). Springer Berlin Heidelberg, 329–343. https:
Peter M. Krat, Michael Macy, and Alex "Sandy" Pentland. 2017. Bots
as Virtual Confederates. In Proceedings of the 2017 ACM Conference on
Computer Supported Cooperative Work and Social Computing - CSCW
’17. ACM Press, New York, New York, USA, 183–190.
H. Chad Lane, Clara Cahill, Susan Foutz, Daniel Auerbach, Dan Noren,
Catherine Lussenhop, and William Swartout. 2013. The Effects of a Pedagogical Agent for Informal Science Education on Learner Behaviors and Self-efficacy. In Artificial Intelligence in Education. AIED 2013.
Lecture Notes in Computer Science, vol 7926, H. Chad Lane, Kalina Yacef,
Jack Mostow, and P.Pavlik (Eds.). Springer Berlin Heidelberg, Memphis,
TN, USA, 309–318.
Q. Vera Liao, Werner Geyer, Muhammed Mas-ud Hussain, Praveen
Chandar, Matthew Davis, Yasaman Khazaeni, Marco Patricio Crasso,
Dakuo Wang, Michael Muller, and N. Sadat Shami. 2018. All Work
and no Play? Conversations with a Question-and-Answer Chatbot in
the Wild. In Proceedings of the 2018 CHI Conference on Human Factors
in Computing Systems - CHI ’18. ACM, New York, NY, USA, 1–13.
Ewa Luger and Abigail Sellen. 2016. "Like Having a Really Bad PA":
The Gulf between User Expectation and Experience of Conversational
Agents. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems - CHI ’16. ACM Press, New York, New York, USA,
Bernadette Lynch. 2013. Reflective debate, radical transparency and
trust in museums. Museum Management and Curatorship 28, 1 (2013),
Matt Malpass. 2013. Between Wit and Reason: Defining Associative, Speculative, and Critical Design in Practice. Design
and Culture 5, 3 (nov 2013), 333–356.
Timothy Marsh, Peter Wright, and Shamus P. Smith. 2001. Evalua-
tion for the design of experience in virtual environments: modeling
breakdown of interaction and illusion. Cyberpsychology & behavior: the
impact of the Internet, multimedia and virtual reality on behavior and so-
ciety 4, 2 (2001), 225–238.
Nikita Mattar and Ipke Wachsmuth. 2014. Let’s Get Personal. In
Human-Computer Interaction. Advanced Interaction Modalities and
Techniques. HCI 2014. Lecture Notes in Computer Science, vol 8511.
Springer, Cham, 450–461. 07230-2_
John McCarthy, Peter Wright, Jayne Wallace, and Andy Dearden.
2006. The experience of enchantment in human-computer inter-
action. Personal and Ubiquitous Computing 10, 6 (2006), 369–378.
Sierra McKinney. 2018. Generating pre-historical empathy in classrooms.
Master’s thesis. University of York.
Michael Minge and Manfred Thüring. 2018. Hedonic and pragmatic
halo eects at early stages of User Experience. International Journal
of Human-Computer Studies 109 (jan 2018), 13–25.
Elahe Paikari and André van der Hoek. 2018. A framework for un-
derstanding chatbots and their future. In Proceedings of the 11th In-
ternational Workshop on Cooperative and Human Aspects of Software
Engineering - CHASE ’18. ACM Press, New York, New York, USA, 13–16.
Sara Perry. 2018. The Enchantment of the Archaeological Record.
In 24th Annual Meeting of the European Association of Archaeologists.
European Association of Archaeologists, Barcelona, Spain.
James Pierce, Phoebe Sengers, Tad Hirsch, Tom Jenkins, William Gaver,
and Carl DiSalvo. 2015. Expanding and Refining Design and Criticality
in HCI. In Proceedings of the 33rd Annual ACM Conference on Human
Factors in Computing Systems - CHI ’15. ACM Press, New York, New
York, USA, 2083–2092.
Maria Roussou and Akrivi Katifori. 2018. Flow, Staging, Wayfinding,
Personalization: Evaluating User Experience with Mobile Museum
Narratives. Multimodal Technologies and Interaction 2, 2 (jun 2018), 32.
Mark Sample. 2014. A protest bot is a bot so specic you can’t mistake
it for bullshit: A Call for Bots of Conviction.
Ari Schlesinger, Kenton P. O’Hara, and Alex S. Taylor. 2018. Let’s
Talk About Race: Identity, Chatbots, and AI. In Proceedings of the 2018
CHI Conference on Human Factors in Computing Systems - CHI ’18.
ACM Press, New York, New York, USA, 1–14.
M. Schroder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M.
ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller,
E. de Sevin, M. Valstar, and M. Wollmer. 2012. Building Autonomous
Sensitive Articial Listeners. IEEE Transactions on Aective Computing
3, 2 (apr 2012), 165–183.
Phoebe Sengers, Kirsten Boehner, Shay David, and Joseph ’Josh’ Kaye.
2005. Reective design. In Proceedings of the 4th decennial conference
on Critical computing between sense and sensibility - CC ’05. ACM
Press, New York, New York, USA, 49.
Samira Shaikh. 2017. A persuasive virtual chat agent based on sociolinguistic theories of influence. AI Matters 3, 2 (jul 2017), 26–27.
Walter Sinnott-Armstrong. 2018. Think Again: How to Reason and
Argue. Penguin, London, UK.
Mel Slater. 2004. How Colorful Was Your Day? Why Question-
naires Cannot Assess Presence in Virtual Environments. Presence:
Teleoperators and Virtual Environments 13, 4 (aug 2004), 484–493.
Laurajane Smith. 2016. Changing views? Emotional intelligence, reg-
isters of engagement, and the museum visit. In Museums as Sites of
Historical Consciousness: Perspectives on museum theory and practice
in Canada, Vivienne Gosselin and Phaedra Livingstone (Eds.). UBC
Press, Vancouver, Canada, Chapter 6, 101–121.
Barbara J. Soren. 2009. Museum experiences that change visitors.
Museum Management and Curatorship 24, 3 (sep 2009), 233–251. https:
Statista. 2018. Number of monthly active Facebook Messenger users
from April 2014 to September 2017 (in millions). https://www.statista.
com/statistics/417295/facebook-messenger-monthly-active-users/
Last accessed 31 December 2018.
William Swartout, David Traum, Ron Artstein, Dan Noren, Paul De-
bevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth
Narayanan, Diane Piepol, Chad Lane, Jacquelyn Morie, Priti Aggarwal,
Matt Liewer, Jen-Yuan Chiang, Jillian Gerten, Selina Chu, and Kyle
White. 2010. Ada and Grace: Toward Realistic and Engaging Virtual
Museum Guides. In IVA 2010, J. Allbeck (Ed.). Springer-Verlag Berlin
Heidelberg, 286–300.
Ella Tallyn, Hector Fried, Rory Gianni, Amy Isard, and Chris Speed.
2018. The Ethnobot: Gathering Ethnographies in the Age of IoT. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing
Systems - CHI ’18. ACM Press, New York, New York, USA, 1–13. https:
The House Museums of Milan. 2016. Di Casa in casa adventour. https:
// Last accessed 31 December
Angeliki Tzouganatou. 2017. Chatbot Experience for ÇATALHÖYÜK.
Master’s thesis. University of York.
Stavros Vassos, Eirini Malliaraki, Federica dal Falco, Jessica Di Mag-
gio, Manlio Massimetti, Maria Giulia Nocentini, and Angela Testa.
2016. Art-Bots: Toward Chat-Based Conversational Experiences in
Museums. In Interactive Storytelling. 9th International Conference
on Interactive Digital Storytelling, ICIDS 2016, Frank Nack and An-
drew S. Gordon (Eds.). Los Angeles, CA, USA, 433–437. https:
Astrid M. von der Pütten, Nicole C. Krämer, Jonathan Gratch, and
Sin-Hwa Kang. 2010. “It doesn’t matter what you are!” Explaining
social eects of agents and avatars. Computers in Human Behavior 26,
6 (nov 2010), 1641–1650.
Margaret Wetherell. 2012. Affect and Emotion (1st ed.). Sage Publications Ltd, London, UK. 192 pages.
Margaret Wetherell, Laurajane Smith, and Gary Campbell. 2018. Introduction: Affective heritage practices. In Emotion, Affective Practices, and the Past in the Present, Laurajane Smith, Margaret Wetherell, and
Gary Campbell (Eds.). Routledge, London, 1–21.
Peter Wright and John McCarthy. 2008. Empathy and experience in
HCI. In Proceeding of the twenty-sixth annual CHI conference on Human
factors in computing systems - CHI ’08. ACM Press, New York, New
York, USA, 637.
... Δύο εφαρμογές των BoCs στον τομέα της πολιτιστικής κληρονομιάς είναι το ChatÇat και το Bo the Chatbot. Το ChatÇat, το Bot of Conviction του Çatalhöyük's (Roussou, Perry, Katifori et al. 2019), δημιουργήθηκε για να διερευνήσει παραδείγματα προτύπων rule-based bots για αρχαιολόγους και επαγγελματίες του χώρου της πολιτιστικής κληρονομιάς που επιζητούν ελεγχόμενη, εποικοδομητική αλλά προκλητική διάδραση με το κοινό τους (Tzouganatou, 2018). Πρόκειται για ένα bot που σχεδιάστηκε για διάλογο με έναν χρήστη και, αφενός παρέχει πληροφορίες για τον νεολιθικό οικισμό του Çatalhöyük, ο οποίος περιλαμβάνεται στον κατάλογο της UNESCO, αφετέρου ωθεί σε στοχασμό αναφορικά με συγκεκριμένες πρακτικές των κατοίκων του οικισμού, σύμφωνα με την προσέγγιση των BoC. ...
Full-text available
Η κοινωνική αλληλεπίδραση είναι παρούσα σε πολλές πτυχές της καθημερινότητας μας, ιδιαίτερα δε στον τομέα της εκπαίδευσης. Όπως δείχνουν οι έρευνες επηρεάζει θετικά τη μάθηση σε μουσειακό περιβάλλον, ενώ ο διάλογος, ως μια συνηθισμένη μορφή κοινωνικής αλληλεπίδρασης, αποτελεί αναπόσπαστο μέρος της συνεργατικής μάθησης, ειδικότερα στο πλαίσιο της εκπαίδευσης της Ιστορίας. Στο άρθρο αυτό εξετάζουμε τη χρήση συνεργατικών και διαλογικών εμπειριών σε χώρους πολιτισμού, παρουσιάζοντας δύο εμπειρίες που δημιουργήθηκαν για τον αρχαιολογικό χώρο της Αρχαίας Αγοράς της Αθήνας. Η πρώτη είναι μια δραστηριότητα συνεργατικής διαδραστικής ψηφιακής αφήγησης (storytelling) που αφορά τον χώρο της Αγοράς, στο πλαίσιο της οποίας οι χρήστες συζητούν στα σημεία επιλογών και παίρνουν αποφάσεις από κοινού. Η δεύτερη είναι μια συζήτηση με chatbot, το οποίο λειτουργεί ως διαμεσολαβητής, οδηγώντας τους συμμετέχοντες σε διάλογο γύρω από θέματα που σχετίζονται με την προαναφερόμενη ψηφιακή αφήγηση και την αρχαία Αθήνα, ενώ παράλληλα είναι σύμφυτα με σύγχρονους προβληματισμούς του ανθρώπου, όπως η ελευθερία, η θρησκεία, η κοινωνική δομή κ.α. Οι μαθητές εμπλέκονται σε εποικοδομητικό διάλογο μεταξύ τους, μέσα από τη λήψη προοπτικής, τη συλλογική δημιουργία νοήματος και τις συνδέσεις με το παρόν, με διαφορετικό τρόπο στην κάθε εμπειρία. Η συγκριτική παρουσίαση των δύο εμπειριών έχει ως στόχο να συμβάλει στην κατανόηση του τρόπου με τον οποίο τα χαρακτηριστικά κάθε εμπειρίας συνδέονται με την ανάπτυξη ιστορικού αναστοχασμού, ενώ παράλληλα υπογραμμίζει την αξία των διαλογικών και συνεργατικών προσεγγίσεων για την επαφή των μαθητών με τα πολιτιστικά αγαθά.
... To explore RQ2, we aimed to capture meaningfulness in interactive media as a structured evaluation tool for a SG prototype, so as to further support the development of applications that include specific meaning-making practices [29]. Unfortunately, there are few focused studies on how to structurally analyze meaningfulness in interactive media. ...
This contribution analyzes the impact of factors related to story structure, meaningfulness, and concentration in the design of Serious Games. To explore them, the authors carried out an experimental evaluation aiming to identify relevant aspects affecting the cognitive-emotional impact of immersive Virtual Reality (VR), specifically Educational Environmental Narrative (EEN) Games. The experiment was designed around three main research questions: if passive or active interaction is preferable for factual and spatial knowledge acquisition; whether meaningfulness is a relevant experience in a serious game (SG) context; and if concentration impacts knowledge acquisition and engagement also in VR educational games. The findings highlight that passive interaction should only be encouraged for factual knowledge acquisition, that meaningfulness is a relevant experience and should be included in serious game design, and, finally, that concentration is a factor that impacts the experience in immersive games. The authors discuss potential design paths to improve both factual and spatial knowledge acquisition, such as abstract concept-oriented design, concluding that SGs should contain game mechanics explicitly supporting players’ moments of reflection, and story structures explicitly aligned to educational facts.
Chatbots are a rapidly growing application area of conversational artificial intelligence. The aim of this paper is to explore the evaluation of user experience with chatbot applications in museums and galleries. An introduction to the principles of chatbots, their creation, and their testing is provided. Methods of user experience evaluation are explained, and the indicators that can be used to assess user experience with chatbots are listed. The history and classification of museum chatbots are briefly summarized. A systematic review according to the PRISMA methodology was conducted to map the latest trends in museum chatbot development and, namely, to answer two research questions: (1) What chatbots have been developed for the needs of museums and galleries? and (2) Was the visitor experience with these chatbots evaluated? A research gap in measuring visitor experience with chatbots was identified.
The future of educational automation in higher education is commonly seen as an inevitable trajectory and beyond the control of individual institutions or communities. Much research has focused on how such technologies can remove agency and reproduce inequalities through encoded biases. Indeed, many conceptualisations of educational automation are problematic, but less is known about what can be done to take more control over them. By moving away from critique alone, this paper seeks to demystify educational automation and develop a methodology that enables both institutions and staff to take greater control over the technologies in their institutional work. This methodology emerges from multiple research projects exploring digital education, automation, and educational futures and brings together the findings from these to find ways to establish ethical praxis in future forms of educational automation. This methodology and its attendant ethical praxis posit that critique must be used in tandem with creativity and activism to fully realise new and just educational futures.
This contribution describes an experiment carried out in 2020 with the goal of exploring factors affecting the cognitive-emotional impact of immersive VR Serious Games, and specifically of Educational Environmental Narrative Games. The experimental evaluation was aimed at better understanding three research questions: whether passive or active interaction is preferable for users' factual and spatial knowledge acquisition; whether meaningfulness could be considered a relevant experience in a serious game (SG) context; and whether distraction has an impact on knowledge acquisition and engagement in immersive VR educational games. Although the experiment involved only a limited number of participants, our results led to the identification of some relevant tendencies and factors which ought to be considered in the development of future SGs, and which reveal the need for further studies in HCI and game design.
Keywords: VR Serious Games, Cognition, Attention
Social interaction has been recognized as positively affecting learning, with dialogue–as a common form of social interaction–comprising an integral part of collaborative learning. Interactive storytelling is defined as a branching narrative in which users can experience different story lines with alternative endings, depending on the choices they make at various decision points of the story plot. In this research, we aim to harness the power of dialogic practices by incorporating dialogic activities in the decision points of interactive digital storytelling experiences set in a history education context. Our objective is to explore interactive storytelling as a collaborative learning experience for remote learners, as well as its effect on promoting historical empathy. As a preliminary validation of this concept, we recorded the perspective of 14 educators, who supported the value of the specific conceptual design. Then, we recruited 15 adolescents who participated in our main study in 6 groups. They were called to experience collaboratively an interactive storytelling experience set in the Athens Ancient Agora (Market) wherein we used the story decision/branching points as incentives for dialogue. Our results suggest that this experience design can indeed support small groups of remote users, in-line with special circumstances like those of the COVID-19 pandemic, and confirm the efficacy of the approach to establish engagement and promote affect and reflection on historical content. Our contribution thus lies in proposing and validating the application of interactive digital storytelling as a dialogue-based collaborative learning experience for the education of history.
Digital archaeology is both a pervasive practice and a unique subdiscipline within archaeology. The diverse digital methods and tools employed by archaeologists have led to a proliferation of innovative practice that has fundamentally reconfigured the discipline. Rather than reviewing specific technologies, this review situates digital archaeology within broader theoretical debates regarding craft and embodiment; materiality; the uncanny; and ethics, politics, and accessibility. A future digital archaeology must move beyond skeuomorphic submission and replication of previous structural inequalities to foment new archaeological imaginaries. Expected final online publication date for the Annual Review of Anthropology Volume 51 is October 2022.
Empirical studies increasingly testify to the capacity for archaeological and cultural heritage sites to engender wonder, transformation, attachment, and community bonding among diverse individuals. Following political theorist Jane Bennett, these sites have the power to ‘enchant’ and, in so doing, they are seedbeds of human generosity, ethical mindfulness, and care for the world at large. However, the means by which such enchantment is created, and the extent to which these intimate encounters with the prehistoric or historic record can be deliberately crafted, are little understood. Worsening the predicament, professional practices commonly thwart the potential for archaeology to provoke ethical action amongst humans. Here, I propose a multi-stranded conceptual model for generating enchantment with the archaeological record across both professional audiences and broader publics. With reference to the European Commission-funded EMOTIVE Project, I articulate one particular strand of this model: facilitated dialogue. Alongside exploring the role of digital culture in its advancement, I argue that an enchantment-led approach is imperative for achieving a truly socially-beneficial archaeological discipline.
Conference Paper
User experience (UX) has become an important aspect in the evaluation of interactive systems. In parallel, conversational interfaces have been increasingly used in many work and everyday settings. Although there have been various methods developed to evaluate conversational interfaces, there has been a lack of methods specifically focusing on evaluating user experience. This study reviews the six main questionnaires for evaluating conversational systems in order to assess the potential suitability of these questionnaires to measure various UX dimensions. We found that (i) four questionnaires included assessment items, in varying extents, to measure hedonic, aesthetic and pragmatic dimensions of UX; (ii) two questionnaires assessed affect, and one assessed frustration dimension; and, (iii) enchantment, playfulness and motivation dimensions have not been covered sufficiently by any questionnaires. We recommend using multiple questionnaires to obtain a more complete measurement of user experience or improve the assessment of a particular UX dimension.
Conference Paper
Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways?
Conference Paper
Many conversational agents (CAs) are developed to answer users' questions in a specialized domain. In everyday use of CAs, user experience may extend beyond satisfying information needs to the enjoyment of conversations with CAs, some of which represent playful interactions. By studying a field deployment of a Human Resource chatbot, we report on users' interest areas in conversational interactions to inform the development of CAs. Through the lens of statistical modeling, we also highlight rich signals in conversational interactions for inferring user satisfaction with the instrumental usage and playful interactions with the agent. These signals can be utilized to develop agents that adapt functionality and interaction styles. By contrasting these signals, we shed light on the varying functions of conversational interactions. We discuss design implications for CAs, and directions for developing adaptive agents based on users' conversational behaviors.
Conference Paper
Computational systems and objects are becoming increasingly closely integrated with our daily activities. Ubiquitous and pervasive computing first identified the emerging challenges of studying technology used on-the-move and in widely varied contexts. With IoT, previously sporadic experiences are interconnected across time and space in numerous and complex ways. This increasing complexity has multiplied the challenges facing those who study human experience to inform design. This paper describes the results of a study that used a chatbot or 'Ethnobot' to gather ethnographic data, and considers the opportunities and challenges in collecting this data in the absence of a human ethnographer. This study involved 13 participants gathering information about their experiences at the Royal Highland Show. We demonstrate the effectiveness of the Ethnobot in this setting, discuss the benefits and drawbacks of chatbots as a tool for ethnographic data collection, and conclude with recommendations for the design of chatbots for this purpose.
The first book to be published on the work of their partnership (in 2001), Design Noir is the essential primary source for understanding the theoretical and conceptual underpinnings of Dunne & Raby's work. Consisting of three elements (a 'manifesto' on the possibilities of designing with and for the 'secret life' of electronic objects; notes for an embryonic network of critical designers; and, most famously, the presentation of the Placebo Project, a prototype for a critical design poetics enacted around electronic furniture-objects), Design Noir offers an in-depth exploration of one of the most seminal design projects of the last two decades, one that arguably initiated speculating through design in its contemporary forms. By detailing the logic and character of the objects that were constructed, the involvement of users with these objects over time, and the creation of new kinds of spatially and temporally distributed moments of critique and engagement with things, Design Noir presents the case study of the Placebo Project as a far more complex and subtler project than is often thought. As a bold and in many ways unprecedented experiment in design writing and book design, Design Noir is itself an instance of the speculative, propositional design it expounds.
An Introduction to Twitter Bots with Tracery. This lesson explains how to create simple Twitter bots using Tracery and the Cheap Bots Done Quick service. Tracery exists in multiple languages and can be integrated into websites, games, and bots.
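The grammar-based generation that Tracery popularized can be sketched in a few lines. The following is a minimal, illustrative Python expander, not the Tracery library itself: the grammar, its symbol names, and the example sentences are hypothetical, and real Tracery additionally supports modifiers, actions, and saved data.

```python
import random

# Hypothetical Tracery-style grammar: each rule may reference other
# symbols by wrapping their names in '#' characters.
GRAMMAR = {
    "origin": ["The #adjective# #artifact# was found at #site#."],
    "adjective": ["enigmatic", "weathered", "painted"],
    "artifact": ["figurine", "obsidian blade", "clay seal"],
    "site": ["Catalhoyuk", "the Athens Agora"],
}

def expand(symbol, grammar, rng=random):
    """Pick one rule for `symbol` and recursively expand #references#."""
    rule = rng.choice(grammar[symbol])
    out, i = [], 0
    while i < len(rule):
        if rule[i] == "#":
            j = rule.index("#", i + 1)          # find the closing '#'
            out.append(expand(rule[i + 1:j], grammar, rng))
            i = j + 1
        else:
            out.append(rule[i])
            i += 1
    return "".join(out)

print(expand("origin", GRAMMAR))
```

A bot built this way simply posts the result of `expand("origin", ...)` on a schedule, which is essentially what the Cheap Bots Done Quick service automates for Tracery grammars.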
Conference Paper
Chatbots have rapidly become a mainstay in software development. A range of chatbots contribute regularly to the creation of actual production software. It is somewhat difficult, however, to precisely delineate hype from reality. Questions arise as to what distinguishes a chatbot from an ordinary software tool, what might be desirable properties of chatbots, and where their future may lie. This position paper introduces a starting framework through which we examine the current state of chatbots and identify directions for future work.
Visitors come to museums for many reasons, including to learn something new about our world, not specifically to have an emotional response. Visitors unprepared for personal experiences can manifest their confusion in a multitude of ways. Anticipating such reactions, museums must engage in dialogue, to help visitors process emotions and ultimately allow them to reach a place of equilibrium. One of the Museum of Tolerance’s approaches to safety and to creating responsible conversations is a framework developed for both understanding and managing key issues that arise when facilitating challenging conversations. Used in training, this Five Layers of Taking Care reminds us of the many interests and needs of participants and stakeholders involved in a given conversation and the responsibility that museum educators have to approach them with compassion, mindfulness, and skilled responses. The five layers are: the guide, the questioner, the group, the museum, and the world.