Different Chatbots for Different Purposes: Towards a
Typology of Chatbots to Understand Interaction Design
Asbjørn Følstad1, Marita Skjuve1, Petter Bae Brandtzaeg1,2
1 SINTEF, Oslo, Norway
2 University of Oslo, Department of Media and Communication, Oslo, Norway
asf@sintef.no
Abstract. Chatbots are emerging as interactive systems. However, we lack
knowledge on how to classify chatbots and how such classification can be
brought to bear in analysis of chatbot interaction design. In this workshop pa-
per, we propose a typology of chatbots to support such classification and analy-
sis. The typology dimensions address key characteristics that differentiate cur-
rent chatbots: the duration of the user's relation with the chatbot (short-term and
long-term), and the locus of control for user's interaction with the chatbot (user-
driven and chatbot-driven). To explore the usefulness of the typology, we pre-
sent four example chatbot purposes for which the typology may support analy-
sis of high-level chatbot interaction design. Furthermore, we analyse a sample
of 57 chatbots according to the typology dimensions. The relevance and appli-
cation of the typology for developers and service providers are discussed.
Keywords: Chatbots, typology, interaction design
1 Introduction
There is great variation in how chatbots are implemented. From a user-centred design
perspective, the variation in high-level approaches to interaction design is particularly
interesting. One source of variation concerns the level of control bestowed on the
chatbot. While some chatbots are designed to resemble Victorian servants, only aiming to satisfy their masters' requests, others are designed to persuade their users and lead them towards a particular goal. Another source of variation concerns the duration of
the relation with the chatbot. While some chatbots target brief one-off encounters,
others aim to establish and maintain long-term relations with their users.
Choice of high-level approach to chatbot interaction design is important, as it
needs to fit the users' needs and desires in a given use-case and also reflect the
strengths and limitations of the underlying technology on which the chatbot depends.
As an illustration of the importance of these choices, consider two well-known
chatbots: Woebot (https://woebot.io), a self-help chatbot where users learn to cope
with mental health issues, and Google Assistant (https://assistant.google.com), a per-
sonal assistant helping users with tasks such as planning, search, and controlling
smart home devices. Whereas Woebot takes the user through a long-term program
consisting of brief daily interactions where much of the user input is predefined, Google Assistant awaits the user's requests and seeks to serve these with minimal requirements on the user as to how or what the input should be.

(This is the authors' version of a paper with reference: Følstad, A., Skjuve, M., Brandtzaeg, P.B. (2019) Different Chatbots for Different Purposes: Towards a Typology of Chatbots to Understand Interaction Design. In: Bodrunova, S. et al. (eds) Internet Science. INSCI 2018. Lecture Notes in Computer Science, vol. 11551. Springer, Cham. The version of record is available at: https://doi.org/10.1007/978-3-030-17705-8_13.)
While different chatbot purposes clearly require different overall approaches to in-
teraction design, there is little guidance to be found in the literature on how to classify
chatbots and how to analyse interaction design with regard to different chatbot types.
The contribution of this paper is to propose a typology of chatbots, intended as a
first step towards a framework that enables chatbot classification and provides better
understanding of chatbot interaction design. To exemplify one possible use of the
typology to support interaction design, we demonstrate how the proposed typology
may be used to guide analysis of high-level interaction design in four common chat-
bot purposes: customer service, personal assistants, content curation, and coaching.
We also apply the typology to classify a set of chatbots of some current prominence.
The paper is structured as follows. First we present a brief overview of relevant
background on chatbots, chatbot interaction design, and typologies. We then propose a
typology of chatbots, show example uses of the typology to support analysis of high-
level interaction design, and present a study where the typology is applied for classify-
ing current chatbots. Finally, we discuss the typology and propose future work.
2 Background
2.1 Chatbots and chatbot interaction design
Chatbots are conversational agents that provide users with access to data and services through natural language dialogue [7]. While the term chatbot is typically applied to text-based interaction, it may also encompass voice-based conversational agents such as Apple's Siri and Amazon's Alexa. Chatbots are used for a range of application areas such as customer support [12], health [6], and education [8], in addition to marketing, entertainment, and general assistance with simple tasks.
While conversational user interfaces have been an object of research and develop-
ment since the sixties [11], the literature comprehensively treating how to design for
chatbots is somewhat limited. However, major tech companies have provided guidelines
on conversational interaction design, such as Google's guide to conversation design
(https://developers.google.com/actions/design/), IBM's resources on conversational UX
design (http://conversational-ux.mybluemix.net/design/conversational-ux/), and Ama-
zon's design guide for Alexa (https://developer.amazon.com/designing-for-voice/).
Material on conversational design is also found in developer and designer blogs, and
in some practitioner-oriented textbooks on conversational design [e.g. 9].
2.2 Typologies
Typologies are much used for classification purposes, in particular within the social sciences [1]. Typologies can support analysis and design of information systems as they facilitate learning across instances, for example as transfer of knowledge between instances of the same type [5].
Collier et al. [4] provided a three-step template for typology development. First,
the general concept is outlined. Second, key dimensions capturing salient variation in
the concept are identified. Third, the dimensions are cross tabulated, and each type
within the cross tabulation is described.
Within a typology, the classes of a dimension should be collectively exhaustive
and mutually exclusive. That is, the typology should include all possible cases and
each case should fit exclusively within only one type.
The current literature provides little support for designers and developers in distinguishing between different chatbot types, and even less on different approaches to analysing chatbot interaction design in correspondence with such types. IBM's research group on conversational UX design suggests differentiating between four interactional styles in conversational systems: system-centric, content-centric, visual-centric, and conversation-centric (https://researcher.watson.ibm.com/researcher/view_group.php?id=8426). Chen et al. [3] distinguished between task-oriented and non-task-oriented dialogue systems, but did not detail how this bears on the interaction design of such systems.
3 Research objective
In response to the lack of support for classifying chatbots, in particular for the purpose of supporting interaction design, three objectives were explicated for the presented
work. First, to propose a chatbot typology in compliance with established criteria for
typology development [1]. Second, to explore how this typology could be helpful in
analyzing different high-level approaches to interaction design. Third, to review a
sample of notable chatbots to investigate the potential usefulness of the typology for
analysis and classification.
By meeting the research objectives, the presented work should be useful as a start-
ing point for future research on differentiating chatbots and approaches to chatbot
interaction design.
4 Research method
The research method consisted of a four-step process.
Step 1: First, a set of chatbots of some prominence was gathered. We took as a starting point the listings of recommendable chatbots from four relevant blogs and news websites (Chatbot Magazine, Wired, Forbes, and the Norwegian Din Side), as well as the Chatbottle Award 2017. We also included chatbots mentioned by two or more participants in a survey of chatbot users [2]. In total, 57 chatbots were included in the set.
Step 2: On the basis of reviewing this initial set of chatbots, dimensions differenti-
ating these were suggested in an explorative manner. Possible dimensions included,
for example, application domain (e.g. consumer goods, finance, games and entertain-
ment, health and fitness, media and publishers), purposes (e.g. marketing and ecom-
merce, news and factual media content, social chatter and connections, customer sup-
port, personal assistant), platform (e.g. Facebook Messenger, Slack, Skype, Kik), or
user group (e.g. children, youth, professional workers, elderly). However, the dimen-
sions seemingly most promising were more generic, characterizing the intended dura-
tion of the relation and the locus of control for the dialogue. The typology was then
detailed, following the recommendations of Collier et al. [4].
Step 3: High-level interaction designs for four chatbot purposes were analysed with
a starting point in the proposed typology. These purposes were intended to reflect key
areas of interest: customer service, personal assistants, content curation, and coaching.
Step 4: The initial set of chatbots was coded in accordance with the typology. The
typology was critically discussed based on the four steps of the research method.
5 Chatbot typology
The initial set of chatbots, identified as a basis for establishing the typology, belonged to domains such as consumer goods, health, finance, media, food and beverage, travel, social, and general utilities. The chatbots served purposes such as social connections and
chatter, customer support, marketing and ecommerce, entertainment, news and factual
content, and personal assistant.
Within this broad variation, we noted that the chatbot interaction designs could be structured according to two high-level dimensions. We refer to these as Locus of Control and Duration of Relation. These dimensions comply with key requirements
for typology classification [4], where the types should be mutually exclusive while
covering the area of interest in a comprehensive manner. In the following we briefly
describe these two dimensions, before we detail the resulting four chatbot types.
5.1 Dimension 1: Locus of Control
Dialogue between humans is typically characterized by reciprocity, where the dialogue partners are expected to drive the dialogue in relatively equal measure. This is in contrast to chatbot dialogue, where different chatbots display markedly different approaches to which of the dialogue partners is given the role of leading the dialogue. In particular, we distinguish between chatbot-driven dialogue and user-driven dialogue.
Chatbot-driven dialogue. Some chatbots provide a highly predefined interaction design; that is, the interaction is to a high degree driven or controlled by the chatbot. This is typically seen in scripted chatbots whose scripts include only limited options for branching or alternative paths, or in chatbots providing their users with a small number of choices of standardized content, for example through the use of menus, tiles, or carousels. Examples of such chatbots include content curating chatbots such as the chatbot of the Wall Street Journal, chatbots serving as coaches or guides, such as Woebot, and chatbots for marketing, such as the chatbot for Kia Motors America.
User-driven dialogue. Some chatbots are set up to enable more flexibility in the possible input users may make, and to be more responsive to variations in user input. This is arguably more challenging, both technologically and in terms of the needed breadth and volume of content. The chatbot will need to identify the user's intent, both at the level of individual messages and overall for the interaction or parts of the interaction, and also be able to respond adequately to this intent. In consequence, for some user-driven chatbots, interaction sequences are relatively brief. This is, for example, often the case in customer support chatbots, such as Alibaba's chatbots for first-tier response, or personal assistants, such as Google Assistant.
However, chatbots that have social small talk as their main objective, so-called chatterbots, are examples of chatbots that are user-driven and that also may enable longer conversational sequences. Largely because social chatter may have an associative character, where answers are not easily classified as correct or not, chatbots for small talk may be set up with the sole purpose of keeping the conversation going. Well-known examples of chatterbots include Mitsuku and Cleverbot.
5.2 Dimension 2: Duration of Relation
Human-chatbot relations may, from the service provider point of view, be intended as
either short-term or long-term relations. A short-term relation is characterized by a
user engaging with the chatbot once, without user profile information being gathered
or stored. A long-term relation is characterized by the chatbot drawing on user profile
information for strengthening user experience across visits.
Short-term relation. Chatbots for short-term engagement are typically set up to provide users with one-off interactions, without an aim for a sustained relation. Chatbots for short-term interaction may be characteristic of chatbot development still being in an early phase. Many companies are still at a level of chatbot maturity at which they are just trying out chatbots without seeing this as a prioritized platform. Hence, the need to generate sustained relations is limited. This does not mean that chatbots for short-term relations are only used once by the same users; users may visit the chatbot several times. However, they are then treated as newcomers on each visit. Examples of short-term chatbots include content curating chatbots such as those run by CNN and Washington Post and marketing chatbots such as those of Burberry and Kayak.
Long-term relation. Chatbots for long-term engagement to a greater degree exploit the potential of user profile information to provide a personalized interaction. Examples of long-term engagements include content curation chatbots that offer recurring updates, chatbots for small talk that remember what you chatted about in previous interaction sessions, and fitness chatbots that provide your fitness or workout history and schedule. Chatbots situated in messaging platforms such as Facebook Messenger and Kik have a good starting point for establishing long-term relations. The messaging platform provides the chatbot service provider with access to user profile information as well as facilities for easy re-engagement.
Some long-term chatbots exploit the duration of the relation to gradually present a
rich set of content, such as a complex story or a game, or to gradually build skills and
capabilities in the user, such as in educational, fitness or therapy chatbots. Examples
of long-term chatbots include content providers with subscription functionality, such
as Poncho the weather cat and TechCrunch, educational and coaching chatbots such
as Atlas Fitness, Woebot, and St. Panda, and social chatbots such as Replika.
5.3 A two-dimensional typology
On the basis of the two typology dimensions, a two-dimensional typology may be established. The typology provides four mutually exclusive categories of chatbots, which arguably overlap well with some of the main chatbot purposes.
Four main chatbot purposes, and their main locations in the chatbot typology, are illustrated in Fig. 1. It should, however, be noted that our placing of the example chatbot purposes in the typology is not definite. That is, customer support chatbots may also be rigged for long-term duration; however, due to current limitations in such chatbots and lack of integration with customer relationship management (CRM) systems, such chatbots, as of now, typically are in the upper left-hand corner. Likewise, while content curation chatbots often reside in the upper right-hand corner, some aim at long-term relations with their users, for example in the form of daily updates.
The chatbot typology may be used for the analysis and presentation of chatbot interaction design, as is seen for the example chatbot purposes below.
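To make the cross-tabulation concrete for developers, the two dimensions can also be expressed as a small data model. The following Python sketch is illustrative only and not part of the original paper; the class names and the two example classifications merely mirror the dimensions and chatbots discussed above.

# Illustrative sketch of the two-dimensional chatbot typology (not from the paper itself).
from dataclasses import dataclass
from enum import Enum


class LocusOfControl(Enum):
    USER_DRIVEN = "user-driven"
    CHATBOT_DRIVEN = "chatbot-driven"


class DurationOfRelation(Enum):
    SHORT_TERM = "short-term"
    LONG_TERM = "long-term"


@dataclass
class Chatbot:
    name: str
    purpose: str
    locus: LocusOfControl
    duration: DurationOfRelation

    def chatbot_type(self) -> str:
        # Each chatbot falls into exactly one of the four mutually exclusive types.
        return f"{self.locus.value}, {self.duration.value}"


# Example classifications, mirroring the placements discussed in the text.
examples = [
    Chatbot("Google Assistant", "personal assistant",
            LocusOfControl.USER_DRIVEN, DurationOfRelation.LONG_TERM),
    Chatbot("Woebot", "coaching",
            LocusOfControl.CHATBOT_DRIVEN, DurationOfRelation.LONG_TERM),
]

for bot in examples:
    print(bot.name, "->", bot.chatbot_type())

In such a model, the requirement that the types be collectively exhaustive and mutually exclusive is enforced by construction: every chatbot receives exactly one value on each dimension.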
6 Analysing interaction design on the basis of the typology
The proposed typology was used as a basis for identifying high-level approaches to
interaction design for chatbots reflecting the four chatbot types. We briefly present
these for four example chatbot purposes.
6.1 Chatbots for customer support
By customer support we mean the provision of help or advice to customers or clients,
provided by a company, government body, or non-profit organization. Customer sup-
port is typically user-driven, that is, the user engages with customer support with a
particular question or concern in mind. The role of the service provider is to identify the customer's root problem and provide possible solutions.
Depending on user and service context, interactions may be one-off engagements (e.g. in the case of general enquiries from a prospective customer) or part of a long-term engagement with an existing customer. Hence, chatbots for customer support typically may be classified as having a user-driven Locus of Control. Current chatbots for customer support typically have a short-term Duration of Relation. However, as CRM integrations improve, such chatbots may increasingly be used for building long-term engagement.

Fig. 1. A typology of chatbots with four example chatbot purposes located within the typology dimensions (Duration of Relation: short-term or long-term; Locus of Control: user-driven or chatbot-driven): customer support (user-driven, short-term), content curation (chatbot-driven, short-term), personal assistant (user-driven, long-term), and coach (chatbot-driven, long-term).
The user-driven character of customer support chatbots typically leads designers to make it easy and efficient for customers to enter their questions or concerns, often in the form of free text, which the chatbot uses as a basis for identifying topic and intent. The customer may then confirm or critique the response.

Fig. 2. High-level approach for interaction design in customer service chatbots (user-driven, short-term)

For example, in Alibaba's customer support chatbot (Anna), the first customer action is to enter the query. However, the chatbot also provides a short menu of frequently asked question categories.
The main drivers of the dialogue are the user questions, efficient chatbot responses, and an opportunity for the user to follow up, either to query for additional information or to provide feedback on the quality of the answer.
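As an illustration of this user-driven, short-term pattern, the following Python sketch (not from the paper; the intent labels, keywords, and canned answers are invented, and keyword matching stands in for a real intent classifier) shows the basic loop of free-text input, intent identification, a candidate answer, and a confirm-or-critique follow-up.

# Minimal sketch of a user-driven, short-term customer support dialogue turn.
# Intents, keywords, and answers are illustrative placeholders only.
from typing import Optional

FAQ = {
    "delivery": (["delivery", "shipping", "track"], "Your order ships within 2-3 days."),
    "returns": (["return", "refund"], "You can return items within 30 days."),
}

def identify_intent(message: str) -> Optional[str]:
    """Very rough stand-in for intent classification: keyword matching."""
    text = message.lower()
    for intent, (keywords, _) in FAQ.items():
        if any(word in text for word in keywords):
            return intent
    return None

def handle_turn(message: str) -> str:
    intent = identify_intent(message)
    if intent is None:
        # Fall back to a clarification request (or escalation to a human agent).
        return "Sorry, I did not quite get that. Could you rephrase your question?"
    answer = FAQ[intent][1]
    # Invite the user to confirm the answer or ask for more detail.
    return f"{answer} Did this answer your question (yes/no)?"

if __name__ == "__main__":
    print(handle_turn("How do I track my delivery?"))
    print(handle_turn("Tell me about gift cards"))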
6.2 Personal assistant chatbots
Personal assistant chatbots are chatbots designed to serve a user continuously, on the fly, in the user's daily tasks, such as looking up information, finding and presenting content (typically music or movies), or controlling the environment through internet-of-things applications (e.g. turning lights on or off).
Personal assistant chatbots are highly user-driven; that is, the chatbot may respond to a wide range of requests made by the user. The role of the personal assistant is to efficiently and effectively interpret and deliver. The personal assistant is further intended for long-term relations, with high levels of personalization. Personal assistant chatbots may therefore be classified as having a user-driven Locus of Control and a long-term Duration of Relation.
In response to the personal assistant's user-driven character, interaction design typically aims to make it easy and efficient for users to enter their questions or concerns, often in the form of free text, which the chatbot uses as a basis for identifying topic and intent. The user may then confirm or critique the response. This is much like current customer support chatbots, as discussed above.

Fig. 3. High-level approach for interaction design in personal assistant chatbots (user-driven, long-term)
However, in contrast to typical customer support chatbots, the personal assistant is
highly integrated in the personal digital universe of the user, often cross-platform.
Hence, the personal assistant may be called from a wide range of contexts within the
user's digital universe. When called, the aim of the chatbot is to efficiently lead the
user to the desired goal, a goal which is often reached outside the chatbot dialogue.
Hence, the chatbot may help the user achieve the goal with no other feedback than the goal being achieved (e.g. the light turning off, or a desired song starting to play). In cases of choice alternatives or uncertainty, the dialogue may be extended. However, the aim typically is to leave the chat dialogue as soon as the goal is achieved, or the path towards the goal is laid out.
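A minimal sketch of this pattern is given below. It is illustrative only and not from the paper; the device and content actions, trigger phrases, and parsing are invented stand-ins for real device or content APIs. The point it shows is that the request is fulfilled outside the dialogue, with a terse confirmation, and the dialogue is only extended when the request is unclear.

# Illustrative sketch of a user-driven, long-term personal assistant pattern:
# interpret a free-text request, fulfil it outside the dialogue, confirm briefly.
from typing import Callable, Dict

# Hypothetical smart-home/content actions; a real assistant would call device or service APIs.
def lights_off(room: str) -> str:
    return f"Turning off the lights in the {room}."

def play_music(query: str) -> str:
    return f"Playing '{query}'."

ACTIONS: Dict[str, Callable[[str], str]] = {
    "lights off": lights_off,
    "play": play_music,
}

def handle_request(utterance: str) -> str:
    """Map a free-text request to an action; extend the dialogue only if unsure."""
    text = utterance.lower()
    for trigger, action in ACTIONS.items():
        if trigger in text:
            argument = text.split(trigger, 1)[1].strip() or "default"
            return action(argument)  # brief confirmation, then leave the dialogue
    return "Which device or content did you mean?"  # clarification extends the dialogue

print(handle_request("Please turn the lights off living room"))
print(handle_request("play some jazz"))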
6.3 Content curation chatbots
A wide range of content curation chatbots exists in the market, providing access to news, entertainment, and useful information such as weather forecasts or flight information. Content curation chatbots are designed to serve as a point of access to a set of content, either owned by the service provider (e.g. CNN news content) or accessed by the service provider (e.g. weather forecasts). The chatbot hence needs to be set up so as to display and suggest available content to the user. In consequence, content curation chatbots typically have a chatbot-driven Locus of Control, where the user initiative is limited to accepting or rejecting content offers, or requesting specific content types, serving to filter the presented content selection.
Current content curation chatbots often address a one-off use case, where a user without a previous history engages with the chatbot. However, content curation chatbots increasingly invite users to form long-term relations with the chatbot as a regular-basis content provider. Hence, current content curation chatbots often have a short-term Duration of Relation, though this seems to change towards long-term relations as chatbots mature.

Fig. 4. High-level approach for interaction design in content curation chatbots (chatbot-driven, short-term)

As opposed to the user-driven chatbots seen for customer support and personal assistant chatbots, content curation chatbots actively guide users to recommended content rather than aiming for the user to freely choose and select. This is, in part, due to limitations in the dialogue interface, where browsing and search are less well supported than in regular web pages or apps. Hence, promoting and recommending relevant and interesting content is critical.
Content typically is promoted through menus or presented options, often including visuals to strengthen user experience and engagement.
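The chatbot-driven pattern can be sketched as a menu or carousel flow in which the bot proposes items and the user only accepts, skips, or filters. The Python below is an illustrative sketch only; the catalogue items and the reply commands ("yes", "skip", topic name) are invented for the example.

# Illustrative sketch of a chatbot-driven content curation flow:
# the bot proposes items; the user accepts, skips, or filters by topic.
CATALOGUE = [
    {"title": "Morning news brief", "topic": "news"},
    {"title": "Weekend weather outlook", "topic": "weather"},
    {"title": "Tech headlines", "topic": "news"},
]

def next_offer(topic_filter=None, skipped=()):
    """Pick the next item to promote, honouring an optional topic filter."""
    for item in CATALOGUE:
        if item["title"] in skipped:
            continue
        if topic_filter and item["topic"] != topic_filter:
            continue
        return item
    return None

def run_session(user_replies):
    """Drive the dialogue: the bot leads, the user only reacts ('yes', 'skip', or a topic)."""
    topic, skipped = None, []
    for reply in user_replies:
        offer = next_offer(topic, skipped)
        if offer is None:
            print("Bot: That is all I have for now.")
            return
        print(f"Bot: Would you like '{offer['title']}'?")
        if reply == "yes":
            print(f"Bot: Here is '{offer['title']}'.")
        elif reply == "skip":
            skipped.append(offer["title"])
        else:  # treat any other reply as a topic filter
            topic = reply

run_session(["skip", "weather", "yes"])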
6.4 Chatbots for coaching
An increasing number of chatbots appear that aim to serve as coaches or guides for users, to help out with a specific challenge or task over time, for example in education, therapy, or exercise.
Such coaching chatbots are designed to establish and maintain a long-term relation with the user, a relation which provides value to the user through, for example, learning new skills or mastering existing challenges. Examples of coaching chatbots are therapy chatbots such as Woebot, or guiding chatbots providing reminders and support to prospective students on their way towards college enrollment [10]. The chatbot needs to be able to take the user stepwise through a therapeutic or educational program, where the user increasingly gains the means necessary to learn the desired skill or master a specific challenge. Hence, coaching chatbots often have a chatbot-driven Locus of Control and a long-term Duration of Relation.
Coaching chatbots are characterized by taking the user through a predefined program in brief sessions on a recurrent basis, typically involving a few minutes of interaction every day. Each session typically builds on the previous one, with the aim of gradually increasing the user's knowledge or skill. The interactions within each of the sessions are scripted, where the users may choose between a small number of paths, depending on individual skill level or preference. Likewise, the order of the sessions may to some degree be reorganized to reflect the preferences or needs of the user. Also, some session elements may be recurring. For example, a therapy chatbot may have session elements that are triggered at different reported states in the user: a user reporting to feel down or depressed may trigger a specific session element addressing this reported state.

Fig. 5. High-level approach for interaction design in coaching chatbots (chatbot-driven, long-term)
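This chatbot-driven, long-term pattern can be sketched as a session program with scripted steps and state-triggered elements. The Python below is an illustrative sketch, not the design of Woebot or any specific coaching chatbot; the session names, mood labels, and triggers are invented.

# Illustrative sketch of a chatbot-driven coaching program:
# recurring brief sessions, scripted progression, and mood-triggered elements.
PROGRAM = [
    {"day": 1, "session": "Introduction to the program"},
    {"day": 2, "session": "Recognizing negative thoughts"},
    {"day": 3, "session": "Reframing exercise"},
]

# Session elements triggered by the user's reported state (hypothetical labels).
TRIGGERED_ELEMENTS = {
    "down": "Let's pause and try a short breathing exercise.",
    "ok": None,
}

def run_daily_session(day: int, reported_state: str, completed: set) -> str:
    """Deliver the scripted session for a given day, inserting triggered elements."""
    extra = TRIGGERED_ELEMENTS.get(reported_state)
    lines = []
    if extra:
        lines.append(extra)  # state-triggered element comes before the scripted content
    for step in PROGRAM:
        if step["day"] == day and step["session"] not in completed:
            completed.add(step["session"])  # sessions build on one another over time
            lines.append(f"Today's session: {step['session']}")
            break
    else:
        lines.append("You have completed today's program. See you tomorrow!")
    return "\n".join(lines)

completed_sessions = set()
print(run_daily_session(1, "ok", completed_sessions))
print(run_daily_session(2, "down", completed_sessions))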
7 Classifying a larger set of chatbots
To explore the usefulness of the proposed typology for analysis and classification
purposes, it was applied in an analysis of the 57 chatbots identified in the first step of
the presented study.
The basic functionality of each chatbot was explored through interaction by the first author. The chatbots were tried on the platforms where they are located, including Facebook Messenger (44), dedicated webpage (7), device (4), Slack (1), and a smartphone app (1). Each chatbot was then categorized in terms of Locus of Control (user-driven or chatbot-driven) and Duration of Relation (short-term or long-term).
The chatbots' distribution in terms of the typology is presented in Table 1.
Table 1. Distribution of the 57 chatbots included in the analysis, across the dimensions Duration of Relation (short-term or long-term) and Locus of Control (chatbot-driven or user-driven)

Short-term relation | User-driven: 8 chatbots (examples: DNB, customer support; Zo, chatter) | Chatbot-driven: 24 chatbots (examples: Whole Foods, marketing; CNN, news) | Sum: 32
Long-term relation | User-driven: 5 chatbots (examples: Google Assistant, assistant; Mitsuku, chatter) | Chatbot-driven: 20 chatbots (examples: BBC Politics, news; Atlas Fitness, health) | Sum: 25
Sum | User-driven: 13 | Chatbot-driven: 44 | Total: 57
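Once each chatbot is coded on both dimensions, the cross-tabulation in Table 1 can be produced mechanically. The snippet below is a small illustration using an invented subset of codings, not the authors' full data set of 57 chatbots.

# Illustrative cross-tabulation of coded chatbots (invented subset, not the paper's data).
from collections import Counter

codings = [
    ("user-driven", "short-term"),     # e.g. a customer support chatbot
    ("chatbot-driven", "short-term"),  # e.g. a marketing or news chatbot
    ("user-driven", "long-term"),      # e.g. a personal assistant
    ("chatbot-driven", "long-term"),   # e.g. a coaching chatbot
    ("chatbot-driven", "short-term"),
]

table = Counter(codings)
for (locus, duration), count in sorted(table.items()):
    print(f"{locus:15s} {duration:10s} {count}")

# Marginal sums per dimension, as in the 'Sum' row and column of Table 1.
print("user-driven total:", sum(c for (l, _), c in table.items() if l == "user-driven"))
print("short-term total:", sum(c for (_, d), c in table.items() if d == "short-term"))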
Note that the sample of chatbots in no way purports to be a representative sample of all available chatbots. The sample is only intended as a set of chatbots that have received some note. The analysis nevertheless provides some interesting insights.
First, chatbot-driven chatbots are prominent among the chatbots that have received some note. This may be seen as a reflection of the relative immaturity of underlying technologies and content, making it challenging for chatbot providers to allow the user to take more control of the interaction.
Second, short-term relations are common. Moreover, quite a few of the long-term relation chatbots (10 of 16) merely provided subscriptions to notifications rather than building and extending the relationship with the user. This hints that the opportunities for relationship building through chatbots are not yet fully exploited.
Third, providers within the same market may make different choices in terms of the duration of the relation users are expected to have with the chatbot. For example, within news and content provision, providers such as CNN and BBC make different choices with regard to whether they want users to form a long-term relation with the chatbot, e.g. in terms of subscriptions to daily briefs. This difference may likely be attributed to chatbots for content curation being a relatively new and immature market, where different actors use different strategies to try out engagement with chatbots.
8 Discussion
We have in this paper proposed a typology for chatbots and exemplified how the typology can be used as a basis for high-level analysis of, and guidance on, interaction design for chatbot purposes such as content curation, customer support, coaching, and personal assistance.
The chatbot typology has been shown to be exhaustive (that is, the typology dimensions could be used to categorize all analysed chatbots) and to have mutually exclusive types (that is, each analysed chatbot fitted only one type). The dimensions for classification were furthermore found to be sufficiently general and relevant to identify meaningful differences between chatbots, as seen in the analysis of high-level interaction design for the example chatbot purposes.
The typology dimensions further seem to provide a novel take on chatbot classification compared to earlier attempts, such as the distinction between four kinds of interactional styles in conversational systems discussed by IBM's research group on conversational UX design, and the dichotomy proposed by Chen et al. [3] between task-oriented and non-task-oriented conversational systems. Regarding the former classification, the interaction styles presented may to some degree be seen as a consequence of chatbot type. For example, the visual-centric interaction style is frequently seen in chatbots classified as chatbot-driven. Regarding the latter dichotomy, it may be noted that the line between task-oriented and non-task-oriented chatbots may be blurry, as seen from the availability of chatbots that support both task-oriented and non-task-oriented features, such as marketing chatbots that are intended to engage experientially while at the same time aiming to promote a product or service, or news and factual chatbots supporting both pleasant exploration and task-oriented fact-finding and updates.
The presented work clearly illustrates that chatbots are still an emerging technology which service providers have mainly taken up for exploratory use. This is for
example seen in the way different service providers in the same market set up their
chatbots differently in terms of Duration of Relation. As chatbot technology, chatbot
content, and market uptake of chatbots mature, it may be expected that the distribution of chatbots across the typology dimensions will change, possibly towards longer durations of the user relation and more user-driven chatbots. As such, the proposed
typology may serve to help service providers set goals for their chatbot developments,
for example where service providers could set up goals for more long-term user en-
gagement through chatbots and exploit the assumed potential of chatbots as a rela-
tionship-building technology. Such goal-setting will have implications for chatbot
interaction design, as well as for requirements regarding the underlying technology
and content available through the chatbot.
Chatbots are only emerging as an interactive technology, and their potential uses
and purposes are only beginning to be seen. We hope the presented work may serve
as a step towards strengthening the usefulness and user experience of chatbots.
Acknowledgement
This work was supported by the Research Council of Norway grant no. 270940.
References
1. Bailey, K. D.: Typologies and taxonomies: an introduction to classification techniques
(Vol. 102). Sage, Thousand Oaks, CA (1994).
2. Brandtzaeg, P. B., Følstad, A.: Why people use chatbots. In Proceedings of the Interna-
tional Conference on Internet Science. LNCS, vol. 10673, pp. 377-392. Springer, Cham,
Switzerland (2017). doi: 10.1007/978-3-319-70284-1_30
3. Chen, H., Liu, X., Yin, D., Tang, J.: A survey on dialogue systems: Recent advances and
new frontiers. ACM SIGKDD Explorations Newsletter 19(2), 25-35 (2017). doi:
10.1145/3166054.3166058
4. Collier, D., LaPorte, J., Seawright, J.: Putting typologies to work: Concept formation,
measurement, and analytic rigor. Political Research Quarterly 65(1), 217-232 (2012). doi:
10.1177/1065912912437162
5. Eide, A. W., Pickering, J. B., Yasseri, T., Bravos, G., Følstad, A., Engen, V., Tsvetkova,
M., Meyer, E. T., Walland, P., Lüders, M.: Human-machine networks: Towards a typolo-
gy and profiling framework. In Proceedings of International Conference on Human-
Computer Interaction, pp. 11-22. Springer, Cham, Switzerland (2016). doi: 10.1007/978-3-
319-39510-4_2
6. Fitzpatrick, K. K., Darcy, A., Vierhile, M.: Delivering cognitive behavior therapy to young
adults with symptoms of depression and anxiety using a fully automated conversational
agent (Woebot): a randomized controlled trial. JMIR Mental Health 4(2) (2017). doi:
10.2196/mental.7785
7. Følstad, A., Brandtzæg, P. B.: Chatbots and the new world of HCI. Interactions 24(4), 38-
42 (2017). doi: 10.1145/3085558
8. Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., Sherlock, Z.: Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Computers in Human Behavior 75, 461-468 (2017). doi: 10.1016/j.chb.2017.05.045
9. Hall, E.: Conversational design. A Book Apart, New York, NY (2018).
10. Page, L., Gehlbach, H.: How an artificially intelligent virtual assistant helps students navigate the road to college. AERA Open 3(4) (2017). doi: 10.1177/2332858417749220
11. Weizenbaum, J.: ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM 9(1), 36-45 (1966). doi: 10.1145/365153.365168
12. Xu, A., Liu, Z., Guo, Y., Sinha, V., Akkiraju, R.: A new chatbot for customer service on
social media. In Proceedings of CHI' 17, pp. 3506-3510. ACM, New York, NY (2017).
doi: 10.1145/3025453.3025496