Why people use chatbots
Petter Bae Brandtzaeg1 and Asbjørn Følstad 2
1, 2 SINTEF,
Forskningsveien 1, 0314 Oslo, Norway
1petter.b.brandtzag@sintef.no
Abstract. There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people's motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is "productivity"; chatbots help users to obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.
Keywords: Chatbots, motivations, uses and gratifications.
1 Introduction
Chatbots represent a potential shift in how people interact with data and services online.
While there is currently a surge of interest in chatbot design and development, we lack
knowledge about why people use chatbots.
Chatbots are machine agents that serve as natural language user interfaces for data
and service providers [1]. Currently, chatbots are typically designed and developed for
mobile messaging applications [2].
The current interest in chatbots is spurred by recent developments in artificial intelligence (AI) and machine learning. Major Internet companies such as Google, Facebook, and Microsoft see chatbots as the next popular technology; Microsoft CEO Satya Nadella said, "Chatbots are the new apps" [3]. In spring 2016, Facebook and Microsoft provided resources for creating chatbots to be integrated into their respective messaging platforms, Messenger and Skype. One year later, more than 30,000 chatbots had been launched on Facebook Messenger. Other messaging platforms have also seen a substantial increase in chatbots, including Slack, Kik, and Viber. Chatbots are seen as a means for direct user or customer engagement through text messaging for customer
service or marketing purposes [4], bypassing the need for special-purpose apps or
webpages.
However, it is not simple to transition from established user interfaces, such as web
pages and apps, to chatbots as a common means of interacting with data and services.
For example, there is a lack of knowledge regarding how customers react to the substi-
tution of human customer service personnel with chatbots or how the presence of chat-
bots in online social networks affects multi-party conversations and the spread of in-
formation [5].
Since the initial optimism regarding the launch of chatbots by Microsoft and Face-
book, a number of commentators have noted that users’ adoption of available chatbots
is less substantial than hoped [6]. This could be explained by the fact that most available
chatbots fail to fill users’ needs due to unclear purposes, nonsensical responses, or in-
sufficient usability [7].
Designing a new interactive technology such as a chatbot requires in-depth knowledge of users' motivations for using the technology, which allows the designer to overcome challenges regarding the adoption of the technology [8]. More general knowledge is also needed to understand human–chatbot relationships. To our knowledge, no studies to date have investigated users' motivations for interacting with chatbots.
As a first step towards bridging this knowledge gap, we perform a study addressing
the following research question:
RQ: Why do people use chatbots?
The study contributes new knowledge regarding individuals' motivations for using chatbots based on an online questionnaire completed by US chatbot users. The questionnaire includes an open question regarding the participants' main motivations for using chatbots. The findings obtained using this approach can inform future designs intended to improve human–chatbot interactions.
Before we present the findings, we will first describe the relevant background for
our study. We then present the method and findings of the study. In the discussion, we
address the implications of the study’s findings for the design and development of chat-
bots.
2 Background
2.1 Chatbots and natural language user interfaces
Although the last few years have seen increased interest in chatbots, natural language
interfaces are not new in the fields of computer science and Internet studies. In the
1960s, Weizenbaum published an innovative study on natural language interaction with
ELIZA, a computer program developed to mimic the responses of a psychotherapist in
a therapy session [9].
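To make the mechanism behind such systems concrete, the following minimal Python sketch illustrates the kind of keyword-matching and reflection rules that ELIZA-style programs rely on. The patterns, templates, and example input are invented for illustration and are not Weizenbaum's original script.

import re

# Illustrative keyword rules in the spirit of ELIZA: match a pattern in the
# user's statement and reflect part of it back as a therapist-like question.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT_RESPONSE = "Please tell me more."

def respond(message):
    """Return a canned reflection of the user's statement, or a default prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

print(respond("I am worried about my exam"))
# -> Why do you say you are worried about my exam?

Real ELIZA scripts also swap pronouns (e.g., "my" to "your") and rank keywords, but even this reduced form shows why the illusion of understanding breaks down once the input falls outside the scripted patterns.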
Dale discusses "the return of chatbots" in recent years and how the current interest in this technology is rooted in previous work on natural language user interfaces [1]. In particular, Dale notes the impact of the Loebner Prize, which has driven natural
language user interfaces to be more human-like since 1991, and Pandorabots, a chatbot
platform that includes more than 200,000 bot developers as of 2016. The best-known
chatbots are Cleverbot, which was publicly launched in 1997; A.L.I.C.E., the winner of
the Loebner Prize in 2000, 2001, and 2004; and Mitsuku, the winner of the Loebner
Prize in 2013 and 2016.
The current interest in chatbots is likely related to substantial advances in computing
technology and the wide adoption of mobile messaging applications.
First, recent advances in artificial intelligence and machine learning have spurred the current interest in chatbots. These advances promise vast improvements in natural language interpretation and prediction capabilities, including improvements in machine translation [10]. In addition, progress in conversational modeling suggests that predictions based on recurrent neural networks and sequence-to-sequence models will outperform the rule-based conversational modeling typically applied to traditional chatbots [11].
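To make the contrast with rule-based scripts concrete, the sketch below outlines a toy sequence-to-sequence model of the kind referenced above [11], written here in PyTorch. The vocabulary, dimensions, and greedy decoding loop are assumptions made for illustration, and the model is untrained, so it only shows the encoder–decoder architecture rather than the conversational quality reported in the literature.

import torch
import torch.nn as nn

# Toy vocabulary; a real system would use a large learned vocabulary.
vocab = ["<pad>", "<sos>", "<eos>", "hi", "how", "are", "you", "fine", "thanks"]
stoi = {w: i for i, w in enumerate(vocab)}

class ToySeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, max_len=8):
        # Encode the user's message into a hidden state.
        _, hidden = self.encoder(self.embed(src))
        # Decode greedily, one token at a time, starting from <sos>.
        token = torch.tensor([[stoi["<sos>"]]])
        reply = []
        for _ in range(max_len):
            output, hidden = self.decoder(self.embed(token), hidden)
            token = self.out(output[:, -1]).argmax(dim=-1, keepdim=True)
            if token.item() == stoi["<eos>"]:
                break
            reply.append(token.item())
        return reply

model = ToySeq2Seq(len(vocab))
message = torch.tensor([[stoi["hi"], stoi["how"], stoi["are"], stoi["you"]]])
print([vocab[i] for i in model(message)])  # untrained, so the reply is arbitrary

Unlike the hand-written rules of traditional chatbots, the mapping from input to response here is learned entirely from conversation data once the model is trained.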
Second, the increased adoption of mobile Internet and messaging platforms has driven the adoption of chatbots [2]. Through mobile messaging platforms, chatbots are
able to reach a large part of the online population. According to Business Insider (2016),
about 3 billion people worldwide use mobile messaging applications such as Facebook
Messenger, WeChat, Skype, Telegram, Slack, Viber, and Kik. For many users of these
services, natural language is expected in online interactions, making automated mar-
keting and customer service using natural language a promising business opportunity.
2.2 Chatbot applications
Chatbots may serve a number of purposes, such as customer service, social and emotional support, information, entertainment, and connecting the user to other people or machines.
The great variety of chatbots is exemplified in the BotList (https://botlist.co/), a website
on which people can find chatbots for a broad range of purposes available on multiple
messaging platforms.
In particular, chatbots are seen as a promising alternative to traditional customer ser-
vice [4]. For customers, conversations with these bots may feel more natural and effi-
cient than interacting with a mobile app as they can obtain answers to questions, receive
suggestions for purchases, place orders, and keep updated on shipping through a natural
language interface.
A range of chatbots serve as virtual assistants or stewards, helping users to perform
specific tasks. The Indian chatbot Nikibot can help users with, for example, booking a
taxi and ordering food for delivery [12]. In addition, Do Not Pay, based in the UK,
helps users file complaints when they receive parking tickets and Babylon Health’s
chatbot interface provides medical advice. In such scenarios, chatbots may be prefera-
ble to other means of assistance, such as a phone call or online search, due to their
convenience and immediacy.
Chatbots can also help people explore online content or services. For example, Microsoft launched Heston Bot to help users explore food and cooking opportunities. Additionally, the global fashion and clothing company H&M launched a chatbot to provide personal fashion advice based on photos uploaded by users [13].
"Smalltalk" orientated chatbots such as Mitsuku and Jessie Humani can also fulfill
people’s need for entertainment and social interaction.
2.3 User behavior and experience
Although little is known about what motivates people to use chatbots, there is a substantial body of research on users' behavior and experience with chatbots.
Users' interactions with chatbots often mimic interactions between humans, but there are differences. In a study comparing human–human interactions to human–chatbot interactions, Hill et al. found that human–chatbot interactions tend to last longer than human–human interactions between strangers and involve shorter messages, less complicated vocabulary, and more profanity [14].
Corti and Gillespie investigated whether users seek to repair misunderstandings in
conversations with natural language user interfaces, which is important in any type of
dialogue [15]. They found that, for chatbots perceived as human, users made more of
an effort to repair misunderstandings than did users that perceived the chatbots as au-
tomated [15].
Several studies have investigated users' experiences with chatbots. For example,
Holtgraves et al. explored how users perceive chatbots’ personalities [16], and De An-
geli et al. studied how the implied anthropomorphism of chatbots may elicit negative
responses among users [17]. Comparing the conversational ability of a chatbot based
on the original ELIZA program to newer chatbots, a study [10] found that the partici-
pants were able to systematically differentiate the conversational quality of different
chatbots. In addition, different demographic groups tended to rate the chatbots’ conver-
sational quality differently; specifically, younger users and female users rated the con-
versations more favorably [10].
It may be important for chatbots to engage emotionally with users. A recent study
by Xu et al. on customer service chatbots found that about 40% of user requests to
customer service are emotional rather than seeking specific information [3]. Without
the ability to relate to these customers emotionally, a customer service chatbot risks
failure.
2.4 Uses and gratifications – a theoretical framework
As a theoretical basis for understanding people's motivation for using chatbots, we apply the well-established uses and gratifications theory (U&G) [18]. U&G explains why
and how people use specific media to fulfill specific needs; the specific use of a medium
depends on the expected and experienced gratification it will provide. U&G has typi-
cally been oriented towards consumers’ use of media that is not related to work [19].
The theory assumes that the user is goal-driven in his or her selection and use of a
particular medium based on social and psychological needs or gratifications.
In a complex media landscape, or so-called high-choice media environments, where users can choose to achieve their goals through a number of different media, such as webpages, apps, and chatbots, U&G assumes that the user takes an active stance towards which medium best suits the purpose. Rubin describes U&G as a highly compatible approach to understanding the uses and effects of electronic media in the current media landscape [18].
The framework classifies users based on the gratification (or motivation) received
from a particular medium, and it assumes that media users actively choose a medium
depending on what they see as fit to satisfy a particular need [20]. Specifically, "uses" and "gratifications" refer to the motivation for use of a specific medium and the satisfaction people gain from use [21]. A wide range of gratifications has been suggested
as motivators of media use [22], such as the need for information, entertainment, social
interaction, and self-expression.
While the fragmentation of the media landscape due to the adoption of mobile Internet may change users' motivations for choosing certain media, Sundar and Limperos
concluded that the gratifications for use of Internet technologies are similar to the grat-
ifications for use of other media [22]. However, there are substantial variations between
media contexts, which means that it is important to identify gratifications that are rele-
vant to the context of the medium of interest. Identifying the gratifications that are im-
portant to chatbot users will help guide the development and design of new and existing
chatbots.
Chatbots are a new technology and, as such, are mostly used by innovators and early adopters. These users might have different needs and gratifications than the rest of the population. The theory of diffusion of innovations explains how such innovations are adopted by a population. One insight that might be useful in combination with U&G theory is an understanding of the various user needs and gratifications among the different user segments in the population suggested by Rogers: innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%), and laggards (16%). Early adopters are usually more risk-oriented and curious about new technologies, while the early and late majorities and laggards are more conservative and risk-averse [23].
3 Method
3.1 Study design and materials
To explore why people use chatbots and reach a sufficiently broad sample of chatbot
users, a questionnaire was used. The questionnaire included 17 questions regarding
chatbot use, including motivations for and experiences with chatbots, and respondents’
demographics (age, gender, and state of residence).
In line with the exploratory aim of the study, the question addressing respondents’
motivations for chatbot use was open: "What is your main reason for using chatbots?" The participants were asked to answer this open question freely. The question was
adapted from a study by Brandtzæg and Heim [24] focusing on motivations for using
social networking sites, as it encouraged participants in that study to provide personal
descriptions of their motivations for using certain media.
Other key questions regarding chatbot use explored how often participants used chat-
bots, how long they had used chatbots, and the messaging platforms on which they used
chatbots.
3.2 Participant recruitment and filtering
To understand usersmotivations for chatbot usage, our target group in this study con-
sists of only chatbot users. However, due to the newness of chatbots, most people in
the mass market likely have not had any experience with chatbots, and many may not
even know what chatbots are. Furthermore, no statistics regarding chatbot usage are
available globally or for specific countries. Hence, recruiting participants for this study
was challenging, as we had to not only identify relevant participants but also filter out
non-relevant participants.
We decided to target chatbot users in the US as the technology companies that prioritize chatbots (Google, Facebook, Kik, and Slack) are all focused on the US market.
We also decided to target a relatively young user group (those aged 16–55 years). We
consider users within this age group to be more likely to be early adopters of chatbots
than older users (e.g. [23]) as the former more frequently use messaging applications.
Data were collected in April 2017 by Survata, an independent US-based research
company. Survata collects research samples by partnering with online publishers, which
allow visitors to take a Survata survey to unlock premium content (e.g., premium arti-
cles, e-books, and videos). To identify and avoid invalid survey responses, Survata’s
technology analyzes respondents’ response time, response pattern, and other metadata.
To identify relevant participants, the following screening question was applied: "Chatbots are automated online services that you interact with in text-based conversations, typically in instant messaging platforms such as Facebook Messenger, Kik, Slack, and Telegram. Have you used such chatbots?" Only those who responded positively to this question were allowed to take the survey.
As the screening question may not have been sufficient to filter out all non-chatbot
users, we also analyzed users’ responses to the open question. Specifically, we scruti-
nized the answers for indication that the participants discussed general use of messag-
ing platforms rather than interaction with chatbots. This process identified 155 of the
301 participants as non-chatbot users. We are confident that the remaining 146 partici-
pants are actually within the target group of chatbot users.
3.3 Data analysis
Data obtained from responses to the preset questions were analyzed through descriptive
statistics using the SPSS 24 statistical package.
Qualitative data regarding participants’ open answers underwent content analysis
based on the coding categories established through an initial thematic analysis. Content
analysis has proven to be useful for describing and making inferences about respond-
ents’ communications and patterns of usage as well as the consequences of communi-
cation [24].
The two authors collaboratively coded the open answer data in order to develop and
apply the categories of motivation. Participants’ responses to the open question could
include more than one such category. These categories were then used by one of the
authors to code the entire data set. In total, 16% of the responses were coded as address-
ing two or more themes. To ensure reliability, the other author reviewed the coding and
made suggestions when necessary. The suggestions were reviewed by both authors, and
then the final coding was performed. In total, 21% of the initial codes were updated.
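For readers who want to see how such coded open answers translate into the frequencies and percentages reported in the next section, the following minimal Python sketch mirrors the tallying step. The participant identifiers and code assignments shown are invented; only the counting logic reflects the procedure described above, including responses coded with more than one theme.

from collections import Counter

# Hypothetical coding result: each participant's open answer is assigned
# one or more motivation categories (labels follow Table 1).
coded_answers = {
    "P_a": ["productivity"],
    "P_b": ["entertainment"],
    "P_c": ["productivity", "entertainment"],  # multi-coded response
    "P_d": ["social/relational"],
}

category_counts = Counter(code for codes in coded_answers.values() for code in codes)
n = len(coded_answers)

for category, count in category_counts.most_common():
    print(f"{category}: {count} ({count / n:.0%} of participants)")

multi_coded = sum(1 for codes in coded_answers.values() if len(codes) >= 2)
print(f"Responses addressing two or more themes: {multi_coded / n:.0%}")

Because a response can carry several codes, the category frequencies in Table 1 can sum to more than the number of participants, which is why the percentages in Sections 4.1–4.4 are reported per participant rather than per code.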
4 Results
In total, 146 valid responses were gathered. Of these, 94 were written by females, and
52 were written by males. The mean age of the participants was 30 years (min = 16,
max = 55, SD = 9.2). As shown in Figure 1, the participants reported use of chatbots
on various platforms, with Facebook Messenger being the most common, in line with
the broad adoption of this platform in the US.
Figure 1. Proportion of participants reporting use of different messaging platforms for chatbot interactions (N = 146): Facebook Messenger 84%, Skype 55%, Kik 44%, Viber 15%, Slack 10%, Telegram 8%.
In general, the participants were fairly new to using chatbots; 64% reported using chat-
bots for two years or less. Thirty-four percent of the participants reported using Google
Assistant, a chatbot assistant available for select Android operating systems as well as
on the Allo messaging platform.
In the open question regarding participants' motivations for using chatbots, all participants provided an answer. The thoroughness of the answers varied substantially, with the longest answer being 390 characters and the shortest being 5 (mean = 76; SD = 66).
The thematic analysis of the answers revealed four main categories of motivation.
An overview of the categories is provided in Table 1. In the remainder of the results
section, we present the detailed findings for each category.
Table 1. Categories of motivation for chatbot use (N = 146). Note: 16% of the responses were coded as addressing two or more themes.

Productivity (frequency 100): The comment concerns the convenience of using chatbots (whether they are easy or fast to use). Participants typically report using them to obtain assistance or information.
Entertainment (frequency 29): The comment concerns the entertainment value of using chatbots (whether they are fun to use). Some report that they use chatbots when bored to kill time.
Social/relational (frequency 18): The comment concerns the use of chatbots for social or relational purposes. Typically, chatbots are seen as a personal, human means of interaction that may have social value. Some also use chatbots to strengthen social interactions with other people.
Novelty/curiosity (frequency 15): The comment concerns the use of chatbots out of curiosity or because they are a novelty. Often, the stated aim is to investigate chatbots' capabilities.
Other (frequency 12): The comment concerns motivations that do not fit in the above categories and are not sufficiently frequent to justify a separate category.
4.1 Chatbots for productivity
The majority of participants (68%) reported productivity to be the main reason for using chatbots. These participants highlighted the ease, speed, and convenience of using chatbots. They also noted that chatbots provide assistance and access to information.
Ease, speed, and convenience
Forty-two percent of participants reported ease of use, speed, and convenience as their
main reasons for using chatbots. These motivations are exemplified in the following
statements:
To get answers quickly. It's important. (P40)
It saves me the hassle of having to place a call, have to wait to speak to a person,
then trying to get the information from that person. It also saves time in having to
look through tons of text to find answers. (P58)
They are fast and almost as fast as searching on the internet. (P252)
As reflected in the above examples, the participants' responses suggested that they highly appreciated when chatbots helped them save time or made it easier and faster to obtain help or information, such as by providing efficient assistance in a customer support situation or by pointing to an easy-to-use manual or FAQ. Five percent of the participants explicitly stated that they use chatbots to avoid waiting for assistance, such as the following participant:
There is no wait time to talk to a representative to find out basic info I'm looking for. The chatbot can answer basic questions and is ready whenever I need an answer. (P83)
Another five percent reported that they preferred to obtain help from a chatbot rather
than a human assistant, such as in the following example:
Chatbots are important because you won't feel stupid asking important questions. Sometimes talking to someone can be a bit intimidating. Talking to a chatbot makes that a lot easier! (P82)
To obtain help and information
Many participants (41%) also reported that the ease of obtaining help or accessing information was a main motivation for using chatbots. These people perceived chatbots as useful in their daily online activities, as exemplified by the following statements:
I use chatbots instead of a search engine to help with daily tasks. (P67)
I like to travel and find many different places to visits and chatbots gives me great
advise. (P113)
Finding information about weather and news stories. (P251)
Eighteen percent of participants reported ease, speed, and convenience as well as help
and information as motivation for using chatbots, including the following participant:
I believe the main reason for using chatbots is to give you help if you have any questions anytime when you need it, because the thing is when you ask a friend for help on kik or any time of messaging app they won't reply as quick as chatbots do. (P151)
Three participants reported that they appreciated being able to configure chatbots to
receive better help with their personal tasks, as in the following statement:
I use chatbots because I can tailor them to find things, styles, weather, or orders that
are specific and unique to me, so I can shop, research, or just chat with ease. The
more input I put into them about me, the more they will serve efficiently. (P176)
4.2 Chatbots for entertainment
A substantial proportion of participants (20%) reported using chatbots for entertain-
ment. Though much smaller than the proportion of participants that use chatbots for
increased productivity, this motivation category was the second most frequent.
Most of the participants reporting use of chatbots for entertainment value (14% of the total sample) described this as a positive value, with chatbots perceived as "fun" and "entertaining":
It’s fun and entertaining. I like chatbots that have funny things to say. (P99)
Usually to ask a question and be entertained with an answer. (P301)
Others reported using chatbots for entertainment in a more negative fashion, to kill
time:
I usually use chatbots when I am bored or have nothing to do so I use them to waste
some time. (P61)
I am bored and want to talk to someone. (P232)
About one-third of the participants within this category (7% of the total participant
sample) reported both productivity and entertainment as motivations to use chatbots, as
reflected in the following statement:
Chatbots can help me with simple tasks on the app I'm using. Also, they give me fun tips and make my experience a lot better. It gives me something different to do when I'm bored. (P199)
4.3 Chatbots for social and relational purposes
The third most frequently reported reason for using chatbots is the potential social and
relational benefits they can provide. This category of motivation was reported by 12%
of participants.
It is noteworthy that, while chatbots can enhance interactions between humans, most
of the participants addressing social and relational motivations commented on the social
experience of interacting with the chatbot (10% of the total participant sample). For
example, the chatbot is perceived as a way to avoid loneliness or fulfill a desire for
socialization:
At the time i was bored and i didn't have anyone to talk to and i feel like sometimes their good to make friends with if your lonely and just want a chat with someone else. (P141)
I use them when I feel bored or rather when I feel down and have no one else to go to, it just relaxes me in a way. Gives me someone to vent to without getting judged, I know they aren't real but it feels like it is. (P264)
Always open to talk and I live in country and not many people around me to talk too.
(P234)
Some participants (3% of the total sample) reported that chatbots enhanced their social experiences with others, such as when including a chatbot in a group chat, using a chatbot with a child, or improving one's own conversational skills:
To have a little extra fun in the chats I'm in. (P268)
Normally to get information, but I also like using them for my kids to talk to charac-
ters. (P60)
To build conversation skill. (P80)
More than one-third of the participants reporting social and relational motivations (5%
of the total participant sample) also mentioned productivity. The ability for chatbots to
meet one’s social and relational needs and improve productivity is seen as a benefit, as
in the examples below:
So I can have someone to talk to AND it searches for me. Without me using the net.
(P243)
They are like personal assistants and easy to use because they are built into the apps themselves. I don't need to download extra apps I just talk to them directly on the app that I'm using, i.e. Facebook Messenger. I like that because it's a hassle using so many different apps. Also there's a sense of talking to someone when I use them. It's almost like you are talking to a real person. (P116)
4.4 The novelty of chatbots
The fourth main category of motivation for using chatbots is the novelty of chatbots,
reported by 10% of participants. These participants typically stated that they are curious
to explore chatbots and the limits of their abilities:
Trying something new. (P104)
[…] I’m also curious to see what theyll say or how realistic they seem. (P59)
[…] It’s interesting to see what people can come up with, how lifelike they will be-
come. Sadly, very few pass the test. They are all repetitive in some way. (P88)
Some of these participants seem to be attracted to the fact that chatbots are still in an
early phase of development, suggesting that they enjoy being early adopters of technol-
ogy:
They're new and intriguing. (P66)
Others were skeptical of the new interactive technology:
Curiosity, mostly, because I have skepticism about the privacy of it and the evidence
based knowledge that it is assuming is accurate when answering a question. (P69)
4.5 Other motivations
A small proportion of the participants discussed motivations that did not fit into the
four main categories presented above and were not sufficiently frequent to justify sep-
arate categories.
Examples of such motivations (all of which were reported by only one participant
each) include the following:
It is easier to talk to a chatbot than to talk to people about important issues.
Chatbots can provide automatic responses when others are not available.
One can more easily identify an account as a bot and subsequently block it.
Chatbots can be a default method of customer support.
Three of the comments coded as "other motivations" were difficult to comprehend.
5 Discussion
We have provided an overview of why people use chatbots and listed participants’ re-
sponses to an open question regarding their motivations for using chatbots. In this sec-
tion, we will discuss the findings in terms of U&G. We will then consider the implica-
tions of our findings for the future design of chatbots. Finally, we will discuss the
study’s limitations and possible avenues for future research.
5.1 Productivity is important
Productivity was the most frequently reported motivation; thus, the majority of chatbot
users seek quick and consistent feedback when searching for information or assistance.
This finding might reflect the use of chatbots in the customer service domain. This
finding may also reflect a general trend for users to gravitate toward immediate com-
munication channels. The broad adoption of private messaging platforms such as Face-
book Messenger, WhatsApp, and Snapchat reflects users’ interest in more instrumental
or goal-directed communication with fewer interruptions compared to regular commu-
nication on Facebook and Twitter.
Information has been recognized as an important category of gratification in previous U&G studies [22]. Yet, the typical chatbot user's need for information may require more immediacy and interactivity than the information needs associated with other media. This hypothesis is in line with recent research identifying young social media users' need for instant gratification. For example, Brandtzaeg et al. [25] suggest that youths communicating with organizations through social media crave immediate feedback, dialogue, and action-oriented engagement in order to achieve a clear goal.
Other studies have highlighted the fact that many people, particularly those from
Western cultures, seek to spend time productively and may feel guilty when they waste
time [26]. Similarly, users in this study often referred to the quick response and produc-
tivity of chatbots as key motivations for using them. The need for productivity might
be specific to certain cultures, and so it may be worth investigating in this user group.
The need for instant informational feedback may also be related to the concept of usefulness. Usefulness concerns the extent to which a service is perceived as beneficial by
performing a specific task quickly and reliably [27]. For chatbots to be successful in
the studied user group, they must help users resolve a task or achieve a concrete goal
in an effective and efficient manner; in other words, they need to be easy, fast, and
convenient. Also, they need to fulfill a valued productivity goal, such as getting help or
access to information on the fly.
5.2 Entertainment and social motivations drive fewer people but are important to some
Entertainment and fun are important aspects of social relations between humans. Like-
wise, entertainment and socialization may be seen as aspects of the relationship be-
tween humans and chatbots. The need for entertainment and a sense of social relation-
ship is also highlighted in U&G and recent U&G studies on social media in particular
[24] and online media in general [22].
Many activities in our daily life involve socialization and entertainment. Consequently, Thackara [28] argues that systems should provide users with a social platform or sense of community to generate good user experiences. Similarly, Monk suggests
that interactive systems should be designed to support enjoyable social interactions
[29]. Sensitivity to this need for entertainment and social relations might be even more
important in the context of chatbot design because chatbots are more humanlike than
other interactive systems. Thus, users may expect chatbots to be entertaining or social.
Yet, it should be noted that entertainment and social motivations do not exclude
productivity motivations. On the contrary, more than one-third of the participants re-
porting entertainment or social motivations also reported productivity motivations.
People want to get their jobs done, but many prefer to do so in a social and enjoyable
manner.
5.3 Novelty is a motivator for some
Curiosity as a motivation related to news consumption or information-seeking behavior
has been identified as a key gratification in previous U&G studies. For example,
McQuail [30] argues that satisfying "curiosity and general interest" (p. 87) is a key gratification associated with media use. Such U&G studies have, however, focused on
gratifications related to information and content in the context of older mass media such
as television and newspapers, not on motivations related to the novelty of interactive
technologies such as chatbots.
As discussed in the background of this paper, the curiosity and sense of novelty as-
sociated with new technologies and features may be relevant at least for early adopters
or innovators, and perhaps specific to these groups. The rest of the population will often
view trying out novel technologies as a higher risk and therefore require assurance from
trusted peers. This is thoroughly discussed in the literature on the theory of diffusion of innovations [23], but not in the literature on U&G.
According to Rogers, early adopters and innovators are risk-takers because trying
new things may result in failure [23]. Many are interested in novel technologies because
of personal entertainment, but early adopters and innovators are more interested in new
experiences and learning things before others. For chatbot users, the perceived novelty of chatbots may drive some to use and experiment with them. However, to establish a sustainable pattern of usage, chatbots must increase productivity if the late majority is to adopt chatbots as a preferred means of interaction.
5.4 Implications for chatbot design
A main challenge of user research on this topic is the rapid change in technological developments and user preferences related to chatbots [24]. However, some of the main motivations to use chatbots may be stable over time because they reflect basic needs, such as productivity and social interaction.
Our main findings relate to the key gratifications identified in earlier U&G literature:
productivity, entertainment, social and relational purposes, and novelty. The im-
portance of productivity as a motivation for chatbot use is striking, particularly because
chatbots for socializing and small talk, such as A.L.I.C.E, Cleverbot, and Mitsuku, have
been available for longer than the productivity-oriented chatbots on messaging plat-
forms. Hence, chatbot designers should focus on designing and developing chatbots
that are perceived as useful because they provide necessary help or information in an
effective and efficient manner. To do so, chatbot designers must identify cases in which
chatbots fulfill users’ need for productivity more efficiently than what is possible
through other methods of interaction. The success of chatbots as personal assistants and
health advisors exemplifies the need to design for productivity.
In addition to considering productivity, the chatbot interaction experience can be
significantly strengthened by catering to entertainment and social or relational motiva-
tions. For example, a productivity-oriented chatbot may benefit from a friendly or em-
pathic appearance. Leading chatbot platforms, such as Google's Api.ai, include components to support small talk, for example to start a conversation. While the overall purpose of a
chatbot may be productivity-oriented, including socialization or entertainment as a fea-
ture will be appreciated by a substantial proportion of chatbot users.
The need to balance productivity with entertainment and relational aspects indicates
that the relationship between humans and chatbots may be different than the relation-
ships between humans and other tools, such as dishwashers or refrigerators. Thus, chat-
bots may need to be designed as a tool, toy, and friend.
5.5 Limitations and future work
The present study is subject to limitations. First, the chatbot users who participated in this study were both self-selected and filtered by some initial questions. They are, therefore, not representative of the population at large. The participants consisted only of chatbot users and should therefore be regarded as early adopters, comprising about 14 percent of the population, which is the first to try new ideas, technologies, and services. This may explain why many respondents perceived chatbots to be helpful and efficient. Early adopters may be fundamentally optimistic about future technologies. Hence, this part of the population may focus on the future potential of chatbots rather than on their current limitations. This also highlights the importance of including a broader part of the population in future studies. However, a strength of the present study is that the
sample was large and included users from all over the US. Future studies may benefit
from including chatbot users from other countries to determine how users’ motivations
change across cultures.
Second, the present study involves only a preliminary analysis of the presented data
set. We plan to expand the results of this study with additional data collection and anal-
ysis in future work. We also plan to investigate how different motivational patterns are
linked to age, gender, and different chat platforms, as well as to analyze the specific chatbots being used. Further, future work should analyze other aspects related to moti-
vations and end-user loyalty, such as why people reduce or stop their use of chatbots.
6 Conclusions
Chatbots potentially represent a new paradigm in how people will interact with data
and services in the future. Currently, there is a lack of empirical investigations into why
people use chatbots. This study provides needed insight into the motivational factors
related to use of conversational interfaces. Its results can guide future research on this
topic, which may provide new insights and guide future design and development of
chatbots.
Acknowledgment
This study is funded by the research project Human-Chatbot Interaction Design, sup-
ported by the Research Council of Norway, IKTPLUSS (p.nr 270940).
References
1. Dale, R.: The Return of the Chatbots. Nat. Lang. Eng. 22(5), 811–817 (2016)
2. Følstad, A., Brandtzaeg, P.B.: Chatbots and the New World of HCI. ACM Interactions (in press, 2017)
3. USA Today: Microsoft CEO Nadella: "Bots are the new apps." https://www.usatoday.com/story/tech/news/2016/03/30/microsof-ceo-nadella-bots-new-apps/82431672/ (2016)
4. Xu, A., Liu, Z., Guo, Y., Sinha, V., Akkiraju, R.: A New Chatbot for Customer Service on Social Media. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (2017)
5. Ferrara, E., Varol, O., Davis, C., Menczer, F., Flammini, A.: The Rise of Social Bots. arXiv preprint arXiv:1407.5225 (2014)
6. Simonite, T.: Facebook's Perfect, Impossible Chatbot. MIT Technology Review. https://www.technologyreview.com/s/604117/facebooks-perfect-impossible-chatbot/ (2017)
7. Coniam, D.: The Linguistic Accuracy of Chatbots: Usability From an ESL Perspective. Text Talk 34(5), 545–567 (2014)
8. Malhotra, Y., Galletta, D.F., Kirsch, L.J.: How Endogenous Motivations Influence User Intentions: Beyond the Dichotomy of Extrinsic and Intrinsic User Motivations. J. Manag. Inform. Syst. 25(1), 267–300 (2008)
9. Weizenbaum, J.: ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine. Commun. ACM 9(1), 36–45 (1966)
10. Shah, H., Warwick, K., Vallverdú, J., Wu, D.: Can Machines Talk? Comparison of ELIZA with Modern Dialogue Systems. Comput. Hum. Behav. 58, 278–295 (2016)
11. Vinyals, O., Le, Q.: A Neural Conversational Model. arXiv preprint arXiv:1506.05869 (2015)
12. Venturebeat: https://venturebeat.com/2016/08/25/niki-ais-new-messenger-bot-lets-you-hail-cabs-and-order-snacks-in-india/ (2016)
13. James, G.: A Complete Guide to Chatbots. http://www.garethjames.net/complete-guide-chatbots/ (2016)
14. Hill, J., Ford, W.R., Farreras, I.G.: Real Conversations with Artificial Intelligence: A Comparison Between Human–Human Online Conversations and Human–Chatbot Conversations. Comput. Hum. Behav. 49, 245–250 (2015)
15. Corti, K., Gillespie, A.: Co-Constructing Intersubjectivity with Artificial Conversational Agents: People are More Likely to Initiate Repairs of Misunderstandings with Agents Represented as Human. Comput. Hum. Behav. 58, 431–442 (2016)
16. Holtgraves, T.M., Ross, S.J., Weywadt, C.R., Han, T.L.: Perceiving Artificial Social Agents. Comput. Hum. Behav. 23(5), 2163–2174 (2007)
17. De Angeli, A., Johnson, G.I., Coventry, L.: The Unfriendly User: Exploring Social Reactions to Chatterbots. In: Proceedings of The International Conference on Affective Human Factors Design, pp. 467–474. London (2001)
18. Rubin, A.M.: Uses and Gratifications. In: Nabi, R.L., Oliver, M.B. (eds.) The SAGE Handbook of Media Processes and Effects, pp. 147–159. Sage, Washington, D.C. (2009)
19. Stafford, T.F., Stafford, M.R., Schkade, L.L.: Determining Uses and Gratifications for the Internet. Decision Sci. 35(2), 259–288 (2004)
20. Katz, E., Blumler, J.G., Gurevitch, M.: Utilization of Mass Communication by the Individual. In: Blumler, J.G., Katz, E. (eds.) The Uses of Mass Communications: Current Perspectives on Gratifications Research, pp. 19–32. Sage, Beverly Hills (1974)
21. Joinson, A.N.: Looking At, Looking Up or Keeping Up With People? Motives and Use of Facebook. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1027–1036. ACM Press (2008)
22. Sundar, S.S., Limperos, A.M.: Uses and Grats 2.0: New Gratifications for New Media. J. Broadcast. Electron. 57(4), 504–525 (2013)
23. Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, New York (2003)
24. Brandtzaeg, P.B., Heim, J.: Why People Use Social Networking Sites. In: International Conference on Online Communities and Social Computing, pp. 143–152. Springer, Berlin Heidelberg (2009)
25. Brandtzaeg, P.B., Haugestveit, I.M., Lüders, M., Følstad, A.: How Should Organizations Adapt to Youth Civic Engagement in Social Media? A Lead User Approach. Interacting with Computers 28(5), 664–679 (2016)
26. Foley, C.: The Art of Wasting Time: Sociability, Friendship, Community and Holidays. Leisure Stud. 36(1), 1–20 (2017)
27. Tsakonas, G., Papatheodorou, C.: Exploring Usefulness and Usability in the Evaluation of Open Access Digital Libraries. Inf. Process. Manag. 44(3), 1234–1250 (2008)
28. Thackara, J.: The Design Challenge of Pervasive Computing. CHI. http://www.doorsofperception.com/projects/chi/ (2000)
29. Monk, A.F.: User-Centred Design: The Home Use Challenge. In: Sloane, A., van Rijn, F. (eds.) Home Informatics and Telematics: Information Technology and Society, pp. 181–190. Kluwer Academic Publishers, Boston (2000)
30. McQuail, D.: Mass Communication Theory: An Introduction (2nd edn.). Sage, London (1987)
... ChatCharlie is not a customer support agent, but rather a conversational agent designed to collect and securely store textual responses to a series of contextually relevant questions. ChatCharlie has been designed to reduce perceptions of unresponsiveness by timing messages optimally [39][40][41] , ensuring witnesses are not overwhelmed by information and recall requests, but that response lag does not appear so unhuman like as to feel uncomfortable and/or frustrating. ChatCharlie offers users a human like name, which can increase perceptions of an authentic interaction 42 especially when paired with an informal communication style). ...
Article
Full-text available
Initial account interviews (IAi) offer eyewitnesses more immediate opportunities to answer a series of brief questions about their experiences prior to an in-depth, more formal investigative interview. An IAi is typically elicited in-person near/at the scene of a crime using broadly systematic questioning. Retrieval practice can improve subsequent recall in some contexts, but there is a dearth of research centred on the potential costs and benefits of a quick IAi. Furthermore, where an in-person IAi is impossible, no alternative quick provision exists. Given the systematic nature of the IAi protocol, we developed a conversational chatbot as a potential alternative. Using a mock-witness paradigm, we investigated the memory performance of adults from the general population during in-depth in-person interviews one week after having provided an IAi 10 min post event either (1) in person, (2) via the ChatCharlie chatbot, or (3) no IAi (control). IAi conditions leveraged significantly improved event recall during later investigative interviews versus the Control. Accounts were more accurate and complete, and more correct information was remembered without increased errors indicating the potential of digital agents for IAi purposes Findings concur with predictions from theoretical understanding of episodic memory consolidation and the empirical eyewitness literature regarding the benefits of practice in some contexts.
... Furthermore, researchers have emphasized the importance of AI chatbots in providing solutions that influence customer behavior and encourage technology adoption [13,76]. AI chatbots enhance users' experience by delivering quick answers and facilitating more efficient service usage [77]. These chatbots offer an additional functionality layer to resolve customer problems, significantly shaping their perceptions of new technologies. ...
Article
Full-text available
With the rapid advancement of artificial intelligence (AI), chatbots represent a transformative tool in digital customer engagement, reshaping customer–brand relationships. This paper explores AI chatbots on customer–brand interactions by analyzing key features, such as interaction, perceived enjoyment, customization, and problem-solving. Based on the Technology Acceptance Model (TAM), the research investigates how these attributes influence perceived ease of use, perceived usefulness, customer attitudes, and ultimately, customer–brand relationships. Adopting a mixed-methods approach, this study begins with qualitative interviews to identify key engagement factors, which then inform the design of a structured quantitative survey. The findings reveal that AI chatbot features significantly enhance customer perceptions, with ease of use and usefulness in shaping positive attitudes and strengthening brand connections. The research further underscores the role of AI-driven personalization in delivering sustainable customer engagement by optimizing digital interactions, reducing resource-intensive human support, and promoting long-term brand loyalty. By integrating TAM with customer–brand relationship theories, this study contributes to AI and sustainability research by highlighting how intelligent chatbots can facilitate responsible business practices, enhance operational efficiency, and promote digital sustainability through automation and resource optimization. The findings provide strategic insights for businesses seeking to design AI-driven chatbot systems that improve customer experience and align with sustainable digital transformation efforts.
... Satisfaction is closely related to loyalty, as a consumer is more likely to be loyal if their satisfaction level is high (Hutabarat & Prabawani, 2020). Optimising customer experience, expectations, and satisfaction can form loyalty (Brandtzaeg & Følstad, 2017). ...
Chapter
Advances in technology and the proliferation of information have led to an increase in the number of Internet users. Fixed broadband services have become a popular choice for accessing the Internet. The growing number of Internet users has facilitated quick and easy communication, which has led to the widespread promotion of electronic word-of-mouth (eWOM). This study aims to investigate the impact of eWOM on purchase intention and customer loyalty of fixed broadband services in Jakarta. eWOM consists of multiple dimensions, including information quality, information quantity, and information credibility. A total of 361 respondents participated in the survey conducted using a purposive sampling technique via Google Forms. Structural equation modeling (SEM) and partial least squares (PLS) methods were used for data analysis. The results showed that information quality, information quantity, and information credibility all significantly affected purchase intention, while purchase intention significantly affected customer loyalty. Companies must prioritize the quality and credibility of information to attract consumers' purchase intention and ultimately cultivate customer loyalty.
... The expectation that digital tools like the SVA could help answer relatively simple questions and decrease barriers related to shame, for instance, is consistent with previous research. 43 Yet, as participants pointed out, asking a question is not always easy. Some people will need help formulating a question even before asking the SVA. ...
Article
Full-text available
People in vulnerable positions who need support in their daily lives often face challenges in receiving timely access to care; for instance, due to disabilities or individual and situational vulnerabilities. There has been an increasing turn to technology-mediated ways to improve access to care, which has raised ethical questions about the appropriateness and inclusiveness of digitalising care requests. Specifically, for people in vulnerable positions, digitalisation is meant to facilitate requests for access to healthcare resources and to simplify the process of navigating the healthcare system. In a multidisciplinary research project, we examined the use and value of a ‘sensitive’ virtual assistant that can accommodate different needs of target groups through inclusive design, adaptive technology and artificial intelligence. This paper presents empirical findings from focus groups with care recipients and caregivers about the sensitive virtual assistant and relates the findings to five larger ethical issues associated with the use of virtual assistants in healthcare settings and care practices more generally. It highlights the risk that, even with the inclusion of target groups in the design of digitalised care assistants, some people may benefit significantly less than others.
Article
Full-text available
Günümüzde yapay zekâ, veri analizi, kişiselleştirme, içerik üretimi ve müşteri hizmetleri gibi dijital pazarlama süreçlerinde verimliliği artıran ve rekabet avantajı sağlayan bir araç haline gelmiştir. Bu araştırma, Türkiye’de dijital pazarlama faaliyetlerinde yapay zekânın nasıl kullanıldığını, karşılaşılan zorlukları ve gelecekteki etkilerini anlamak amacıyla yapılmıştır. Araştırmada yarı yapılandırılmış görüşme yöntemi kullanılarak, beş farklı dijital pazarlama reklam ajansında çalışan profesyonellerle derinlemesine mülakatlar gerçekleştirilmiştir. Elde edilen bulgulara göre, yapay zekânın özellikle veri analizi, kişiselleştirme, içerik üretimi ve reklam optimizasyonu alanlarında önemli bir rol oynadığı, kampanyaların hızını artırıp maliyetleri düşürdüğü belirtilmiştir. Yapay zekânın müşteri hizmetlerinde chatbotlar ve sanal asistanlarla süreçleri hızlandırdığı ve müşteri memnuniyetini artırdığı vurgulanmıştır. Ancak, dil bariyeri, veri güvenilirliği ve algoritmik önyargılar gibi konular, yapay zekânın entegrasyonunda önemli zorluklar olarak ortaya çıkmıştır. Araştırma, Türkiye’de yapay zekânın pazarlama alanındaki etkisinin artmaya devam edeceğini ve rekabet avantajı sağlayacağını öngörmektedir, ancak bu sürecin başarılı olabilmesi için daha fazla eğitim, yatırım ve yerel dil desteğine ihtiyaç duyulmaktadır.
Preprint
Full-text available
Loneliness is a pressing global health issue, yet traditional interventions often fall short due to scalability limitations and the individualized experiences of loneliness. The rise of generative artificial intelligence (AI) has enabled synthetic relationships (SRs)—ongoing associations with AI companions designed to simulate human-like social bonds. SRs offer, among other aspects, constant availability, adaptability, and emotional responsiveness, which potentially address loneliness. However, their growing integration into social life raises critical psychological, ethical, and societal questions. This paper examines the opportunities and risks of SRs through the lens of relationship science, psychology, and AI companionship research. We first highlight how existing loneliness interventions face the challenges of availability, scalability, and personalization. We then outline how SRs present a novel alternative to overcoming these challenges. Drawing mainly on social penetration, attachment, and interdependence theory, we analyze how SRs may foster companionship, reduce social anxiety, and improve interpersonal skills, potentially mitigating loneliness. However, we also identify significant risks, including emotional over-reliance, distorted social expectations, and privacy concerns. The widespread adoption of SRs may reshape human-human relationships, altering norms of intimacy and social connection. To navigate these challenges, we outline a research agenda promoting interdisciplinary theory development longitudinal studies, drawing on representative samples to address the ethical concerns of SRs. We argue that SRs hold promise as a social intervention when ensuring they complement rather than replace human relationships. By integrating interdisciplinary insights, this paper provides a foundation for understanding and guiding the responsible design of SRs for addressing loneliness.
Conference Paper
Full-text available
As digital transformation reshapes public services, the integration of AI-driven chatbots on government websites has emerged as a promising solution to streamline user support and enhance accessibility. This paper presents a comprehensive framework for implementing an AI chatbot on the Department of Health service (DHS) website, targeting the efficient resolution of diverse user needs while addressing the unique security challenges inherent in government digital infrastructure. The proposed framework covers critical aspects of chatbot development, including the configuration of natural language processing (NLP) models, user intent mapping, and data protection strategies tailored to sensitive government information. Additionally, it explores strategies to ensure user engagement and accessibility, considering the diverse demographics of DHS website visitors. This paper examines potential security risks, such as data breaches and unauthorized access and mentions best practices to deal with these issues. The framework offers a pathway to balance the dual objectives of responsive service and robust cybersecurity, ultimately advancing the role of AI in enhancing the accessibility and security of government services. The deployment of AI chatbots in public service contexts requires meticulous planning to align with legal and regulatory standards, particularly in managing sensitive health data. This study emphasizes the importance of ongoing monitoring and adaptive learning mechanisms to keep the chatbot updated with evolving public policies and user needs.
Article
Full-text available
The contemporary work landscape is undergoing significant transformation due to megatrends like digitalization and individualization, and events such as the COVID-19 pandemic. Consequently, new work paradigms are emerging. Workation, blending work with vacation, is one such novel arrangement: employees relocate their workplace to a holiday destination for a specified period, enjoying travel benefits while fulfilling work responsibilities. However, Workation's effectiveness is still under scrutiny. Empirical investigations show it to be an appealing prospect, with many employees expressing interest. Yet challenges persist: apart from organizing the stay, factors such as access to work equipment and employer supervision significantly shape the experience. For the hospitality industry, Workation presents an opportunity to extend the tourist season and boost less frequented destinations. Tailored package offerings can cater to both individual employees and companies, expanding the market for Workation services.
Article
Increasing numbers of people tend to seek relationships with artificial intelligence (AI) because of its comprehensive mastery of knowledge. This research reveals that ChatGPT has become a temporary refuge for individuals encountering real-life emotional crises. The communicative affordance of human–AI conversation is characterized by logos-centrism, pursuing the identity of language and essence between humans and technology. Grounded in Lacanian discourse theory, this research argues that the psychological and discursive dynamics behind human–AI dialogue entail the structure of the hysteric's discourse, which stems from questioning users who fantasize about and interrogate GPT as a master of knowledge. Capitalist discourse is also revealed in the repetitive AI companionship that empowers users with the agency to control the dialogue, whereby the user's primordial lack of desire is transformed into a demand that can be resolved through consumption. GPT may thus become a consumer product of technological capital, while the emotional needs of individuals are commodified within these discursive structures.
Conference Paper
Full-text available
Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests were not addressed in a timely manner, or not at all. To overcome this problem, we create a new conversational system that automatically generates responses to users' requests on social media. Our system integrates state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and that the system is about as good as human agents at showing empathy to help users cope with emotional situations. Results also show that our system outperforms an information retrieval system according to both human judgments and an automatic evaluation metric.
Article
Full-text available
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what's new, and what's different this time? And is there an opportunity here to improve how our industry does technology transfer?
Article
Full-text available
This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most “intersubjective effort” toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction.
Article
Full-text available
Organizations aiming to foster civic engagement, such as government bodies, news outlets, political parties, and non-governmental organizations, struggle to purposefully use social media to engage young people. To meet this challenge and to inform future design, we interviewed 17 innovators in engaging youth, that is, frontrunners in using social media to engage young people in organizations. We also conducted four group interviews with 21 young people, aged 16–26 years, about their experiences of and barriers to online civic engagement. Our paper contributes to identifying specific factors and strategies to support young people's future online civic engagement. Findings suggest how organizations should involve and collaborate with young people. Immediate feedback and dialog combined with clearly stated goals and action-oriented engagement are important. In future design, visual communication and design for use on mobile devices are imperative, as are concepts that connect the online and the offline world. Finally, our paper contributes to an extension of the lead user innovation approach.
Article
Full-text available
Slow tourism is motivated by the desire for personal and communal well-being. It emerged as an antidote to the fast-paced imperatives of global capitalism that urge the entrepreneurial self to speed up and work harder to achieve and demonstrate desired social status. The entrepreneurial self can be understood in the contexts of neoliberalism and the class- and gender-based histories of time-thrift and rational recreation; the entrepreneurial self uses leisure time purposively in the pursuit of status, avoids idle pursuits, and has restricted capacity to experience leisurely social relationships. In this article, it is argued that leisurely social relations can be reclaimed by letting go, even temporarily, of time-thrift and the compulsion to use leisure time purposively. Data drawn from in-depth interviews with repeat visitors at two Australian caravan parks revealed that for the period of their holiday the tourists relax, refuse to be driven by schedules, socialise with other tourists, and feel no compulsion to use time purposively. The key reasons the tourists return to the parks each year were the friendships and the sense of community they experience as part of the holiday. Slow tourism by its very nature rejects time-thrift; however, as the movement is harnessed by global capitalism, it risks becoming a source of conspicuous consumption. The findings of this study suggest that friendship and community thrive more readily in conditions where the need to achieve and demonstrate social status is discarded along with time-thrift. http://www.tandfonline.com/eprint/pjYxzdGWgFVuVruViBxC/full
Article
Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach to this task which uses the recently proposed sequence-to-sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires far fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to extract knowledge both from a domain-specific dataset and from a large, noisy, general-domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that a lack of consistency is a common failure mode of our model.
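Since this entry spells out a concrete modelling idea, a minimal sketch may help: an encoder reads the previous utterance and a decoder predicts the reply one token at a time. The Python/PyTorch toy below is not the authors' implementation; a GRU stands in for the LSTMs used in the original work, and the vocabulary size, token IDs, hidden dimensions, and the absence of attention and training are simplifications made for the example.

import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # hypothetical vocabulary size
EMB_DIM = 64
HIDDEN = 128
SOS, EOS = 1, 2     # assumed start/end-of-sequence token IDs

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.gru = nn.GRU(EMB_DIM, HIDDEN, batch_first=True)

    def forward(self, src):                     # src: (batch, src_len) token IDs
        _, hidden = self.gru(self.embed(src))   # hidden state summarizes the previous utterance
        return hidden

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.gru = nn.GRU(EMB_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, token, hidden):           # token: (batch, 1)
        output, hidden = self.gru(self.embed(token), hidden)
        return self.out(output.squeeze(1)), hidden   # logits over the next token

def reply(encoder, decoder, src_tokens, max_len=20):
    """Greedily decode a response to the previous utterance."""
    with torch.no_grad():
        hidden = encoder(src_tokens)
        token = torch.tensor([[SOS]])
        generated = []
        for _ in range(max_len):
            logits, hidden = decoder(token, hidden)
            token = logits.argmax(dim=-1, keepdim=True)
            if token.item() == EOS:
                break
            generated.append(token.item())
    return generated

# Example: generate reply token IDs for a made-up tokenized utterance.
# The model is untrained here, so the output is arbitrary.
encoder, decoder = Encoder(), Decoder()
print(reply(encoder, decoder, torch.tensor([[5, 42, 7]])))

In the setting the abstract describes, this kind of encoder-decoder pair would be trained end-to-end on large conversation corpora, such as IT helpdesk logs or movie subtitles, so that the decoder's next-token predictions form plausible responses rather than arbitrary ones.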