When the Social Becomes Non-Human: Young People’s
Perception of Social Support in Chatbots
Social Support in Chatbots
Petter Bae Brandtzaeg
University of Oslo, Department of Media and
Communication & SINTEF
petterbb@uio.no
Marita Skjuve
SINTEF
marita.skjuve@sintef.no
Kim Kristoer Dysthe
University of Oslo, Institute of Health and Society,
Department of General Practice
k.k.dysthe@medisin.uio.no
Asbjørn Følstad
SINTEF
asbjorn.folstad@sintef.no
ABSTRACT
Although social support is important for health and well-being,
many young people are hesitant to reach out for support. The
emerging uptake of chatbots for social and emotional purposes
entails opportunities and concerns regarding non-human agents
as sources of social support. To explore this, we invited 16 partic-
ipants (16–21 years) to use and reect on chatbots as sources of
social support. Our participants rst interacted with a chatbot for
mental health (Woebot) for two weeks. Next, they participated in
individual in-depth interviews. As part of the interview session,
they were presented with a chatbot prototype providing informa-
tion to young people. Two months later, the participants reported
on their continued use of Woebot. Our ndings provide in-depth
knowledge about how young people may experience various types
of social support—appraisal, informational, emotional, and instru-
mental support—from chatbots. We summarize implications for
theory, practice, and future research.
CCS CONCEPTS
Human-centered computing; Human–computer interaction (HCI); Empirical studies in HCI.
KEYWORDS
Chatbots, Young people, Articial Intelligence, Social support
ACM Reference Format:
Petter Bae Brandtzaeg, Marita Skjuve, Kim Kristoer Dysthe, and Asb-
jørn Følstad. 2021. When the Social Becomes Non-Human: Young Peo-
ple’s Perception of Social Support in Chatbots: Social Support in Chat-
bots. In CHI Conference on Human Factors in Computing Systems (CHI ’21),
May 08–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 13 pages.
https://doi.org/10.1145/3411764.3445318
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for prot or commercial advantage and that copies bear this notice and the full citation
on the rst page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
CHI ’21, May 08–13, 2021, Yokohama, Japan
©2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8096-6/21/05.
https://doi.org/10.1145/3411764.3445318
1 INTRODUCTION
Social support is important for the health and well-being of young people in their emerging adulthood [1] and can be defined as “support accessible to an individual through social ties to other individuals, groups, and the larger community” [2]. National and international studies have indicated that young people increasingly suffer from mental health issues, such as social isolation, anxiety, eating disorders, sexual problems, and depression [3–6]. Despite this, many young people tend not to seek professional help for these problems [7]. Recently, the COVID-19 epidemic has resulted in increased mental health distress among young people and new needs for access to social support online [8, 9]. A recent study suggests that young people struggle to access online social support, reporting challenges in navigating and making sense of online information sources [10].
So far, the research on social support has largely focused on how support is given and received in human–human relationships, both offline and online. However, recent advances in technology and artificial intelligence (AI) have facilitated a fundamental shift, where artificial agents, such as chatbots, can provide social support in natural language user interfaces [8, 11]. This shift is seen in a number of domains, from education to therapy, where chatbots are introduced to supplement or replace human agents [13], due to improvements in reasoning and conversational capabilities in chatbots [14].
Chatbots are machine agents with which users interact through natural language dialogue (text, speech, or both) [12]. In this paper, we focus on text-based chatbots for social and mental health purposes, such as Replika, Woebot, and Mitsuku, where a common and important enterprise is not only to provide information to users, but also to enable social relationships to form between the user and chatbot. To do this, the chatbots imitate conversations with friends, partners, therapists, or family members. Such social chatbots are commonly labeled “emotional chatting machines” [13] or “emotionally aware chatbots” [14], and are designed to be humanlike, with the potential to perceive, integrate, understand, and express emotions [13]. Emotional awareness may increase the level of engagement between the user and chatbot. Hence, social chatbots hold immense potential as a new interface and approach to social support and psychological well-being [15].
Although the importance of chatbots as a source of social support is not fully explored in the current research [16], there is a larger body of research in this area that has addressed how artificial agents, such as chatbots, can help older adults in stressful life situations or with more acute health-related concerns. However, there is only a limited amount of research on how chatbots can be a source of everyday social support [17], particularly for young people [18].
When exploring chatbots as a source of social support, it is important to recognize that human–machine relationships can have numerous social and psychological implications [19]. An important issue here is that recent developments in chatbots are due to a massive technology push. This push, with little attention to human needs and experiences, can lead to unintended negative consequences, such as online addiction and loneliness [20]. Chatbots may also present risks, such as biases, inadequate and failed responses [21], and privacy issues [6, 22], all of which can negatively affect the quality of the experience of chatbots as a source of social support.
Crucially, there is a lack of in-depth knowledge regarding the
types of social support that matter in everyday contexts, the poten-
tial risks of non-human support, and how users experience such
support from chatbots. This gap in understanding motivates us
to investigate how young people perceive chatbots as a source of
various forms of social support. Therefore, we pose the following
explorative research question:
RQ: How do young people perceive various types (ap-
praisal, emotional, informational, and instrumental)
of social support in chatbots, and what are the social
implications associated with such chatbot use?
In addressing this question, we oer important exploratory re-
sults on how young users experience social support from chatbots
and the impact of trust and privacy experiences related to such
chatbot use. As the choice of sample in the current study indicates,
our focus was not on young people who were struggling with
social support related to clinically dened symptoms of mental is-
sues. Rather, the sample was chosen to investigate everyday social
support from chatbots that can benet people in general, not just
those struggling with mental health disorders. More specically,
we used in-depth interviews with a sample of young people in their
emergent adulthood [
18
] between 16 and 21 years of age. We also
conducted follow-up email interviews of the same participants after
a two-month period.
With this qualitative approach, we contribute to the existing
literature by investigating the role chatbot use may play in the
various forms of perceived social support and psychological experi-
ence and well-being in an everyday context among young people.
Given the unprecedented ability of chatbots to enhance access to social support at any time and in any place, hence affording greater opportunity for perceived social support, it is both important and relevant to inquire how social support might be experienced as a benefit for young people in an everyday context. Whether chatbots
are increasing or decreasing social support could have enormous
consequences for society and for young people’s well-being. In addi-
tion, new knowledge and research on human–chatbot relationships
is highly important to guide society, as well as chatbot developers,
in a future in which there will be more immediate access to chatbots
and AI [22].
2 RELATED WORK
In this section, we start with a brief overview of the theory on
social support and then cover related works on young adults before
moving on to how social support is found online and through
chatbots. We also cover important research that relates to privacy
and social support.
2.1 Theory on Social Support
Social support is the perception that one has access to care and
help through relevant social others [
26
] such as family, friends,
fellow students or colleagues, or therapists. Social support is found
to be protective for mental health by improving coping strategies
and resistance resources [
23
26
]. House [
27
], therefore, suggests
studying the quantity and type of social relationships, networks,
and the type of support in relation to each other to identify the
level of social support gained from dierent networks [
28
]. In a
human–human context, social support is typically found to consist
of four distinct types [27]:
Appraisal support: The communication of information that is relevant and useful for self-evaluation, offered in the form of feedback, social comparisons, and affirmation.
Emotional support: The provision and expression of empathy, love, trust, and caring.
Informational support: Advice, suggestions, and information provided.
Instrumental support: The provision of tangible aid and services or goods.
Research from a variety of areas provides evidence of the benefits of forming and maintaining high-quality social connections [29], but the benefits of social connections with machines such as chatbots are less clear. In the current study, we aim to utilize the abovementioned four categories of social support to explore the benefits and limitations of support perceived in social chatbots. In the literature, a distinction has been made between routine and crisis (non-routine) support [17, 25]. We will focus on the various types of social support in relation to everyday routine rather than on crisis situations.
2.2 Emerging Adulthood and Social Support
Social support may be particularly important for young people in their late teens through their twenties. Young people between 18 and 25 years of age are entering a period in life referred to as “emerging adulthood” [30]. This is a distinct period demographically, subjectively, and in terms of identity explorations, parental detachment, and moving away from home [16]. Young people in emerging adulthood are dealing with a range of stress factors—socially, professionally, and/or academically [18]. As such, members of this age group are at a vulnerable time in their development. Reports indicate an increase in mental health issues on a global level among young people [31–33]. Seventy-five percent of major mental disorders have their onset prior to the age of 25 [34]. For these young people, the consequences of insufficient information, preventive efforts, and early interventions can be detrimental [35]. Hence, young people
have a high need for social support, but they often struggle to reach
out for such support.
2.3 Online and Oline Social Support
The literature suggests that social support can come from various sources: informal (e.g., family, friends, partners) or more formal (e.g., mental health specialists or community organizations) [36]. The literature also proposes that social support can be available online through apps and sites, such as Instagram [11, 37], Facebook [38], online groups [39], and health forums [40].
Overall, support is an important reason why young people engage online [41], but young people also report challenges in navigating and making sense of those sources [10]. Hence, online support can have disadvantages, such as incorrect information or the exclusion of people with low literacy levels. The Internet and social media are increasingly polluted with misinformation, which can lead to the distribution of harm [42]. It has been argued that the smart use of online chatbots can mitigate the negative effects of health misinformation online and serve as a tool for young people searching for help online [6, 43].
A recent review demonstrated that, under the right conditions, online services can provide useful instrumental and informational social support [46]. Studies have shown that young people report going online frequently to seek out health information [44]. Others have shown that young people with lower well-being are more likely to seek out online support [45]. However, when social support is not received on social media, it could be damaging. For example, one study found that when social support was sought on Facebook but not perceived, adolescents' depressed mood increased [46].
Dierences between online and oine social support have been
pointed out in previous research (e.g., [
47
,
48
]). Conicting evidence
exists on whether the time spent on the Internet leads to increased
social isolation or strengthened social capital and support [
49
51
].
Some research, however, posits that online social support is, to an
extent, independent of in-person social support [52]. For instance,
online support and support communities provide accessibility in
terms of time and space, encouraging connections through the weak
ties, which may result in better access to diverse information and
experts [
53
]. They also oer an anonymous and private space to
participate in self-disclosure [
54
]. This may also be the case with
chatbots.
2.4 Chatbots and Oline Social Support
With social chatbots [14], the scope of social support resources has broadened dramatically. Such chatbots may help people who are experiencing troubles and can be interacted with anywhere and anytime [6] while providing comforting and caring effects [55]. They can support people who feel isolated, both emotionally and physically, and provide instant feedback and supportive dialogue [6]. Social chatbots designed to deliver cognitive-behavioral therapy (CBT) have demonstrated efficacy in treating depression and anxiety in young people [56]. Similarly, receiving empathic responses from a chatbot when talking about emotional subjects has been found to have a positive effect on the user's mood [57].
Although these studies demonstrate that chatbots can potentially be a source of social support, they do not provide detailed insight into how the user qualitatively experiences social support from a chatbot targeting young people. The social implications of chatbots are widely speculated about, and empirical evidence remains rare [58]. A few exceptions are the studies conducted by Kim [59] and Ta et al. [17].
Kim [59] explores teenagers' expectations when interacting with a chatbot intended to support their emotional needs. The participants reported that they expected the chatbot to be a good listener, to serve as a confidential environment where secrets and more personal information could be shared, and to provide quality advice if needed. Interestingly, the participants reported that they did not want the chatbot to provide soothing or comforting feedback when they shared their challenges but instead to provide encouragement and advice on how to cope.
In a recent study, Ta et al. [17] apply data from user reviews and a questionnaire study to investigate users' experiences of social support when interacting with the social chatbot Replika. Their results reveal that the users perceive the companionship of Replika as a source of everyday social support, particularly emotional, informational, and appraisal support, but not as a source of tangible support.
While these studies oer initial insights on social support in
chatbots, they are also limited. Kim’s [
59
] study is based on users’
expectations, not experience. Ta et al.’s study [
13
] is based on user
reviews and questionnaire data and are lacking the opportunity for
detailed questioning and follow-up that is possible in, for example,
interview studies. It may also be noted that, as user reviews are
mostly positively worded [
60
], there may be underreporting of neg-
ative aspects. Online reviews are therefore often based on a biased
subset of individuals who actually used the service or app, resulting
in a two-mode distribution of ratings (non-normal distribution).
As such, online reviews may represent data not fully representing
the variation of real-life use [
61
]. Hence, more in-depth knowledge
is needed on how young people experience social support from
chatbots.
2.5 Privacy and Trust in Chatbots
Chatbots for social support may present risks, such as privacy issues [6, 22]. While storing and processing conversations are necessary for chatbots to provide meaningful replies to messages and share insights and patterns, the need for chatbots to demonstrate confidentiality and integrity has been identified as a barrier to seeking support [62]. Indeed, a safe environment for self-disclosure, an important aspect of social support, is crucial [63].
Chatbots are regarded as having substantial potential to serve as an effective tool to support people's self-disclosure [64]. A recent study found that chatbots which provided self-disclosure had a reciprocal effect, promoting deeper participant self-disclosure [64]. Interestingly, self-disclosure plays a central role in the development and maintenance of relationships [65] and social support [66] between humans, and this may also hold in the context of human–chatbot relationships.
However, self-disclosure and emotional engagement in conversations with chatbots may encourage users to reveal private information [15], including information on health and sexual orientation [67]. Previous research on social media has voiced concerns regarding the use of such platforms for information-seeking purposes due to privacy issues [68]. Research on chatbots also indicates a greater level of self-disclosure compared to communicating with humans because chatbots do not experience emotions or make judgments [57].
Figure 1: An overview of the different stages in the study
A lack of trust among users in how a service is storing and sharing personal data can lead the users to stop using the service [69]. Hence, privacy is an important part of the user experience with chatbots [7]. However, little is known about how chatbot users who are seeking social support experience privacy.
3 METHOD
In general, the paucity of research examining how chatbot use links
to everyday social support is somewhat surprising given the ex-
tensive body of research linking social support to health outcomes.
To respond to the research question (see Section 1), we selected a
qualitative approach to achieve an in-depth understanding of how
young people experience chatbots as a source of social support. We
recruited 16 young participants, who rst interacted with a chatbot
for mental health (Woebot) for two weeks before participating in
individual in-depth interviews. As part of the interview session,
they explored a prototype chatbot providing information to young
people. Two months later, they reported on their continued use of
Woebot. An overview of the study is presented in Figure 1.
3.1 Sample
Our goal was to investigate everyday social support among young people in their emerging adulthood [18] and to understand why and how this age group experiences various forms of social support from chatbots, as well as their reflections on privacy issues. Hence, we decided to target relatively young people between the ages of 16 and 21. Moreover, this age group has vast experience with social media platforms [51, 70] hosting chatbots, such as Messenger, Kik, and Telegram. The participants were recruited by distributing handouts with information about the study at various locations at two different Norwegian universities. Norway is regarded as a country with high penetration of digital technologies [71] and was thus a suitable location for this study.
The nal sample consisted of 16 young people, which is likely
to be regarded as a sucient number of participants in similar
interview studies [
72
], particularly as we reached data saturation
[
73
]. Nine identied as females and seven as males. All the partici-
pants had extensive experience with social media, such as Snapchat,
Facebook, Telegram, and Instagram. About half of the sample had
experience with chatbots for customer service, and one participant
had experience with social chatbots. See Table 1 for an overview of
the demographics.
3.2 Choice of Chatbots: Woebot and Ungbot
The study participants were presented with two chatbots: Woebot and Ungbot. Woebot is a well-known chatbot that was appropriate for several reasons: it is easy to access, is user-friendly, and has a strict privacy policy. Woebot conducts CBT—an evidence-based approach for the treatment of various psychological problems—and is designed so that the principles of CBT are utilized in a friendly and engaging manner [56]. The chatbot resembles a friend who checks up on the user's mood and gives advice concerning experiences and worries. Woebot also offers quizzes and videos which are there to help users discover habitual or automatized thought patterns that affect their well-being and mental health. Woebot is mainly a chatbot for mental health support, but it also includes substantial social features, making it a highly relevant chatbot for the purposes of this study. The chatbot not only facilitates socialization with the users but also employs a certain level of human emotional intelligence (i.e., the capability to perceive, integrate, understand, and regulate emotions) [13]. Screenshot examples of Woebot are provided in Figure 2.
The young people were presented with the second chatbot, Ungbot, as part of the interview session. This chatbot is an early prototype intended to provide informational support about issues youths may struggle with, including sex, school, bullying, feelings, love, and divorce. Available information in the chatbot is drawn from a website for youths (ung.no). As with Woebot, this prototype is also intended to build a relationship with the user over time, although support for this is not implemented. The reason for using Ungbot was to present chatbots with various features to give the participants more insight into how different chatbots work.
Table 1: Participants' (P) age, gender (F/M), and previous experience with chatbots
Participant # Gender Age Experience with chatbots
P1 F 16 No
P2 F 18 No
P3 F 21 No
P4 F 20 No
P5 M 19 Yes, customer service
P6 M 19 Yes, customer service
P7 M 19 Yes, customer service
P8 M 19 Yes, customer service
P9 F 19 No
P10 F 18 No
P11 F 19 Yes, customer service
P12 M 19 No
P13 M 20 Yes, customer service
P14 M 19 Yes, customer service
P15 F 20 Yes, a social chatbot
P16 F 19 Yes, customer service
Figure 2: Screenshot of Woebot dialog—explaining to the user how it works as a non-human
3.3 Procedure
The participants were invited to use Woebot every day for 14 days
and could withdraw from the study at any time. The purpose was
not to evaluate the use of Woebot but rather to have the participants
use, experience, and get a sense of how chatbots can provide social
support. We did not dictate the interaction time or the content
of their interactions, seeking to motivate the natural use of the
chatbot. After 14 days of use, they participated in an individual
interview, conducted face-to-face. As part of the interview session,
the participants also tried out a chatbot prototype, Ungbot, targeting
young people. Finally, two months later, email interviews were
conducted with the participants.
All face-to-face interviews were conducted in a neutral setting,
such as a café, on campus, or at a similar informal location chosen
by the participants, to make them feel comfortable and safe. The
interviews were audio-recorded and transcribed. Most interviews
lasted approximately one hour or more (mean: 1 hour, 12 minutes).
The participants received NOK 500, approximately 50 US dollars,
as compensation. The interviews consisted of three parts. The rst
part of the interviews focused on open questions concerning the
interviewees’ general experiences with appraisal, emotional, infor-
mational, and instrumental support. Example questions included
the following: “When you have problems—who do you go to?” “How
do you access information about things that are important to you
concerning difficulties at school or at home?” “Do you find it easy to obtain relevant information or support?”
Table 2: Coding scheme used for the analysis of the data
Code: Description
Appraisal support: Appraisal support can be offered in the form of feedback, social comparison, and affirmation.
Emotional support: Expressions of empathy, love, trust, and caring.
Informational support: Advice, suggestions, and information given by the chatbot to help the user solve a problem.
Instrumental support: Tangible aid, which is characterized by the provision of resources in offering help or assistance in a tangible and/or physical way, such as providing money or people.
Privacy and trust: Experienced privacy issues, such as a feeling of surveillance and data collection, as well as reflections on sharing personal data with the chatbot.
The second part consisted of in-depth questions about the
participant’s experiences and reections concerning chatbots in
general, and Woebot in particular, as a source of social support. As
a part of the interview session, they also explored the prototype
Ungbot to broaden the participants’ experience of how social chat-
bots may work. We questioned their views on the chatbots’ ability
to provide appraisal, emotional, informational, and instrumental
support. We also asked about trust and privacy concerns. In addi-
tion, we asked questions about when and where the participants
used Woebot as a way to gather more contextual cues to better
understand the usage of such chatbots.
Third, after two months, we followed up with an email study.
Twelve out of the 16 participants responded. We asked if they
had used Woebot or similar chatbots after the interview. If they
answered “no,” we asked them to explain why, and if they answered
“yes,” we asked them to share their experiences and motivation for
using chatbots.
3.4 Approvals and Ethical Considerations
The Norwegian Social Science Data Services approved the study’s
protocol. All the participants signed a written consent form prior to
the interviews. We maintained condentiality by removing names
and identiable information from the transcripts prior to any analy-
sis. No identiable information is provided in the excerpts presented
in the current paper.
3.5 Analysis
We performed a deductive content analysis to identify and investigate patterns within the data [74]. During the first phase of the analysis, we familiarized ourselves with our data by reading through all the transcribed material. We then coded all the data using NVivo 12 Pro. The interviews were coded following House's [27] theoretical framework for social support, with the addition of experienced privacy and trust issues. The coding scheme is presented in Table 2.
A meaningful unit could be placed within more than one of the
coded categories. For reliability, two researchers were involved in
the coding. First, one of the researchers coded all the posts, and
then the second went through all the assigned codes. In the event
of a disagreement on the coding for a given quote, an alternative
code was suggested and subsequently reviewed by the rst coder.
An alternative coding was suggested for 17% of the codes in the
data. We used a professional translator when translating the quotes
from Norwegian to English.
4 RESULTS
We have divided our ndings mainly under the ve main themes: (1)
appraisal support, (2) emotional support, (3) information support,
(4) instrumental support, and implications, specically (5) privacy
and trust.
4.1 Appraisal Support
Most of the participants reported that the chatbots provided some
form of appraisal support related to self-evaluation and feedback.
Several of the participants explained that the interaction with Woe-
bot, in particular, resulted in greater knowledge about oneself:
“You become in a way, or you may become more fa-
miliar with yourself, but that is, it is a little bit dicult
to explain...” (P2)
Others noted that the chatbot facilitated self-reection, further
explaining how self-evaluation and knowledge seemed to arise
through the process of evaluating the conversation one has with
the chatbot:
“Talking to the chatbot is like speaking in a way to
yourself, you know that the chatbot is not you, but
you feel like it.” (P4)
While this perception of having a conversation with oneself
through the chatbot could potentially inuence the level of honesty
and self-disclosure, it could also enable a kind of self-help. As the
participant below explains, it feels safe to be honest because you
are essentially talking with yourself:
“You can sit at home and be completely alone; it may
be easier to be completely honest with something that
in a way is not real. Because then you talk to yourself
in a way and you are more honest with yourself, than
if you sit in a way with someone you do not know so
well.” (P9)
Some participants explained how Woebot helped them to recog-
nize negative thought patterns. This seemed to occur as a result
of Woebot’s focus on CBT: asking questions about a topic, which
led the participants to reect further upon how it extends to other
aspects of their life, eventually identifying it as negative thought
patterns and global labeling:
“It (Woebot) tries to make you aware of your own
thoughts. Like, I was chatting (with Woebot) when
I walked over here. And then it asks, do you ever
label yourself, as bad and stu. Then you say, yes
I do. Therefore, you think, if you do have negative thoughts also with others, then you may become a little like that (. . .), you do it maybe a little too much then. Then you become aware of it. Therefore, it (the chatbot) really just works to clear your mind in a way.” (P7)
Some also reported that the chatbot helped them become aware
of their emotional state:
“I think what's a little cool about it (Woebot) is that when I sometimes talked to him about a little bit of that kind of problem then he asked what emotional state I was in. Also, I have often not thought about that when I face a problem; that I'm stressed, and nervous.” (P9)
This awareness of one's emotions could also be triggered by the
chatbot tracking the user’s expressions and emotions. This feature
provided the participants with an overview of emotional states that
were not always known to them:
“It (Woebot) seems like it kind of tracks down your expressions and feelings. It could be that you get someone that knows yourself in a way. That you get more of an overview then maybe, of what kind of mood and/or self-esteem you have, which in a way I was not aware of before, or which you may not have thought of.” (P8)
It was further mentioned that interacting with Woebot is some-
what similar to writing in a diary, a kind of a “me-communication”
style, which could further increase reection about oneself:
“It's like writing in your diary. You get a feeling that you interact and talk to yourself, so you may just think about what you do, like what I have actually done today, what has been nice today in a way. And through that you get like that, do not know if you know yourself better then, but you become aware of your own situation.” (P15)
4.2 Emotional Support
Nearly half the participants reported perceptions of emotional sup-
port from Woebot. Some participants felt that the chatbot showed
empathy and comfort by, for instance, asking if they were feeling
better after a conversation:
“Yes. It (Woebot) eventually asks, after you have talked
a bit if you feel better after that conversation. And
then you can say either yes or no. If you say no then it
continues with other things, or with other techniques
that you can try out or so it says that, so good, so nice
that I could help you.” (P3)
One participant also highlighted that it was safer to approach a
chatbot for social and emotional support compared with a human
because you could be disappointed in an interaction with a human:
“To a certain extent, you can receive comfort and em-
pathy from a chatbot, but not in the same way as
human contact will. Again, I can really speak for my-
self, but when you are very desperate to get support,
and then you do not get the support you really ex-
pected or wanted, then it is even more painful than
that you had not received support at all. It can be in
a way if you just need someone to hear you, even if
you know it’s a robot talking to you. Therefore, if you
are so afraid of somehow getting the rejection at the
level of support you want, then it is a safer way to go
to a robot.” (P16).
Three of the participants said that they forgot that they were
talking to a machine, which they reported as positive, potentially
strengthening a sense of emotional support:
“I noticed that I might have forgotten a bit that it
was a robot. Then you got the feeling that there was
actually a person there, to a certain extent, at least.
Yes, so then, that was positive.” (P8)
Some of the other participants did not experience emotional
support from the chatbots. Rather they reported that the chatbot
displayed empathy to better understand the user by getting to know
their perspective, as noted by the participant below:
“I feel it is more that the chatbot can, that it can be,
the person you can talk to, and that you feel that you
have a conversation and it can give you dierent tips
and such. However, I feel it will be dicult for it to
actually manage to comfort you.” (P6)
4.3 Informational Support
Most of the participants regarded chatbots as easy, available, and
even trusted sources of informational support. This was seen for
both Woebot and the Ungbot prototype:
“I think if you are struggling with something mental
and need advice and tips, then I think maybe a chatbot
can help you quite a bit (. . .). Give you advice on,
for example, a page on ung.no (youth information
website) where there is a lot of information, who you
can contact, the health nurse, whether it is a health
station or something, not true.” (P11)
A key feature of chatbots is that they are unconstrained by time
and space. The participants seemed to value this aspect and noted
that they, for example, could sit on the subway or in their own
room receiving informational support:
“I sat in my room alone talking to the chatbot and you
stay in a way, you get such peace of mind (. . .) if you go to a psychologist, for example, it is often a kind of office-like place.” (P1)
Several of the participants also appreciated that the information
was provided in an immediate and ecient fashion:
“Although it is not completely interactive as a human
being, as such a proper conversation, so it guides me
in a way on the right track and can provide relevant
information as, well, yes by which button you press
in relation to how you have it that day. And then it
asks several questions where it’s a bit like that, you
press yes, no, or I understand this, tell me more about
this. Then it comes with information.” (P3)
Another also regarded both Ungbot and Woebot as chatbots that
not only guided the user but also actively approached the user with
information in a proactive manner:
“They (Ungbot/Woebot) were kind of searching or
approaching you in a way, and I think it’s a little reas-
suring for someone who might have trouble talking
to someone or feel like they do not want to talk or ask
questions. Then it’s kind of good that those chatbots
is very like that, answers right away, asks questions
all the time because then you sort of continue, instead
of having to wait for answers like that and that you
actually talk to a person.” (P6)
The participants expressed trust in the information provided by
the two chatbots. They reported that the information was of high
quality. Further, they explained that their friends sometimes lacked
the ability to provide such factual knowledge:
“There are things Woebot knows that my friends do
not know. I think if I had talked about anxiety with a
friend, I might have had stories about personal expe-
riences. And what they somehow felt themselves and
how they coped with it. But Woebot was not, Woebot
had a more concrete and factual information.” (P10)
Moreover, many young people report a barrier to visiting a spe-
cialist, a parent, or a friend because it can be embarrassing. Hence,
it is safer to ask a chatbot, as one of the participants explained:
“Young people are quite scared, you could say, in order
to get to the health nurse, to seek help then, so I think
maybe a robot would have been the best. A robot is
behind a screen.” (P1)
4.4 Instrumental Support
Although a chatbot is not a physical entity and therefore is not
able to provide tangible support per se, some of the participants
did report that the chatbots could provide tangible aid, such as
helping them contact psychologists, showing them where to look for information, or urging them to contact friends. This indicates that a
chatbot may have the potential to help solve practical problems:
“Yet, actually I believe that such chatbots (Woebot and
Ungbot) can help you to get in touch or motivate you
to contact help, if needed.” (P7)
However, the advice given by the chatbot might not be possible
to achieve or be appreciated by the user:
“Imagine if it’s a person who does not have that many
friends then the chatbot says that you should go and
talk to a friend. It can be really hurtful.” (P14)
The reason for the lack of willingness to adhere to advice from
the chatbot may be related to a lack of trust. For example, one
participant explained that she needed to become more familiar
with that chatbot to act on the advice it gave her:
“Yes, if you actually create a long-term relationship
with it (the chatbot), then you are familiar with it and
you know in a way how it works. And if you somehow actually learn to know that robot and actually talk to it and get help and it recommends you go to a
psychologist, so I think I would have done it, yes.” (P3)
Although several participants reported social support on various
levels, they also reported limitations in this support because chat-
bots are non-human entities. However, some participants found it
helpful that Woebot clearly informed the participants about what
to expect in terms of help and support and that it had limitations:
“It’s very good that it (Woebot) in a way claries its
limitations. Therefore, you know it before you start
with it. And when you know it and it in a way says—
“now I cannot help you,” then you understand almost
logically that then I should pass it on to someone else.” (P5)
4.5 Privacy and Trust
Most of the young people in this study reported no serious pri-
vacy concerns. The participants expressed a feeling of freedom and
anonymity in talking with the chatbot.
However, when we followed up on privacy issues in the inter-
views, some expressed thoughts about potential scenarios where
the information shared could be misused:
“I do not care (about privacy) because I feel I have
nothing to be concerned about I think (. . .). However,
it can happen that someone goes, like one of those
who control the robot I was about to say, goes into the
chatbot, they probably have access to conversations.” (P9)
Others reported that they trusted chatbots more than humans,
though also acknowledging privacy concerns in chatbots:
“P3: Robot is much more to be trusted I feel, but. . ..
INT: You trust chatbots more?
P3: Yes, more than humans.
INT: Because?
P3: Because humans may miss and talk to others some-
times, true. But on the other hand, it may be that some-
one else who does not have access gets access to those
systems, to what you have written, for example.”
One participant described the chatbot as a kind of “safe space”
where you could go and feel free because the conversation was
not archived:
“For me it was there that I experienced at least that
there, then, as long as there is not a person there then
it will be in a way, which conversation will be there
and then. It’s like no one remembers it, or, yes, if you
understand, then you feel more free.” (P8)
Another participant argued that there were no real privacy issues
as long as the chatbot was not compromised because there was
already so much data around:
“As long as the chatbot is not part of a corrupt scheme.
But I think it (privacy) might be okay. I think there is
so many data, already so much oating around so it
does not matter that much ” (P7)
4.6 Results: Follow-up Email Interviews
After two months, six of the 12 participants reported continued
use of Woebot. Three also reported having tried out other social
chatbots, such as Mitsuku and Replika. One of the participants, who
reported being in a period of reection and questions regarding
sexual orientation, reported continued use of Woebot:
“I have used Woebot and think it is good. It encourages
me a lot, and I think many people can enjoy it to
handle both negative and positive situations in life.” (P3)
Another participant reported that he had used Woebot because
he felt depressed and was not able to talk to his parents or friends
about his situation. He also expressed that he would have used
Woebot even before the current study if he had been aware of it.
Woebot supported him in self-reection and aect labeling when
talking about his personal problems:
“Woebot helps me to learn to think dierently and
help me to cope with my negative thoughts.” (P6)
Overall, the participants who reported continued use of the
chatbots seemed to be in particular need of social support to cope
with issues or concerns in their everyday life. The participants who
did not report continued use noted that they would rather seek
social support from friends and others and did not need a chatbot
to share with or talk to.
5 DISCUSSION
Users’ engagement with chatbots for social support is a subject
that requires thoughtful reection from HCI (Human–Computer
Interaction) researchers and practitioners. First, the actual user ex-
perience and whom and how such chatbots are used are not well
understood, which includes the social implications of such use. The
current paper presents an in-depth analysis of how a sample of
young people experience support from social chatbots, here exemplified by the mental health chatbot Woebot and a prototype chatbot
for information provision, as well as potential social implications.
Second, previous research has tended to focus on the use of
social chatbots by people in need of mental health support as well
as older adults. The everyday context of social support has been
somewhat neglected, particularly among young people who are
in their emerging adulthood and require social support. Ta et al. [17] report interesting initial findings on social support. Our study contributes to those findings, drawing on data that provide in-depth information on how and why different types of social support occur in human–chatbot relationships.
However, the results of the present research support many of the conclusions of earlier studies. For instance, the low threshold and easy access to social support in using a chatbot [6] was evident in the present research. Moreover, consistent with prior chatbot research, the perception of social support was frequently reported by the participants (e.g., [17]). The use of in-depth user interviews enabled us to probe the exact nature of perceived social support when using chatbots and the implications of such use.
The results of the present study suggest that social support is
perceived because of three main experiences and functions when
interacting with social chatbots. First, young people reported that
the chatbots could give them easy access to social support on their
own terms and in their own preferred context. For example, some
reported that the context of chatbot use was sitting at home in their
own bed, noting this as a safe space behind the screen for dealing
with more dicult things when interacting with the chatbot. In
addition, it was easily available 24/7.
Second, communicating with a chatbot was reported to be less
of a barrier than interacting with a human (friend, family, therapist,
etc.). Typically, the young people in the current study reported that
they found it easier to self-disclose to a chatbot than to a human.
A main reason for this was that they felt that the chatbot was not
judging them for their weaknesses and struggles because it was
regarded as non-human support. They reported that chatbots pro-
vide both anonymity and privacy, creating a space for conversation
and confession that may support self-reection and aect labeling
(putting feelings into words). We know from previous research that
young people experience barriers to seeking help. For example, in a
social media context, judging and shaming are part of what young
people can experience and fear when seeking help or support [75].
Young people tend to be reluctant to seek professional help, even if
they are in need of mental health support [7]. Hence, social chatbots
may be an ecient and low-threshold alternative allowing young
people to talk more freely about their problems in the early stages
of these problems.
Third, in line with the principles of CBT, young people also experienced that social chatbots could help them assess themselves and think differently about their own problems. An interesting finding from our study is that the chatbot seemed to facilitate appraisal support by encouraging self-reflection. In human–human interactions, appraisal support is thought to include aspects like receiving validation and support for one's feelings and thoughts [32]. In chatbot interaction, the process of appraisal support may be different. In the current study, the participants said that the chatbot encouraged them to talk to themselves, such as writing a diary in an interactive mode. It seemed like the process of providing data to the chatbot made the participants discover and evaluate characteristics and solutions concerning themselves, their emotions, or their thoughts that they were unaware of prior to the interaction. This finding is in line with Ta et al. [17], who also find that the relationship with social chatbots can lead to strengthened introspection. In our study, some of this appraisal support was explained by the experience of the chatbot as knowledgeable, at least on a general level. Some of the participants also reported that they trusted the chatbot more than friends, as a chatbot was better informed regarding informational support and different issues and had new ways of thinking. Therefore, a chatbot may contribute to less noise in navigating and making sense of online information sources (e.g., [10]).
While this low-threshold contact with the chatbot for social support is interesting and may lead to increased awareness of the users' own actions and thoughts through appraisal support, it also has potentially important implications for how young people relate to others and understand themselves, and for how they struggle and choose to cope. In addition, chatbots may be an excellent way to reach young people that fits their growing individuality, independence, and mastery, encouraging and supporting young people to seek necessary professional help early for their developing mental health difficulties [7]. Chatbots may
also be a useful and accessible source of social support during the COVID-19 pandemic, lessening the psychological harm caused by fear and isolation. The lockdown and social distancing policies during COVID-19 may have hampered the possibilities of receiving real-life social support from friends and professionals. Recent studies also show that the pandemic has resulted in increased mental health distress among young people [8, 9].
An important nding, however, is that chatbots as a source of
social support do not suit all young people. Some young people in
our sample reported that they would prefer friends over a chatbot
and that they saw the chatbot as an alternative mainly for peo-
ple without a social network or who were experiencing troubles.
For instance, not all our participants believed that chatbots were
empathic or emotional. Cobb [
23
] states that emotional support
involves, among other things, “information leading the subject to
believe that he is care for and loved” (p. 300) or “information lead-
ing the subject to believe that he is esteemed and valued” (p. 300).
The notion of “believing” might be important. That is, it might be
dicult for a chatbot to make the user believe that the user is loved
and valued by the chatbot.
Although previous research has found that social chatbots can make the user feel loved [17] and that the robot's request for care may be perceived as real [76], our results show that this is not the case for everyone. Similar to Ta et al.'s study [17], we find that some may perceive artificial agents to be a less effective source of social support than others. However, some of our participants reported
that such variation could in part be due to the level of familiarity
with the chatbot; that is, knowing the chatbot well. It requires time
to develop relationships with a chatbot, as it takes time to develop
relationships with humans. Prior research seems to overlook the role of social connectedness, that is, the need to establish and develop social relationships, which is a fundamental human requirement [77]. This demonstrates the complexity of both
the human–chatbot interaction and the construct of social support,
further exemplifying that young people have dierent needs and
experiences when it comes to chatbots. For example, not everyone
is in need of an articial agent as a source of social support, but
those who are may require time to develop social relationships with
the machine. Chatbots provide young people with a new opportu-
nity to be connected, to increase their chances of satisfying their
social needs, and to access social support. Understanding how peo-
ple develop social relationships (e.g., the social penetration theory
[
76
]) with chatbots can provide new insights into the area of HCI.
Future chatbots may provide enhanced social or communicational
competence to enable human–chatbot relationship development
[78]. Further investigation of those crucial issues is needed.
5.1 Implications for Design and Practice
The designers of chatbots intended for social support should consider the varied uses and experiences reported by users and recognize that not all users use a chatbot in the same way, nor do they derive the same experiences from their use. For instance, there are clear distinctions between the uses of a chatbot to maintain various forms of social support. Informational support and appraisal support require more efficient and knowledgeable content delivery,
while emotional support requires more humanlike support. Chat-
bots should also be designed for relationship development, which
may require a broader focus on social competence and communica-
tion competence.
When it comes to privacy and trust, the participants in our study did not report the same problems with chatbots that they report with other apps and social media [69], despite the fact that chatbots collect user data and conversations related to social support issues. This finding may be surprising, in particular as interactions with social chatbots may lead users to share highly personal and sensitive information. We are not certain that we can explain this finding well, or whether it is indeed representative of youth in general. Possibly, the lack of privacy concerns could be due to the research design and the age group. Another explanation could be that young people – like other users – have not yet been sensitized to possible privacy concerns in social chatbots through media reports or exchanges in social media. If this latter explanation is valid, it may be that social chatbot developers are at a point in time where it is of particular importance to demonstrate trustworthiness in information safety and privacy protection, so as to not risk losing user trust due to privacy issues in the future.
Another challenge, related to that of trust and privacy, is that the
availability of and restrictions on data will shape the interactions
with the user as well as the level of social support. Although the
participants in the current study did not mention this, it is a poten-
tial implication for chatbot users. Despite inherent shortcomings in the data itself, some chatbots can make authoritative and definitive knowledge claims and responses to users, which may affect the social support the users receive. However, these applications currently tend to rely on serious companies working for the social good rather than companies aiming for money and power. Thus, the investment of time and effort in social chatbots may create opportunities that could bring in more companies looking only for profit. Accordingly, there might be a need for stricter privacy regulations for chatbots that develop social relationships with their users.
5.2 Limitations and Further Research
The present research investigates how young people experience
chatbots as a source of social support. An important limitation is
the relatively small and self-selected sample, which constrains the
generalizability of the results. How users experience chatbots as
a source of social support might vary across regions, cultures,
and user groups. A second limitation is the retention rate for the
email interviews, where 12 of the 16 participants responded, which
makes the follow-up part of the study vulnerable to bias.
Future research should study a wider group of participants, for
example to identify patterns of usage among a larger sample over time. This would allow for investigating the generality of the presented findings, and also for researching the development of chatbot use and experience over longer time periods. In particular, it would
be of interest to see how peoples’ chatbot use and need for social
support develop when using chatbots for prolonged periods of time
and how frequency of use is motivated by various forms of social
support. There is, for instance, considerable research in the eld of
habit formation that could inform the study of chatbot use, looking more in depth at how habit affects continued use.
HCI research should also consider how users' desire to self-disclose to chatbots can be accommodated in a privacy-protecting manner. At present, social chatbots may not offer the fine-grained options for privacy control that users have grown used to in, for example, social media services (e.g., Facebook) or browsers (e.g., Chrome). Based on the results of previous Facebook research [70], it would seem that users change default privacy settings in a motivated manner, which chatbot users may also come to expect as their usage matures. The present study only collected privacy experiences from users. It would be valuable also to conduct research that examines chatbot privacy settings, terms of use, and user data flows in a more technical way (e.g., [69]).
6 CONCLUSION
The current study contributes to the understanding of how young people in emerging adulthood perceive social chatbots, particularly how various types of social support were experienced over time in an everyday context, as well as trust and privacy issues related to such use. A key finding is that chatbots may be experienced as sources of appraisal, informational, emotional, and instrumental support, mainly because of their low-threshold character and because they are perceived to offer a safe and anonymous space for conversation and confession. This is partly due to the context of use; users can get social support from the social chatbot at any place and time, even from their bedroom in the middle of the night. In addition, the interaction with a chatbot, as a non-human actor, provides a refuge from social judgment, which in turn stimulates self-disclosure. Moreover, chatbots supporting self-reflection or affect labeling may help users to think more constructively about their own problems. Social chatbots are also perceived as an efficient service for accessing trusted informational support. Finally, chatbots for social support appeared to be useful over time mainly for young people who reported some kind of problem or struggle in their everyday life. Interestingly, privacy issues regarding social chatbots were not reported by the participants in this study, despite the potential for self-disclosure when using such chatbots. Design considerations for such technologies are suggested to improve user experiences with chatbots for social support.
ACKNOWLEDGMENTS
This work was supported by the Norwegian Research Council, grant agreement no. 262848. We would also like to thank Anja Vranic at
the University of Oslo for helping out with the coding process.
REFERENCES
[1]
Lene Arnett Jensen. 2011. Bridging Cultural and Developmental Approaches to
Psychology: New Syntheses in Theory, Research, and Policy. Oxford University
Press, New York, NY.
[2]
Nan Lin, Walter M. Ensel, Ronald S. Simeone, and Wen Kuo. 1979. Social support,
stressful life events, and illness: A model and an empirical test. J. Health. Soc.
Behav. 20, 2, (Jun. 1979), 108–119. DOI: https://doi.org/10.2307/2136433
[3]
Eve Grin and Elaine Mcmahon. 2020. Adolescent mental health: Global data
informing opportunities for prevention. EClin. Med. 24 (Apr. 2007). DOI: https:
//doi.org/10.1016/j.eclinm.2020.100413
[4]
Helen Bould, Becky Mars, Paul Moran, Lucy Biddle, and David Gunnell. 2019.
Rising suicide rates among adolescents in England and Wales. Lancet 394, 10193
(Jul. 2019), 116–117. DOI: https://doi.org/10.1016/S0140-6736(19)31102-X
[5]
Jean M. Twenge. 2020. Increases in depression, self-harm, and suicide among
US adolescents after 2012 and links to technology use: Possible mechanisms.
Int. J. Psychiatry Clin. Pract. appi. prcp. 20190015 (Mar. 2020), 19–25. DOI: https:
//doi.org/10.1176/appi.prcp.20190015
[6]
Marita Skjuve and Petter Bae Brandtzæg. 2018. Chatbots as a new user interface
for providing health information to young people (pp. 59-66), in Andersson, Y.,
Dahlquist, U., Ohlsson, J. (Eds.). Youth and News in a Digital Media Environment
– Nordic-Baltic Perspectives. Retrieved from https://www.nordicom.gu.se/sites/
default/files/kapitel-pdf/06_bjaalandskjuve_brandtzaeg.pdf
[7]
Debra J. Rickwood, Frank P. Deane, and Coralie J. Wilson. 2007. When and how do
young people seek professional help for mental health problems? Med. J. Aust. 187,
S7 (Oct. 2007), S35–S39. DOI: https://doi.org/10.5694/j.1326-5377.2007.tb01334.x
[8]
Lucie Cluver. 2020. Solving the global challenge of adolescent mental ill-health.
Lancet Child Adolesc. Health 4, 8 (Aug. 2020), 556–557. DOI: https://doi.org/10.
1016/S2352-4642(20)30205-4
[9]
Louise Dalton, Elizabeth Rapa, and Alan Stein. 2020. Protecting the psychological
health of children through eective communication about COVID-19. Lancet
Child Adolesc. Health 4, 5 (May 2020), 346–347. DOI: https://doi.org/10.1016/S2352-
4642(20)30097-3
[10]
Camilla Gudmundsen Høiland, Asbjørn Følstad, and Amela Karahasanovic. 2020.
Hi, can I help? Exploring how to design a mental health chatbot for youths?
Hum. Technol. 16, 2 (Aug. 2020), 139–169. DOI: https://doi.org/10.17011/ht/urn.
202008245640
[11]
Chia-Fang Chung, Elena Agapie, Jessica Schroeder, Sonali Mishra, James Fogarty,
and Sean A Munson. 2017. When personal tracking becomes social: Examining
the use of Instagram for healthy eating. In Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems, ACM Inc., New York, NY, 1674–1687.
https://doi.org/10.1145/3025453.3025747
[12]
Asbjørn Følstad and Petter Bae Brandtzaeg. 2020. Users’ experiences with chat-
bots: ndings from a questionnaire study. Quality and User Experience. 5,3 (Apr.
2020), 14 pages. DOI: https://doi.org/10.1007/s41233-020- 00033-2
[13]
Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu and Bing Liu. 2017.
Emotional chatting machine: Emotional conversation generation with internal
and external memory. arXiv preprint arXiv:1704.01074 (May 2017). Retrieved June
22, 2020, from https://www.arxiv-vanity.com/papers/1704.01074/
[14]
Endang Wahyu Pamungkas. 2019. Emotionally-aware chatbots: A survey. arXiv
preprint arXiv:1906.09774 (Jul. 2019). Retrieved June 2, 2020, from https://arxiv.
org/pdf/1906.09774.pdf
[15]
Jaya Narain, Tina Quach, Monique Davey, Hae Won Park, Cynthia Breazeal, and
Rosalind Picard. 2020. Promoting wellbeing with Sunny, a chatbot that facilitates
positive messages within social groups. In Extended Abstracts of the 2020 CHI
Conference on Human Factors in Computing Systems. ACM Inc., New York, NY,
1–8. https://doi.org/10.1145/3334480.3383062
[16]
Emily G. Lattie, Rachel Korneld, Kathryn E. Ringland, Renwen Zhang, Nathan
Winquist, and Madhu Reddy. 2020. Designing mental health technologies that
support the social ecosystem of college students. In Proceedings of the 2020 CHI
Conference on Human Factors in Computing Systems (CHI’2020). ACM Inc., New
York, NY, 1–15. https://doi.org/10.1145/3313831.3376362
[17]
Vivian Ta, Caroline Grith, Carolynn Boateld, Xinyu Wang, Maria Civitello,
Haley Bader, Esther Decero, and Alexia Loggarakis. 2020. User experiences of
social support from companion chatbots in everyday contexts: Thematic analysis.
J. Medical Internet Res. 3 (March 2020), e16235. DOI: https://doi.org/10.2196/16235
[18]
Jerey Jensen Arnett. 2011. Emerging adulthood(s): The cultural psychology
of a new life stage. In Jensen, L. A. (Ed.), Bridging Cultural and Developmental
Approaches to Psychology: New Syntheses in Theory, Research, and Policy (p. 255–
275). Oxford University Press, Oxford
[19]
Shyam Sundar. 2020. Rise of machine agency: A framework for studying the
psychology of human–AI interaction (HAII). J. Comput.-Mediat. Commun. 25, 1
(Jan. 2020), 74–88. DOI: https://doi.org/10.1093/jcmc/zmz026
[20]
Asbjørn Følstad, Petter Bae Brandtzaeg, Tom Feltwell, Ee L. C. Law, Manfred
Tscheligi, and Ewa A. Luger. 2018. SIG: Chatbots for social good. In Extended
Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems.
ACM Inc., New York, NY, 1-4. https://doi.org/10.1145/3170427.3185372
[21]
Linett Simonsen, Tina Steinstø, Guri Verne, and Tone Bratteteig. 2020. "I’m
disabled and married to a foreign single mother." Public service chatbot’s advice
on citizens’ complex lives. In Hofmann, S. et al. (Eds.) Electronic Participation.
ePart 2020 (pp 133-146). Lecture Notes in Computer Science, Vol. 12220. Springer-
Verlag, New York, NY. https://doi.org/10.1007/978-3-030-58141-1_11
[22]
Asbjørn Følstad and Petter Bae Brandtzæg. 2017. Chatbots and the new world of
HCI. Interactions 24, 4 (Jul. 2017), 38-42. DOI: https://doi.org/10.1145/3085558
[23]
Sidney Cobb. 1976. Social support as a moderator of life stress. Psychosom. Med. 38,
5 (Oct. 1976), 300-314. DOI: https://doi.org/10.1097/00006842-197609000-00003
[24]
Sheldon Ed Cohen and S. L. Syme. 1985. Social Support and Health. Academic
Press, Orlando
[25]
Nan Lin, Xiaolan Ye, and Walter M. Ensel. 1999. Social support and depressed
mood: A structural analysis. J. Health Soc. Behav. 4, 4 (Dec. 1999), 344-359. DOI:
https://doi.org/10643160
CHI ’21, May 08–13, 2021, Yokohama, Japan Peer Bae Brandtzæg et al.
[26]
Ellen Selkie, Victoria Adkins, Ellie Masters, Anita Bajpai, and Daniel Shumer. 2020.
Transgender adolescents’ uses of social media for social support. J. Adolesc. Health.
66, 3 (Mar. 2020), 275–280. DOI: https://doi.org/10.1016/j.jadohealth.2019.08.011
[27]
James S. House, Robert L. Kahn, J.D. Mcleod, and D. Williams. 1985. Measures
and concepts of social support. In Cohen, S. & Syme, S.L. (Eds.), Social Support
and Health (pp. 83–108). Academic, New York, NY.
[28]
James S. House. 1987. Social support and social structure. Sociological Forum 2, 1
(Dec. 1987), 135–146. DOI: https://doi.org/10.1007/BF01107897
[29]
Robert D. Putnam. 2000. Bowling Alone: The Collapse and Revival of American
Community. Simon & Schuster, New York, NY.
[30]
Jerey Jensen Arnett. 2000. Emerging adulthood: A theory of development from
the late teens through the twenties. Am. Psychol. 55, 5 (May 2000), 469–480. DOI:
https://doi.org/10.1037/0003-066X.55.5.469
[31]
Alize J. Ferrari, Fiona J. Charlson, Rosana E. Norman, Scott B. Patten, Greg
Freedman, Christopher J. L. Murray, Theo Vos, and Harvey A. Whiteford. 2013.
Burden of depressive disorders by country, sex, age, and year: findings from the
global burden of disease study 2010. PLoS Med. 10, 11 (Nov 2013), e1001547. DOI:
https://doi.org/10.1371/journal.pmed.1001547
[32]
Harvey A. Whiteford, Louisa Degenhardt, Jürgen Rehm, Amanda J. Baxter, Alize
J. Ferrari, Holly E. Erskine, Fiona J. Charlson, Rosana E. Norman, Abraham D.
Flaxman, and Nicole Johns. 2013. Global burden of disease attributable to mental
and substance use disorders: Findings from the Global Burden of Disease Study
2010. Lancet 382, 9904 (Nov. 2013) 1575–1586. DOI: https://doi.org/10.1016/S0140-
6736(13)61611-6
[33]
Richard J. Hazler and Sharon A. Denham. 2011. Social isolation of youth at
risk: Conceptualizations and practical implications. J. Couns. Dev. 4 (Dec. 2011),
403–409. DOI: https://doi.org/10.1002/j.1556-6678.2002.tb00206.x
[34]
Julia Kim-Cohen, Avshalom Caspi, Terrie E. Mott, Honalee Harrington, Barry
J. Milne, and Richie Poulton. 2003. Prior juvenile diagnoses in adults with mental
disorder: Developmental follow-back of a prospective-longitudinal cohort. Arch.
Gen. Psychiatry. 60, 7 (Jul. 2003), 709–717. DOI: https://doi.org/10.1001/archpsyc.
60.7.709
[35]
Randy P. Auerbach, Philippe Mortier, Ronny Bruaerts, Jordi Alonso, Corina Ben-
jet, Pim Cuijpers, Koen Demyttenaere, David D. Ebert, Jennifer Greif Green, and
Penelope Hasking. 2018. WHO World Mental Health Surveys International Col-
lege Student Project: Prevalence and distribution of mental disorders. J. Abnorm.
Psychol. 127, 7 (Oct. 2018), 623–638. DOI: https://doi.org/10.1037/abn0000362
[36]
Shelley E. Taylor. 2011. Social support: A review. In Friedman, H. S. (Ed.). The
Oxford Handbook of Health Psychology (pp. 189–214). Oxford University Press,
New York, NY
[37]
Nazanin Andalibi, Pinar Ozturk, and Andrea Forte. 2017. Sensitive self-disclosures,
responses, and social support on Instagram: The case of #depression. In Pro-
ceedings of the 2017 ACM Conference on Computer Supported Cooperative Work
and Social Computing (CSCW’2017). ACM Inc., New York, NY, 1485–1500. https:
//doi.org/10.1145/2998181.2998243
[38]
Eline Frison and Steven Eggermont. 2020. Toward an integrated and dierential
approach to the relationships between loneliness, dierent types of Facebook
use, and adolescents’ depressed mood. Commun. Res. 47, 5 (Dec. 2015), 701–728.
DOI: https://doi.org/10.1177/0093650215617506
[39]
Marsha White and Steve M. Dorman. 2001. Receiving social support online:
implications for health education. Health Educ. Res. 16, 6 (Dec. 2001), 693–707.
DOI: https://doi.org/10.1093/her/16.6.693
[40]
Per E. Kummervold, Deede Gammon, Svein Bergvik, Jan-Are K. Johnsen, Toralf
Hasvold, and Jan H. Rosenvinge. 2002. Social support in a wired world: Use of
online mental health forums in Norway. Nord. J. Psychiatry 56, 1 (Jul. 2009),
59–65. DOI: https://doi.org/10.1080/08039480252803945
[41]
Claudette Pretorius, Derek Chambers, and David Coyle. 2019. Young people’s
online help-seeking and mental health diculties: Systematic narrative review. J.
Medical Internet Res. 21, 11 (Nov. 2019), e13873. DOI: https://doi.org/10.2196/13873
[42]
Cristina M Pulido, Laura Ruiz-Eugenio, Gisela Redondo-Sama, and Beatriz
Villarejo-Carballido. 2020. A new application of social impact in social media for
overcoming fake news in health. Int. J. Environ. Res. Public Health. 17, 7 (Mar.
2020), 2430. DOI: https://doi.org/10.3390/ijerph17072430
[43]
Catrin Sohrabi, Zaid Alsa, Niamh O’neill, Mehdi Khan, Ahmed Kerwan, Ahmed
Al-Jabir, Christos Iosidis, and Riaz Agha. 2020. World Health Organization
declares global emergency: A review of the 2019 novel coronavirus (COVID-19).
Int. J. Surg. 76 (Apr. 2020), 71–76. DOI: https://doi.org/10.1016/j.ijsu.2020.02.034
[44]
Sylvia Deidre Kauer, Cheryl Mangan, and Lena Sanci. 2014. Do online mental
health services improve help-seeking for young people? A systematic review. J.
Medical Internet Res. 16, 3 (Mar. 2014), e66. DOI: https://doi.org/10.2196/jmir.3103
[45]
Victoria Rideout and Susannah Fox. 2018. Digital health practices, social me-
dia use, and mental well-being among teens and young adults in the US. Ar-
ticles, Abstracts, and Reports 1093, 96 pages. Retrieved May 27, 2020 from
https://digitalcommons.psjhealth.org/publications/1093
[46]
Eline Frison and Steven Eggermont. 2015. The impact of daily stress on adoles-
cents’ depressed mood: The role of social support seeking through Facebook.
Comput. Hum. Behav. 44, 11 (Mar. 2015), 315–325. DOI: https://doi.org/10.1016/j.
chb.2014.11.070
[47]
Paula Klemm, Melanie Hurst, Sandra L. Dearholt, and Susan R. Trone. 1999.
Gender dierences on Internet cancer support groups. Comp. Nurs. 17, 2 (Mar,
1999), 65–72. Retrieved June 27, 2020 from https://pubmed.ncbi.nlm.nih.gov/
10194883/
[48]
Ulrike Pfeil, Panayiotis Zaphiris, and Stephanie Wilson. 2009. Older adults’ per-
ceptions and experiences of online social support. Interact. Comput. 21, 3 (Jul.
2009), 159–172. DOI: https://doi.org/10.1016/j.intcom.2008.12.001
[49]
Norman H. Nie. 2001. Sociability, interpersonal relations, and the Internet: Rec-
onciling conicting ndings. Am Behav Sci. 45, 3 (Nov. 2001), 420–435. DOI:
https://doi.org/10.1177/00027640121957277
[50]
Petter Bae Brandtzæg. 2012. Social networking sites: Their users and social
implications—A longitudinal study. J. Comput.-Mediat. Commun.17, 4 (Sept. 2012),
467–488. DOI: https://doi.org/10.1111/j.1083-6101.2012.01580.x
[51]
Candice L Odgers and Michaeline R Jensen. 2020. Annual Research Review:
Adolescent mental health in the digital age: Facts, fears, and future directions. J
Child Psychol Psychiatry. 61, 3 (Jan. 2020), 336–348. DOI: https://doi.org/10.1111/
jcpp.13190
[52]
David A. Cole, Elizabeth A. Nick, Rachel L. Zelkowitz, Kathryn M. Roeder, and
Tawny Spinelli. 2017. Online social support for young people: Does it recapitulate
in-person social support; can it help? Comput. Hum. Behav. 68, 3 (Mar. 2017),
456–464. DOI: https://doi.org/10.1016/j.chb.2016.11.058
[53]
Barry Wellman and Keith Hampton. 1999. Living networked on and oine.
Contemp Sociol. 28, 6 (Nov. 1999), 648–654. DOI: https://doi.org/10.2307/2655535
[54]
Xing Zhang, Shan Liu, Xing Chen, Lin Wang, Baojun Gao, and Qing Zhu. 2018.
Health information privacy concerns, antecedents, and information disclosure
intention in online health communities. Inf. Manag. 55, 4 (Jun. 2018), 482–493.
DOI: https://doi.org/10.1016/j.im.2017.11.003
[55]
Minha Lee, Sander Ackermans, Nena Van As, Hanwen Chang, Enzo Lucas, and
Wijnand Ijsselsteijn. 2019. Caring for Vincent: A chatbot for self-compassion. In
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
(CHI’19), ACM Inc., New York, NY, 1–13. https://doi.org/10.1145/3290605.3300932
[56]
Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering
cognitive behavior therapy to young adults with symptoms of depression and
anxiety using a fully automated conversational agent (Woebot): A randomized
controlled trial. JMIR Mental Health 4, 2 (Apr. 2017), e19. DOI: https://doi.org/10.
2196/mental.7785
[57]
Mauro De Gennaro, Eva G. Krumhuber, and Gale Lucas. 2020. Eectiveness of
an empathic chatbot in combating adverse eects of social exclusion on mood.
Front. Psychol. 10 (Jan. 2020), Article 3061, 14 pages. DOI: https://doi.org/10.3389/
fpsyg.2019.03061
[58]
Naomi Aoki. 2020. An experimental study of public trust in AI chatbots in the
public sector. Gov Inf Q. 37, 4 (Oct. 2020), 101490. DOI: https://doi.org/10.1016/j.
giq.2020.101490
[59]
Junhan Kim, Yoojung Kim, Byungjoon Kim, Sukyung Yun, Minjoon Kim, and
Joongseek Lee. 2018. Can a machine tend to teenagers’ emotional needs? A study
with conversational agents. In Extended Abstracts of the 2018 CHI Conference
on Human Factors in Computing Systems, ACM Inc., New York, NY, 1–6. https:
//doi.org/10.1145/3170427.3188548
[60]
Ashish Viswanath Prakash and Saini Das. 2020. Intelligent conversational agents
in mental healthcare services: A thematic analysis of user perceptions. Pacic
Asia Journal of the Association for Information Systems 12, 2 (June 2020), Article
1. DOI: https://doi.org/10.17705/1pais.12201
[61]
Nan Hu, Paul A. Pavlou, and Jennifer Zhang. 2006. Can online reviews reveal
a product’s true quality? Empirical ndings and analytical modeling of online
word-of-mouth communication. In Proceedings of the 7th ACM conference on
Electronic commerce. ACM Inc., New York, NY, 324–330. https://doi.org/10.1145/
1134707.1134743
[62]
Lise Tevik Løvseth and Olaf G Aasland. 2010. Condentiality as a barrier to
social support: A cross-sectional study of Norwegian emergency and human
service workers. Int. J. Stress Manag. 17, 3 (Mar. 2010), 214–231. DOI: https:
//doi.org/10.1037/a0018904
[63]
Qian Yu, Tonya Nguyen, Soravis Prakkamakul, and Niloufar Salehi. 2019. “I almost
fell in love with a machine”: Speaking with computers affects self-disclosure. In
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing
Systems (CHI’19). ACM Inc., New York, NY, 1–6. https://doi.org/10.1145/3290607.
3312918
[64]
Yi-Chieh Lee, Naomi Yamashita, Yun Huang, and Wai Fu. 2020. “I hear you, I feel
you”: Encouraging deep self-disclosure through a chatbot. In Proceedings of the
2020 CHI conference on human factors in computing systems. ACM Inc., New York,
NY, 1–12. https://doi.org/10.1145/3313831.3376175
[65]
Nancy L Collins and Lynn Carol Miller. 1994. Self-disclosure and liking: A meta-
analytic review. Psychological Bulletin 116, 3 (Apr. 1994), 457–475. DOI: https:
//doi.org/10.1037/0033-2909.116.3.457
[66]
Renwen Zhang. 2017. The stress-buering eect of self-disclosure on Facebook:
An examination of stressful life events, social support, and mental health among
college students. Comput. Hum. Behav. 75, 10 (May 2020), 527–537. DOI: https:
//doi.org/10.1016/j.chb.2017.05.043
[67]
Pratyusha Kalluri. 2020. Don’t ask if articial intelligence is good or fair, ask how
it shifts power. Nature 583, (July 2020), 7815. DOI: https://doi.org/10.1038/d41586-
020-02003-2
[68]
Jessica Vitak and Nicole B. Ellison. 2013. ‘There’s a network out there you might
as well tap’: Exploring the benets of and barriers to exchanging informational
and support-based resources on Facebook. New Media Soc. 15, 2 (July 2012),
243–259. DOI: https://doi.org/10.1177/1461444812451566
[69]
Petter Bae Brandtzaeg, Antoine Pultier, and Gro Mette Moen. 2019. Losing con-
trol to data-hungry apps: A mixed-methods approach to mobile app privacy.
Soc. Sci. Comput. Rev. 37, 4 (May 2018) 466–488. DOI: https://doi.org/10.1177/
0894439318777706
[70]
Adam N. Joinson. 2008. Looking at, looking up or keeping up with people?
Motives and use of Facebook. In Proceedings of the SIGCHI conference on Human
Factors in Computing Systems (CHI’08). ACM Inc., New York, NY, 1027–1036.
https://doi.org/10.1145/1357054.1357213
[71]
Petter Bae Brandtzæg, Jan Heim, and Amela Karahasanović. 2011. Understanding
the new digital divide—A typology of Internet users in Europe. Int. J. Hum.
Comput. Stud. 69, 3 (Mar. 2011), 123–138. DOI: https://doi.org/10.1016/j.ijhcs.2010.
11.004
[72]
Bryan Marshall, Peter Cardon, Amit Poddar, and Renee Fontenot. 2013. Does
sample size matter in qualitative research?: A review of qualitative interviews in
IS research. J. Comput. Inf. Syst. 54, 1 (Dec. 2015), 11–22. DOI: https://doi.org/10.
1080/08874417.2013.11645667
[73]
Glenn A. Bowen. 2008. Naturalistic inquiry and the saturation concept: A re-
search note. Qual. Research 8, 1 (Feb. 2008), 137–152. DOI: https://doi.org/10.1177/
1468794107085301
[74] Douglas Ezzy. 2013. Qualitative Analysis. Routledge. New York, NY.
[75]
Amanda L. Forest and Joanne V. Wood. 2012. When social networking is not
working: Individuals with low self-esteem recognize but do not reap the benets
of self-disclosure on Facebook. Psychol. Sci. 23, 3 (Feb. 2012), 295–302. DOI:
https://doi.org/10.1177/0956797611429709
[76]
Sherry Turkle. 2011. Alone Together: Why We Expect More from Technology
and Less from Each Other. Basic Books, New York, NY.
[77]
Curtis D. Hardin and Terri Conley. 2001. A relational approach to cognition:
Shared experience and relationship armation in social cognition. In Hardin,
C. D., Conley, T. D., & Moskowitz, G. B. (Eds.), Cognitive SocialPsychology: The
Princeton Symposium on the Legacy and Future of Social Cognition (pp. 3–17).
Lawrence Erlbaum Associates Publishers, Mahwah, NJ.
[78]
Marita Skjuve and Petter Bae Brandtzaeg. 2019. Measuring user experience in chat-
bots: An approach to interpersonal communication competence. In Bodrunova,
S. et al. (Eds.) Internet Science. INSCI 2018. Lecture Notes in Computer Science,
Vol. 11551. Springer, Cham. https://doi.org/10.1007/978-3- 030-17705-8_10