Benefits and Caveats of Personality-Adaptive Conversational Agents
Twenty-Seventh Americas Conference on Information Systems, Montreal, 2021
The Benefits and Caveats of
Personality-Adaptive Conversational Agents
in Mental Health Care
Completed Research
Rangina Ahmad
TU Braunschweig
rangina.ahmad@tu-bs.de
Dominik Siemon
LUT University
dominik.siemon@lut.fi
Ulrich Gnewuch
Karlsruhe Institute of Technology
ulrich.gnewuch@kit.edu
Susanne Robra-Bissantz
TU Braunschweig
s.robra-bissantz@tu-bs.de
Abstract
Artificial intelligence (AI) technologies enable conversational agents (CAs) to perform highly complex tasks
in a human-like manner. For example, CAs may help people cope with anxiety and thus can improve mental
health and well-being. In order to achieve this and support patients in an authentic way, it is necessary to
imbue CAs with human-like behavior, such as personality. However, with today’s powerful AI capabilities,
critical voices in AI ethics are growing louder, urging designers to carefully consider the potential
consequences of CAs that appear too human-like. Personality-adaptive conversational agents
(PACAs), which automatically infer users’ personality traits and adapt to them accordingly, fall
into this category and need to be investigated regarding their benefits and caveats in mental health care.
The results of our qualitative study show that PACAs can be beneficial for mental health support;
however, they also raise concerns among participants about trust and privacy.
Keywords
Conversational Agents, Personality-Adaptive Conversational Agents, Mental Health Care
Introduction
In 1966, when computer scientist Joseph Weizenbaum witnessed how his participants opened their hearts
to a supposedly empathetic machine he had created to simulate a psychotherapist, he was not only shocked
by their emotional attachment to the program but also – prompted by this experience – became an ardent
critic of his own creation (Natale 2018; Peters 2013). The technology behind the “empathetic” machine that
Weizenbaum helped build was a simple computer program called ELIZA that was able to communicate
with humans via natural language (Weizenbaum 1966). While the people interacting with the machine were
ascribing human characteristics to it and some psychiatrists saw ELIZA’s potential computer-based therapy
as a “form of psychological treatment” (Kerr 2003, p. 305; Shah et al. 2016), Weizenbaum himself had
misgivings and wanted to “rob ELIZA of the aura of magic to which its application to psychological subject
matter has to some extent contributed” (Weizenbaum 1966, p. 43). Fast forward almost six decades, and
ever louder voices have emerged calling for a reassessment of the potential “dark sides” of AI and the ethical
responsibilities of developers and designers (e.g. Brendel et al. 2021; Porra et al. 2019). ELIZA’s
descendants, bearing names such as Woebot and Replika, have become an integral part of people’s lives and
have evolved into what Porra et al. describe as “digital creatures that express human-like feelings” (2019,
p. 1). More generally termed conversational agents (CAs) (McTear et al. 2016), these
smart machines have become increasingly capable of handling highly complex tasks with human qualities
such as a higher autonomy of decision-making and thus undoubtedly have diverse impacts on individuals
and societies (Brendel et al. 2021). Critics specifically stress the caveats of creating and perfecting human-
like CAs with simulated feelings, without considering long-term consequences for human beings such as
deep emotional attachments, as the case of ELIZA demonstrated (Porra et al. 2019; Weizenbaum 1966).
However, the issue of “how human a computer-based human-likeness should appear” (Porra et al. 2019,
p. 1) poses a challenge, especially for CA designers: drawing a line between designing CAs that are capable
of expressing human-like characteristics such as personality on the one hand, and reserving the expression
of feelings such as empathy for human interaction alone on the other because they “are the very
substance of our humanness” (Porra et al. 2019, p. 1), means walking on thin ice. At the same time, the
acute global shortage of mental health professionals makes empathetic CAs a promising source of support
(Luxton 2020; Ta et al. 2020; WHO 2021): One in four people in the world is affected by mental health disorders
(WHO 2017). In the current COVID-19 crisis specifically, the need for digital health-related services
has increased rapidly, as in-person, face-to-face therapy is at present almost impossible (Torous et
al. 2020; WHO 2021). Access to (human) therapists is therefore limited, and waiting times to receive
treatment can profoundly affect a person’s quality of life (Luxton 2014). The resulting shortage in health
care providers, contributing to unmet health care needs, could be addressed through the application of CAs
that are accessible anywhere and available at any time to provide counseling and deliver therapeutic
interventions (Luxton 2020). CAs may help people cope with mental health conditions such as depression,
anxiety and loneliness, and thereby enhance mental health and well-being (Ta et al. 2020). CAs furthermore
have the potential to improve health outcomes among care seekers by personalizing their care (Luxton 2014), more
precisely, by capturing their individual dynamic behavior and adapting to their specific personalities. This
type of CA, a personality-adaptive conversational agent (PACA), automatically infers users’ personality
traits and adapts to them accordingly by using language that is specific to a particular personality
dimension (e.g. extraversion, agreeableness), with the aim of enhancing dialogue quality.
The domain of mental health care is a highly patient-centered sphere, where a successful conversation is
dependent on patients’ individual dynamic behavior and the therapist’s ability to adapt to the patient’s
specific needs in order to form a therapeutic relationship (Graham et al. 2020; Luxton 2014). Transferred
to “virtual psychotherapy”, PACAs may be able to establish rapport with the patient to enhance interaction
quality and mental health support. However, in view of the discussed advantages and disadvantages that
can emerge with human-like CAs that cannot express “real” feelings, it is important to take ethical issues
(i.e. trust, privacy, support) into consideration when designing PACAs. Within the scope of this paper, we
therefore identify the benefits and caveats of PACAs in mental health care and pose the following research
question (RQ):
What are the benefits and caveats of personality-adaptive conversational agents (PACAs) in
mental health care?
To address this RQ and the research gap around PACAs, we followed an explorative research
approach and conducted a qualitative study (Babbie 2020). The results of this study contribute to the
understanding of PACAs’ overlooked benefits and emerging caveats in mental health care. The remainder of this paper is structured as
follows: In the theoretical background, we first give a brief overview of selected CAs used in a mental health
care context and elaborate on the concept and functionalities of a PACA. We then explain our method and
how we simulated a conversation between a PACA therapist and a human patient in order to conduct our
qualitative study. We present the results and discuss our RQ.
Theoretical Background
Conversational Agents in Mental Health Care
CAs are software-based systems designed to interact with humans using natural language (Feine et al.
2019). One emerging area in which conversational technologies have the potential to enhance positive
outcomes is in mental health care (Kocaballi et al. 2020; Laranjo et al. 2018). While ELIZA, widely
considered the first functional CA in history (Natale 2018), took on the role of a Rogerian therapist, the
program appeared in a psychotherapeutic context for demonstration purposes only and not for the public
(Weizenbaum 1966). The underlying technology of ELIZA was rather simple: By searching the textual input
of its conversation partner for relevant keywords, the machine produced appropriate responses according
to rules and directions based on hand-crafted scripts by the programmers (Natale 2018; Peters 2013).
PARRY, another early prototype CA, was in contrast to ELIZA designed to
simulate and behave like a person with paranoid schizophrenia (Shum et al. 2018). The developers’
intention was to find out whether psychiatrists could distinguish a real paranoid patient from their computer
model (Shah et al. 2016). According to Heiser et al. (1979, p. 159), their approach was not only “valuable to
researchers in computer science and psychopathology” but also helpful for mental health educators. While
PARRY passed the Turing test for the first time in history, it was still a rule-based CA and functioned
similarly to ELIZA, though with better language understanding capabilities (Shum et al. 2018).
More recent CAs, built on AI technologies such as machine learning and natural
language processing (NLP), have much more powerful capabilities to support mental health care. For
example, there are commercially available mobile phone CA applications that help people to manage
symptoms of anxiety and depression by teaching them self-care and mindfulness techniques (Luxton 2020).
One of these applications, called Woebot, is a CA that is engineered to assess, monitor and respond to users
dealing with mental health issues (Woebot Health 2021). As all patients are individual and symptoms
change, Woebot has a responsive and adaptive way to intervene with in-the-moment help and provide
targeted therapies (Woebot Health 2021). According to its developers, Woebot is a CA that builds a bond
with its users, trying to motivate and engage them in conversations about their mental health (Woebot
Health 2021). Replika, another commercially available CA app, pursues a similar approach as Woebot and
is available for the user as “an AI companion who cares” (Ta et al. 2020, p. 1). Its users report that it is great
to have someone to talk to who doesn't “judge you when you have for example anxiety attacks and a lot of
stress” (Replika 2021). Replika is built to resemble natural human communication as much as possible, and
increased interaction with the CA allows it to learn more about the user (Replika 2021; Ta et al. 2020).
However, neither Woebot nor Replika automatically infers personality traits, and thus neither adapts its
personality to the user. To the best of our knowledge, the concept of a personality-adaptive
conversational agent has not been realized so far; it is described in the following.
Personality-Adaptive Conversational Agents
As shown with Woebot and Replika, a large part of contemporary CAs are designed to reflect specific
characteristics. While these characteristics have been given various names such as “human-like behavior”,
“anthropomorphic features” or “social cues” (Feine et al. 2019), the main reason they are incorporated
in CAs is the consistently positive impact they have been shown to have on users’ interaction quality and user
experience (e.g. Ahmad et al. 2021; Gnewuch et al. 2017). Among characteristics such as gender, voice or
facial expressions (just to name a few social cues) (Feine et al. 2019), personality has also been identified
as one of the key components when designing CAs, especially those aiming for long-term
conversations (Robert et al. 2020). For example, if developers want to design a CA with the (typical)
characteristics of a human therapist, that is, a caring and empathetic communication style, they can
choose from a specific set of cues to achieve their goal. If communication is reduced to a verbal level only,
then personality can be manifested by means of language markers. Across a wide range of linguistic levels,
psychologists have documented the existence of personality cues in language by discovering correlations
between a number of linguistic variables and personality traits (Mairesse and Walker 2010). The more
extreme a person’s score on a trait, the more consistently that trait will be a factor in their behavior
(Mairesse and Walker 2010). Of the so-called Big Five personality dimensions, two are particularly
meaningful in the context of interpersonal interaction: agreeableness and extraversion (McCrae and Costa
Jr 1997; Nass et al. 1995). For example, extraverts were found to have higher speech rates, speak more,
louder, and more repeatedly, with fewer hesitations and pauses, shorter silences, higher verbal output, and
less formal language, whereas people who are highly agreeable show a lot of empathy, agree, compliment,
use longer words and many insight words, and make fewer personal attacks on their conversation partner
(Mairesse and Walker 2010). These language cues are important if developers intend to endow a CA with a
specific personality in their conversational design.
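As a rough illustration of how such lexical markers could be counted in practice, the following sketch scores a text for a few of the cues above. The word lists and the per-100-words normalization are illustrative assumptions, not taken from Mairesse and Walker (2010) or from any deployed system:

```python
# Illustrative sketch (not the authors' implementation): crude scoring of a text
# for a handful of personality-related language cues. The word lists below are
# hypothetical toy examples, not validated lexica.

INSIGHT_WORDS = {"see", "think", "feel", "believe", "know"}        # agreeableness cue
AGREEMENT_WORDS = {"yes", "sure", "agree", "absolutely", "right"}  # agreeableness cue
HEDGES = {"maybe", "perhaps", "possibly"}                          # rarer in extraverts

def cue_scores(text: str) -> dict:
    """Return crude per-100-words counts of personality-related language cues."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    n = max(len(tokens), 1)
    per100 = lambda count: 100 * count / n
    return {
        "insight": per100(sum(t in INSIGHT_WORDS for t in tokens)),
        "agreement": per100(sum(t in AGREEMENT_WORDS for t in tokens)),
        "hedges": per100(sum(t in HEDGES for t in tokens)),
        # agreeable speakers tend to use longer words (Mairesse and Walker 2010)
        "avg_word_len": sum(map(len, tokens)) / n,
    }

scores = cue_scores("Yes, I think you are absolutely right. I see what you mean.")
print(scores)
```

A real system would of course rely on validated lexica (e.g. LIWC categories) and trained models rather than hand-picked word lists.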
Even though much research exists on how to endow machines with personality (Robert et al. 2020) and
how to adapt conversation style, most contemporary CAs still follow a “one-size-fits-all” design.
This means that once a CA is developed with a specific personality for a certain domain, it is not
capable of changing its personality when interacting with individual users. However, since the needs of
users can be fundamentally different, CAs need to be user-adaptive, and specifically personality-adaptive, in
order to accommodate different user needs. Thanks to technological progress, and specifically
with language-based personality mining tools such as IBM Watson’s Personality Insights, LIWC, GloVe or
3-Grams, it is possible to design a PACA that is capable of automatically inferring personality
traits from an individual’s speech or text (Ferrucci 2012). NLP techniques in particular support the
interpretation of massive volumes of natural language by recognizing grammatical properties (e.g.
syntax, context, usage patterns) of a word, sentence or document (Ferrucci 2012). In practice, a PACA would
analyze users’ text data, such as their chat histories or social media posts, in order to derive the users’
personality traits. Once the (dominant) traits are identified, the PACA adapts and responds accordingly in
written or spoken language. Used in a mental health care context, a PACA’s main task would be to socially
support patients in stressful situations in order to improve their health and well-being. However, since in
this specific domain not only the data but also the patients can be particularly sensitive, it is crucial to know
what trust and privacy concerns a PACA might raise, especially with regard to the degree of human-likeness
of CAs and the potential danger of humans becoming emotionally too attached to the machine.
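The infer-then-adapt loop described above can be sketched as follows. The trait markers, function names and response templates are deliberately simplistic placeholders for illustration only, not the design of an actual PACA:

```python
# Hypothetical sketch of a PACA's infer-then-adapt loop: infer the user's
# dominant trait from past messages, then pick a matching response style.
# All markers and templates here are toy assumptions.

from collections import Counter

STYLE = {
    "extraversion": "That sounds exciting! Tell me more about it!",
    "agreeableness": "I see what you mean, and I think that makes a lot of sense.",
}

def infer_dominant_trait(messages: list) -> str:
    """Toy inference: exclamation marks stand in for extraversion markers,
    insight words for agreeableness markers."""
    counts = Counter()
    for msg in messages:
        counts["extraversion"] += msg.count("!")
        counts["agreeableness"] += sum(
            w in {"see", "think", "feel"} for w in msg.lower().split()
        )
    return counts.most_common(1)[0][0]

def respond(messages: list) -> str:
    """Adapt the reply style to the user's inferred dominant trait."""
    return STYLE[infer_dominant_trait(messages)]

print(respond(["I think I feel anxious again.", "I see no way out."]))
```

A production system would replace the toy inference with a trained personality-mining model and the canned templates with generated language conditioned on the inferred trait.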
Method
Sample and Data Collection Procedure
To address our research question, we followed an explorative research approach and conducted a
qualitative study (Babbie 2020). We constructed an open questionnaire with the aim of capturing
comprehensive opinions about PACAs in the context of mental health care regarding support, trust and
privacy. The questionnaire began with an extensive explanation of the functionality and nature
of a PACA to make sure participants from all backgrounds understood the concept. What a PACA is, how it
works and where it can be used was explained in detail (written description), and an example of a simulated
conversation between a PACA therapist and a human patient was provided to the participants (see section
PACA Design). After that, we used several open questions covering the categories support, trust and privacy
and asking the participants for input. Table 1 provides an overview of these open questions.
Category: Support
– Do you think the concept of a PACA is useful/helpful in mental health therapy?
– What are reasons that speak against communicating with a PACA? What concerns would you have in your interaction with the PACA?
– Could a PACA pose a danger? Would you be afraid that the PACA could become manipulative and tell you things that would be rather counterproductive for your mental health?
Category: Trust
– Would you trust a PACA and can you imagine building a relationship with the PACA over a longer period of time? Would you also maintain the relationship with a PACA?
Category: Privacy
– Would you agree to give the PACA access to your data? Would you have privacy concerns?
(Participants were asked to explain each answer in at least 2-3 sentences.)
Table 1. Open Questions for Categories Support, Trust, Privacy
The survey, developed using the platform Limesurvey (version 3.26.0), was distributed via our private
network and the crowdsourcing platform Mechanical Turk (mTurk) and was carried out in December 2020.
Overall, 60 people participated in the study, producing more than 6,865 words of qualitative data; completing
the survey took roughly 25 to 35 minutes (28 minutes on average). Participants (32 male, 28 female)
were between 23 and 71 years old, with an average age of 36 years. The participants were also asked whether
they had any experience with mental health issues and/or mental health-related therapies. 23 participants
indicated that they had such experience, whereas 36 did not and one person abstained from answering.
Moreover, 10 participants stated that they had never used CAs, whereas 32 indicated that they used
CAs on a regular basis. 18 participants had used CAs before, but not on a regular basis. Only 7 participants
stated that they were not satisfied with their CA. In order to analyze the data, we followed a
qualitative content analysis by coding the answers of the participants, which consisted mainly of inductive
category forming (Mayring 2014). In the inductive formation of categories, qualitative content is examined
without reference to theories, and recurring aspects and overarching concepts are recorded and designated
as categories. Similar concepts are then grouped together in order to consequently identify all relevant
concepts (Mayring 2014). The coding process was conducted independently by the authors, and whenever
the results differed, the discrepancies were discussed until a consensus was reached.
PACA Design
In order to help participants visualize what a conversation between a patient and a PACA might look like,
we created a predefined chat record. For our simulated dialogue, we used the conversational design tool
Botsociety (2020), which allows prototyping and visualizing CAs. The conversation was provided in the
form of a video and followed directly after the detailed description of the PACA and before the participants
were asked to answer questions. The dialogue starts with Jules (human), seeking out Raffi (PACA) to talk
because something is on his/her mind. In the course of the conversation, Jules explains that he/she again
is experiencing anxiety and negative thoughts. Raffi then refers to past conversations between the two,
encourages Jules and reminds him/her of old and new affirmations they went through in the past. In the
end, Raffi suggests starting a meditation app for Jules. To appear supportive and trustworthy, the PACA
was intended to take on an agreeable and extraverted personality. In order to achieve this, we used language
cues that were specific for the Big Five dimensions agreeableness and extraversion. For example, to let Raffi
appear more empathic, we used insight words (e.g. see, think) as well as agreements and compliments (“you
are kind and smart”) (Mairesse and Walker 2010). We implemented an extraverted language style
by incorporating higher verbal output, following a “thinking out loud” writing style, and focusing on
pleasure talk as opposed to problem talk (Mairesse and Walker 2010). To make sure Raffi’s language was
perceived as being as extraverted and agreeable as possible, we undertook a twofold review: First, the conversation
was refined until all authors rated the PACA's words as agreeable/extraverted. Second, we used IBM
Watson’s Personality Insights tool (IBM Watson PI 2020) for verification purposes. The personality mining
service returns percentiles for the Big Five dimensions based on text that is being analyzed. In this context,
percentiles are defined as scores that compare one person to a broader population (IBM Watson PI 2020).
Raffi’s words received a score of 87% (Extraversion) and 73% (Agreeableness), meaning that the PACA is
more extraverted than 87% of the people in the population and more agreeable than 73% of the population.
The conversation lasted approximately 2 minutes and 15 seconds. The video with the entire
conversation can be viewed here: https://youtu.be/-sfSNJwCCI0
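The percentile interpretation used above can be illustrated with a toy sketch. The reference scores below are hypothetical, and this is not IBM Watson's actual scoring method; the point is only that a percentile of 87 means the text's raw trait score exceeds that of 87% of the reference population:

```python
# Toy illustration of percentile scoring against a reference population.
# The population scores are made up for demonstration purposes.

def percentile(score: float, population: list) -> float:
    """Share of the reference population scoring strictly below `score`, in percent."""
    below = sum(p < score for p in population)
    return 100 * below / len(population)

# Hypothetical reference scores, evenly spread between 0.00 and 0.99
population = [i / 100 for i in range(100)]

print(percentile(0.87, population))  # → 87.0
```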
Results
Overall, we coded 3 categories and 7 subcategories and assigned 410 text segments to the code system. The first
category, PACA Support, is divided into three subcategories: While Merits covers the advantages of PACA
support in mental health care, Demerits captures the concerns the participants had with a PACA in
this specific context. The third subcategory, Limited Merits, includes all statements of those
respondents who found the support of a PACA only partially helpful. The second category, PACA Trust,
includes statements about the extent to which participants would trust a PACA and whether they would
build a relationship with the CA over a longer period of time. The two codes we derived for this
category are Trustworthy and Untrustworthy. The third and last category, PACA Privacy, specifically
concerns data privacy and whether or not the participants would allow access to their data in order for the CA
to be personality-adaptive. Its two subcategories are called Uncritical and Critical. Figure 1 provides an
overview of the categories and their subcategories.
Figure 1: Coding System of the Qualitative Study
PACA Support
PACA Support contains the most responses, as this category consists of three specific questions altogether
(see Table 1). Support or social support includes mechanisms and activities involving interpersonal
relationships to protect and help people in their daily lives (Ta et al. 2020). One of the most frequently
mentioned demerits is that the PACA has limited skills and that this lack of ability
can lead to wrong judgements. One person stated, for example: “Potential misinterpretations from the
PACA of what I said could lead to more negative thoughts and make things worse”. Similar to this
statement was that of two other participants: “I would be afraid that not all of the details will be understood
correctly and maybe wrong judgement will come up”, “[…] a PACA is not human and cannot fully
understand the full range of issues a person is dealing with”. The non-humanness of the PACA stated in
the latter is another assertion that has been brought up many times by the participants. These people felt
that a human therapist was necessary, and thus could not imagine interacting with a PACA, either without
giving a specific reason or on the grounds that people are better at helping: “No, I most likely will always
chose a real human therapist”, “People really need an actual human to human interaction in life”.
Another demerit mentioned many times concerned the mental health care context in which
the CA was used. Participants indicated that a PACA might not be supportive when it comes specifically
to severe cases of complex mental health issues, which are “probably too hard to solve for todays
AI solutions” in this context. One person elaborated on this aspect by writing that “in mental health they
could do serious damage just by not understanding and addressing user needs”. According to the
participants’ responses, one of the main reasons for such unforeseeable outcomes might be a “negative or
destructive behavior” that a PACA can evoke in patients. Specifically, an “aggressive or dominant
behavior” by the PACA might lead to the patient “completely closing off and losing hope”. In contrast, other
responses mentioned desocialization as a caveat, noting that patients can become “dependent on the PACA
and start to distance from reality and real people”. Other demerits stated by the participants were that
communicating with a PACA would be “creepy” and “odd” for them.
One of the most frequently stated merits mentioned by the participants is the accessibility/availability of
the PACA. While one person thinks the PACA “provides an escape from challenging emotional situations
whenever necessary. […] Raffi can be available when the therapist is not”, another subject states that “it
can be helpful because it functions like a relief cushion for the patient while they wait for a therapist
assigned to them. They feel understood and listened to, no matter how trivial the conversation with the
PACA may be”. Being listened to is another merit that has been named many times. The respondents
indicated the PACA would be like a “penpal” or “like the best friend you have at home”. Further benefits
that have been listed several times include that the PACA can put patients at ease (“it can give out apps to
help soothe the mind”), can memorize past conversations (“[…] it does not forget what has already been
discussed and is not annoyed when the same topic comes up again and again”) and can create a
personalized experience (“[…] it makes you feel like there is some more meaning to sharing it with
something that can at least pretend to care. It can help personalize your experience. This makes people
feel worthy”).
A large proportion of the participants stated that a PACA might be helpful for mental health therapy by
motivating and/or advising the patients, specifically by being a “helpful support in everyday life”. One
person further pointed out that if “developed carefully, and deployed and monitored safely, PACAs have
enormous potential for helping bridge the gap between patients’ needs and the mental health system's
available qualified staff and resources”. Some respondents noted that “[…] if it feels genuine with good
and varied tips/tasks/advice” and “[…] if the AI is so genuine that it’s hard to distinguish from a human”
they can imagine using the PACA as a support system for mental health issues. While some participants
stated that they do not fear the PACA could become manipulative or pose a danger, other respondents wrote
that they only partially believe in the merits of the PACA. They specifically noted that a PACA can be
considered as a “short-term supporting system” that “prepares the patients mentally” but that human
therapists should “regularly intervene and supervise”. In fact, the statement that a PACA should be
monitored by human therapists has been mentioned by the majority of participants in one way or another.
A further limited merit that has been brought up several times by the respondents is that the benefits of
using a PACA strongly depend on how the PACA is designed and skilled, as “communication style, body
language and tone of voice are very important and powerful elements of communication and have a great
impact on others”.
PACA Trust
Another important factor identified in the participants’ responses was trust. Trust is commonly understood
as the “willingness of a party to be vulnerable to the actions of another party based on the expectation that
the other will perform a particular action important to the trustor, irrespective of the ability to monitor or
control that other party” (Mayer et al. 1995, p. 712). Participants mentioned that “trust is very important
for all the time” and an important precondition for the development and maintenance of a long-term
relationship with the PACA. However, the participants had different views on whether or not a PACA can
build up enough trust to establish a long-term relationship, even after seeing a “real” human therapist.
On the one hand, some participants argued that they would stop using the PACA when a human therapist
was available. For example, one participant argued “if I were seeing a real human therapist, I would not
see the need to continue chatting with the PACA”. Trust issues were also associated with the difference
between human and AI in that “a PACA is not human and cannot fully understand the full range of issues
a person is dealing with”. Only if it were “hard to distinguish from a human […] or gives such
good advice” could it be a partner in a long-term relationship. Participants also stated that the PACA would
have to continuously improve and learn about the user in order to build a trusting relationship. For example,
one participant expected the PACA to get “rid of any flaws and be very helpful in my everyday life for me
to talk to it like a spouse I married”. Finally, some participants expressed their general privacy concerns
which would hinder any form of long-term relationship (“I don’t trust any listening device, the privacy
risks are simply too great”).
However, other participants were more open towards establishing trust and maintaining a relationship with
the PACA over a longer period of time. They would “feel comfortable talking to it even after seeing a human
therapist”. One important reason was that they would “find it easier instead of constantly calling the
therapist”, particularly because they regard some of their issues as “just too small to bother someone
with”. These participants could not only imagine themselves trusting the PACA because it can “give
something back and seems to care”, but also because it would be “like a friend you have at home”.
Moreover, participants stated that “the more you interact with it the easier it could be” to build trust and
maintain a relationship with the PACA.
PACA Privacy
PACA Privacy captured all concerns of the participants involving the necessity of a PACA to gather and
analyze (sensitive) data in order to assess the personality of a user and adapt accordingly. Privacy refers to
the non-public area in which a person pursues the free development of his or her personality undisturbed
by external influences. Concerns arise when such private information could enter the public domain without
authorization (Phelps et al. 2000). The participants addressed the aspects that they consider to be
particularly critical and those that they consider to be rather uncritical. The most important critical aspect
was the potential invasion of privacy, as participants did not feel comfortable sharing personal information
and “feel a little bit under a microscope”. One participant stated that it “sounds alarming to allow a PACA
access to your personal data and communications”, while another participant said that “as a user you
always need to be aware of what the information could be used for and vulnerabilities always exist”.
The second most frequently mentioned aspect was a critical view of data security and the possibility of data
leaks. Participants were concerned that information could be leaked or stolen, or that the system could be
hacked or malfunction. In addition, the company running a PACA needs to be trusted and a clear policy
needs to be established. Specific concerns were expressed about the way in which the data is stored, whether
on the device or in the cloud. Overall, the participants expressed their concerns primarily because a PACA deals
with highly personal data, especially in the context of health care, that must be particularly protected and
not fall into the hands of third parties under any circumstances.
On a positive note, the participants largely agreed that they need to provide data in order for
the PACA to work properly. “Yes, I would allow it to access my data. I would be willing to trust it if it could
help me in the long run” said one participant. To “get the best results”, the participants agreed on providing
data to the PACA so that it can adapt to a user and “help my therapy in a positive way”. Even though many
participants had concerns about the use of sensitive data, they would share their data under certain
conditions in order to take advantage of the PACA. Above all, the data should be sent and stored in
encrypted form and not passed on to third parties; only the most necessary information should be used; and
the data should be deleted as soon as it is no longer needed. Under these concrete conditions, and if they
are specifically communicated by the PACA, a disclosure of private information was accepted by the
participants. The perceived benefit from the use of the personal information should also be communicated
by the PACA and be visible to the user. Therefore, the design of the PACA and its handling of personal data
is critical. Table 2 summarizes the results for all categories.
Category    Subcategory
Support     Merits, Demerits, Limited Merits
Trust       Trustworthy, Untrustworthy
Privacy     Uncritical, Critical

Table 2. Summary of Generated Codes
Discussion & Conclusion
The objective of this paper was to identify a PACA’s benefits and caveats in the context of mental health
care, as its human-like features - specifically its personality-adaptivity - might raise ethical concerns. The
results of our study shed light on both the negative and positive aspects of PACAs. As expected, the majority
of participants were more critical than uncritical about the handling of their sensitive data. Although a number
of participants stated that they could imagine building a trustworthy relationship with the PACA, it should
not be ignored that almost as many indicated they did not find the PACA from the example trustworthy.
This suggests that people perceive CAs differently and have varied preferences concerning the
communication style of a CA, highlighting once again the individual differences of people. Corresponding
to findings from previous studies (e.g. Luxton 2014), a PACA may offer helpful support to people in need,
put them at ease, and can be a friend who listens when human therapists are not available – specifically in
light of the current pandemic, this can be considered as an enormous benefit. However, in line with existing
research (e.g. Luxton 2020; Porra et al. 2019), PACAs may also create an unintended (emotional)
dependency which, for example, can lead to further desocialization. Weizenbaum’s caveat of a “Nightmare
Computer” (1972) thus could indeed come true if not addressed properly. In the 1960s, AI capabilities were
limited, and much like her namesake Eliza Doolittle from the play “Pygmalion” (Shaw 2008), Weizenbaum’s
ELIZA had no understanding of the actual conversation but merely simulated a discourse with intelligent
phrasing. Yet, ELIZA simulated her psychotherapeutic conversations so convincingly that people got deeply
and emotionally involved with the program. This demonstrates how “simple” verbal communication can be
used or taken advantage of to achieve positive or negative outcomes. With today’s powerful AI capabilities,
the current critical voices regarding AI ethics are therefore very much justified. The unreflective enthusiasm
of designing CAs without carefully considering any potential consequences for people’s well-being can
quickly backfire. Although humans do know from a philosophical perspective that machines are not capable
of expressing “real” feelings, they still respond to them emotionally as if they were. A CA’s poor
communication skills, and especially those of a PACA that is supposed to adapt closely to the user’s
communication preferences, could aggravate negative health outcomes instead of improving them. Therefore,
one of the major findings of this study is that special attention must be paid to ethical design principles that
guide developers in how personality-based language in CAs may be used. Future research may extend our analysis
of the identified benefits and caveats of PACAs using experiments with different types of PACAs and by
taking the perspective of therapists into account.
REFERENCES
Ahmad, R., Siemon, D., and Robra-Bissantz, S. 2021. “Communicating with Machines: Conversational
Agents with Personality and the Role of Extraversion,” in Proceedings of the 54th Hawaii International
Conference on System Sciences, p. 4043.
Araujo, T. 2018. “Living up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and
Communicative Agency Framing on Conversational Agent and Company Perceptions,” Computers in
Human Behavior (85), pp. 183–189.
Babbie, E. R. 2020. The Practice of Social Research, Cengage Learning.
Bassett, C. 2019. “The Computational Therapeutic: Exploring Weizenbaum’s ELIZA as a History of the
Present,” AI & SOCIETY (34:4), pp. 803–812.
Botsociety. 2020. “Design, Preview and Prototype Your next Chatbot or Voice Assistant.”
(https://botsociety.io, accessed February 27, 2020).
Boyd, R. L., and Pennebaker, J. W. 2017. “Language-Based Personality: A New Approach to Personality in
a Digital World,” Current Opinion in Behavioral Sciences (18), pp. 63–68.
Brendel, A. B., Mirbabaie, M., Lembcke, T.-B., and Hofeditz, L. 2021. “Ethical Management of Artificial
Intelligence,” Sustainability (13:4), p. 1974.
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. 2019. “A Taxonomy of Social Cues for Conversational
Agents,” International Journal of Human-Computer Studies (132), pp. 138–161.
Ferrucci, D. A. 2012. “Introduction to ‘This Is Watson,’” IBM Journal of Research and Development
(56:3.4), pp. 11.
Gnewuch, U., Morana, S., and Maedche, A. 2017. “Towards Designing Cooperative and Social
Conversational Agents for Customer Service,” in ICIS.
Graham, S. A., Lee, E. E., Jeste, D. V., Van Patten, R., Twamley, E. W., Nebeker, C., Yamada, Y., Kim, H.-
C., and Depp, C. A. 2020. “Artificial Intelligence Approaches to Predicting and Detecting Cognitive
Decline in Older Adults: A Conceptual Review,” Psychiatry Research (284), p. 112732.
Grudin, J., and Jacques, R. 2019. “Chatbots, Humbots, and the Quest for Artificial General Intelligence,” in
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19, Glasgow,
Scotland, UK: ACM Press, pp. 1–11.
Heiser, J. F., Colby, K. M., Faught, W. S., and Parkison, R. C. 1979. “Can Psychiatrists Distinguish a
Computer Simulation of Paranoia from the Real Thing?: The Limitations of Turing-like Tests as
Measures of the Adequacy of Simulations,” Journal of Psychiatric Research (15:3), pp. 149–162.
IBM Watson PI. 2020. “IBM Watson Personality Insights.” (https://personality-insights-
demo.ng.bluemix.net/, accessed February 27, 2020).
Kerr, I. R. 2003. “Bots, Babes and the Californication of Commerce,” U. Ottawa L. & Tech. J. (1), p. 285.
Kocaballi, A. B., Laranjo, L., Quiroz, J., Rezazadegan, D., Kocielnik, R., Clark, L., Liao, V., Park, S., Moore,
R., and Miner, A. 2020. Conversational Agents for Health and Wellbeing.
Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi,
F., Lau, A. Y. S., and Coiera, E. 2018. “Conversational Agents in Healthcare: A Systematic Review,”
Journal of the American Medical Informatics Association (25:9), pp. 1248–1258.
Luxton, D. D. 2014. “Recommendations for the Ethical Use and Design of Artificial Intelligent Care
Providers,” Artificial Intelligence in Medicine (62:1), pp. 1–10.
Luxton, D. D. 2020. “Ethical Implications of Conversational Agents in Global Public Health,” Bulletin of
the World Health Organization (98:4), p. 285.
Mairesse, F., and Walker, M. A. 2010. “Towards Personality-Based User Adaptation: Psychologically
Informed Stylistic Language Generation,” User Modeling and User-Adapted Interaction (20:3), pp.
227–278.
Mayer, R. C., Davis, J. H., and Schoorman, F. D. 1995. “An Integrative Model of Organizational Trust,”
Academy of Management Review (20:3), pp. 709–734.
Mayring, P. 2014. Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software
Solution.
McCrae, R. R., and Costa Jr, P. T. 1997. “Personality Trait Structure as a Human Universal.,” American
Psychologist (52:5), p. 509.
McTear, M., Callejas, Z., and Griol, D. 2016. The Conversational Interface: Talking to Smart Devices,
Springer.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, D. C. 1995. “Can Computer Personalities Be Human
Personalities?,” International Journal of Human-Computer Studies (43:2), pp. 223–239.
Natale, S. 2018. “If Software Is Narrative: Joseph Weizenbaum, Artificial Intelligence and the Biographies
of ELIZA,” New Media & Society (21:3), pp. 712–728.
Peters, O. 2013. Critics of Digitalisation: Against the Tide: Warners, Sceptics, Scaremongers,
Apocalypticists: 20 Portraits, Studien Und Berichte Der Arbeitsstelle Fernstudienforschung Der Carl
von Ossietzky Universität Oldenburg, Oldenburg: BIS-Verlag der Carl von Ossietzky Universität
Oldenburg.
Phelps, J., Nowak, G., and Ferrell, E. 2000. “Privacy Concerns and Consumer Willingness to Provide
Personal Information,” Journal of Public Policy & Marketing (19:1), pp. 27–41.
Porra, J., Lacity, M., and Parks, M. S. 2019. “Can Computer Based Human-Likeness Endanger
Humanness? – A Philosophical and Ethical Perspective on Digital Assistants Expressing Feelings They
Can’t Have,” Information Systems Frontiers.
Replika. 2021. “Replika.AI.” (https://replika.ai, accessed February 28, 2021).
Robert, L. P., Alahmad, R., Esterwood, C., Kim, S., You, S., and Zhang, Q. 2020. “A Review of Personality
in Human Robot Interactions,” ArXiv Preprint ArXiv:2001.11777.
Shah, H., Warwick, K., Vallverdú, J., and Wu, D. 2016. “Can Machines Talk? Comparison of Eliza with
Modern Dialogue Systems,” Computers in Human Behavior (58), pp. 278–295.
Shaw, G. B. 2008. Pygmalion and Major Barbara, Bantam Classics.
Shum, H., He, X., and Li, D. 2018. “From Eliza to XiaoIce: Challenges and Opportunities with Social
Chatbots,” Frontiers of Information Technology & Electronic Engineering (19:1), pp. 10–26.
Strohmann, T., Siemon, D., and Robra-Bissantz, S. 2019. “Designing Virtual In-Vehicle Assistants: Design
Guidelines for Creating a Convincing User Experience,” AIS Transactions on Human-Computer
Interaction (11:2), pp. 54–78.
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., and Loggarakis, A. 2020.
“User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic
Analysis,” Journal of Medical Internet Research (22:3), p. e16235.
Torous, J., Myrick, K. J., Rauseo-Ricupero, N., and Firth, J. 2020. “Digital Mental Health and COVID-19:
Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow,” JMIR Mental
Health (7:3), p. e18848.
Weizenbaum, J. 1966. “ELIZA—a Computer Program for the Study of Natural Language Communication
between Man and Machine,” Communications of the ACM (9:1), pp. 36–45.
Weizenbaum, J. 1972. “Nightmare Computer,” Die Zeit.
WHO. 2017. “Depression and Other Common Mental Disorders: Global Health Estimates,” World Health
Organization.
WHO. 2021. “WHO Executive Board Stresses Need for Improved Response to Mental Health Impact of
Public Health Emergencies.” (https://www.who.int/news/item/11-02-2021-who-executive-board-
stresses-need-for-improved-response-to-mental-health-impact-of-public-health-emergencies,
accessed April 20, 2021).
Woebot Health. 2021. “Woebot.” (https://woebothealth.com/products-pipeline/, accessed February 28,
2021).