Politeness Strategies in the Design of
Voice Agents for Mental Health
Joseph Newbold
UCLIC, London
joseph.newbold.14@ucl.ac.uk
Gavin Doherty
School of Computer Science and Statistics,
Trinity College Dublin, Dublin, Ireland
Gavin.Doherty@scss.tcd.ie
Sean Rintel
Microsoft Research, Cambridge
serintel@microsoft.com
Anja Thieme
Microsoft Research, Cambridge
anthie@microsoft.com
INTRODUCTION
There is growing development of conversational agents or chatbots to support the (self-)management of mental
health [3,10]. Previous work has shown how perceptions of conversational agents as caring or polite can
contribute to a sense of empathy and aid the disclosure of sensitive information, but also risk inviting misperceptions
of the agents' emotional capabilities [2,6,7]. Recent research suggests that we need to better understand how the design
of dialogue systems may impact people’s perceptions of a conversational agent [4,5,9,11], and through this their
readiness to engage or to openly disclose about their mental health. In this paper, we suggest the use of Brown and
Levinson’s politeness strategies [1] as a theoretical underpinning for the design of conversational dialogue structure,
and apply them to create dialogue templates for a mental health ‘mood log’, an approach that has been shown to be a
beneficial way for technology to support mental health self-management [8].
KEYWORDS
Mental health; health monitoring; voice user interface; voice assistant; accessibility; design concept.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided
that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the
owner/author(s).
CHI’19 Extended Abstracts, May 4-9, 2019, Glasgow, Scotland, UK.
© 2019 Copyright is held by the author/owner(s).
ACM ISBN 978-1-4503-5971-9/19/05.
DOI: https://doi.org/10.1145/3290607.XXXXXXX
Figure 1: Brown and Levinson’s politeness
model, from [1]
Table 1. Mood log questions built on
politeness strategies that reflect either a more
‘personal’ or a more ‘passive’ agent.
DESIGNING POLITE CONVERSATIONAL VOICE AGENTS FOR MOOD LOGGING
At the heart of Brown and Levinson’s model [1] (Fig. 1) is the concept of the Face-Threatening Act (FTA): any act
that challenges the face wants of an interlocutor. Face wants are divided into positive and negative face. A
person’s positive face is their desire to be approved of and appreciated; their negative face is their desire not to be
impeded. We use these strategies to explore how politeness can be used to create different agent personalities, and how
this may impact people’s interactions with the agent and their readiness to self-disclose about mental health. We created
two dialogue templates by applying Brown and Levinson’s strategies for performing FTAs to a set of relevant questions
for logging a person’s mood (Tab. 1). Taking the initiation of a mood log as an example, a bald-on-record strategy
translates to a direct question: “What is your mood today?”; while an off-record approach asks indirectly: “Mood
logging is a good way to track how you are feeling”. Neither of these strategies, however, offers much encouragement
to respond. Positive and negative politeness strategies, in contrast, include expressions that can motivate engagement.
For example, a statement such as: “I would love to know how you are feeling today” may encourage a response by
appealing to positive face; while a statement such as: “You wouldn't be able to tell me how you are feeling, would
you?” shows consideration for a person’s negative face.
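To make this mapping concrete, the sketch below (ours, not part of any existing system) shows one way the four FTA strategies could be held as alternative phrasings of the same mood query; the strategy keys and the helper function are illustrative names only, and the phrasings are those quoted above.

```python
# Illustrative sketch only: alternative phrasings of the mood query under
# Brown and Levinson's four strategies for performing a face-threatening act.
# The strategy keys and the helper function are hypothetical names.
MOOD_QUERY_BY_STRATEGY = {
    "bald_on_record":      "What is your mood today?",
    "off_record":          "Mood logging is a good way to track how you are feeling.",
    "positive_politeness": "I would love to know how you are feeling today.",
    "negative_politeness": "You wouldn't be able to tell me how you are feeling, would you?",
}

def phrase_mood_query(strategy: str) -> str:
    """Return the phrasing of the mood query for the chosen politeness strategy."""
    return MOOD_QUERY_BY_STRATEGY[strategy]
```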
By comparing different options for politely asking a person about their mood, we further noticed how some queries
are more passive and impersonal, such as: “It is good to log your mood often”; whereas others seem to imply a
vested interest in the person and their wellbeing. For example, a voice assistant asking: “I would love to know how
you are feeling today” implies a personal connection to the individual and can make the agent appear more caring
and emotionally intelligent. Other examples seem to sit between a passive and a personal agent, such as: “If you let
me know how you are feeling, I will log it for you”. This implies a more passive agent that upholds politeness
without implying an empathetic connection. To make this difference more explicit, we divided our politeness
translations for the mood log (Tab. 1) into those that were more personal, reflecting a sense of care for, or personal
investment in, the person, like a human companion; and those that were more passive, portraying a more indirect
way of asking for information, like an impersonal assistant to the user.
A clear distinguishing feature in the set of questions for each agent is the use of personal pronouns by the personal
agent. Its questions refer directly to itself and to its relationship with the person, through expressions such as “I would
love to know…”, “It would be great for me…”, “We could…”, and by relating to the person as “partner”. It conveys
a vested emotional interest in the person, stating that it is “glad to hear”, would “love to know”, and hopes “to speak” to
the person again soon, as well as framing the mood log as a shared activity between itself and the person: “Ok
logging-partner…”, or “We have finished the log…”. This can create the impression that the agent is an active
contributor to the logging experience and, despite being a technical system without any emotional capability, is
emotionally invested in the user. Conversely, the passive agent does not use any personal pronouns and instead places
the emphasis on the user: “Would you like to log your mood today?”; and it frames its role as an assistant to the person:
“Now that you have completed your mood entry this will be added to your log for you to review later”. This
conversational structure avoids emotional evaluations of the person’s responses or expressions of the agent’s own sentiment.
Thus, when translating dialogue structures from human-human conversation to voice agents, we have to be
mindful of, and need to study more closely, how users come to perceive the agents and their purpose in supporting a
specific activity (here, mood logging). This likely requires careful balancing in the dialogue design, in ways that
invite self-disclosure without risking unrealistic expectations of the agent’s emotional or relational capabilities.
Query | Personal Agent | Passive Agent
Log initiation | Let's get logging partner! | Okay, let's begin the mood log.
Mood rating | I would love to know your mood on a scale of 1-5? | If you rate your mood from 1-5, this will be added to the log
Mood response (positive) | I am glad to hear it! | Thank you, your mood has been logged as 5
Mood response (negative) | I am sorry to hear that! | Thank you, your mood has been logged as 2
Mood situation | It would be great for me to know about a specific situation that made you feel this way. | If you have the time could you log a specific situation that made you feel this way?
Lifestyle choice | Ok mate, could you tell me how many hours you slept last night? | You could also log how many hours of sleep you got last night.
Diary entry | It would be great if you could tell me how your day has been. | Could you also log how your day has been?
Log completion | That's great! We have finished the log, I hope to speak to you again soon. | Now that you have completed your mood entry this will be added to your log for you to review later.
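As a minimal illustrative sketch (ours, not the system described here), the two dialogue templates in Table 1 could be represented as simple lookup structures and replayed in order. The mood-response rows are omitted for brevity, and the template keys, query order, and text-based prompt loop are assumptions for illustration; a voice agent would replace input()/print() with speech recognition and synthesis.

```python
# Illustrative sketch only: the 'personal' and 'passive' dialogue templates
# from Table 1, keyed by query type. input()/print() stand in for speech
# recognition and synthesis in a real voice agent.
QUERIES = ["log_initiation", "mood_rating", "mood_situation",
           "lifestyle_choice", "diary_entry", "log_completion"]

TEMPLATES = {
    "personal": {
        "log_initiation": "Let's get logging partner!",
        "mood_rating": "I would love to know your mood on a scale of 1-5?",
        "mood_situation": ("It would be great for me to know about a specific "
                           "situation that made you feel this way."),
        "lifestyle_choice": ("Ok mate, could you tell me how many hours you "
                             "slept last night?"),
        "diary_entry": ("It would be great if you could tell me how your day "
                        "has been."),
        "log_completion": ("That's great! We have finished the log, I hope to "
                           "speak to you again soon."),
    },
    "passive": {
        "log_initiation": "Okay, let's begin the mood log.",
        "mood_rating": ("If you rate your mood from 1-5, this will be added "
                        "to the log."),
        "mood_situation": ("If you have the time could you log a specific "
                           "situation that made you feel this way?"),
        "lifestyle_choice": ("You could also log how many hours of sleep you "
                             "got last night."),
        "diary_entry": "Could you also log how your day has been?",
        "log_completion": ("Now that you have completed your mood entry this "
                           "will be added to your log for you to review later."),
    },
}

def run_mood_log(style: str) -> dict:
    """Walk through the mood log with the chosen agent style and collect answers."""
    template = TEMPLATES[style]
    entry = {}
    for query in QUERIES[:-1]:          # every prompt except the closing line
        entry[query] = input(template[query] + "\n> ")
    print(template["log_completion"])   # closing statement, no answer expected
    return entry
```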
REFERENCES
[1] Penelope Brown and Stephen C. Levinson. 1987. Politeness: Some universals in language usage.
Cambridge: Cambridge University Press.
[2] Timothy W. Bickmore and Rosalind W. Picard. 2004. Towards caring machines. In CHI '04 Extended
Abstracts on Human Factors in Computing Systems (CHI EA '04). ACM, 1489-1492.
https://doi.org/10.1145/985921.986097
[3] Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior
therapy to young adults with symptoms of depression and anxiety using a fully automated
conversational agent (Woebot): a randomized controlled trial. JMIR mental health 4, no. 2.
http://mental.jmir.org/2017/2/e19/
[4] David R. Large, Leigh Clark, Annie Quandt, Gary Burnett, and Lee Skrypchuk. 2017. Steering the
conversation: a linguistic exploration of natural language interactions with a digital assistant during
simulated driving. Applied Ergonomics 63, 53-61.
[5] Ewa Luger and Abigail Sellen. 2016. Like Having a Really Bad PA: The Gulf between User
Expectation and Experience of Conversational Agents. In Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems (CHI '16). ACM, 5286-5297.
https://doi.org/10.1145/2858036.2858288
[6] Junhan Kim, Yoojung Kim, Byungjoon Kim, Sukyung Yun, Minjoon Kim, and Joongseek Lee. 2018.
Can a Machine Tend to Teenagers' Emotional Needs?: A Study with Conversational Agents. In Extended
Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Paper LBW018. ACM.
https://doi.org/10.1145/3170427.3188548
[7] Gale M. Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It’s only a computer:
Virtual humans increase willingness to disclose. Computers in Human Behavior 37 (2014): 94-100.
https://doi.org/10.1016/j.chb.2014.04.043
[8] Mark Matthews and Gavin Doherty. 2011. In the mood: engaging teenagers in psychotherapy using
mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI '11), 2947−2956. http://doi.acm.org/10.1145/1978942.1979379
[9] Adam S. Miner, Arnold Milstein, Stephen Schueller, Roshini Hegde, Christina Mangurian, and Eleni
Linos. 2016. Smartphone-based conversational agents and responses to questions about mental health,
interpersonal violence, and physical health. JAMA Internal Medicine 176, 5, 619-625.
https://doi.org/10.1001/jamainternmed.2016.0400
[10] Jessica Schroeder, Chelsey Wilkes, Kael Rowan, Arturo Toledo, Ann Paradiso, Mary Czerwinski,
Gloria Mark, and Marsha M. Linehan. 2018. Pocket Skills: A Conversational Mobile Web App To
Support Dialectical Behavioral Therapy. In Proceedings of the 2018 CHI Conference on Human
Factors in Computing Systems (CHI '18). ACM, Paper 398, 15 pages.
https://doi.org/10.1145/3173574.3173972
[11] Ning Wang, W. Lewis Johnson, Richard E. Mayer, Paola Rizzo, Erin Shaw, and Heather Collins. 2008.
The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer
Studies 66, 2, 98-112. https://doi.org/10.1016/j.ijhcs.2007.09.003