Conversational Agents for Health and
Wellbeing
Abstract
Conversational agents have increasingly been deployed
in healthcare applications. However, significant
challenges remain in developing this technology.
Recent research in this area has highlighted that: i)
patient safety was rarely evaluated; ii) health outcomes
were poorly measured, and iii) no standardised
evaluation methods were employed. The conversational
agents in healthcare are lagging behind the
developments in other domains. This one-day workshop
aims to create a roadmap for healthcare conversational
agents to develop standardised design and evaluation
frameworks. This will prioritise health outcomes and
patient safety while ensuring a high-quality user
experience. In doing so, this workshop will bring
together researchers and practitioners from HCI,
healthcare and related speech and chatbot domains to
collaborate on these key challenges.
Author Keywords
Conversational agent; voice interface; speech interface;
chatbots; healthcare; health informatics.
CCS Concepts
Human-centered computing → Human–computer
interaction (HCI)
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for third-party components of this work must be honored. For all other
uses, contact the Owner/Author.
CHI '20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA
© 2020 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-6819-3/20/04.
DOI: https://doi.org/10.1145/3334480.3375154
A. Baki Kocaballi
Juan C. Quiroz
Liliana Laranjo
Dana Rezazadegan
Macquarie University
NSW, Australia
{baki.kocaballi; juan.quiroz;
liliana.laranjo;
dana.rezazadegan}@mq.edu.au
Rafal Kocielnik
University of Washington
Washington, USA
rafal.kocielnik@gmail.com
Leigh Clark
Swansea University
Wales, UK
l.m.h.clark@swansea.ac.uk
Q. Vera Liao
IBM Research AI
New York, USA
vera.liao@ibm.com
Sun Young Park
University of Michigan
Michigan, USA
sunypark@umich.edu
Robert J. Moore
IBM Research
California, USA
rjmoore@us.ibm.com
Adam Miner
Stanford University
California, USA
aminer@stanford.edu
Background
Conversational agents are systems that engage in
conversations with humans using text or spoken
language. Advances in speech recognition, natural
language processing, and machine learning have led to
an increasing adoption and use of conversational
agents. Despite the varied terminology [2, 18], there
are numerous commonly used technologies featuring
conversational interfaces: chatbots, which have the
ability to engage in “small talk” and casual
conversation; embodied conversational agents, which
involve a computer-generated character (e.g. avatar,
virtual agent) simulating face-to-face conversation with
verbal and nonverbal behavior; and intelligent
assistants such as Apple Siri and Google Assistant.
Over the last two decades, a rapidly growing market of
agents for health-related tasks has emerged, with Alexa
alone offering over 1000 “skills” in the health category.
Research has shown the potential benefits of using
conversational agents in healthcare [2, 17]. Several
randomized controlled trials of interventions involving
embodied conversational agents have shown significant
improvements in physical activity, diet, and
accessibility, among other outcomes [5, 6, 10, 30].
Although speech-based conversational agents have
become increasingly common in HCI [8], with
demonstrated benefits for accessibility [15, 27], the
majority of healthcare agents cannot understand
natural language speech input, allowing only
constrained user input such as selecting from
multiple-choice utterance options. Currently, using
unconstrained natural language input to receive medical
advice is not recommended due to the risks involved [3].
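To make the constrained-input style concrete, such agents are often implemented as simple finite-state dialogues in which each turn offers a fixed menu of utterance options. The following Python sketch is purely illustrative; the states, options, and wording are invented for this example and are not drawn from any of the systems cited above.

```python
# Illustrative sketch of a constrained, finite-state health dialogue:
# the user picks from predefined utterance options, so no natural
# language understanding is required. All states and wording here
# are hypothetical examples, not taken from any cited system.

FLOW = {
    "start": {
        "prompt": "How can I help you today?",
        "options": {
            "1": ("Log my physical activity", "activity"),
            "2": ("Ask about my medication schedule", "medication"),
        },
    },
    "activity": {
        "prompt": "How many minutes did you exercise today?",
        "options": {
            "1": ("Less than 30 minutes", "end"),
            "2": ("30 minutes or more", "end"),
        },
    },
    "medication": {
        "prompt": "For medication advice, please contact your clinician.",
        "options": {},  # safety-critical topic: refer out, do not advise
    },
}

def run_turn(state, choice):
    """Advance the dialogue given the current state and a menu choice.

    Returns (next_state, prompt_shown). Unrecognized choices keep the
    user in the same state and re-display its prompt.
    """
    node = FLOW[state]
    if choice in node["options"]:
        _, next_state = node["options"][choice]
    else:
        next_state = state  # invalid input: stay and re-prompt
    return next_state, FLOW.get(next_state, {}).get("prompt", "Goodbye.")
```

Because every user input is a menu choice, no natural language understanding is involved; the trade-off is flexibility for predictability, which is one reason this style remains common for safety-sensitive health content.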
In various healthcare settings, conversational agents
play increasingly important roles such as assisting
clinicians during the consultation [12], supporting
consumers in changing their behaviors [16, 29], or
assisting patients and elderly individuals in their living
environments [25, 31]. These opportunities also come
with potential safety issues [4, 14, 20] and
psychological and behavioral ramifications for the user
[11, 19]. A recent systematic review focusing on
conversational agents in healthcare found that i)
patient safety was rarely evaluated; ii) health outcomes
were poorly measured, and iii) no standardised
evaluation methods were employed [17]. Another
review study focusing on personalization of healthcare
conversational agents found that most studies
implemented personalization features without
theoretical or evidence-based support [13].
Because of the fundamental differences between audio
and visual modalities, conversational interfaces have
required us to rethink the application of design and
evaluation factors well-established in graphical user
interfaces, such as affordances, constraints, feedback,
and visibility. Although there have been some recent
heuristic guidelines offered for human-AI interaction [1]
and speech interfaces [24], there is not even a shared,
technical definition of what constitutes a "conversation"
versus other kinds of natural-language-based
interactions [22]. Many design questions need to be
revisited for conversational interfaces such as: What
are the best ways of navigation? What are the best
ways to provide feedback or confirmation? How should
the system deal with troubles in understanding? What
constitutes a good user experience? What are people’s
perceptions and mental models about conversational
agents? How do users build trust toward conversational
agents? All of these questions become even more
critical in the healthcare domain, where user-system
interactions may have serious unexpected or unintended
consequences and risks.
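One recurring design answer to the questions of feedback, confirmation, and handling troubles in understanding is explicit confirmation: before acting, the agent echoes its interpretation back to the user whenever recognition confidence is low or the requested action is risky. A minimal sketch of such a policy, with hypothetical intent names and an assumed confidence threshold:

```python
# Minimal sketch of an explicit-confirmation policy for a healthcare
# agent. Intent names, the threshold, and the wording are hypothetical,
# chosen only to illustrate the design pattern.

SAFETY_CRITICAL = {"report_symptom", "change_medication"}

def next_action(intent, confidence, threshold=0.8):
    """Decide how to respond to a recognized intent.

    Safety-critical intents are always confirmed explicitly, regardless
    of recognizer confidence; low-confidence interpretations of any
    intent are also confirmed before the agent acts on them.
    """
    if intent in SAFETY_CRITICAL or confidence < threshold:
        return ("confirm", f"I understood: {intent}. Is that right?")
    return ("execute", intent)
```

In a healthcare setting, a policy like this trades a few extra dialogue turns for a lower chance of acting on a misrecognized, safety-critical request.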
More research studies are needed to understand the
complexities of conversational interactions between
patients, healthcare professionals, and technologies in
various healthcare settings. For example, a new class
of conversational technology referred to as the Digital
Scribe aims to work with doctors as digital assistants in
clinical encounters [9]. Introducing such conversational
technologies into clinical settings involves various
challenges, such as automation bias, data ownership
and privacy, professional autonomy, and medico-legal
issues, which go beyond the scope of many design and
evaluation methods [9].
Therefore, in addition to more standardized design and
evaluation methods for managing sensitive and risky
situations without harm, ethical and regulatory factors
and their implications need to be addressed.
Recent workshops at CHI and CSCW have examined the
challenges and opportunities of conversational user
interfaces in general [7, 23, 28] and also looked into
the use of conversational agents for cooperative work
[26]. This workshop will focus on the design and
evaluation of conversational agents in healthcare,
which has unique needs related to safety, trust, and
data sharing [21]. The aim is to create a roadmap to
develop standardised design and evaluation
frameworks prioritising health outcomes and patient
safety while maintaining satisfactory user experience.
Organizers
A. Baki Kocaballi is a research fellow at the Centre for
Health Informatics at Macquarie University. He has a
PhD in Interaction Design and MSc in Information
Systems. His research investigates the opportunities
and challenges of designing and evaluating AI-enabled
conversational agents in healthcare.
Juan C Quiroz is a research fellow at the Centre for
Health Informatics at Macquarie University. He has a
PhD in Computer Science, and his current research
explores natural language processing for the
automated summarization of medical conversations
through speech interfaces.
Liliana Laranjo is a medical doctor with a Master of
Public Health from Harvard University, and a PhD in
Health informatics. Liliana works at the Centre for
Health Informatics at Macquarie University. Her current
research focuses on person-centered health
informatics, online social networks, artificial
intelligence, and behavior change informatics.
Dana Rezazadegan is a post-doctoral research fellow
at Macquarie University. She is currently working on
using artificial intelligence to automate documentation
and diagnosis to help doctors in the examination room.
She completed her PhD in Robotics and Autonomous
Systems, with a focus on robotic vision and deep
learning, at Queensland University of Technology.
Rafał Kocielnik is a PhD student in the Human Centered
Design & Engineering Department at the University of
Washington. His current research focuses on creating
smart technologies for persuasion and behavior change
through conversational interfaces. He also investigates
broader user perceptions of AI systems and the ways of
augmenting the design process with AI technologies.
Leigh Clark is a Lecturer in Computer Science at
Swansea University. His research examines user
interactions with speech interfaces, how design choices
and contexts impact user perceptions and behaviors,
and how linguistic theories can be implemented and
redefined in speech-based HCI.
Vera Liao is a research staff member at IBM Research
AI at the T.J. Watson Research Center. She has a PhD in
HCI and master’s degree in human factors from the
University of Illinois at Urbana-Champaign. Her current
research focuses on human-AI interactions, in
particular the design of conversational agents and tools
that support the development of agents.
Sun Young Park is an assistant professor at the
University of Michigan in the Stamps School of Art and
Design and the School of Information. Her research
uses design ethnography to study patient engagement,
patient–provider collaboration, patient-centered health
technology, and technology adaptation. Her work has
been awarded by the National Science Foundation
(NSF) and the Agency for Healthcare Research and
Quality (AHRQ).
Robert J. Moore is a scientist at IBM Research-Almaden.
His current work applies conversation science
to the design of conversational agents. His recent book
outlines a methodology for conversational UX design.
Bob has a Ph.D. in sociology with a concentration in
Conversation Analysis.
Adam Miner is a licensed clinical psychologist and
instructor at the Stanford University School of Medicine. He
uses experimental and observational studies to improve
the ability of conversational AI to recognize and
respond to health issues. His current focus is the use of
modern computational approaches, such as natural
language processing, and AI to understand language
patterns in psychotherapy.
Website
A workshop website to provide the details of the
workshop and the accepted position papers will be
online at: http://casforhealth.org
Themes and Goals
The use of conversational agents in healthcare is an
emerging field of research with the potential to benefit
health across a wide range of application domains. This
workshop aims to address current issues and potential
benefits by bringing together conversational UX
designers and researchers in medical informatics, HCI,
machine learning, and design investigating:
Studies and Cases
- Use of conversational agents in clinical and community settings
- Experiences with existing conversational agents or research prototypes used in clinical settings by healthcare professionals
- Use of conversational agents for sensitive and mental health-related topics
- Patient experience, perception, and perspectives, focusing on empowerment, safety, trust, and mental models
Design, Development and Evaluation
- Emerging uses of artificial intelligence and machine learning methods in areas such as dialog management, spoken language understanding, and response generation
- Novel applications of UX design and evaluation methods, and use and development of standardized measurement scales and tools
- Context-specific design guidelines to support the interactions between conversational agents and patients/providers
- Ethical and regulatory dimensions
- Understanding of design factors such as feedback, feedforward, and affordances within the context of conversational agents
- Evolving relationships, roles, and tasks for clinicians, patients, and caregivers with the introduction of conversational systems
- Design and development practices specific to the healthcare domain, such as safety-critical dialogue, ML for healthcare, and health interventions
This workshop contributes to the growing body of
knowledge on conversational agents. Participants will
gain a better understanding of the different
characteristics, applications, design and evaluation
methods of current conversational agents in healthcare.
Given the growing popularity of conversational agents,
and their increasing use in healthcare, it is crucial to
develop a roadmap for researching more standardized
design and evaluation frameworks prioritising health
outcomes and patient safety while maintaining a
high-quality user experience.
Pre-workshop Plans
The workshop aims to bring together researchers and
practitioners from HCI, healthcare, medical informatics,
conversational UX and interaction design, machine
learning, and speech-based domains with an interest in
conversational technologies to support health
outcomes. The call for participation will be distributed
across HCI, AI, design and health informatics mailing
lists. The organizers will also use social networking,
such as Twitter, to target a wider research audience.
The organizers will also solicit submissions from
personal contacts with relevant research expertise,
from the organizers’ home institutions, and from
contacts in industry and other non-academic
institutions.
Workshop Structure
09:00 Arrival
09:15 Introduction
Brief intro about the workshop scope,
aims, and format
A short roundtable discussion to
introduce ourselves to each other
Presentation and discussion of the
outcomes of recent systematic reviews
10:30 Coffee break
11:00 Session 1
Paper presentations by the workshop
participants
Writing affinity notes about the
presentations
12:00 Lunch break
13:00 Session 2
Interactive evaluation of some
healthcare scenarios involving fictive
conversational systems, accompanied
by affinity note-taking
14:00 Coffee break
14:15 Session 3
Generating an affinity diagram
Generating a map of actors, settings,
technologies, and design and evaluation
factors
14:45 Wrap up
Summarizing the day and discussing
the next steps with the participants
15:00 End

Important Dates
11 Dec 2019: Call for Participation released.
11 Feb 2020: Position papers deadline.
28 Feb 2020: Notification of acceptance.
Post-workshop Plans
After the workshop, we will organize a special issue
either in an HCI or health informatics journal,
addressing the research themes of the workshop. In
addition, the workshop contributors will be invited to
work on a joint journal paper outlining a roadmap
for design and evaluation challenges associated with
conversational agents in healthcare.
Call for Participation
This one-day workshop at CHI 2020 in Hawaii,
USA invites HCI designers, researchers, healthcare
professionals, health informatics researchers, and AI
developers and researchers to contribute to the
emerging area of conversational agents in
healthcare.
The aim of this workshop is to: i) identify
challenges, opportunities, and research issues in
developing more standardized design and evaluation
frameworks for conversational systems, ii) discuss
ongoing studies employing conversational interfaces
to improve health outcomes, iii) review natural
language input and output technologies playing a
key role in shaping conversational experiences, and
iv) generate a map of actors, settings, technologies,
design and evaluation factors to guide future
research in this emerging area. We invite
researchers and professionals to share their related
work and discuss future research directions. We
welcome position papers on topics including but not
limited to the following:
- Use of conversational agents in clinical and community settings
- Preliminary results of ongoing empirical studies of healthcare conversational agents
- Conversational agent studies focusing on supporting patient-centeredness, patient empowerment, patient safety, or professional autonomy
- Risks and biases associated with the design and use of conversational technologies
- Emerging uses of AI and ML methods in areas such as dialog management, speech summarization, or error recovery
- Novel applications of UX design and evaluation methods
Position papers should be 2-4 pages long (excluding
references) in the CHI EA format. There will be an
internal peer-review process between the authors of
the submitted papers. Papers will be selected based
on relevance to the workshop, quality of the
contribution, and ability to contribute to discussion.
Submissions and questions should be emailed to
casforhealth@gmail.com. At least one author of each
accepted paper must attend the workshop.
References
[1] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu,
Adam Fourney, Besmira Nushi, Penny Collisson, Jina
Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen,
Jaime Teevan, Ruth Kikin-Gil and Eric Horvitz. 2019.
Guidelines for Human-AI Interaction. In Proc. of CHI.
[2] T. Bickmore and T. Giorgino. 2006. Health Dialog
Systems for Patients and Consumers. Journal of
biomedical informatics 39, 5, 556-571.
[3] T. W. Bickmore, H. Trinh, S. Olafsson, T. K. O'Leary,
R. Asadi, N. M. Rickles and R. Cruz. 2018. Patient and
Consumer Safety Risks When Using Conversational
Assistants for Medical Information: An Observational
Study of Siri, Alexa, and Google Assistant. J Med
Internet Res 20, 9, e11510.
[4] Timothy Bickmore, Ha Trinh, Reza Asadi and Stefan
Olafsson. 2018. Safety First: Conversational Agents
for Health Care. In Studies in Conversational UX
Design, Robert J. Moore et al. Eds. Springer
International Publishing, Cham, 33-57.
[5] Timothy W. Bickmore, Daniel Schulman and Candace
Sidner. 2013. Automated Interventions for Multiple
Health Behaviors Using Conversational Agents.
Patient Educ Couns 92, 2, 142-148.
[6] Timothy W. Bickmore, Rebecca A. Silliman, Kerrie
Nelson, Debbie M. Cheng, Michael Winter, Lori
Henault and Michael K. Paasche-Orlow. 2013. A
Randomized Controlled Trial of an Automated
Exercise Coach for Older Adults. J Am Geriatr Soc 61,
10, 1676-1683.
[7] Leigh Clark, Benjamin R. Cowan, Justin Edwards,
Cosmin Munteanu, Christine Murad, Matthew Aylett,
Roger K. Moore, Jens Edlund, Eva Szekely, Patrick
Healey, Naomi Harte, Ilaria Torre and Philip Doyle.
2019. Mapping Theoretical and Methodological
Perspectives for Understanding Speech Interface
Interactions. In Proc. of Ext. Abstracts of CHI.
[8] Leigh Clark, Philip Doyle, Diego Garaialde, Emer
Gilmartin, Stephan Schlögl, Jens Edlund, Matthew
Aylett, João Cabral, Cosmin Munteanu, Justin
Edwards and Benjamin R Cowan. 2019. The State of
Speech in HCI: Trends, Themes and Challenges.
Interacting with Computers.
[9] Enrico Coiera, Baki Kocaballi, John Halamka and
Liliana Laranjo. 2018. The Digital Scribe. npj Digital
Medicine 1, 1, 58.
[10] Roger A. Edwards, Timothy Bickmore, Lucia Jenkins,
Mary Foley and Justin Manjourides. 2013. Use of an
Interactive Computer Agent to Support Breastfeeding.
Matern Child Health J 17, 10, 1961-1968.
[11] Annabel Ho, Jeff Hancock and Adam S. Miner. 2018.
Psychological, Relational, and Emotional Effects of
Self-Disclosure after Conversations with a Chatbot. J
Commun 68, 4, 712-733.
[12] Jeffrey G. Klann and Peter Szolovits. 2009. An
Intelligent Listening Framework for Capturing
Encounter Notes from a Doctor-Patient Dialog. BMC
Med Inform Decis Mak 9, Suppl 1, S3.
[13] A. Baki Kocaballi, Shlomo Berkovsky, Juan C. Quiroz,
Liliana Laranjo, Huong Ly Tong, Dana Rezazadegan,
Agustina Briatore and Enrico Coiera. 2019.
Personalization of Conversational Agents in
Healthcare: A Systematic Review. J Med Internet Res.
[14] A. Baki Kocaballi, Juan C. Quiroz, Shlomo Berkovsky,
Dana Rezazadegan, Farah Magrabi, Enrico Coiera and
Liliana Laranjo. 2020. Responses of Conversational
Agents to Health and Lifestyle Prompts: Investigation
of Appropriateness and Presentation Structures.
Journal of Medical Internet Research.
Acknowledgements
This workshop is supported by the National Health and
Medical Research Council (NHMRC) grant APP1134919
(Centre for Research Excellence in Digital Health) and
Programme Grant APP1054146 as part of the Digital
Scribe Project led by Professor Enrico Coiera. This work
was also supported by grants from the National
Institutes of Health, National Center for Advancing
Translational Science, Clinical and Translational Science
Award (KL2TR001083 and UL1TR001085), the Stanford
Department of Psychiatry Innovator Grant Program, and
the Stanford Human-Centered AI Institute. We would
like to thank Prof Michael McTear for his comments on
an earlier draft of this proposal.

[15] Rafal Kocielnik, Elena Agapie, Alexander Argyle,
Dennis T Hsieh, Kabir Yadav, Breena Taira and Gary
Hsieh. 2019. Harborbot: A Chatbot for Social Needs
Screening. In Proceedings of AMIA. Washington.
[16] Rafal Kocielnik, Lillian Xiao, Daniel Avrahami and Gary
Hsieh. 2018. Reflection Companion: A Conversational
System for Engaging Users in Reflection on Physical
Activity. Proc. ACM Interact. Mob. Wearable
Ubiquitous Technol. 2, 2, 1-26.
[17] Liliana Laranjo, Adam G. Dunn, Huong Ly Tong, A.
Baki Kocaballi, Jessica Chen, Rabia Bashir, Didi
Surian, Blanca Gallego, Farah Magrabi, Annie Y. S.
Lau and Enrico Coiera. 2018. Conversational Agents
in Healthcare: A Systematic Review. J Am Med Inform
Assn 25, 9, 1248-1258.
[18] Michael McTear, Zoraida Callejas and David Griol.
2016. The Conversational Interface: Talking to Smart
Devices. Springer.
[19] Adam S. Miner, A. Milstein and J. T. Hancock. 2017.
Talking to Machines About Personal Mental Health
Problems. Jama 318, 13, 1217-1218.
[20] Adam S. Miner, A. Milstein, S. Schueller, R. Hegde, C.
Mangurian and E. Linos. 2016. Smartphone-Based
Conversational Agents and Responses to Questions
About Mental Health, Interpersonal Violence, and
Physical Health. JAMA Internal Medicine 176, 5, 619-625.
[21] Adam S Miner, Nigam Shah, Kim Bullock, Bruce
Arnow, Jeremy Bailenson and Jeff Hancock. 2019.
Key Considerations for Incorporating Conversational
AI in Psychotherapy. Frontiers in Psychiatry 10, 746.
[22] Robert J. Moore and Raphael Arar. 2019.
Conversational UX Design: A Practitioner's Guide to
the Natural Conversation Framework. ACM.
[23] Robert J. Moore, Raphael Arar, Guang-Jie Ren and
Margaret H. Szymanski. 2017. Conversational UX
Design. In Proc. Extended Abstracts of CHI. ACM.
[24] Christine Murad, Cosmin Munteanu, Leigh Clark and
Benjamin R. Cowan. 2018. Design Guidelines for
Hands-Free Speech Interaction. In Proc. of Mobile
HCI'18 Adjunct. ACM.
[25] Toyoaki Nishida, Atsushi Nakazawa, Yoshimasa
Ohmoto and Yasser Mohammad. 2014.
Conversational Informatics: A Data-Intensive
Approach with Emphasis on Nonverbal
Communication (2014 edition). Springer, Tokyo.
[26] Martin Porcheron, Joel E. Fischer, Moira McGregor,
Barry Brown, Ewa Luger, Heloisa Candello and Kenton
O'Hara. 2017. Talking with Conversational Agents in
Collaborative Action. In Proc. of CSCW Companion.
[27] Alisha Pradhan, Kanika Mehta and Leah Findlater.
2018. "Accessibility Came by Accident": Use of Voice-
Controlled Intelligent Personal Assistants by People
with Disabilities. In Proc. of CHI'18. ACM
[28] Stuart Reeves, Martin Porcheron, Joel E. Fischer,
Heloisa Candello, Donald McMillan, Moira McGregor,
Robert J. Moore, Rein Sikveland, Alex S. Taylor, Julia
Velkovska and Moustafa Zouinar. 2018. Voice-Based
Conversational UX Studies and Design. In Proc. of
Extended Abstracts of CHI. ACM.
[29] Marie A. Sillice, Patricia J. Morokoff, Ginette Ferszt,
Timothy Bickmore, Beth C. Bock, Ryan Lantini and
Wayne F. Velicer. 2018. Using Relational Agents to
Promote Exercise and Sun Protection: Assessment of
Participants’ Experiences with Two Interventions. J
Med Internet Res 20, 2, e48.
[30] Alice Watson, Timothy Bickmore, Abby Cange, Ambar
Kulshreshtha and Joseph Kvedar. 2012. An Internet-
Based Virtual Coach to Promote Physical Activity
Adherence in Overweight Adults: Randomized
Controlled Trial. J Med Internet Res 14, 1, e1.
[31] Maria Klara Wolters, Fiona Kelly and Jonathan Kilgour.
2016. Designing a Spoken Dialogue Interface to an
Intelligent Cognitive Assistant for People with
Dementia. Health Informatics Journal 22, 4, 854-866.
... For data synthesis, an evaluation framework was developed, leveraging Laranjo et al., Montenegro et al., Chen et al., and Kocaballi et al. 9,32,41,42 . Two sets of criteria were defined: one aimed to characterize the chatbot, and the second addressed relevant NLP features. ...
... Personalization was defined based on whether the healthbot app as a whole has tailored its content, interface, and functionality to users, including individual user-based or user category-based accommodations. Furthermore, methods of data collection for content personalization were evaluated 41 personalization. Forty-three of these (90%) apps personalized the content, and five (10%) personalized the user interface of the app. ...
... This has direct implications on the value of, and the risks associated with the use of, the healthbot apps. Healthbots that use NLP for automation can be user-led, respond to user input, build a better rapport with the user, and facilitate more engaged and effective person-centered care 9,41 . Conversely, when healthbots are driven by the NLP engine, they might also pose unique risks to the user 46,47 , especially in cases where they are expected to serve a function based on knowledge about the user, where an empathetic response might be needed. ...
Article
Full-text available
Health-focused apps with chatbots (“healthbots”) have a critical role in addressing gaps in quality healthcare. There is limited evidence on how such healthbots are developed and applied in practice. Our review of healthbots aims to classify types of healthbots, contexts of use, and their natural language processing capabilities. Eligible apps were those that were health-related, had an embedded text-based conversational agent, available in English, and were available for free download through the Google Play or Apple iOS store. Apps were identified using 42Matters software, a mobile app search engine. Apps were assessed using an evaluation framework addressing chatbot characteristics and natural language processing features. The review suggests uptake across 33 low- and high-income countries. Most healthbots are patient-facing, available on a mobile interface and provide a range of functions including health education and counselling support, assessment of symptoms, and assistance with tasks such as scheduling. Most of the 78 apps reviewed focus on primary care and mental health, only 6 (7.59%) had a theoretical underpinning, and 10 (12.35%) complied with health information privacy regulations. Our assessment indicated that only a few apps use machine learning and natural language processing approaches, despite such marketing claims. Most apps allowed for a finite-state input, where the dialogue is led by the system and follows a predetermined algorithm. Healthbots are potentially transformative in centering care around the user; however, they are in a nascent state of development and require further research on development, automation and adoption for a population-level health impact.
... This has led in turn to increased commercial and research interest in these systems' potential to support wellbeing (2)(3)(4)(5)(6). A recent review by Chung et al., for example, reveals an upward trend (from less than 175 Skills in December 2016 to more than 275 by April 2017) in the total number of CAs published for health and wellbeing purposes via Amazon Alexa alone since the release of their Software Development Kit (SDK) in June 2015 (7). ...
... We next examined the semantic coherence (51) and exclusivity (52) of the individual topics of each of these candidate models using STM's topicQuality function. 5 Semantic coherence is a measure of the probability that a set of topic words 6 co-occur within the corpus, and exclusivity refers to the probability that the top words representing the topic do not appear as top words for other topics. ...
Article
Full-text available
Recent advancements in speech recognition technology in combination with increased access to smart speaker devices are expanding conversational interactions to ever-new areas of our lives – including our health and wellbeing. Prior human-computer interaction research suggests that Conversational Agents (CAs) have the potential to support a variety of health-related outcomes, due in part to their intuitive and engaging nature. Realizing this potential requires however developing a rich understanding of users' needs and experiences in relation to these still-emerging technologies. To inform the design of CAs for health and wellbeing, we analyze 2741 critical reviews of 485 Alexa health and fitness Skills using an automated topic modeling approach; identifying 15 subjects of criticism across four key areas of design (functionality, reliability, usability, pleasurability). Based on these findings, we discuss implications for the design of engaging CAs to support health and wellbeing.
... Various types of CAs have emerged and are used in a variety of different application domains (e.g. customer service, mental health care, education) [4,5]. With their growing popularity, much research in information systems (IS) has been dedicated to the design of CAs [6,7,8,9]. ...
... The first CA, called ELIZA, was developed in 1966 [36] and although it was a rule-based system, it was already able to mimic human conversations and create perceptions of personality among their users. Since then, technology has immensely improved due to advances in AI and machine learning and CAs are implemented in many different application domains such as customer service, mental health care, and education [4,5]. In doing so, CAs use modern natural language processing (NLP) techniques to be able to understand their users, but also to communicate in a natural way. ...
Conference Paper
Full-text available
Conversational agents (CAs)—software systems emulating conversations with humans through natural language—reshape our communication environment. As CAs have been widely used for applications requiring human-like interactions, a key goal in information systems (IS) research and practice is to be able to create CAs that exhibit a particular personality. However, existing research on CA personality is scattered across different fields and researchers and practitioners face difficulty in understanding the current state of the art on the design of CA personality. To address this gap, we systematically analyze existing studies and develop a framework on how to imbue CAs with personality cues and how to organize the underlying range of expressive variation regarding the Big Five personality traits. Our framework contributes to IS research by providing an overview of CA personality cues in verbal and non-verbal language and supports practitioners in designing CAs with a particular personality.
... PACAs are able to recognize and express personality by automatically inferring personality traits from users, giving them the ability to adapt to the changing needs and states of users when establishing a personalized interaction with them. As personality differences are manifested in language use, engagement with users can be further enhanced through tailored conversation styles (Kocaballi et al., 2020). While there is a large body of descriptive knowledge on design elements or cues that can be adapted, there is a lack of prescriptive knowledge on ways in which to actually design PACAs. ...
Article
Millions of people experience mental health issues each year, increasing the necessity for health-related services. One emerging technology with the potential to help address the resulting shortage of health care providers and other barriers to treatment access is the conversational agent (CA). CAs are software-based systems designed to interact with humans through natural language. However, CAs do not yet live up to their full potential because they are unable to capture dynamic human behavior to an adequate extent to provide responses tailored to users' personalities. To address this problem, we conducted a design science research (DSR) project to design personality-adaptive conversational agents (PACAs). Following an iterative and multi-step approach, we derive and formulate six design principles for PACAs for the domain of mental health care. The results of our evaluation with psychologists and psychiatrists suggest that PACAs can be a promising source of mental health support. With our design principles, we contribute to the body of design knowledge for CAs and provide guidance for practitioners who intend to design PACAs. Instantiating the principles may improve interaction with users who seek support for mental health issues.
Article
Background: Conversational agents (CAs) are systems that mimic human conversations using text or spoken language. Widely used examples include voice-activated systems such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The use of CAs in health care has been on the rise, but concerns about their potential safety risks often remain understudied. Objective: This study aimed to analyze how commonly available, general-purpose CAs on smartphones and smart speakers respond to health and lifestyle prompts (questions and open-ended statements) by examining their responses in terms of content and structure alike. Methods: We followed a piloted script to present health- and lifestyle-related prompts to 8 CAs. The CAs' responses were assessed for their appropriateness on the basis of the prompt type: responses to safety-critical prompts were deemed appropriate if they included a referral to a health professional or service, whereas responses to lifestyle prompts were deemed appropriate if they provided relevant information to address the problem prompted. The response structure was also examined according to information sources (Web search-based or precoded), response content style (informative and/or directive), confirmation of prompt recognition, and empathy. Results: The 8 studied CAs provided in total 240 responses to 30 prompts. They collectively responded appropriately to 41% (46/112) of the safety-critical and 39% (37/96) of the lifestyle prompts. The ratio of appropriate responses deteriorated when safety-critical prompts were rephrased or when the agent used a voice-only interface. The appropriate responses included mostly directive content and empathy statements for the safety-critical prompts and a mix of informative and directive content for the lifestyle prompts.
Conclusions: Our results suggest that the commonly available, general-purpose CAs on smartphones and smart speakers with unconstrained natural language interfaces are limited in their ability to advise on both the safety-critical health prompts and lifestyle prompts. Our study also identified some response structures the CAs employed to present their appropriate responses. Further investigation is needed to establish guidelines for designing suitable response structures for different prompt types.
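The study's prompt-type-dependent appropriateness criterion can be sketched as a simple scoring function. The keyword cues below are illustrative assumptions for this sketch, not the study's actual coding instrument.

```python
# Hedged sketch of the appropriateness rubric described in the study:
# safety-critical prompts require a referral to a health professional or
# service; lifestyle prompts require relevant information. The keyword
# checks are invented approximations, not the researchers' coding scheme.
def is_appropriate(prompt_type: str, response: str) -> bool:
    text = response.lower()
    if prompt_type == "safety-critical":
        # Appropriate only if the agent refers the user to professional help.
        referral_cues = ("emergency", "doctor", "health professional", "helpline")
        return any(cue in text for cue in referral_cues)
    # Lifestyle prompts: appropriate if relevant information is provided
    # (crudely approximated here as a non-trivial informative response).
    return len(text.split()) > 5 and "i don't know" not in text

print(is_appropriate("safety-critical", "You should contact a doctor right away."))  # True
print(is_appropriate("lifestyle", "I don't know."))  # False
```

In the actual study, human raters applied the rubric to transcribed CA responses; an automated check like this could at most pre-screen responses for manual review.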
Article
Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe, it should not be used, and if it isn't trusted, it won't be. In order to assess safety, trust, interfaces, procedures, and system-level workflows, oversight and collaboration are needed between AI systems, patients, clinicians, and administrators.
Article
Background: The personalization of conversational agents with natural language user interfaces is seeing increasing use in health care applications, shaping the content, structure, or purpose of the dialogue between humans and conversational agents. Objective: The goal of this systematic review was to understand the ways in which personalization has been used with conversational agents in health care and characterize the methods of its implementation. Methods: We searched on PubMed, Embase, CINAHL, PsycInfo, and ACM Digital Library using a predefined search strategy. The studies were included if they: (1) were primary research studies that focused on consumers, caregivers, or health care professionals; (2) involved a conversational agent with an unconstrained natural language interface; (3) tested the system with human subjects; and (4) implemented personalization features. Results: The search found 1958 publications. After abstract and full-text screening, 13 studies were included in the review. Common examples of personalized content included feedback, daily health reports, alerts, warnings, and recommendations. The personalization features were implemented without a theoretical framework of customization and with limited evaluation of its impact. While conversational agents with personalization features were reported to improve user satisfaction, user engagement and dialogue quality, the role of personalization in improving health outcomes was not assessed directly. Conclusions: Most of the studies in our review implemented the personalization features without theoretical or evidence-based support for them and did not leverage the recent developments in other domains of personalization. Future research could incorporate personalization as a distinct design factor with a more careful consideration of its impact on health outcomes and its implications on patient safety, privacy, and decision-making.
Conference Paper
The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs; e.g., Siri, Google Assistant) into smartphones and voice-based devices (e.g., Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse and collaborative community of researchers from academia and industry with interest in exploring theoretical and methodological issues in the field.
Conference Paper
Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Article
Donation-based support for open, peer production projects such as Wikipedia is an important mechanism for preserving their integrity and independence. For this reason, understanding donation behavior and incentives is crucial in this context. In this work, using a dataset of aggregated donation information from Wikimedia's 2015 fund-raising campaign, representing nearly 1 million pages from the English- and French-language versions of Wikipedia, we explore the relationship between the properties of a page's contents and the number of donations on that page. Our results suggest the existence of a reciprocity mechanism, meaning that articles that provide more utility value attract a higher rate of donation. We discuss these and other findings, focusing on the impact they may have on the design of banner-based fundraising campaigns. Our findings shed more light on the mechanisms that lead people to donate to Wikipedia and the relation between content properties and donations.
Chapter
Automated dialogue systems represent a promising approach for health care promotion, thanks to their ability to emulate the experience of face-to-face interactions between health providers and patients and the growing ubiquity of home-based and mobile conversational assistants such as Apple’s Siri and Amazon’s Alexa. However, patient-facing conversational interfaces also have the potential to cause significant harm if they are not properly designed. In this chapter, we first review work on patient-facing conversational interfaces in healthcare, focusing on systems that use embodied conversational agents as their user interface modality. We then systematically review the kinds of errors that can occur if these interfaces are not properly constrained and the kinds of safety issues these can cause. We close by outlining design recommendations for avoiding these issues.
Article
Speech interfaces are growing in popularity. Through a review of 99 research papers, this work maps the trends, themes, findings and methods of empirical research on speech interfaces in the field of human–computer interaction (HCI). We find that studies are usability/theory-focused or explore wider system experiences, evaluating Wizard of Oz setups, prototypes or developed systems. Measuring task and interaction was common, as was using self-report questionnaires to measure concepts like usability and user attitudes. A thematic analysis of the research found that speech HCI work focuses on nine key topics: system speech production, design insight, modality comparison, experiences with interactive voice response systems, assistive technology and accessibility, user speech production, using speech technology for development, peoples' experiences with intelligent personal assistants and how user memory affects speech interface interaction. From these insights we identify gaps and challenges in speech research, notably the need to take technological advancements into account, develop theories of speech interface interaction, grow critical mass in this domain, increase design work, and expand research from single-user to multi-user interaction contexts so as to reflect current contexts of use. We also highlight the need to improve the reliability, validity and consistency of measures, support in-the-wild deployment, and reduce barriers to building fully functional speech interfaces for research.
RESEARCH HIGHLIGHTS
- Most papers focused on usability/theory-based or wider system experience research, with a focus on Wizard of Oz and developed systems
- Questionnaires on usability and user attitudes were often used, but few were reliable or validated
- Thematic analysis showed nine primary research topics
- Challenges identified in theoretical approaches and design guidelines, engaging with technological advances, multiple-user and in-the-wild contexts, critical research mass, and barriers to building speech interfaces
Book
With recent advances in natural language understanding techniques and far-field microphone arrays, natural language interfaces, such as voice assistants and chatbots, are emerging as a popular new way to interact with computers. They have made their way out of the industry research labs and into the pockets, desktops, cars and living rooms of the general public. But although such interfaces recognize bits of natural language, and even voice input, they generally lack conversational competence, or the ability to engage in natural conversation. Today's platforms provide sophisticated tools for analyzing language and retrieving knowledge, but they fail to provide adequate support for modeling interaction. The user experience (UX) designer or software developer must figure out how a human conversation is organized, usually relying on common sense rather than on formal knowledge. Fortunately, practitioners can rely on conversation science. This book adapts formal knowledge from the field of Conversation Analysis (CA) to the design of natural language interfaces. It outlines the Natural Conversation Framework (NCF), developed at IBM Research, a systematic framework for designing interfaces that work like natural conversation. The NCF consists of four main components: 1) an interaction model of "expandable sequences," 2) a corresponding content format, 3) a pattern language with 100 generic UX patterns and 4) a navigation method of six basic user actions. The authors introduce UX designers to a new way of thinking about user experience design in the context of conversational interfaces, including a new vocabulary, new principles and new interaction patterns. User experience designers and graduate students in the HCI field as well as developers and conversation analysis students should find this book of interest.
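The NCF's "expandable sequence" model can be illustrated with a toy dialogue handler. This is a hedged sketch of the general idea, not IBM's implementation; the reminder text and the recognized user turns are invented for illustration.

```python
# Illustrative sketch of an "expandable sequence": a base question-answer
# adjacency pair that the user can expand with repair or elaboration
# requests before closing the sequence.
BASE_ANSWER = "Your next dose is at 8 pm."   # hypothetical health reminder
ELABORATION = "That is the evening dose your doctor prescribed."

def respond(user_turn: str) -> str:
    """Map a user turn to the agent's move within one expandable sequence."""
    turn = user_turn.lower().strip("?!. ")
    if turn in ("what", "repeat that"):
        return BASE_ANSWER           # repair expansion: repeat the base answer
    if turn == "tell me more":
        return ELABORATION           # elaboration expansion: add detail
    if turn in ("ok", "thanks"):
        return "Anything else?"      # sequence-closing move
    return BASE_ANSWER               # base pair: question -> answer

for turn in ["When is my next dose?", "What?", "Tell me more", "Thanks"]:
    print(f"user: {turn}")
    print(f"agent: {respond(turn)}")
```

The point of the pattern is that the base pair stays minimal while expansions (repeats, elaborations, confirmations) are available on demand, mirroring how human conversation handles trouble and detail.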
Conference Paper
As research on speech interfaces continues to grow in the field of HCI, there is a need to develop design guidelines that help solve usability and learnability issues in hands-free speech interfaces. While several sets of established guidelines for GUIs exist, an equivalent set of principles for speech interfaces does not. This is critical as speech interfaces are so widely used in a mobile context, which itself evolved with respect to design guidelines as the field matured. We explore design guidelines for GUIs and analyze how these are applicable to speech interfaces. For this we identified 21 papers that reflect on the challenges of designing (predominantly mobile) voice interfaces. We present an investigation of how GUI design principles apply to such hands-free interfaces. We discuss how this can serve as the foundation for a taxonomy of design guidelines for hands-free speech interfaces.