Empathic Chatbot: Emotional Intelligence for
Mental Health Well-being
Sarada Devaram
Faculty of Science & Technology
Bournemouth University
Bournemouth, United Kingdom
s5227932@bournemouth.ac.uk
Abstract—Conversational chatbots are Artificial Intelligence (AI)-powered applications that assist users with various tasks by responding in natural language, and they are prevalent across different industries. Most of the chatbots we encounter on websites, and digital assistants such as Alexa and Siri, do not express empathy towards the user, and their ability to empathise remains immature. A lack of empathy towards the user is not critical for a transactional or interactive chatbot, but bots designed to support mental healthcare patients need to understand the emotional state of the user and tailor the conversation accordingly. This research explains the different types of emotional intelligence methodologies adopted in the development of empathic chatbots, and how far they have been adopted and succeeded.
Keywords—empathy, emotions, chatbots, conversational agent, mental health, sentiment analysis, artificial intelligence, affective computing
I. INTRODUCTION
According to the World Health Organization (WHO), 1 in 10 people worldwide need mental healthcare, and different mental disorders are portrayed by a combination of perceptions, feelings, and relationships with others [1]. Results of a household survey conducted by the National Health Service (NHS) state that in England, 1 in 4 people experience a mental health problem each year, and 1 in 6 people face a common mental health problem, such as anxiety or depression, each week [6]. The number of people affected is increasing gradually, and with the isolation that Covid-19 brought, the numbers are much higher [7]. Despite access to health and social services, and even though the number of people who need care is high, only 70 mental health professionals per 100,000 people are available in high-income nations and 2 per 100,000 in low-income nations [5].
Patients distressed by mental conditions struggle to get professional help due to social stigma and hesitation [8]. Furthermore, countries are facing a shortage of mental health professionals [1]. In this situation, it is not easy to provide one-to-one support to treat a patient with a mental health disorder. To overcome these problems, mental health professionals have adopted technology, specifically Artificial Intelligence-based chatbots, as the first line of defence to address the needs of individuals affected by mental health problems [2]-[4]. While dealing with a mental health patient, it is vital to understand their emotional state; responding with simple micro-interventions, such as a suggestion for a deep-breathing exercise or a friendly conversation, can be useful in increasing the positivity of the patient's mood [9]. The main advantage of these bots is that they provide a practical, evidence-based, and attractive digital solution that helps fill the professional gap instantly [10].
The evolution of Artificial Intelligence has paved the way for many chatbots, but three therapeutic mental health chatbots (Woebot, Wysa and Tess) are prominent and widely in use [10]. A chatbot programmed to understand emotions can be similarly proactive: it can keep a history of a patient's likes, dislikes and the topics that make them laugh, and bring these up in conversation as situations warrant. Additionally, the adoption of therapeutic chatbots is increasing rapidly due to the following advantages [11]:
1. Understand and manage the patient's psychological state and connect them with a health professional during unfavourable events.
2. 24/7 instant chat support.
3. Smart, reactive behaviour, such as promptly answering questions and engaging patients with illness-prevention and care tips.
4. Easy to install, configure and maintain, and compatible with various operating systems such as Android, iOS and Linux.
5. For sensitive healthcare issues, patients might feel less shame and more privacy.
6. Security of personal data is enhanced using different authentication techniques, such as login via facial recognition, biometrics or a passcode.
7. Cost-effective for a few mental conditions, such as stress release.
8. Provide reminders, such as taking medication, doing exercise, or slots for jogging.
II. TYPES OF EMPATHIC CHATBOTS
A mental health patient can express their feelings using text, emojis or emoticons, voice, recorded audio/video clips, or live audio/video. The main aim of therapeutic chatbots is to understand the emotions in the user's conversations and suggest appropriate treatment or therapy. The empathy expressed towards the mental health patient can be cognitive, emotional or compassionate. The purpose of all these categories is to understand the emotions in the user's context and relate them to appropriate emotion classes such as happiness, sadness, anger and fear [2]. User emotions can be processed with Artificial Intelligence and deep learning techniques using Natural Language Processing (NLP). NLP describes how chatbots translate and understand the patient's language. Using NLP, chatbots can make sense of spoken or written text and accomplish tasks like keyword extraction, translation, and topic classification. NLP processes content expressed in natural human language with the help of techniques such as sentiment analysis, facial recognition and voice recognition [10].
Chatbots with NLP capability can understand the patterns in the patient's conversational context and analyse the sentiment behind the message by using contextual clues from voice, video or text input [12].
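As an illustration of one such NLP task, the following is a minimal sketch of keyword extraction from a patient's message. It assumes the spaCy library with its small English model (installed via python -m spacy download en_core_web_sm); the message text is an invented example.

```python
# Minimal sketch: keyword extraction from a patient's message using spaCy.
# Assumes the small English model has been downloaded beforehand.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I feel anxious every night and I can't stop worrying about work.")

# Keep content words (nouns, verbs, adjectives), drop stopwords, and
# lemmatise, so the chatbot can match keywords against known topics.
keywords = [token.lemma_ for token in doc
            if token.pos_ in {"NOUN", "VERB", "ADJ"} and not token.is_stop]
print(keywords)  # e.g. ['feel', 'anxious', 'night', 'stop', 'worry', 'work']
```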
TABLE I. TYPES OF CHATBOTS

Methodology                      Description
Sentiment Analysis               Extracts opinions, thoughts and emotions from text or emoticons.
Video-based emotion recognition  Facial features extracted from a live feed or a video clip are used to understand the emotions of the patient.
Voice-based emotion recognition  Speech features extracted from recorded audio or a phone call are used to understand the emotions of the patient.
A. Sentiment Analysis
Emotion detection is a branch of sentiment analysis that deals with the analysis and extraction of emotions. Emotion detection helps mental health professionals to provide tailor-made treatments to their patients. Sentiment analysis recognises how the mental health patient is feeling about something: it classifies the patient's messages, as well as emojis/emoticons, as positive, negative or neutral based on the context of the patient's conversation [13]. The extracted opinions are used by therapeutic chatbots to suggest an appropriate treatment, or to redirect to a mental health professional in case of an emergency. The usage of emojis/emoticons has increased rapidly in electronic messages [14], and mental health patients can easily express their emotions using smileys and ideograms. An emoticon indicates a deeper meaning in the context of the patient's conversation [14]. The patient's responses and emojis/emoticons are converted into a Unicode character set to train the model. The training data collected to train the models varies based on the clinical tools used to gather it, for instance, clinical records, surveys, and patients' blogs [13]. Depending on how the patient's queries are interpreted, different categories of sentiment analysis are applied: aspect-based sentiment analysis is used to analyse text-based messages in the patient's responses, while emotion-based analysis is used to analyse emoticons [15].
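As a minimal sketch of this classification step, the following uses the VADER lexicon from the NLTK library, which scores short informal text, including common emoticons. A clinical system would instead use a model trained on patient conversation data; the message shown is invented.

```python
# Minimal sketch: labelling a patient's message as positive/negative/neutral.
# Assumes NLTK is installed; VADER also scores common emoticons such as ":(".
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def classify_message(message: str) -> str:
    """Map a message to a coarse sentiment label using VADER's compound score."""
    compound = analyzer.polarity_scores(message)["compound"]
    if compound >= 0.05:        # conventional VADER thresholds
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(classify_message("I couldn't sleep again and everything feels hopeless :("))
# -> negative
```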
Advantages
•	Chatbots can provide treatment analysis for a patient by analysing their responses to determine whether the treatment is having negative or positive effects on the patient.
•	Chatbots have a clear overview of the patient's emotional state and can respond accordingly.
•	Chatbots can identify which messages and conversations act as emotive triggers that change the patient's mood.
•	Chatbots can identify emergencies, such as suicidal thoughts, and escalate or redirect them to appropriate professionals.
Limitations
•	Multiple sentiments in one sentence complicate sentiment annotation [15].
•	Defining neutrality: sometimes the patient gives no direct indication of their emotional state but merely describes a situation, making it difficult for the chatbot to recognise that the patient is in a negative emotional state.
B. Video-based emotion analysis using facial recognition
In patient-chatbot video-based interactions, it is vital to detect, process and analyse the patient's emotions and perceptions in order to adjust treatment strategies. The goal of facial recognition is to collect data and analyse the feelings of patients so that relevant responses become possible. The data is gathered from various physical features such as body movements, facial expressions, eye contact and other physical, biological signals. These physical emotions are classified into different categories, such as sadness, happiness, surprise, fear, and anger. Image processing and computer vision techniques are used to extract the mental health patient's facial features using two approaches: geometric-based and appearance-based [16].
The geometric-based approach represents the geometry of the patient's face by extracting the nodal points, shapes and positions of facial components such as the eyebrows, eyes, mouth, cheeks and nose, and then computing the distances among these components to create an input feature vector. The main challenge with this approach is achieving high accuracy in facial component detection in real time.
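As an illustration, the sketch below builds such a feature vector from landmark coordinates; the coordinates are hypothetical, and in practice the nodal points would come from a facial landmark detector.

```python
# Illustrative sketch of the geometric-based approach: pairwise distances
# between facial components form the input feature vector. The landmark
# coordinates below are hypothetical placeholders.
from itertools import combinations
import numpy as np

landmarks = {
    "left_brow": (0.35, 0.32), "right_brow": (0.65, 0.32),
    "left_eye": (0.35, 0.40), "right_eye": (0.65, 0.40),
    "nose": (0.50, 0.55), "mouth": (0.50, 0.75),
}
points = np.array(list(landmarks.values()))

# Euclidean distance between every pair of components.
feature_vector = np.array([np.linalg.norm(points[i] - points[j])
                           for i, j in combinations(range(len(points)), 2)])
print(feature_vector.shape)  # (15,) distances for 6 components
```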
The appearance-based approach captures the textures of the patient's face by extracting variations in skin texture and facial appearance. This approach uses Local Binary Patterns (LBP), Local Directional Patterns (LDP), and Directional Ternary Patterns (DTP) to encode the textures as training data. An empathic chatbot typically detects emotions from facial expressions using the appearance-based approach, because the geometry-based approach needs reliable and accurate facial component detection to reach maximum accuracy, which is very difficult in real-time scenarios [17].
The process of developing a video-based emotion recognition system is as follows [16]; a minimal sketch follows the list.
1. The chatbot detects the patient's face in the video chat.
2. Facial appearance detection extracts the patient's facial features and converts them into input feature vectors.
3. The selected Machine Learning (ML) classifier categorises the patient's emotions into classes such as sadness, happiness, disgust, neutral, anger, surprise and fear.
4. Finally, accuracy metrics are calculated for subsequent analysis.
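The following sketch covers steps 1-3, assuming OpenCV for face detection and scikit-image for LBP features; the file emotion_svm.joblib is a hypothetical classifier pre-trained on LBP histograms of labelled facial expressions, not part of any named library.

```python
# Minimal sketch of steps 1-3, assuming OpenCV (cv2), scikit-image and a
# hypothetical pre-trained classifier stored in "emotion_svm.joblib".
import cv2
import numpy as np
from joblib import load
from skimage.feature import local_binary_pattern

EMOTIONS = ["sadness", "happiness", "disgust", "neutral",
            "anger", "surprise", "fear"]
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
classifier = load("emotion_svm.joblib")  # hypothetical pre-trained model

def classify_frame(frame):
    # Step 1: detect the patient's face in the video frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    # Step 2: encode appearance as a Local Binary Pattern histogram
    # (the input feature vector).
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Step 3: the ML classifier maps the feature vector to an emotion class.
    return EMOTIONS[int(classifier.predict([hist])[0])]
```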
Advantages
•	It provides flexible appointments by reducing physical contact and direct physician interaction.
•	It makes it easier to organise appointments.
•	It can improve medical treatment by examining subtle facial traits through facial recognition.
Limitations
•	Accuracy may vary when the patient changes their appearance or the camera angle is not quite right.
C. Voice-based emotion identification
Empathic chatbots should understand emotions from the context of the patient's voice calls or recorded audio files. The chatbot is equipped to access the device's sensor/microphone, which can capture a voice sample of the patient's behaviour without the patient having to type any input. Emotions are identified using two classes of speech features, lexical and acoustic [16]; a minimal sketch of the lexical route follows the list.
•	Lexical speech features: these are bound to the vocabulary used by the mental health patient. Lexical features require extracting text from the speech to predict the patient's emotions, so they can be used on recorded audio files.
•	Acoustic speech features: these are bound to the pitch, jitter and tone of the mental health patient. Acoustic features use the audio data itself to understand the emotions in the patient's conversation, so they can be used during voice calls with the patient. The acoustic model is trained to extract spectral features from the speech signals.
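A minimal sketch of the lexical route, assuming the SpeechRecognition library: the recorded file is transcribed to text, which can then be fed to a text sentiment classifier such as the classify_message() sketch in Section II-A. The file name is hypothetical.

```python
# Minimal sketch of the lexical route, assuming the SpeechRecognition
# library (pip install SpeechRecognition). The file name is hypothetical.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("patient_message.wav") as source:
    audio = recognizer.record(source)          # read the recorded audio

text = recognizer.recognize_google(audio)      # speech-to-text (online API)
print(text)  # transcript; pass to a text classifier such as classify_message()
```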
A voice-based emotion recognition system is a pattern recognition system consisting of three principal parts: processing the audio signals, feature calculation, and voice classification. Signal processing includes digitising the audio signal, filtering it, and segmenting the spoken words in the audio conversation into text. Feature calculation aims to find the properties of the pre-processed, digitised acoustic signal that represent emotions and to convert them into an encoded vector. Finally, Machine Learning (ML) classification algorithms are applied to the feature vector; these algorithms can vary based on the training dataset [18].
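The following is a minimal sketch of these three stages, assuming the librosa library for signal processing and spectral feature extraction; voice_emotion_svm.joblib is a hypothetical classifier trained on labelled emotional speech recordings.

```python
# Minimal sketch of the three stages, assuming librosa is installed and
# "voice_emotion_svm.joblib" is a hypothetical pre-trained classifier.
import librosa
import numpy as np
from joblib import load

classifier = load("voice_emotion_svm.joblib")  # hypothetical pre-trained model

def classify_utterance(path: str) -> str:
    # Stage 1, signal processing: digitise/resample and trim silence.
    signal, sr = librosa.load(path, sr=16000)
    signal, _ = librosa.effects.trim(signal)
    # Stage 2, feature calculation: spectral (MFCC) and pitch statistics,
    # concatenated into one encoded feature vector.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    f0 = librosa.yin(signal, fmin=65, fmax=400, sr=sr)
    features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                               [np.nanmean(f0), np.nanstd(f0)]])
    # Stage 3, classification: the ML model maps the vector to an emotion.
    return classifier.predict([features])[0]
```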
Advantages
•	Chatbots can often detect a mental health patient's emotion even when they cannot understand the language, because acoustic speech features rely on voice elements such as pitch and tone.
•	Patients find speaking faster than typing text messages.
Limitations
•	Sometimes it is difficult to analyse the speech elements; for example, a patient can express anger with a low pitch and slow tone, which makes it challenging to identify the emotion behind the speech.
III. THE SUCCESS OF THE EMPATHIC CHATBOTS
The proliferation of chatbots dedicated to helping mentally and emotionally distressed people indicates that seeking help online is becoming increasingly popular. Most empathic chatbots are built on Cognitive Behavioural Therapy (CBT), which essentially means therapy through conversation. The therapy aims to turn the patient's negative thoughts into positive ones by initiating a joyful daily talk and creating a relaxing environment for the patient. Patients with emotional distress are more comfortable talking anonymously to a machine from the comfort of their home, without the fear of being judged, than physically visiting a psychologist's office, which is already stigmatised in many societies around the world [19].
Woebot, Wysa and Tess are a few prominent chatbots helping patients with anxiety and depression [10].
Woebot is an AI application that claims to help alleviate mental health disorders through fully automated conversations. The application's conversational agent initiates the chat by asking users how they are feeling and sends them tips and videos on wellbeing according to their needs. Surveys conducted on Woebot users by Stanford University indicate a significant improvement in feelings of depression and anxiety [20].
Wysa is an AI-powered bot that helps users manage their
feelings through Cognitive Behavioural Therapy (CBT),
Dialectical Behavior Therapy (DBT), and simple exercises
[21].
Tess is a psychological AI-powered chatbot that focuses on mental health and emotional wellness. Tess does not work from pre-programmed responses; it understands the situation and responds according to user preferences, and it also remembers the user's likes and dislikes and adopts an understanding attitude [22].
There are many more apps that claim to cater to one’s
wellbeing through conversing and analysing mood, physical
activities, movement patterns, energy, social interactions and
locations [19].
IV. LIMITATIONS OF THE EMPATHIC CHATBOTS
•	One of the main challenges is contextual awareness during patient conversations: a lack of contextual data for training, and changes in the patient's conversational behaviour, such as emojis and short, abbreviated texts during discussions.
•	Some healthcare professionals argue that AI should supplement rather than replace health professionals, and finding an appropriate role for AI is a significant challenge for the future [23].
•	Limited adoption: many health professionals in the US indicated that bots cannot effectively understand the needs of patients and cannot be responsible for a thorough diagnosis. Some think that the use of chatbots in healthcare might pose a risk of patients self-diagnosing too often and failing to understand the diagnosis [24].
•	Other challenges are confidentiality and patient privacy. Since patient conversations include personal matters, it is essential to encrypt patient conversations or anonymise patient data in the database.
V. CONCLUSION
While an empathic chatbot may offer a mental health patient a forum to discuss problems, provide access to help guides, increase mental health literacy, and offer a way to track moods, it is not an alternative to a mental health professional or a therapist. Despite these limitations, empathic chatbots are proving to be a nascent technology with massive future potential. The empathy added to a chatbot fills a clear and critical gap and is already proving life-changing for patients. These chatbots show how it is possible to leverage conversational AI in different ways. Empathic chatbots in mental health are not just making waves in the healthcare industry; they are also paving the way for more innovative and beneficial uses of chatbot technology in all aspects of life.
This research explained the various AI techniques that are applied to classify the intent of a conversation into emotions and explored a few prominent apps in this space.
REFERENCES
[1] World Health Organization, "Mental health: massive scale-up of
resources needed if global targets are to be met," 2017.
[2] A. Rababeh, M. Alajlani, B. M. Bewick and M. Househ, "Effectiveness and Safety of Using Chatbots to Improve Mental Health: Systematic Review and Meta-Analysis," Journal of Medical Internet Research, 2020.
[3] E. Adamopoulou and L. Moussiades, "An Overview of Chatbot Technology," IFIP AIAI, 2020.
[4] R. Morris, K. Kouddous, R. Kshirsagar and S. Schueller, "Towards an Artificially Empathic Conversational Agent for Mental Health Applications: System Design and User Perceptions," J Med Internet Res, 2018.
[5] World Health Organization, "Mental disorders," WHO, 2019.
[6] S. McManus, H. Meltzer, T. Brugha, P. Bebbington and R. Jenkins, "Adult psychiatric morbidity in England: Results of a household survey," NHS, Leicester, 2016.
[7] J. Xiong, O. Lipsitz and F. Nasri, "Impact of COVID-19 pandemic on mental health in the general population: A systematic review," Journal of Affective Disorders, 2020.
[8] D. Susman, "8 Reasons Why People Don't Get Treatment for Mental Illness," 2020. [Online]. Available: http://davidsusman.com/2015/06/11/8-reasons-why-people-dont-get-mental-health-treatment/. [Accessed 6 November 2020].
[9] R. Arrabales, "Perla: A Conversational Agent for Depression Screening in Digital Ecosystems. Design, Implementation and Validation," 2020.
[10] S. D'Alfonso, "AI in mental health," Current Opinion in Psychology, 2020.
[11] J. Pereira and O. Diaz, "Using Health Chatbots for Behavior Change:
A Mapping Study," Journal of Medical Systems, 2019.
[12] R. Dharwadkar and N. A. Deshpande, "A Medical ChatBot," IJCTT,
2018.
[13] A. Rajput, "Natural Language Processing, Sentiment Analysis and
Clinical Analytics," Researchgate, 2019.
[14] A. Fadhil, G. Schiavo and Y. Wang, "The Effect of Emojis when
interacting with Conversational Interface Assisted Health Coaching
System," ACM, 2018.
[15] S. Poria, I. Chaturvedi, E. Cambria and A. Hussain, "Convolutional
MKL Based Multimodal Emotion Recognition and Sentiment
Analysis," IEEE, 2016.
[16] M. Hossain, "Patient State Recognition System for Healthcare Using
Speech and Facial Expressions," Journal of Medical Systems, 2016.
[17] S. Tivatansakul, "Emotional healthcare system: Emotion detection by facial expressions using Japanese database," IEEE, 2014.
[18] F. Tao, G. Liu and Q. Zhao, "An Ensemble Framework of Voice-Based Emotion Recognition System," ACII Asia, 2018.
[19] A. Vaidyam, H. Wisniewski, J. Halamka, M. Keshavan and J. Torous, "Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape," The Canadian Journal of Psychiatry, 2019.
[20] K. Fitzpatrick, A. Darcy and M. Vierhile, "Delivering Cognitive
Behavior Therapy to Young Adults With Symptoms of Depression
and Anxiety Using a Fully Automated Conversational Agent
(Woebot): A Randomized Controlled Trial," JMIR Ment Health,
2017.
[21] B. Inkster, S. Sarda and V. Subramanian, "An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study," JMIR Mhealth Uhealth, 2018.
[22] R. Fulmer, A. Joerin, B. Gentile, L. Lakerink and M. Rauws, "Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial," JMIR Mental Health, 2018.
[23] J. Powell, "Trust Me, I'm a Chatbot: How Artificial Intelligence in
Health Care Fails the Turing Test," J Med Internet Res, vol. 21, p.
e16222, 2019.
[24] A. Palanica, P. Flaschner, A. Thommandram, M. Li and Y. Fossat,
“Physicians' Perceptions of Chatbots in Health Care: Cross-Sectional
Web-Based Survey,” J Med Internet Res, vol. 21, no. 4, p. e12887,
2019.