Artificial intelligence for mental health: A review of AI solutions and their future
Castañeda-Garza, G., Ceballos, H. G., & Mejía-Almada, P. G.
Chapter in What AI Can Do: Strengths and Limitations of Artificial Intelligence, 2023, pp. 373–399
Summary
As an aftermath of the global COVID-19 outbreak, the importance of attending to mental health disorders was brought back into the spotlight. Conditions such as anxiety and depression have been identified as leading causes of disability worldwide, the latter being a particular concern due to its effects, such as lower quality of life or even suicidal ideation and attempts. Despite this situation, healthcare systems around the world face difficulties in providing mental health care to their populations because of challenges such as insufficient funding and a lack of infrastructure or trained personnel in some geographic areas. Thus, there is a need to be fulfilled in the field of mental health care. According to a literature review, artificial intelligence could provide mental health support in ways that were not possible before, with options such as being available anytime, through text, speech, or even physical interfaces in the case of robots. However, among the general public, as well as in professional communities, there is not enough understanding of how artificial intelligence works. Therefore, in this chapter we offer a brief introduction to artificial intelligence and cognitive-behavioral therapy, a common therapeutic approach used in AI applications. Next, we discuss how artificial intelligence manages to achieve mental health goals through different technologies, such as machine learning, detection of speech patterns, and computer vision, among others. Then, examples of AI in three different areas are presented, considering their attributes and how these may support mental health. Finally, ethical considerations of using AI for mental health are examined, addressing issues such as privacy, security, and accessibility that should be considered in the development of any project in the field of artificial intelligence for mental health.
Keywords: mental health, artificial intelligence, cognitive-behavioral therapy
Introduction
While artificial intelligence may have reached a broader part of the population through the rules of robotics in the book I, Robot (Asimov, 1950), or found a way to cover every emotional need in a relationship, as portrayed in the film Her (Jonze, 2013), the progress of artificial intelligence in real life has not advanced as fast as expected. The term “artificial intelligence” is recognized to have been used for the first time in 1955 by Prof. John McCarthy, considered one of the founders of the field (McCarthy et al., 2006; Myers, 2011). Since its origin, one of the ultimate goals behind artificial intelligence has been to make computer programs “that can solve problems and achieve goals in the world as well as humans” (McCarthy, 2007). To pursue this goal, the development of AI has needed to overcome multiple constraints, such as the cost of computing equipment, infrastructure (Anyoha, 2017), long-term funding, and energy consumption (Martin, 1995).
With the passage of time, many AI-like projects trying to resemble humans have appeared, such as Eliza, one of the first chatbots to which humans opened their hearts (The Tech, 2008); the speech-capable Japanese anthropomorphic robot WABOT-1 (Waseda University, n.d.); and even humanoid robots such as Sophia (Hanson Robotics, 2022a), capable of communicating, learning in different contexts, and even joking about our future demise as humans (Fallon, 2017). However simple these actions may seem, achieving human-like communication is not an easy task, since our communication process includes aspects that are complex, yet familiar. For instance, non-verbal communication alone can be broken down into broad categories, such as kinesics, haptics, vocalics, proxemics, and chronemics, each filled with further characteristics (University of Minnesota, 2013); verbal communication, on the other hand, requires its own specific approaches to help machine-learning models learn on their own (Yao, 2017). Therefore, to resemble human communication as closely as possible, different subsets of artificial intelligence, such as machine learning (ML), natural language processing (NLP), and computer vision, have been developed over the years to turn inputs like voice, text, images, or videos into data interpretable by computational models. Nevertheless, communication alone is not sufficient for a computer to fully interact with the physical world, nor to be emotionally accepted by humans in their daily interactions.
To fulfill the ambition of achieving artificial intelligence, it is said that a machine should pass the Turing test, being able to convince a person that they are interacting with a human instead of a machine (Oppy & Dowe, 2021). Thus, for artificial intelligence in mental health, this would mean the dream of receiving psychotherapy from AI perceived to be of a similar quality to that provided by a human. Still, this dream raises another question before it can occur: should AI or a robot be our therapist?
Mental Health Overview
There are pressing reasons to evaluate the possibility of introducing artificial intelligence solutions for the treatment of mental conditions. For starters, among the leading causes of burden and disability worldwide, mental disorders such as depressive and anxiety disorders figured among the main contributors to disability-adjusted life years (DALYs)¹ in 2019, accounting for 125.3 million DALYs (Ferrari et al., 2022). Across age groups, their rates affect people of working age, from 15 years old until late age, with a greater effect on females than on males (Vos et al., 2020; Ferrari et al., 2022, p. 146). Around the world, it is estimated that 5% of the population (approximately 280 million people) are affected by depression (World Health Organization (WHO), 2021), with a majority of them unaware of it for long periods of time, or unable to receive treatment or effective care due to a lack of resources, such as available trained healthcare providers, financial barriers, or social stigma. Unaware of the consequences, people may see mental health disorders continuously impact their quality of life in different areas of their lives. Now, with global events such as the surge of the COVID-19 pandemic, the worldwide prevalence of depression alone has increased among the population (Santomauro et al., 2021).
One of the obstacles to reaching out for help is access to mental health professionals. The geographical distribution of licensed professionals can be uneven between rural and urban areas in different regions (Salinas et al., 2010; Chen et al., 2019; Morales et al., 2020; Zhang et al., 2021), limiting the information, education, and care that people can receive. In some cases, the availability of health professionals per capita differs significantly across countries: Mexico, for example, reported 3.45 psychologists per 100,000 population, while Argentina and the USA had 222.57 and 29.86, respectively (WHO, 2019). In such cases, remote therapy may offer an alternative for providing treatment to distant communities, but it still requires access to technology that covers minimum requirements, such as a smartphone, a sufficiently fast internet connection, and private spaces (Watson et al., 2021), and, above all, licensed personnel available to provide it at different schedules. Scenarios like these have promoted the development of tools based on artificial intelligence (AI) that may facilitate the work of mental health professionals through activities such as initial screenings (i.e., applying questionnaires for mental health), self-monitoring actions for wellness (i.e., guided activities based on cognitive-behavioral therapy (CBT)), organizational monitoring (services provided by companies), and, in some cases, the ability to provide guidance or referral to patients with acute symptoms, such as self-harming behaviors or suicidal thoughts.
¹ One DALY represents the loss of the equivalent of one year of full health. DALYs for a disease or health condition are the sum of the years of life lost due to premature mortality (YLLs) and the years lived with a disability (YLDs) due to prevalent cases of the disease or health condition in a population (WHO, 2020, p. 6).
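Written out, the footnote's definition amounts to a simple sum; the decomposition below follows standard WHO/GBD methodology (the symbols are ours, not the chapter's):

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}, \qquad
\mathrm{YLL} = N \times L, \qquad
\mathrm{YLD} = P \times DW
```

where N is the number of deaths, L the standard life expectancy at the age of death, P the number of prevalent cases, and DW the disability weight of the condition.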
By exploring these new possibilities, we may discover how artificial intelligence, able to work continuously for longer periods than humans and accessible at any moment, could participate in our pursuit of delivering better healthcare services, or even perform them independently. To grasp the likelihood of these outcomes, in this chapter we discuss the potential of artificial intelligence for mental health across different sections. For that purpose, this book chapter has been divided into the following four sections:
- In the first section, we offer an introduction to cognitive-behavioral therapy (CBT), an approach to psychotherapy that is commonly used in AI applications due to its scientific approach and proven record in treating mental health disorders such as depression, anxiety, and many others.
- The second section describes examples of how artificial intelligence is providing mental healthcare, including information about the machine learning models, natural language algorithms, computer vision, and robotics that serve in the treatment of mental health disorders.
- In the third section, examples of available applications, products, and proofs of concept of AI are provided, such as chatbots, robot therapists, and multimodal-perception virtual therapists.
- The fourth section includes a reflection on the ethical aspects of using artificial intelligence for mental health, given that while these technologies may provide great value to people, they also carry the potential to cause severe harm if not properly tested and regulated.
At the end of the chapter, as a conclusion, we offer a reflection on the present state of AI for mental health and its future, in both its possible utopic and dystopic outcomes, thinking about future scenarios and issues of AI, the human condition and dignity, and the next step if AI (as we may imagine it right now) is ever reached by any of our societies.
Part 1. Cognitive behavioral therapy (CBT)
The understanding of psychology as the scientific study of the mind and its processes is nearly as recent as artificial intelligence, with its first experiments designed by Wilhelm Wundt at the end of the nineteenth century (The Editors of Encyclopaedia Britannica, 2022). Since then, multiple theories and therapeutic approaches have been elaborated with the purpose of explaining the underlying mechanisms of the mind and, furthermore, of treating cognitive, behavioral, or affective pathologies in a person. While several contemporary schools coexist in psychology (psychoanalysis, cognitive behavioral psychology, humanistic psychology, among others), one approach, cognitive behavioral psychology, stands out for its consistent use in recent years.
The appearance of CBT as a new approach in psychotherapy would not be recognized until 1977, after major clinical trials found that patients receiving CBT showed better outcomes in the treatment of depression in comparison with pharmacological treatment (Rush et al., 1977; Blackburn et al., 1981). Ever since, CBT has gained recognition in the treatment of a range of problems, including depression, anxiety, drug use problems, eating disorders, and others. Nowadays, CBT is one of the few psychotherapeutic approaches recommended by the American Psychological Association on the strength of its scientific evidence, built both in research and in clinical practice (APA, 2022).
The goal of CBT is to help patients develop coping skills that let them better understand and change their own affective, cognitive, and behavioral patterns (APA, 2017). To do so, a therapist may employ exercises to reevaluate reality, teach problem-solving skills, and apply other strategies. The purpose is to guide the patient through a process of (1) becoming aware of automatic thoughts, feelings, and core beliefs, (2) examining these in practice, and (3) creating alternative thoughts that may allow the acquisition of new thought, behavior, or emotional patterns (Crane & Watters, 2019). To achieve this, treatment with CBT commonly includes activities from which data are easily generated, useful for providing feedback and accomplishing goals. A few examples are monitoring logs for thoughts and behaviors, relaxation training, thought stopping, and activity monitoring. An extensive description of treatment strategies in cognitive behavioral therapy is presented in Table 1.
Table 1. Examples of treatment strategies in CBT.
Behavioral strategies: Activity monitoring and log; Values identification; Mood log; Pleasant activities; Activity scheduling; Increasing pleasure and achievement; Behavioral activation; Graded task assignments; Relaxation training; Scheduled worry time; Problem solving.
Cognitive strategies: Recognizing mood shifts; Combining thoughts and emotions; Thought stopping; Downward arrow technique; Use of thought change records; Identifying cognitive errors; Socratic questioning; Challenging questions; Generating rational alternatives; Life history; Modifying core beliefs; Guilt vs. shame.
Source: Crane & Watters (2019, p. 23)
When using artificial intelligence, developers can take advantage of these activities to obtain data that help to understand the mental health state of the user. Therefore, from mental disorder screenings (i.e., clinical questionnaires) to complex algorithms (i.e., combinations of user smartphone behavioral activity with feedback), machine learning models can support the analysis of multiple factors in pursuit of one or a few mental health goals, and this continues to be a trend in therapy through AI. As an example, we may find smartphone applications focused on different activities and goals, such as journaling, meditation, managing anxiety, and conversational agents, among others (Wasil et al., 2022).
Once we understand some of the fundamentals behind cognitive-behavioral therapy, and how it can be enhanced by artificial intelligence, it is easier to see how machine learning models could take advantage of proven psychological techniques to replicate them at scale, accessibly and continuously.
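To make the screening example above concrete, the sketch below shows how an application might automate the scoring of a standard depression questionnaire. The severity bands follow the published PHQ-9 cut-offs; the function itself is a minimal illustration, not code from any of the apps discussed in this chapter.

```python
# Minimal sketch: automated scoring of a PHQ-9-style depression screening, the
# kind of structured data collection an AI-driven app can run on its own.
PHQ9_BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
              (15, "moderately severe"), (20, "severe")]

def score_phq9(answers):
    """answers: nine item scores, each 0-3 ('not at all' .. 'nearly every day')."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers scored 0-3")
    total = sum(answers)
    # Walk the bands from the highest cut-off down and keep the first that fits.
    severity = next(label for cutoff, label in reversed(PHQ9_BANDS) if total >= cutoff)
    return total, severity

total, severity = score_phq9([1, 2, 1, 3, 0, 1, 2, 1, 0])
print(f"PHQ-9 total: {total} -> {severity}")  # PHQ-9 total: 11 -> moderate
```

In a real application, a result in the moderate range or above would typically trigger a referral to a human professional rather than a fully automated intervention.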
Part 2. How does artificial intelligence provide mental health support?
The mechanisms by which artificial intelligence can provide support in mental health differ according to several attributes. Depending on the task, AI could take advantage of local processing on costly hardware, or enable greater capabilities through a cost-effective solution such as cloud computing. Moreover, AI's characteristics are not the only consideration: the human experience, shaped by cultural, age, and social differences, also matters in the adoption or rejection of a new technology.
In this section of the chapter, we discuss how artificial intelligence provides mental health support, according to recently published papers on the matter. For this purpose, we have categorized artificial intelligence into three areas, corresponding to the increasing level of complexity that each one requires:
(1) Natural Language Processing: Focuses on communication processes in written or spoken form, through the use of different natural language algorithms. Considers solutions like conversational agents (i.e., chatbots).
(2) Virtual: Compared to natural language algorithms, virtual AI adds multimodal perception capabilities, such as visual recognition (i.e., machine vision), and allows AIs to communicate through virtual layers (i.e., virtual avatars).
(3) Robotics: In contrast with virtual AI, robotics adds a physical layer of action with humans, which enables further interaction and non-verbal communication, among other patterns, in robotic solutions (i.e., humanoid robots, commercial robots).
Mental health support through natural language processing
With the rise of social media, the amount and variety of information available to NLP researchers has been transformed (Hirschberg & Manning, 2015). Drawing on machine learning and NLP, le Glaz et al. (2021) affirm that natural language processing can be used with different data sources, from static sources (such as electronic health records [EHRs], psychological evaluation reports, and transcribed interviews) to dynamic data (such as an ongoing conversation with a person through a conversational agent). This information can be gathered and then analyzed with the support of medical dictionaries, with the purpose of detecting specific terms, such as those related to suicide (p. 3).
According to a literature review by Zhang et al. (2022), research on NLP for mental health has increased over the last 10 years, with a major interest in the study of depression (p. 46). This research spans multiple themes and disorders. In prevention, on the one hand, studies have found that text analysis and NLP can be effective tools to identify shifts toward suicidal ideation (the act of thinking about taking one's own life) (de Choudhury et al., 2016; Hassan et al., 2020), as well as to discover valuable patterns, such as identifying people at risk on social media, or underlying manifestations of symptoms expressed through language (Low et al., 2020). On the other hand, in reactive settings, NLP sentiment analysis has proved useful for analyzing millions of social media messages in real time, with the purpose of grasping the emotions caused by traumatic events, as occurred during the COVID-19 pandemic workplace and school reopenings (Q. Chen et al., 2021). Properly used, these insights can offer valuable information to authorities for designing mental health program strategies and future health policy.
Addressing more disorder-specific aspects of mental health, NLP has been used to predict episodes of violence by combining EHRs, risk assessment scales, and NLP dictionaries (Le et al., 2018); to build a suicide prevention system through text and voice analysis with a chatbot (Kulasinghe et al., 2019); to create early depression detection systems with BERT transformers (El-Ramly et al., 2021); and to detect late-life depression (LLD) through speech analysis (DeSouza et al., 2021). NLP applications have also been created to address other conditions, such as the diagnosis of achluophobia and autism spectrum disorder (ASD) using decision trees (Mujeeb et al., 2017), insomnia treatment (Shaikh & Mhetre, 2022), and the detection of anorexia on social media (Hacohen-Kerner et al., 2022), among many other disorders (Abd-alrazaq et al., 2019a).
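Many of the studies cited above share a common core: represent text numerically, then train a classifier over it. The sketch below illustrates that core with a TF-IDF representation and a logistic regression classifier in scikit-learn; the miniature dataset and labels are invented for illustration, and real systems are trained on large, clinically annotated corpora.

```python
# Illustrative sketch of a text-classification pipeline for at-risk language.
# Toy data only: real systems use clinically annotated corpora and careful
# evaluation before any deployment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't sleep and nothing feels worth doing anymore",
    "great hike today, feeling refreshed and happy",
    "I keep thinking everyone would be better off without me",
    "excited to start my new job next week",
]
labels = [1, 0, 1, 0]  # 1 = potentially at-risk, 0 = not flagged

# TF-IDF turns each post into a weighted word/bigram vector; the linear
# model then learns which terms are associated with the at-risk label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = ["lately I feel like a burden to my family"]
print(model.predict_proba(new_post))  # class probabilities, for human review
```

In practice, a flag from such a model is a prompt for human review or referral, not a diagnosis.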
Mental health support through Computer Vision
Going beyond natural language processing algorithms, there are other channels from which machines can obtain multimodal data, such as visual or biometric inputs. In this sense, computer vision enables computers to derive meaningful information from visual inputs such as images and videos, and to take action or make recommendations based on the analyzed data (IBM, 2022). This technology permits a variety of tasks useful for mental health, such as recognition of facial gestures, emotion recognition and prediction, eye tracking, and analysis of movement patterns, among others (Sapiro et al., 2019). Consequently, it has been argued that computer vision may enable the creation of low-cost (Hashemi et al., 2014), mobile health methods to assess disorders such as autism spectrum disorder (ASD) (Sapiro et al., 2019, p. 15), early detection of depression signs in students (Namboodiri & Venkataraman, 2019), and detection of stress and anxiety from videos (Giannakakis et al., 2017).
The value of reading facial features is continuously recognized in artificial intelligence as a way to identify potential emotional states in an individual. To perform these tasks, artificial intelligence relies on multiple algorithms, such as local binary patterns (LBP), the Viola-Jones algorithm, and support vector machines (SVMs) for data classification, as in the research of Namboodiri & Venkataraman (2019). Additionally, computer vision algorithms have proved useful for the differential analysis of obsessive-compulsive disorder (OCD) symptomatology, distinguishing between subtypes of OCD such as compulsive cleaning and compulsive checking (Zor et al., 2011).
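As a concrete illustration of the first stage such systems share, the sketch below locates faces with OpenCV's implementation of the Viola-Jones cascade detector named above; the cropped face region would then feed a feature extractor (e.g., LBP) and a classifier such as an SVM. The input filename is a placeholder.

```python
# Minimal face-detection sketch using the Viola-Jones cascade shipped with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("session_frame.jpg")  # placeholder: one video frame
assert frame is not None, "provide a real image path"
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# scaleFactor/minNeighbors trade off detection recall against false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_roi = gray[y:y + h, x:x + w]  # region later passed to LBP/SVM stages
    print(f"face at ({x}, {y}), size {w}x{h}")
```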
In another example of the detection of mental health disorders, computer vision has demonstrated its benefits in identifying attention deficit hyperactivity disorder (ADHD) by using an extension of dynamic time warping (DTW) to recognize behavioral patterns in children (Bautista et al., 2016). Similarly, dynamic deep learning and 3D analysis of behavior have been employed to diagnose ADHD and ASD, with classification rates higher than 90% for both disorders (Jaiswal et al., 2017). Likewise, Zhang et al. (2020) designed a system to perform functional test tasks from multimodal inputs, including facial expressions, eye and limb movements, language expressions, and reaction abilities in children. The data are then analyzed using deep learning to detect specific behaviors, which can serve as a complement to diagnosis by health professionals. Moreover, whereas these cases have addressed human patients, it has been suggested that these computational methods could also detect ADHD-like behavior in dogs (Bleuer-Elsner et al., 2019), opening the door to further approaches to well-being, not only in humans but also in other species.
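Dynamic time warping, which the ADHD study above extends, compares two time series that unfold at different speeds by finding the cheapest alignment between them before measuring distance. A textbook NumPy version is sketched below in the simplest possible setting (one-dimensional signals, absolute-difference cost); it is illustrative, not the extended variant used in the cited work.

```python
# Textbook dynamic time warping (DTW) between two 1-D movement signals.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])             # local sample distance
            cost[i, j] = d + min(cost[i - 1, j],      # stretch signal a
                                 cost[i, j - 1],      # stretch signal b
                                 cost[i - 1, j - 1])  # advance both
    return float(cost[n, m])

# Two sketched motion traces with the same shape at different speeds:
slow = np.sin(np.linspace(0, 2 * np.pi, 60))
fast = np.sin(np.linspace(0, 2 * np.pi, 40))
print(dtw_distance(slow, fast))  # stays small despite the length mismatch
```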
Mental health support through AI Robotics
As advances in electronic technologies continue, the boundaries between the digital and physical worlds continue to blur. Human beings, used to a physical reality, are now aiming to create new digital worlds in what we call “the metaverse”, while we welcome computers that go beyond sensing our world to moving and interacting with it. With the power of robotics, virtualized applications can go further than communicating through NLP or identifying visual cues, allowing them to participate in physical activities with people and, potentially, to gain an embodiment.
During the last few decades, advancements in AI with robotics grew rapidly in multiple areas, from household appliances to medicine and astronautical applications (Andreu-Perez et al., 2017). In the case of AI for mental well-being, researchers have published evidence of how robotics may offer some advantages over other methods. As a taxonomic description, Feil-Seifer & Mataric (2005) suggest that socially assistive robots can help users in multiple populations, such as (1) the elderly, (2) individuals with physical impairments, (3) those in convalescent care, (4) those with cognitive disorders, and (5) students in special education (p. 466). For instance, Fasola & Mataric (2013) found that robots can be used to promote physical exercise among the elderly, users who prefer a physically embodied robot coach over virtual solutions; whereas Kabacińska et al. (2021) and Okita (2013) observed that interventions using different types of robots with children were able to reduce anger and depression levels, distress, and, in some cases, pain. Similarly, other studies have found that robotic dogs can reduce loneliness in a fashion comparable to a living dog (Banks et al., 2008a). Indeed, a decrease in feelings of loneliness would be beneficial in a variety of settings, since loneliness greatly affects groups such as women and people living alone or without a partner; moreover, its reduction could lower risks associated with depression, anxiety, and suicidal ideation, among others (Beutel et al., 2017).
For researchers to continue understanding how these relationships between humans and robots may be favorable, Riek (2016) suggests that Human-Robot Interaction (HRI) requires further work as an emerging area, noting that the looks of robots (i.e., their morphology), their autonomy, and their capabilities can differ greatly. This is meaningful according to Šabanović et al. (2015), who suggest that the design of robots might make them challenging for older adults or people suffering from depression to use, even when there is willingness to participate in or adopt these new technologies.
Now that we have presented examples of how mental health is being supported by different technologies associated with artificial intelligence, the following section describes some current applications of AI as mental health coaches (chatbots), virtual therapists, and AI robots, as examples of the stage these technologies have reached.
Part 3. Current applications of AI
Over the last decade, many artificial intelligence applications have been developed, with some of them focused on providing support for mental health issues. Virtual therapists, chatbots, and robots have slowly gained attention as alternatives for the treatment of specific mental conditions. In this part of the chapter, we describe some examples of the applications developed over the last ten years, how they provide support for care, and, moreover, how this assistance could change to reach and ensure availability to more people over the next decade.
Chatbots
Chatbots are considered one of the technological solutions for mitigating the shortage of mental health workforce (Abd-alrazaq et al., 2019b). Designed to communicate with humans, chatbots have surged during the last decade as an option to provide free, accessible, and immediate mental health support. Several are offered as smartphone applications, making them one of the most accessible options compared to other equipment (i.e., VR headsets, computers, robots). Supplying therapist-like support, available 24/7 or for as long as there is an internet connection, these intelligent products allow their users to express themselves through a chat interface. Through multiple interactions, chatbots may perform data analysis to give their users a better sense of being understood, learning from and invoking information from previous conversations (Woebot Health, 2022), with some cases demonstrating potential evidence of the capability to establish a therapeutic bond with their users (Darcy et al., 2021).
Before continuing the discussion of chatbots, it is important to note that, because they are comparatively easy to build, an increasing number of digital well-being apps have been produced for major smartphone app stores over the last decade. In this sense, Martínez-Pérez et al. (2013) found that more than 1,500 commercial apps are marketed as “useful” for the treatment or reduction of symptoms of depression, with most of them lacking any evidence from clinical trials to prove their effectiveness. Moreover, many apps may not consider anonymity and privacy in their design, becoming a potential risk to the safeguarding of their users' data. For this reason, this section focuses on mental health chatbots that have undergone academic research (such as clinical trials) to prove their efficacy. To illustrate, we chose one chatbot that has been studied by researchers over the last decade: Wysa (Tewari et al., 2021; Kulkarni, 2022).
Wysa: a mental health chatbot
Represented by a bluish penguin, Wysa is an AI-enabled mental health app that leverages evidence-based cognitive behavioral therapy (CBT) techniques through its conversational agent (Malik et al., 2022). Requiring a smartphone and an internet connection, the app offers its users free features to interact with its chatbot and to reflect on their emotions, thoughts, and experiences. These interactions feed a journal of conversations, which helps both the chatbot to improve and the users to reflect on their life experiences. The app also includes premium features, such as guided self-care activities for different topics (i.e., overcoming grief, trauma, confidence, self-esteem), with the possibility of receiving remote therapy sessions through the app.
In comparison to human psychological interventions, chatbots must reach and maintain a certain level of quality to facilitate the effectiveness of therapeutic exercises and other activities. In a literature review, Kim et al. (2019) identify three common themes in research papers about chatbots: (1) therapeutic alliance, which refers to the collaboration between patient and therapist to achieve treatment goals; (2) trust; and (3) human intervention. On the first theme, Beatty et al. (2022) found that Wysa, as a chatbot, was able to establish a therapeutic alliance with its users in a variety of treatment scenarios, including chronic pain management (Meheli et al., 2022; Sinha et al., 2022) and depression (Inkster et al., 2018). Nonetheless, it is important to note that the representativeness of these data needs to be judged carefully, due to the involvement of Wysa representatives in some of the best-known studies about the application. Despite this, Wysa remains one of the few mental health applications to receive multiple organizational acknowledgements, such as being named one of the best apps for managing anxiety during COVID-19 (ORCHA, 2020) and being compliant with Clinical Safety Standards in the UK (Wysa, 2022b), among others.
As a supplemental option for mental healthcare, Wysa stands as one of the most accessible alternatives among chatbots and could continue to be the first step for many people who have never experienced a therapeutic approach before. While chatbots are not yet able to listen carefully to us and help us reorganize our thoughts or process our feelings, the data they keep obtaining will probably help in their improvement. With an estimated 6.64 billion smartphone users around the world (Turner, 2022), chatbots could continue to be one of the most accessible and affordable ways to provide care for the preservation and recovery of human well-being.
Virtual AI therapists
While a human therapist can be physically in only one place, virtual AI therapists (or virtual human therapists²) could be awake when we need them most, in any place with a screen and at any time, with the added value of a familiar look, resembling a human face through the screen. Swartout et al. (2013) define virtual humans as computer-generated characters designed to look and behave like real people and designed to be engaging. On these aspects, recent studies have found that trusting an AI may improve the user's commitment to and usage of technology assistants, which can improve the performance of intelligent assistants (Song et al., 2022). In a similar vein, virtual (AI) humans have been found to provide support in times of distress and to increase the willingness of some patients to disclose (Lucas et al., 2014; Pauw et al., 2022).
A popular example of virtual AI in the contemporary era is “Ellie” (Figure 1), a virtual human therapist presented by the University of Southern California in 2013 with the objective of supporting people suffering from mental disorders such as post-traumatic stress disorder (PTSD) or depression (Brigida, 2013). Able to perceive aspects of the physical world and react accordingly, Ellie performs its functions powered by two technological systems: (1) MultiSense, a program that facilitates real-time tracking and analysis of facial expressions, body posture and movement, sound characteristics, linguistic patterns, and high-level behavioral descriptors (e.g., attention, agitation); and (2) SimSensei, a virtual human model platform able to detect the audiovisual signals captured in real time by MultiSense (USC ICT, 2014). Working together, both technologies have shown how a virtual AI may provide feedback and encouragement to a patient through comments and gesture reactions, and may perform strategies that motivate participants to continue the conversation (Rizzo & Morency, 2013). As a result, Ellie aims to establish a level of empathy, confidence, and rapport similar to that of a human therapist.
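To make the division of labor between the two systems easier to picture, the sketch below mimics it at toy scale: a perception layer emits per-frame signals, and a response policy reacts to them. Every name, field, and threshold here is hypothetical; this is not code from MultiSense or SimSensei.

```python
# Hypothetical toy model of a MultiSense-like perception layer feeding a
# SimSensei-like response policy. Names, fields, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class FrameSignals:
    gaze_down: bool          # averted gaze, one of many tracked cues
    smile_intensity: float   # 0.0 - 1.0 facial-expression score
    speech_pause_s: float    # length of the current silence, in seconds

def choose_response(s: FrameSignals) -> str:
    """React to audiovisual cues so the participant keeps talking."""
    if s.speech_pause_s > 3.0:
        return "Take your time. Can you tell me more about that?"
    if s.gaze_down and s.smile_intensity < 0.2:
        return "That sounds really hard."   # empathic reflection
    return "(nod)"                          # back-channel gesture only

print(choose_response(FrameSignals(gaze_down=True, smile_intensity=0.1,
                                   speech_pause_s=1.0)))
```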
Though the possibility of having a therapeutic conversation with a virtual AI may seem convenient, neither the affordability nor the accessibility of the resources needed to provide mental health care through these intelligent applications was found to be widely discussed. Consequently, it is not possible to affirm or predict whether virtual AI therapists will represent a convenient and effective option for receiving mental care in the future.
Figure 1. Ellie, a virtual human AI therapist developed by the University of Southern California. Source: Rizzo & Morency (2013).
² Although the term used for Ellie is “virtual human therapist”, this construct may lead to confusion about its approach (e.g., a virtual human therapist could be a person providing remote therapy, not an artificial intelligence). For this reason, the term “virtual AI therapist” is used interchangeably in this chapter.
Meanwhile, in the near future it is expected that, with new advancements in algorithms, infrastructure, and equipment, virtual AI therapists will become better at tracking and analyzing data, possibly including other channels for data gathering, for example, breathing, heart rate, or data from other sources (Lazarus, 2006). It is also expected that their aesthetics will change in the short term (Epic Games Inc, 2022; Estela, 2022), opening a new research area to understand what effects different digital avatars may have on their users.
Robot AI therapists
Robots are probably still among the technologies people have most anticipated and wished for over a long time. Imagining them has accompanied us through different stories, from robots that become lifelong companions, as in Bicentennial Man (IMDB, 2022), to others like HAL (SHOCHIKU, 2013), created with the specific purpose of helping people overcome grief and find closure after traumatic events. Examples like these are just a few of many that explore how relationships between humans and robots could work in a future we have not yet reached. The wait might continue a little longer, as personal robots and emotion AI are expected to take at least 10 years to offer practical benefits (Klappich & Jump, 2022). Nevertheless, recent research in the field of artificial intelligence in robotics already allows us to see how robots could be used to improve, maintain, and recover mental health.
Commercial therapy robots
In this section, we define “commercial therapy robots” as those that are more affordable to consumers and closer to the ideal of personal robots, compared to others that may be acquired only by organizations such as research institutes, universities, companies, or governments. Along with chatbots, commercial therapy robots continue to be tested by people around the world, in a broad market (personal service robotics) expected to grow rapidly at a compound annual growth rate of 38.5% by 2030 (Globe Newswire, 2022). Therefore, it is relevant to understand how these robots are integrated into people's lives, and which needs they help to solve through artificial intelligence, from being the new best friend of the century to ---
Japanese robotic companions
In an era in which many people fear the potential consequences of artificial intelligence and autonomous robots in society (Liang & Lee, 2017), Japan embraces and adopts these technologies more as partners than as competitors for work and life. In fact, there is an expected shortage of elderly caregivers, estimated at 370,000 nurses and other care professionals by 2025, which, along with other factors, continues to promote the development of ‘carebots’ (Plackett, 2022). Moreover, Japan tops the world rankings for the largest proportion of people over 65 years old, at 29.1%, representing 36.27 million people (Kyodo News, 2022). Because of these particularities, the case of Japanese robotic companions deserves attention, given the positive assessment of robot participation in certain elderly-care tasks (Coco et al., 2018) alongside observed ethical limitations in naturalistic contexts (A. Gallagher et al., 2016). With this in mind, it should be easier to understand the motivations behind the advances and the role of robotic companions in Japan, and how these may provide insights for researchers around the world.
Aibo: A robotic companion
While ownership of or coexistence with a pet (such as a dog) has benefits for mental health, such as promoting physical activity and enhancing mood and psychological health (Knight & Edwards, 2008), it is highly possible that in the near future we will see people interacting with a new breed of robotic pets, such as Aibo. Created by Sony Corporation, Aibo (artificial intelligence robot) is a robotic puppy made in Japan, able to interact with people and to develop its own identity over time. Originally appearing in 1999, the robot reappeared in 2018, selling 11,111 puppies in the first three months after its launch (Kyodo, 2018), signaling potential interest from the country's population. Since its beginning, the robot pet has attracted interest not only from children and a new kind of pet owner, but also from researchers, who found that the robotic puppy could help in the improvement or treatment of mental health disorders. In one example, Stanton et al. (2008) found that children with autism spectrum disorder (ASD) who used Aibo, rather than other mechanical toy dogs, more frequently exhibited behaviors commonly found in children without autism, while also showing fewer autistic behaviors. Other studies explored how Aibo could reduce anxiety and pain in hospitalized children (Tanaka et al., 2022a); in other cases, its purpose has been to reduce loneliness and increase social interaction in patients with dementia (Banks et al., 2008b; Kramer et al., 2009).
Deciding whether a robotic dog or a live Australian Shepherd is the better option is a matter of debate, and of research. Compared to a living dog, a robotic pet has fewer care requirements (no need for food, water, cleaning, etc.), attributes that could be convenient for some people (such as children, or people suffering memory loss) for whom taking care of a living being would be a difficult endeavor, or for those who are stimulated and engaged by certain pet behaviors, such as playing with a ball (Ribi et al., 2008). The acceptance of a robotic pet as similar (or equal) to a living dog has been found to be high in children (Melson et al., 2005), raising the possibility of Aibo being present in controlled environments (such as hospitals) and assisting in improving the quality of life of pediatric inpatients (Kimura et al., 2004; Tanaka et al., 2022). Even so, the robotic puppy may not be affordable for everyone, since its price remains around $3,000 USD (Craft, 2022).
Paro: A robotic companion
Presented in the form of a small seal with huge eyes, Paro is an advanced interactive robot that includes different kinds of sensors, including tactile, light, audition, temperature, and posture (PARO Robots, 2014). In research, the robotic white seal has been used as a therapeutic tool in different settings. In elderly populations, Inoue et al. (2021) explored how Paro could support the care of people with dementia in a home context, as well as in group therapy settings (Chang et al., 2013). Similarly, the Paro seal has been studied as an option for reducing symptoms of depression in elderly adults (Bennett et al., 2017), for reducing loneliness, and as a potential substitute for animal therapy in specific settings (Shibata & Wada, 2011). Even though there are numerous publications providing evidence of its potential, there are also concerns about its use and its underlying effects on the dignity of elderly people (Sharkey & Wood, 2014). Lastly, in comparison to other robots such as Aibo, the therapeutic robot seal has previously been offered at around $6,000 USD (Tergesen & Inada, 2010), making it a less affordable option compared to other robotic companions.
Figure 2. Paro therapy robot. Source: Trowbridge (2010).
Humanoid therapy robots
The ideal representation of a robot therapist is continuously depicted in the media with a human form. While not limited to it, it is almost certain that a human face and shape will provide more assurance and familiarity for talking in a way similar to how we would with a therapist. In this respect, there are available examples that exhibit the capabilities of a humanoid robot for therapy.
Sophia: the first robot citizen
Created by Hanson Robotics, Sophia is a full-body human-like robot that combines advances in both robotics and artificial intelligence with the purpose of resembling the experience of interacting with a human. To achieve this, the AI combines symbolic AI, neural networks, expert systems, machine perception, conversational natural language processing, and cognitive architecture, among others (Hanson Robotics, 2022b). Even though Sophia's main purpose has been to promote public discussion about the interaction between humans and robots, and she has been recognized as a UN Innovation Champion (UN, 2018), some of her traits have been used to explore her role in the treatment of mental health disorders, such as those within the autism spectrum. In this sense, Hanson et al. (2012) found that children with ASD were curious about robots, engaged in conversations with them, and in most cases did not show fear (pp. 4-5). These results held across cross-cultural environments (USA and Italy). Unfortunately, while Sophia has sparked debate on the ethics, governance, and other aspects of AI and robotics, no other publications associated with her role as a potential aid in mental health treatments were found. Moreover, news reports anticipated the possibility of mass production of the AI robot to offer assistance and care for the sick and elderly during the COVID-19 pandemic (Hennessy, 2021), but no news or publications following up on this possibility were discovered, implying multiple potential difficulties (economic, technical, social, among others) for humanoid robots such as Sophia to participate in critical tasks like caring for living humans in the middle of a healthcare crisis.
Concerns and Ethical implications of AI in mental health
The potential of AI to generate both good and harm may be unlike anything we have witnessed before. Compared to the developments of the second and third industrial revolutions, we need to imagine the implications of what was once thought impossible (from flying, unthinkable a few centuries ago, or travelling around the world in less than 80 days, to using smartphones and other devices nearly as extensions of ourselves). For reasons like these, it remains relevant to reflect on what role we wish AI to have in something as personal as our own and others' mental health.
In this sense, the governments of the United States and the United Kingdom, as well as the European Union, have raised serious concerns about the applications of artificial intelligence and robotics in multiple domains (Cath et al., 2018), with the participation of academic and corporate experts in the analysis. Their reports provide recommendations on ethical issues that should be addressed in any AI project, such as fairness, accountability, and social justice, through transparency in the use of these techniques. In addition, the use of AI in robots raises further challenges, calling for a combination of hard and soft law to guard against possible risks, including the development of legal tools to assess liability issues associated with these technologies. Lastly, to prepare for future outcomes, these reports examine prospective problems and adverse consequences of the use of AI and robotics that may require prevention, mitigation, and governance.
Nevertheless, none of these reports seems to specifically analyze the concerns and ethical implications of the use of AI for therapeutic applications in mental health services. Even so, a first approach is offered by Fiske et al. (2019), who summarize current and potential ethical issues, including (1) harm prevention and questions of data ethics, (2) the lack of guidance on the development of AI applications, and (3) their clinical integration and the training of health professionals, among others. Other challenges they address include risk assessment, referrals and supervision, loss of patient autonomy, long-term effects, and transparency in the use of algorithms. For instance, there are concerns about the potential harm that AI may cause a patient due to a malfunction: a video recording that fails to be kept confidential, affecting a person's privacy, or a faulty decision about how AI should support a patient in crisis and refer them to a human or a clinical institution. As long as there is a lack of understanding of which data privacy frameworks could safeguard our identities, the need to anticipate future steps and their likely damage will remain of the utmost importance. As some mental health applications themselves note, at current maturity levels AI is meant to assist users and is not intended to replace the activities of mental health professionals (Wysa, 2022a).
Where these consequences may occur, which others exist, and how we should deal with them are questions open to future research. As Fiske et al. (2019) found, challenges for clinicians and patients will continue to emerge in the field of human-robot interaction, with familiar topics such as patient adherence to treatment, therapist bias, or attachment appearing in new settings: on chatbot screens, through virtual platforms, or with robots sitting with us in a safe room, listening closely to everything we may have to express.
Conclusions
Should artificial intelligence be your therapist?
“Why do we still have a duty to treat the dead with dignity
if they will not benefit from our respect?”
Michael Rosen, in “Dignity: Its History and Meaning” (2018)
“How could artificial intelligence help everyone around the world to have better mental health?” This was one of the main questions behind the development of this chapter, whose purpose has been to provide an integrative foundation on what exists in AI for mental health, how it works, and on which psychotherapy foundations it rests. Designed to provide a basis for health and information technology professionals alike, this chapter focused on establishing common ground for understanding what lies behind the algorithms of artificial intelligence, a few examples of their current applications at different levels, as well as ethical concerns and implications. However, despite covering the core themes of AI for mental health, this chapter would be incomplete without a final, comprehensive philosophical reflection on the utopic and dystopic futures that AI could bring us, depending on how careful we are. After all, artificial intelligence therapy could, sooner or later, have an impact in ways that we have not yet considered or addressed, including possible new forms of discrimination. Following the idea of Gallagher (2012), if we decide to use AI not as a way to increase the quality of our health services, but rather for convenience or cost reduction, we risk losing our own humanity while denying others their dignity. In other words: only those with sufficient financial resources to pay for human therapists would be served, diminishing the perceived human dignity of those unable to pay for their care.
While this future could be seen as dystopic, it is already a reality for many people in some geographies of the world, even in developed countries. As has been documented, medical issues have been linked to up to 62% of bankruptcies in some regions, while disparities in care continue to hinder efforts to provide the best healthcare for those who need it most (Shmerling, 2021). Moreover, concerns about potential biases in the development of chatbots persist in various areas, as the governance boards of these applications may not be designed to be representative, or may be unaware of the hardships their users face depending on community and individual context factors, such as gender, developmental stage, and health conditions, which change their life circumstances. Additionally, the question remains whether their creators would (or do) use their own creations to fulfill their own needs, as they intend to provide value to others. Therefore, while these services are continuously portrayed as useful for others, it remains to be seen whether their creators have assimilated these technologies into their own lives.
Before we go further in constructing solutions through AI, reflection and critical thinking will continue to be more important than the entrepreneurial spirit conveyed toward its development. Unaware of what it means to hold knowledge of someone's intimacies, unregulated or unvalidated mental health applications could have a lasting impact on a society, due to the combination of high-powered processing and the vulnerability factors in people. We need look back only a few years, to when a trend of eating Tide Pods (a laundry detergent product) sparked and went viral on social media, affecting young people and leading to an estimated 18 intentional poisoning cases in 2018 (Bever, 2018). What was the role of the underlying algorithms in preventing these incidents? In another example of the risks when algorithms, social media, and vulnerability are combined, Raman et al. (2020) evaluate how disinformation could be weaponized, leading to city-scale blackouts caused by changes in the energy consumption behavior of users who receive false information about energy discounts. Given scenarios like these, it is important to think about the role of present algorithms and, furthermore, of the artificial intelligence behind these systems. If the present capabilities of AI will seem elementary compared to what will be used in 5 to 10 years, the risks will be higher, and those in vulnerable situations due to a lack of education and digital literacy could suffer the most.
It will remain important to imagine and appraise the role of our technology frameworks for the future, and several questions will require a careful thought process:
- Could artificial intelligence discourage cases where humans are harmed, and decide between its creators' wishes and the benefits for humanity?
- What would be the ethical framework for assessing the better outcome, between minimizing harm for the many and maintaining virtuous imperatives in favor of mankind?
- What would happen to our experiences with these AIs, and what would happen to the data and (our) memories we generate with them?
Undoubtedly, the notion of what AI for mental health represents will continue to advance over time, and we will potentially grasp new possibilities in the development of these technologies in conjunction with others, as blockchains or internet 3.0 infrastructures are built and developed. Finally, the question also remains open for the field of psychology and for mental health professionals, since artificial intelligence could open the opportunity to have, sooner or later, a psychologist who learns across time, from multiple sources of data, and without the limitations of a human lifespan.
References
Abd-alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P., & Househ, M. (2019a). An
overview of the features of chatbots in mental health: A scoping review. International Journal of
Medical Informatics, 132, 103978. https://doi.org/10.1016/j.ijmedinf.2019.103978
Abd-alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P., & Househ, M. (2019b). An
overview of the features of chatbots in mental health: A scoping review. International Journal of
Medical Informatics, 132, 103978. https://doi.org/10.1016/j.ijmedinf.2019.103978
American Psychological Association (APA). (2017, July). What is Cognitive Behavioral Therapy?
Clinical Practice Guideline for the Treatment of Posttraumatic Stress Disorder.
American Psychological Association (APA). (2022, April). PTSD Treatment: Information for Patients
and Families. Clinical Practice Guideline for the Treatment of Posttraumatic Stress Disorder.
Andreu-Perez, J., Deligianni, F., Ravi, D., & Yang, G.-Z. (2017). Robotics and AI. In Artificial intelligence and robotics. UK-RAS Whitepaper. https://www.researchgate.net/publication/318859042_Artificial_Intelligence_and_Robotics
Anyoha, R. (2017, August 28). The History of Artificial Intelligence. Special Edition: Artificial
Intelligence. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
Asimov, I. (1950). I, Robot (First edition). Fawcett Publications. https://www.worldcat.org/title/i-robot/oclc/1384948
Banks, M. R., Willoughby, L. M., & Banks, W. A. (2008a). Animal-Assisted Therapy and Loneliness in Nursing Homes: Use of Robotic versus Living Dogs. Journal of the American Medical Directors Association, 9(3), 173–177. https://doi.org/10.1016/j.jamda.2007.11.007
Banks, M. R., Willoughby, L. M., & Banks, W. A. (2008b). Animal-Assisted Therapy and Loneliness in Nursing Homes: Use of Robotic versus Living Dogs. Journal of the American Medical Directors Association, 9(3), 173–177. https://doi.org/10.1016/j.jamda.2007.11.007
Bautista, M. A., Hernandez-Vela, A., Escalera, S., Igual, L., Pujol, O., Moya, J., Violant, V., & Anguera, M. T. (2016). A Gesture Recognition System for Detecting Behavioral Patterns of ADHD. IEEE Transactions on Cybernetics, 46(1), 136–147. https://doi.org/10.1109/TCYB.2015.2396635
Beatty, C., Malik, T., Meheli, S., & Sinha, C. (2022). Evaluating the Therapeutic Alliance With a Free-
Text CBT Conversational Agent (Wysa): A Mixed-Methods Study. Frontiers in Digital Health, 4.
https://doi.org/10.3389/fdgth.2022.847991
Bennett, C. C., Sabanovic, S., Piatt, J. A., Nagata, S., Eldridge, L., & Randall, N. (2017). A Robot a Day Keeps the Blues Away. 2017 IEEE International Conference on Healthcare Informatics (ICHI), 536–540. https://doi.org/10.1109/ICHI.2017.43
Beutel, M. E., Klein, E. M., Brähler, E., Reiner, I., Jünger, C., Michal, M., Wiltink, J., Wild, P. S.,
Münzel, T., Lackner, K. J., & Tibubos, A. N. (2017). Loneliness in the general population:
prevalence, determinants and relations to mental health. BMC Psychiatry, 17(1), 97.
https://doi.org/10.1186/s12888-017-1262-x
Bever, L. (2018, January 17). Teens are daring each other to eat Tide pods. We don’t need to tell you that’s a bad idea. The Washington Post. https://www.washingtonpost.com/news/to-your-health/wp/2018/01/13/teens-are-daring-each-other-to-eat-tide-pods-we-dont-need-to-tell-you-thats-a-bad-idea
Blackburn, I. M., Bishop, S., Glen, A. I. M., Whalley, L. J., & Christie, J. E. (1981). The Efficacy of
Cognitive Therapy in Depression: A Treatment Trial Using Cognitive Therapy and
Pharmacotherapy, each Alone and in Combination. British Journal of Psychiatry, 139(3), 181–189.
https://doi.org/10.1192/bjp.139.3.181
Bleuer-Elsner, S., Zamansky, A., Fux, A., Kaplun, D., Romanov, S., Sinitca, A., Masson, S., & van der
Linden, D. (2019). Computational Analysis of Movement Patterns of Dogs with ADHD-Like
Behavior. Animals, 9(12), 1140. https://doi.org/10.3390/ani9121140
Brigida, A.-C. (2013, October 18). A Virtual Therapist. USC Viterbi.
https://viterbi.usc.edu/news/news/2013/a-virtual-therapist.htm
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the
‘Good Society’: the US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528.
https://doi.org/10.1007/s11948-017-9901-7
Chang, W.-L., Sabanovic, S., & Huber, L. (2013). Use of seal-like robot PARO in sensory group therapy
for older adults with dementia. 2013 8th ACM/IEEE International Conference on Human-Robot
Interaction (HRI), 101–102. https://doi.org/10.1109/HRI.2013.6483521
Chen, Q., Leaman, R., Allot, A., Luo, L., Wei, C.-H., Yan, S., & Lu, Z. (2021). Artificial Intelligence in
Action: Addressing the COVID-19 Pandemic with Natural Language Processing. Annual Review of
Biomedical Data Science, 4(1), 313–339. https://doi.org/10.1146/annurev-biodatasci-021821-
061045
Chen, X., Orom, H., Hay, J. L., Waters, E. A., Schofield, E., Li, Y., & Kiviniemi, M. T. (2019).
Differences in Rural and Urban Health Information Access and Use. The Journal of Rural Health,
35(3), 405–417. https://doi.org/10.1111/jrh.12335
Coco, K., Kangasniemi, M., & Rantanen, T. (2018). Care Personnel’s Attitudes and Fears Toward Care
Robots in Elderly Care: A Comparison of Data from the Care Personnel in Finland and Japan.
Journal of Nursing Scholarship, 50(6), 634–644. https://doi.org/10.1111/jnu.12435
Craft, L. (2022, January 3). Robo-dogs and therapy bots: Artificial intelligence goes cuddly. CBS News.
https://www.cbsnews.com/news/robo-dogs-therapy-bots-artificial-intelligence/
Crane, K. L., & Watters, K. M. (2019). Cognitive behavioral therapy strategies. Mental Illness Research,
Education and Clinical Center (MIRECC).
https://www.mirecc.va.gov/visn16/clinicalEducationProducts_topic.asp
Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021). Evidence of Human-Level Bonds
Established With a Digital Conversational Agent: Cross-sectional, Retrospective Observational
Study. JMIR Formative Research, 5(5), e27868. https://doi.org/10.2196/27868
de Choudhury, M., Kiciman, E., Dredze, M., Coppersmith, G., & Kumar, M. (2016). Discovering Shifts
to Suicidal Ideation from Mental Health Content in Social Media. Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems, 2098–2110.
https://doi.org/10.1145/2858036.2858207
DeSouza, D. D., Robin, J., Gumus, M., & Yeung, A. (2021). Natural Language Processing as an
Emerging Tool to Detect Late-Life Depression. Frontiers in Psychiatry, 12.
https://doi.org/10.3389/fpsyt.2021.719125
El-Ramly, M., Abu-Elyazid, H., Mo’men, Y., Alshaer, G., Adib, N., Eldeen, K. A., & El-Shazly, M.
(2021). CairoDep: Detecting Depression in Arabic Posts Using BERT Transformers. 2021 Tenth
International Conference on Intelligent Computing and Information Systems (ICICIS), 207–212.
https://doi.org/10.1109/ICICIS52592.2021.9694178
Epic Games Inc. (2022). Metahuman: High-fidelity digital humans made easy. Unreal Engine.
https://www.unrealengine.com/en-US/metahuman
Estela, L. (2022, June 10). Metahuman Designs for Therapy. Lucy Estela.
https://www.unrealengine.com/en-US/metahuman
Fallon, J. (2017, April 25). Tonight Showbotics: Jimmy Meets Sophia the Human-Like Robot. YouTube.
Fasola, J., & Mataric, M. (2013). A Socially Assistive Robot Exercise Coach for the Elderly. Journal of
Human-Robot Interaction, 2(2). https://doi.org/10.5898/JHRI.2.2.Fasola
Feil-Seifer, D., & Mataric, M. J. (2005). Socially Assistive Robotics. 9th International Conference on
Rehabilitation Robotics (ICORR), 465–468. https://doi.org/10.1109/ICORR.2005.1501143
Ferrari, A. J., Santomauro, D. F., Mantilla Herrera, A. M., Shadid, J., Ashbaugh, C., Erskine, H. E.,
Charlson, F. J., Degenhardt, L., Scott, J. G., McGrath, J. J., Allebeck, P., Benjet, C., Breitborde, N.
J. K., Brugha, T., Dai, X., Dandona, L., Dandona, R., Fischer, F., Haagsma, J. A., … Whiteford, H.
A. (2022). Global, regional, and national burden of 12 mental disorders in 204 countries and
territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. The
Lancet Psychiatry, 9(2), 137–150. https://doi.org/10.1016/S2215-0366(21)00395-3
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical
Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy.
Journal of Medical Internet Research, 21(5), e13216. https://doi.org/10.2196/13216
Gallagher, A., Nåden, D., & Karterud, D. (2016). Robots in elder care: Some ethical questions. Nursing
Ethics, 23(4), 369–371. https://doi.org/10.1177/0969733016647297
Gallagher, J. (2012, March 25). Dignity: Its History and Meaning by Michael Rosen – review. The
Guardian. https://www.theguardian.com/books/2012/mar/25/dignity-history-meaning-rosen-review
Giannakakis, G., Pediaditis, M., Manousos, D., Kazantzaki, E., Chiarugi, F., Simos, P. G., Marias, K., &
Tsiknakis, M. (2017). Stress and anxiety detection using facial cues from videos. Biomedical Signal
Processing and Control, 31, 89–101. https://doi.org/10.1016/j.bspc.2016.06.020
Globe Newswire. (2022, June 24). Personal Service Robotics Market Anticipated to Hit USD 35.9 Billion
at a Whopping 38.5% CAGR by 2030 - Report by Market Research Future (MRFR). GLOBE
NEWSWIRE. https://www.globenewswire.com/en/news-
release/2022/06/24/2468841/0/en/Personal-Service-Robotics-Market-Anticipated-to-Hit-USD-35-9-
Billion-at-a-Whopping-38-5-CAGR-by-2030-Report-by-Market-Research-Future-MRFR.html
Hacohen-Kerner, Y., Manor, N., Goldmeier, M., & Bachar, E. (2022). Detection of Anorexic Girls in
Blog Posts Written in Hebrew Using a Combined Heuristic AI and NLP Method. IEEE Access, 10,
34800–34814. https://doi.org/10.1109/ACCESS.2022.3162685
Hanson, D., Mazzei, D., Garver, C. R., Ahluwalia, A., de Rossi, D., Stevenson, M., & Reynolds, K.
(2012, June). Realistic Humanlike Robots for Treatment of ASD, Social Training, and Research;
Shown to Appeal to Youths with ASD, Cause Physiological Arousal, and Increase Human-to-Human
Social Engagement. 5th International Conference on Pervasive Technologies Related to Assistive
Environments (PETRA).
https://www.researchgate.net/publication/233951262_Realistic_Humanlike_Robots_for_Treatment_
of_ASD_Social_Training_and_Research_Shown_to_Appeal_to_Youths_with_ASD_Cause_Physiol
ogical_Arousal_and_Increase_Human-to-Human_Social_Engagement
Hanson Robotics. (2022a). Being Sophia. Being Sophia.
Hanson Robotics. (2022b). Sophia’s Artificial Intelligence. Hanson Robotics.
https://www.hansonrobotics.com/sophia/
Hashemi, J., Tepper, M., Vallin Spina, T., Esler, A., Morellas, V., Papanikolopoulos, N., Egger, H.,
Dawson, G., & Sapiro, G. (2014). Computer Vision Tools for Low-Cost and Noninvasive
Measurement of Autism-Related Behaviors in Infants. Autism Research and Treatment, 2014, 1–12.
https://doi.org/10.1155/2014/935686
Hassan, S. B., Hassan, S. B., & Zakia, U. (2020). Recognizing Suicidal Intent in Depressed Population
using NLP: A Pilot Study. 2020 11th IEEE Annual Information Technology, Electronics and Mobile
Communication Conference (IEMCON), 0121–0128.
https://doi.org/10.1109/IEMCON51383.2020.9284832
Hennessy, M. (2021, January 24). Makers of Sophia the robot plan mass rollout amid pandemic. Reuters.
https://www.reuters.com/article/us-hongkong-robot-idUSKBN29U03X
Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science, 349(6245),
261–266. https://doi.org/10.1126/science.aaa8685
IBM. (2022). What is computer vision? IBM.
IMDB. (2022). Bicentennial Man (1999). IMDB. https://www.imdb.com/video/vi783941913/
Inkster, B., Sarda, S., & Subramanian, V. (2018). An Empathy-Driven, Conversational Artificial
Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-
Methods Study. JMIR MHealth and UHealth, 6(11), e12106. https://doi.org/10.2196/12106
Inoue, K., Wada, K., & Shibata, T. (2021). Exploring the applicability of the robotic seal PARO to
support caring for older persons with dementia within the home context. Palliative Care and Social
Practice, 15, 26323524211030285. https://doi.org/10.1177/26323524211030285
Jaiswal, S., Valstar, M. F., Gillott, A., & Daley, D. (2017). Automatic Detection of ADHD and ASD from
Expressive Behaviour in RGBD Data. 2017 12th IEEE International Conference on Automatic Face
& Gesture Recognition (FG 2017), 762–769. https://doi.org/10.1109/FG.2017.95
Jonze, S. (2013). Her. Warner Bros. Pictures. https://www.imdb.com/title/tt1798709/
Kabacińska, K., Prescott, T. J., & Robillard, J. M. (2021). Socially Assistive Robots as Mental Health
Interventions for Children: A Scoping Review. International Journal of Social Robotics, 13(5),
919–935. https://doi.org/10.1007/s12369-020-00679-0
Kim, J., Park, S. Y., & Robert, L. (2019, November 9). Conversational Agents for Health and Wellbeing:
Review and Future Agendas. 22nd ACM Conference on Computer Supported Cooperative Work and
Social Computing.
Kimura, R., Abe, N., Matsumura, N., Horiguchi, A., Sasaki, T., Negishi, T., Ohkubo, E., & Naganuma,
M. (2004, August 4). Trial of robot assisted activity using robotic pets in children hospital. SICE
2004 Annual Conference. https://ieeexplore.ieee.org/abstract/document/1491448
Klappich, D., & Jump, A. (2022). Hype Cycle for Mobile Robots and Drones, 2022.
https://www.gartner.com/interactive/hc/4016694
Knight, S., & Edwards, V. (2008). In the Company of Wolves: The Physical, Social, and Psychological
Benefits of Dog Ownership. Journal of Aging and Health, 20(4), 437455.
https://doi.org/10.1177/0898264308315875
Kramer, S. C., Friedmann, E., & Bernstein, P. L. (2009). Comparison of the Effect of Human Interaction,
Animal-Assisted Therapy, and AIBO-Assisted Therapy on Long-Term Care Residents with
Dementia. Anthrozoös, 22(1), 43–57. https://doi.org/10.2752/175303708X390464
Kulasinghe, S. A. S. A., Jayasinghe, A., Rathnayaka, R. M. A., Karunarathne, P. B. M. M. D., Suranjini
Silva, P. D., & Anuradha Jayakodi, J. A. D. C. (2019). AI Based Depression and Suicide Prevention
System. 2019 International Conference on Advancements in Computing (ICAC), 73–78.
https://doi.org/10.1109/ICAC49085.2019.9103411
Kulkarni, S. (2022). Chatbots: a futuristic approach to therapy. International Research Journal of
Modernization in Engineering, Technology, and Science, 4(5).
https://www.irjmets.com/uploadedfiles/paper/issue_5_may_2022/24692/final/fin_irjmets165667195
7.pdf
Kyodo. (2018, May 7). Sales of Sony’s new Aibo robot dog off to solid start. The Japan Times.
Kyodo News. (2022, September 19). Over 75s make up over 15% of Japan’s population for first time. The
Japan Times. https://www.japantimes.co.jp/news/2022/09/19/national/japans-graying-population/
Lazarus, A. A. (2006). Multimodal Therapy: A Seven-Point Integration. In A casebook of psychotherapy
integration. (pp. 17–28). American Psychological Association. https://doi.org/10.1037/11436-002
le Glaz, A., Haralambous, Y., Kim-Dufor, D.-H., Lenca, P., Billot, R., Ryan, T. C., Marsh, J., DeVylder,
J., Walter, M., Berrouiguet, S., & Lemey, C. (2021). Machine Learning and Natural Language
Processing in Mental Health: Systematic Review. Journal of Medical Internet Research, 23(5),
e15708. https://doi.org/10.2196/15708
Le, D. van, Montgomery, J., Kirkby, K. C., & Scanlan, J. (2018). Risk prediction using natural language
processing of electronic mental health records in an inpatient forensic psychiatry setting. Journal of
Biomedical Informatics, 86, 49–58. https://doi.org/10.1016/j.jbi.2018.08.007
Liang, Y., & Lee, S. A. (2017). Fear of Autonomous Robots and Artificial Intelligence: Evidence from
National Representative Data with Probability Sampling. International Journal of Social Robotics,
9(3), 379–384. https://doi.org/10.1007/s12369-017-0401-3
Low, D. M., Rumker, L., Talkar, T., Torous, J., Cecchi, G., & Ghosh, S. S. (2020). Natural Language
Processing Reveals Vulnerable Mental Health Support Groups and Heightened Health Anxiety on
Reddit During COVID-19: Observational Study. Journal of Medical Internet Research, 22(10),
e22635. https://doi.org/10.2196/22635
Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual humans
increase willingness to disclose. Computers in Human Behavior, 37, 94–100.
https://doi.org/10.1016/j.chb.2014.04.043
Malik, T., Ambrose, A. J., & Sinha, C. (2022). Evaluating User Feedback for an Artificial Intelligence–
Enabled, Cognitive Behavioral Therapy–Based Mental Health App (Wysa): Qualitative Thematic
Analysis. JMIR Human Factors, 9(2), e35668. https://doi.org/10.2196/35668
Martin, C. D. (1995). ENIAC: press conference that shook the world. IEEE Technology and Society
Magazine, 14(4), 3–10. https://doi.org/10.1109/44.476631
McCarthy, J. (2007, November 12). What is artificial intelligence? - Basic Questions. Stanford
University.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4).
https://doi.org/10.1609/aimag.v27i4.1904
Meheli, S., Sinha, C., & Kadaba, M. (2022). Understanding People With Chronic Pain Who Use a
Cognitive Behavioral Therapy–Based Artificial Intelligence Mental Health App (Wysa): Mixed
Methods Retrospective Observational Study. JMIR Human Factors, 9(2), e35671.
https://doi.org/10.2196/35671
Melson, G. F., Kahn, P. H., Beck, A. M., Friedman, B., Roberts, T., & Garrett, E. (2005). Robots as
dogs?: children’s interactions with the robotic dog AIBO and a live Australian shepherd. CHI ’05
Extended Abstracts on Human Factors in Computing Systems, 1649–1652.
https://doi.org/10.1145/1056808.1056988
Mujeeb, S., Hafeez, M., & Arshad, T. (2017). Aquabot: A Diagnostic Chatbot for Achluophobia and
Autism. International Journal of Advanced Computer Science and Applications, 8(9).
https://doi.org/10.14569/IJACSA.2017.080930
Myers, A. (2011, November 25). Stanford’s John McCarthy, seminal figure of artificial intelligence, dies
at 84. Stanford | News.
Namboodiri, S. P., & Venkataraman, D. (2019). A computer vision based image processing system for
depression detection among students for counseling. Indonesian Journal of Electrical Engineering
and Computer Science, 14(1), 503–512. https://doi.org/10.11591/ijeecs.v14.i1.pp503-512
Okita, S. Y. (2013). Self–Other’s Perspective Taking: The Use of Therapeutic Robot Companions as
Social Agents for Reducing Pain and Anxiety in Pediatric Patients. Cyberpsychology, Behavior, and
Social Networking, 16(6), 436–441. https://doi.org/10.1089/cyber.2012.0513
Oppy, G., & Dowe, D. (2021, October 4). The Turing Test. Stanford Encyclopedia of Philosophy.
ORCHA. (2020, March 13). Coronavirus: Apps to help self-management. ORCHA - News.
https://orchahealth.com/coronavirus-apps-to-help-self-management/
PARO Robots. (2014). PARO Therapeutic Robot. PARO Therapeutic Robot. http://www.parorobots.com/
Pauw, L. S., Sauter, D. A., van Kleef, G. A., Lucas, G. M., Gratch, J., & Fischer, A. H. (2022). The avatar
will see you now: Support from a virtual human provides socio-emotional benefits. Computers in
Human Behavior, 136, 107368. https://doi.org/10.1016/j.chb.2022.107368
Plackett, B. (2022, January 19). Tackling the crisis of care for older people: lessons from India and
Japan. Nature. https://www.nature.com/articles/d41586-022-00074-x
Raman, G., AlShebli, B., Waniek, M., Rahwan, T., & Peng, J. C.-H. (2020). How weaponizing
disinformation can bring down a city’s power grid. PLOS ONE, 15(8), e0236517.
https://doi.org/10.1371/journal.pone.0236517
Ribi, F. N., Yokoyama, A., & Turner, D. C. (2008). Comparison of Children’s Behavior toward Sony’s
Robotic Dog AIBO and a Real Dog: A Pilot Study. Anthrozoös, 21(3), 245256.
https://doi.org/10.2752/175303708X332053
Riek, L. D. (2016). Robotics Technology in Mental Health Care. In Artificial Intelligence in Behavioral
and Mental Health Care (pp. 185–203). Elsevier. https://doi.org/10.1016/B978-0-12-420248-
1.00008-8
Rizzo, A., & Morency, L.-P. (2013, February 7). SimSensei & MultiSense: Virtual Human and
Multimodal Perception for Healthcare Support. USCICT (YouTube).
https://www.youtube.com/watch?v=ejczMs6b1Q4
Rosen, M. (2018). Dignity: Its History and Meaning. Harvard University Press.
https://www.hup.harvard.edu/catalog.php?isbn=9780674984059
Rush, A. J., Beck, A. T., Kovacs, M., & Hollon, S. (1977). Comparative efficacy of cognitive therapy and
pharmacotherapy in the treatment of depressed outpatients. Cognitive Therapy and Research, 1(1),
17–37. https://doi.org/10.1007/BF01173502
Šabanović, S., Chang, W.-L., Bennett, C. C., Piatt, J. A., & Hakken, D. (2015). A Robot of My Own:
Participatory Design of Socially Assistive Robots for Independently Living Older Adults Diagnosed
with Depression (pp. 104–114). https://doi.org/10.1007/978-3-319-20892-3_11
Santomauro, D. F., Mantilla Herrera, A. M., Shadid, J., Zheng, P., Ashbaugh, C., Pigott, D. M.,
Abbafati, C., Adolph, C., Amlag, J. O., Aravkin, A. Y., Bang-Jensen, B. L., Bertolacci, G. J., Bloom, S. S.,
Castel, R., & Ferrari, A. J. (2021). Global prevalence and burden of depressive and anxiety disorders
in 204 countries and territories in 2020 due to the COVID-19 pandemic. Lancet (London, England),
398(10312), 1700–1712. https://doi.org/10.1016/S0140-6736(21)02143-7
Sapiro, G., Hashemi, J., & Dawson, G. (2019). Computer vision and behavioral phenotyping: an autism
case study. Current Opinion in Biomedical Engineering, 9, 14–20.
https://doi.org/10.1016/j.cobme.2018.12.002
Shaikh, T. A. H., & Mhetre, M. (2022). Autonomous AI Chat Bot Therapy for Patient with Insomnia.
2022 IEEE 7th International Conference for Convergence in Technology (I2CT), 1–5.
https://doi.org/10.1109/I2CT54291.2022.9825008
Sharkey, A., & Wood, N. (2014). The Paro seal robot: Demeaning or enabling? AISB 2014 - 50th Annual
Convention of the AISB.
https://www.researchgate.net/publication/286522298_The_Paro_seal_robot_Demeaning_or_enablin
g
Shibata, T., & Wada, K. (2011). Robot Therapy: A New Approach for Mental Healthcare of the Elderly –
A Mini-Review. Gerontology, 57(4), 378–386. https://doi.org/10.1159/000319015
Shmerling, R. H. (2021, July 13). Is our healthcare system broken? Harvard Health Publishing.
https://www.health.harvard.edu/blog/is-our-healthcare-system-broken-202107132542
SHOCHIKU. (2013, February 7). Hal [『ハル』]: Official trailer. Shochiku Channel [松竹チャンネル].
https://www.youtube.com/watch?v=tNXhoXiufGk
Sinha, C., Cheng, A. L., & Kadaba, M. (2022). Adherence and Engagement With a Cognitive Behavioral
Therapy–Based Conversational Agent (Wysa for Chronic Pain) Among Adults With Chronic Pain:
Survival Analysis. JMIR Formative Research, 6(5), e37302. https://doi.org/10.2196/37302
Song, X., Xu, B., & Zhao, Z. (2022). Can people experience romantic love for artificial intelligence? An
empirical study of intelligent assistants. Information & Management, 59(2), 103595.
https://doi.org/10.1016/j.im.2022.103595
Stanton, C. M., Kahn Jr., P. H., Severson, R. L., Ruckert, J. H., & Gill, B. T. (2008). Robotic animals
might aid in the social development of children with autism. Proceedings of the 3rd International
Conference on Human Robot Interaction - HRI ’08, 271. https://doi.org/10.1145/1349822.1349858
Swartout, W., Artstein, R., Forbell, E., Foutz, S., Lane, H. C., Lange, B., Morie, J. F., Rizzo, A. S., &
Traum, D. (2013). Virtual Humans for Learning. AI Magazine, 34(4), 13–30.
https://doi.org/10.1609/aimag.v34i4.2487
Tanaka, K., Makino, H., Nakamura, K., Nakamura, A., Hayakawa, M., Uchida, H., Kasahara, M., Kato,
H., & Igarashi, T. (2022). The pilot study of group robot intervention on pediatric inpatients and
their caregivers, using ‘new aibo.’ European Journal of Pediatrics, 181(3), 1055–1061.
https://doi.org/10.1007/s00431-021-04285-8
Tergesen, A., & Inada, M. (2010, June 21). It’s Not a Stuffed Animal, It’s a $6,000 Medical Device. The
Wall Street Journal.
https://www.wsj.com/articles/SB10001424052748704463504575301051844937276
Tewari, A., Chhabria, A., Khalsa, A. S., Chaudhary, S., & Kanal, H. (2021). A Survey of Mental Health
Chatbots using NLP. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3833914
The Editors of Encyclopaedia Britannica. (2022, August 27). Wilhelm Wundt. Encyclopaedia Britannica.
https://www.britannica.com/biography/Wilhelm-Wundt
The Tech. (2008, March 14). Joseph Weizenbaum. The Tech.
https://thetech.com/2008/03/14/weizenbaum-v128-n12
Trowbridge, T. (2010, January 9). Paro Therapy Robot. Flickr.
https://www.flickr.com/photos/therontrowbridge/4261112915/
Turner, A. (2022, October). How Many Smartphones Are In The World? Bank My Cell.
https://www.bankmycell.com/blog/how-many-phones-are-in-the-world
United Nations (UN). (2018, October 10). Robot Sophia, UN’s First Innovation Champion, Visited
Armenia. United Nations Development Programme.
University of Minnesota. (2013). Communication in the Real World: An Introduction to Communication
Studies. University of Minnesota Libraries Publishing.
https://open.lib.umn.edu/communication/chapter/4-2-types-of-nonverbal-communication/
USCICT. (2014). SimSensei/MultiSense Overview 2014. YouTube.
https://www.youtube.com/watch?v=I2aBJ6LjzMw
Waseda University. (n.d.). WABOT -WAseda roBOT-. Waseda University Humanoid. Retrieved
September 14, 2022, from http://www.humanoid.waseda.ac.jp/booklet/kato_2.html
Wasil, A. R., Palermo, E. H., Lorenzo-Luaces, L., & DeRubeis, R. J. (2022). Is There an App for That? A
Review of Popular Apps for Depression, Anxiety, and Well-Being. Cognitive and Behavioral
Practice, 29(4), 883–901. https://doi.org/10.1016/j.cbpra.2021.07.001
Watson, A., Mellotte, H., Hardy, A., Peters, E., Keen, N., & Kane, F. (2021). The digital divide: factors
impacting on uptake of remote therapy in a South London psychological therapy service for people
with psychosis. Journal of Mental Health, 1–8. https://doi.org/10.1080/09638237.2021.1952955
Woebot Health. (2022). Life changes. And so do we, with the help of AI. Woebot Health.
https://woebothealth.com/what-powers-woebot/
World Health Organization (WHO). (2019, April). Mental health workers - Data by country. World
Health Organization.
World Health Organization (WHO). (2020, December). WHO methods and data sources for global
burden of disease estimates 2000-2019. https://cdn.who.int/media/docs/default-source/gho-
documents/global-health-estimates/ghe2019_daly-methods.pdf?sfvrsn=31b25009_7
World Health Organization (WHO). (2021, September 13). Depression. World Health Organization.
https://www.who.int/news-room/fact-sheets/detail/depression
Wysa. (2022a). FAQs. Wysa. https://www.wysa.io/faq
Wysa. (2022b). First AI mental health app to meet Clinical Safety Standards. Wysa.
https://wysa.io/clinical-validation
Yao, M. (2017, March 21). 4 Approaches To Natural Language Processing & Understanding.
FreeCodeCamp.
Zhang, T., Schoene, A. M., Ji, S., & Ananiadou, S. (2022). Natural language processing applied to mental
illness detection: a narrative review. Npj Digital Medicine, 5(1), 46. https://doi.org/10.1038/s41746-
022-00589-7
Zhang, Y., Kong, M., Zhao, T., Hong, W., Zhu, Q., & Wu, F. (2020). ADHD Intelligent Auxiliary
Diagnosis System Based on Multimodal Information Fusion. Proceedings of the 28th ACM
International Conference on Multimedia, 4494–4496. https://doi.org/10.1145/3394171.3414359
Zor, R., Fineberg, N., Eilam, D., & Hermesh, H. (2011). Video telemetry and behavioral analysis
discriminate between compulsive cleaning and compulsive checking in obsessive-compulsive
disorder. European Neuropsychopharmacology, 21(11), 814–824.
https://doi.org/10.1016/j.euroneuro.2011.03.006