To chat or bot to chat: Ethical issues with using chatbots in mental health

Simon Coghlan, Kobi Leins, Susie Sheldrick, Marc Cheong, Piers Gooding and Simon D'Alfonso
This paper presents a critical review of key ethical issues raised by the emergence of mental health chatbots. Chatbots use varying degrees of artificial intelligence and are increasingly deployed in many different domains including mental health. The technology may sometimes be beneficial, such as when it promotes access to mental health information and services. Yet, chatbots raise a variety of ethical concerns that are often magnified in people experiencing mental ill-health. These ethical challenges need to be appreciated and addressed throughout the technology pipeline. After identifying and examining four important ethical issues by means of a recognised ethical framework comprised of five key principles, the paper offers recommendations to guide chatbot designers, purveyors, researchers and mental health practitioners in the ethical creation and deployment of chatbots for mental health.

Keywords: chatbots, artificial intelligence, ethics, mental health, data privacy
Submission date: 7 February 2023; Acceptance date: 5 June 2023
The rapid rise of chatbots in information and service provision by businesses, government agencies and non-profits has inevitably touched the domain of mental health. As benign as they may first appear, chatbots raise ethical issues. Mental health chatbots that offer information, advice and therapies have the potential to benefit patients and the general public, but they also have the capacity to harm vulnerable individuals and communities. Chatbots regarded as unethical may also damage the reputation of individuals and organisations who deploy them. Like some other digital tools, chatbots raise a range of specific ethical issues such as privacy, transparency, accuracy, safety and accountability. The importance of these ethical issues will only grow as chatbots become more capable and widespread.
This paper provides a critical ethical overview of chatbots that provide information, advice and therapies to users in regard to mental health. It examines ethical issues in the design and deployment of these mental health chatbots and provides recommendations to guide their responsible development and use. The paper should be useful for chatbot designers, purveyors, researchers and mental health practitioners who seek a clear and solid framework for understanding and/or navigating the ethical issues that mental health chatbots create.
Chatbots have been defined as any software application that engages in a dialogue with a human using natural language. Other terms for chatbots include dialogue agents, conversational agents and virtual assistants. Existing scholarly work has identified several ethical advantages and challenges of chatbots and similar technologies, and some works have offered ethical guidelines or frameworks relevant to mental health chatbots. For example, Wykes et al. examined ethical issues for a range of mental health technologies including health apps using
School of Computing and Information Systems, The University of Melbourne
Department of War Studies, King's College London
Melbourne Law School, The University of Melbourne

Corresponding author:
Simon D'Alfonso, School of Computing and Information Systems, The University of Melbourne, Grattan Street, Parkville, Victoria, 3010, Australia.
Volume 9: 1–11
© The Author(s) 2023
DOI: 10.1177/20552076231183542
the principles of privacy and data security, good development practices, feasibility and the health benefits of such technologies. Luxton et al.'s discussion of ethical considerations raised by intelligent machines for mental healthcare drew on work in robot and machine ethics as well as on professional ethical codes for mental health professionals. Meanwhile, Lederman et al. applied the classic four-principle ethical framework from medical ethics to a specific mental health intervention platform that some of those authors were developing.
Our aim in this paper is to provide a critical review of ethical considerations regarding mental health chatbots in general. In contrast to other ethical discussions about chatbot-style technologies, such as those just mentioned and others, we employ the recent five-principle framework developed within artificial intelligence (AI) ethics. This framework incorporates the classic and widely accepted four-principle framework from medical ethics, but it also adds a fifth principle of explicability, which accommodates special features of intelligent digital technologies.
The paper runs as follows. We first provide some general background about the history of chatbot development (Section II). We then canvass the operation of chatbots in mental health and observe some benefits and challenges (Section III). Next, we explain and then utilise an established and helpful ethical framework from medical and AI ethics (Section IV) to analyse core issues raised by mental health chatbots (Section V). Finally, we provide ethical recommendations that apply throughout the technology pipeline from inception to chatbot retirement (Section VI), with the aim of giving practical assistance to chatbot designers, mental health practitioners, researchers and others in ensuring that mental health chatbots are constructed and implemented in ethically defensible ways. Section VII summarises the discussion.
Background to chatbots
Some general background about the history of chatbots will help in understanding their ethical implications in mental healthcare. Chatbots generally depend on some degree of natural language processing (NLP). NLP uses computing and statistical techniques, which these days often involve machine learning (ML), to interpret human language and provide intelligible responses to human language inputs. The very first NLP computer program was created by Joseph Weizenbaum, a German-American computer scientist considered a father of modern AI. Named ELIZA, this 'chatterbot' simulated a Rogerian therapist (relating to the system of therapy developed by the psychologist Carl Rogers) by demonstrating the responses of a non-directional psychotherapist in an initial psychiatric interview. Weizenbaum aimed to show that communication between man and machine was superficial. Unexpectedly, however, Weizenbaum's assistant, though aware of the machine's purpose and limitations, began disclosing personal matters and forming a superficial relationship with ELIZA. This led Weizenbaum to warn that human engagement with even limited technologies could appear less than fully rational. The so-called 'ELIZA effect' describes our tendency to ascribe to computers human traits and intentions which we may know they lack.
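ELIZA's core mechanism was not understanding but simple pattern matching with pronoun reflection. The following minimal sketch illustrates the idea; the patterns and canned responses are our own illustrative inventions, not Weizenbaum's original script:

```python
import re

# Pronoun reflections: turn the user's words back on them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

# (pattern, response template) pairs in a Rogerian, non-directive style.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"my (.*)", "Why do you say your {0}?"),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip(". !?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my exams"))
# → Tell me more about feeling anxious about your exams.
```

Even this toy version shows why users can experience the illusion of being heard: their own words are echoed back in a therapeutically plausible frame.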
Many consumer chatbots, such as those in banking or telecommunications, are designed to enable human users to efficiently find information or services. In doing this, they may tailor user responses to a history of use to increase search accuracy. Still, the style of interaction here can be relatively simple. In comparison, some chatbots have more ambitious features. For example, Microsoft's Xiaoice, which has over 660 million users, is a social agent 'with a personality modelled on that of a teenage girl, and [has] a dauntingly precocious skill set'.
Chatbots simulate human conversation by collecting data and finding and projecting patterns. Simple chatbots are often rule-based and follow pre-programmed decision trees or simple 'If-Then-Else' rules. Conversational agents with greater flexibility may rely on AI and ML techniques. Some of these employ deep learning neural networks, and some require the collection and storage of large amounts of user data to operate.
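A rule-based chatbot of the kind just described can be sketched as a pre-programmed decision tree: each node holds a prompt and a mapping from recognised keywords in the user's reply to the next node. The nodes and wording below are hypothetical, purely to show the mechanism:

```python
# A hypothetical decision tree: each node has a prompt and a mapping
# from recognised keywords in the user's reply to the next node.
TREE = {
    "start": ("Are you looking for information or support? (information/support)",
              {"information": "info", "support": "support"}),
    "info": ("Here are some mental health resources. Anything else? (yes/no)",
             {"yes": "start", "no": "end"}),
    "support": ("Would you like breathing exercises or a helpline? (exercises/helpline)",
                {"exercises": "end", "helpline": "end"}),
    "end": ("Take care. Goodbye.", {}),
}

def step(node: str, reply: str) -> str:
    """Apply the If-Then-Else rule: follow the matching branch, else stay put."""
    _, branches = TREE[node]
    for keyword, nxt in branches.items():
        if keyword in reply.lower():
            return nxt
    return node  # unrecognised input: re-ask the same question

# One simulated exchange through the tree.
node = "start"
for reply in ["support", "helpline"]:
    print(TREE[node][0])
    node = step(node, reply)
print(TREE[node][0])
```

The contrast with ML-driven agents is clear from the sketch: every path the conversation can take must be anticipated and hand-coded in advance, which is why such bots fail abruptly on unanticipated input.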
Such chatbots may sometimes loosely be said to 'understand', or at least to predict and respond to, user needs. That chatbots can generate affective responses from users is sometimes regarded as desirable. Microsoft, for example, suggests that '[s]ocial chatbots' appeal lies not only in their ability to respond to users' diverse requests, but also in being able to establish an emotional connection with users', and so better understand them and therefore help them over a long period of time.
In the 1950s, Alan Turing proposed a procedure for determining whether a computer could exhibit human intelligence. Roughly speaking, the Turing Test involves testing whether a device hidden from a person can convince that person that they are interacting with another human being rather than a machine. When chatbots are sufficiently adept, it can seem to the user that they are actually dealing with another human. To avoid this problematic confusion, many current public-facing chatbots reveal at the outset that they are not people. Nonetheless, some other chatbots may not make it particularly clear to users that they are just digital machines.
Chatbots for mental health: some benefits and challenges

As chatbots proliferated, they moved into the field of mental health. In this field, some chatbots (like many consumer chatbots) are largely a conduit to a human professional. However, many chatbots, like the popular Woebot, provide mental health assistance to users, such as advice and exercises based on cognitive behavioural therapy. These chatbots can often be accessed by anyone who can download an app to a smartphone. Mental health chatbots may also be used by health professionals as an online element of therapies or patient monitoring. Some chatbots, like Replika, are capable of emulating emotion and psychological connections.
Chatbots sometimes have simple interfaces that provide conversational search engines for digital mental health therapy. Whilst chatbots cannot satisfactorily replicate psychotherapeutic dialogue, they can now maintain conversation beyond simple, single linguistic outputs. Some chatbots can guide users through exercises on mental health apps, as in the examples of Wysa and Tess. Other chatbots can initiate welcoming chats with clients waiting to see a human therapist via online mental health portals such as e-headspace. Information useful for therapists can be gathered when chatbots ask clients questions, sometimes using NLP to summarise responses.
Chatbots have begun to flourish in the digital mental health space partly because they apparently offer certain benefits and advantages. For example, chatbots can perhaps make mental health services and support more accessible for many individuals. They can also run day and night and do not require salaries or sick leave, although they require humans to monitor and update them. When chatbots malfunction, they can be upgraded or switched off. These features have led to interest in chatbots from businesses and organisations and from those who want to expand the supply of mental health assistance amid growing need and insufficient services, further exacerbated by the COVID-19 pandemic.
Research shows that in embarrassing or stigmatising circumstances, chatbots may sometimes be preferred to human contact and conversation. Like Weizenbaum's assistant, users may be more likely to disclose emotional and factual information to a chatbot than to a human. Furthermore, the fact that chatbots may potentially engender emotional connections may be viewed as an advantage for socially isolated individuals. We know that in healthcare, a perceived relationship of trust and mutual understanding can be vital for successful therapy. Chatbots that facilitate emotional connections and that promote trust and understanding may therefore benefit a range of user groups.
However, some research indicates barriers to using chatbots for mental health. Barriers include privacy concerns, financial constraints, scepticism and reduced willingness among potential users from lower socioeconomic backgrounds. Some studies have found that the main barriers to chatbot use among adolescents are stigma, embarrassment, poor mental health literacy and a preference for self-reliance. A large-scale survey warned that the current mental health app landscape tends to over-medicalise distress and over-emphasise individual responsibility for mental well-being.
For chatbot design and use to be successful, users must be able to trust that the technology will meet their needs safely and effectively and that developers are responsive to problems and user feedback. This point is underscored by the examples of various failed chatbots featured in Table 1. The table illustrates ethical issues caused by chatbots designed to provide, respectively, legal assistance, news, weather and general conversation. As the table shows, these chatbots were the cause of various ethical problems, such as being offensive and causing emotional harm (Lawbot and Tay), misunderstanding user questions (Poncho) and failing to deliver the service and benefits that were promised (newsbots). Mental health chatbots raise similar and also further ethical issues in an especially acute manner, in virtue of the vulnerability of the individuals they are designed to assist. In the next section, we shall see how mental health chatbots present a range of ethical challenges that require careful attention.
Table 1. Examples of failed chatbots and reasons behind failure.

Lawbot (created by Cambridge University students to help victims of sexual assault navigate the legal system):
- Emotionally insensitive
- Overly strict checklist to determine what a crime is
- Can discourage users from seeking help
- Directs users to local police station but not to support services

Newsbots (in 2016, several news agencies sought to create bots that personalised content and opened up new audiences):
- Significant resources required for maintenance
- Did not sync with existing formats, delivery or distribution of news content
- Lacked sophistication to personalise content
- Minimal input from journalists during development

Poncho (weather chatbot using Facebook Messenger with a sassy cartoon cat as the persona):
- Sending users unrelated information
- Not understanding words it should, e.g. 'weekend'

Tay (Microsoft chatbot trained via crowdsourced input on Twitter):
- Shut down after 24 h for producing racist, sexist, anti-Semitic tweets
- Public able to influence outputs as minimal human supervision provided
Ethical framework: five principles

Although there is some ethical discussion relevant to mental health chatbots in the literature, ethical evaluation of such chatbots is still relatively limited. To make better sense of the ethical issues and to help guide designers, purveyors and practitioners, it makes sense to draw on existing ethical approaches or frameworks. In our view, the five-principle framework outlined by Floridi and Cowls is particularly useful. In this section, we briefly explain the five-principle ethical framework. We apply the framework to mental health chatbots in the subsequent section.
The five key principles in this framework are (a) non-maleficence, (b) beneficence, (c) respect for autonomy, (d) justice and (e) explicability. The first four principles are drawn from medical ethics. The last principle, explicability, has been added because of the special nature of digital technologies such as AI. Explicability is composed of two sub-principles: transparency and accountability. Transparency is important because the ways in which intelligent digital technologies work are often unknown to users (and sometimes even to experts). Accountability is important in part because it can be unclear who is ethically and legally liable for adverse outcomes of intelligent technologies. Each of the five principles gives a distinctive form of ethical guidance, which we describe in Table 2. The principles are non-absolute but guiding prima facie rules for those who design, implement, research or oversee digital technologies.

To be clear, the field of AI ethics has identified many other ethical principles relevant to intelligent technologies (including chatbots), such as safety, solidarity, trust, responsibility and dignity.
However, the five-principle approach has several advantages. First, some of these other principles can be subsumed under the five-principle framework. For example, safety is covered by the principle of non-maleficence, and responsibility is covered by the principle of accountability. Second, a longer list of overlapping principles can cause confusion and reduce effectiveness and practicality, compared with a more succinct principle set that is relatively easily remembered and understood. Third, the five principles, with the exception of explicability, have a long track record of usefulness. The principles of non-maleficence, beneficence, respect for autonomy and justice comprise the classic ethical framework introduced in the 1970s by the bioethicists Tom Beauchamp and James Childress. These principles are well accepted in healthcare, whereas other ethical principles (e.g. solidarity and dignity) are less well-known and sometimes more contentious.
It is important to note that while the principles can apply to a range of parties, including those who design and market the technologies, they can apply somewhat differently to mental health practitioners who deploy the chatbots. Mental health practitioners, of course, have especially exacting responsibilities to patients, stemming from their professional roles as health carers. There is thus a contextual element in determining precisely how the principles work in practice. No prima facie ethical principle can fully specify how it should be applied in all situations. Nonetheless, as AI ethics scholars have argued, the five principles are still helpfully action-guiding for a range of parties, including technologists.

They are also helpful for those who research chatbots for and with people who have mental illness. Here again, context can affect the precise application of the principles. For example, research ethics in healthcare recognises that researchers may balance potential harm to participants against potential benefits to society (e.g. future patients); in contrast, medical practitioners not involved in research should ordinarily prioritise their current patients' interests over social benefits in their provision of health services (although they are sometimes required to take social implications into account). This example illustrates how the principles of non-maleficence and beneficence operate somewhat differently in different contexts and require judgement in their application. Again, this need for judgement goes for the other principles in the framework.
Issues with mental health chatbots: applying the five-principle framework

Guided by these ethical principles, we now identify and discuss four ethically important issues for chatbots in mental health care: human involvement, evidence base, collection and use of data, and unexpected disclosure of crimes. Although this list is not exhaustive, it contains key moral considerations and allows us to demonstrate the need for ethical thinking about chatbot use. It also serves to illustrate how the five-principle framework can be used. In the subsequent section, we make some recommendations for addressing the sorts of ethical issues we discuss below.

Table 2. Five key ethics principles for mental health chatbots.
- Non-maleficence: Avoid causing physical, social or mental harm to users
- Beneficence: Ensure that interventions do good or provide real benefit to users
- Respect for autonomy: Respect users' values and choices
- Justice: Treat users without unfair bias, discrimination or inequity
- Explicability: Provide to users sufficient transparency about the nature and effects of the technology and be accountable for its design and deployment
Human involvement

To operate successfully and continuously, chatbots require human supervision. As chatbots learn and develop, they may acquire glitches and fail in various ways (Table 1). Thus, human supervision is required to ensure that chatbots operate as desired. Yet, adequate supervision is not always achieved, and this creates the potential for harm. At the same time, chatbot moderation can put pressure on service providers to increase multitasking and workloads in collecting, inputting, organising and constantly updating digital materials, which, paradoxically, may reduce time for teamwork and face-to-face engagement. Further risks arise if there is a power outage that prevents mental health chatbots from providing services.
The complete or relative absence of trained human supervisors from the chatbot environment can undermine the role of expert professionals. Mental health chatbots that provide an automated service are still far from being able to recreate the rich therapeutic alliance that can exist between patients and human professionals, notwithstanding their efforts to mirror real-life interactions. Though remarkable in its way, ELIZA was never able to substitute for a human therapist and the broad range of skills they possess. The same drawback also applies to more sophisticated, contemporary mental health chatbots, such as those that use AI and NLP and far exceed ELIZA in intelligence and learning ability. Recent work, however, is starting to examine if and how a version of the therapeutic alliance, so central to traditional psychotherapy, can be partly emulated or fostered by mental health apps and chatbots.

But although increasing personalisation is possible (e.g. different tips/strategies for depression versus anxiety), the support provided by many chatbots at this point in time is still relatively generic and in some ways resembles self-help books. Current chatbots cannot grasp the nuances of social, psychological and biological factors that feed into mental health difficulties. As the popular Woebot warns: 'As smart as I may seem, I'm not capable of really understanding what you need.'
The explosion of digital technology in health and social services is premised on the reasonable idea that some forms of automation and digital communication could assist with care. One common aim of new technology, such as artificial intelligence, is to break down tasks into individual components that can be repetitively undertaken. However, genuine, comprehensive care is not fully reducible to these tasks, since care also has a rich emotional and social dimension. Chatbots are not capable of genuine empathy or of tailoring responses to reflect human emotions, and this comparative lack of affective skill may compromise engagement. Although affective computing is creating systems that can recognise and simulate human emotions, these systems still cannot match the capacities of human therapists.
What does this mean in terms of our chosen ethical framework? The above considerations illustrate ways in which chatbots run some risk of failing to accord with beneficence and non-maleficence. Because they cannot fully replicate the range of skills and the affective dimensions of a human therapist, and because they cannot entirely replace the practitioner-client therapeutic alliance, chatbots may potentially cause harm to some people and thus not align with the principle of non-maleficence. For the same reason, chatbots may also fail to provide the benefits for mental well-being which are intended, thereby not meeting the requirement of beneficence.

However, if mental health chatbots can offer some semblance of an effective therapeutic alliance and/or augment the human-client relationship without causing harm, then they may respect the principles of beneficence and non-maleficence after all. Whether or not this occurs will depend on various factors, such as the nature and competence of the chatbot, the feelings and attitudes of the clients who interact with it, the level of technical support provided, the reliability of the technology and the involvement and role of mental health practitioners.
Where mental health chatbots (or particular instances of them) pose risks of some harm but also promise some degree of benefit, judgement must be used to balance the principles of non-maleficence and beneficence. As we noted earlier, each of the five principles is a prima facie rather than absolute principle, and the principles must often be carefully weighed against one another when they point in different directions. For example, if the only option was to offer a client a mental health chatbot due to long waiting lists for practitioners, and if that chatbot had the potential to offer some temporary assistance despite carrying certain risks of harm, then it may be judged that beneficence overrides non-maleficence, at least in that specific context. In other cases where the facts are different, non-maleficence may trump beneficence.
Evidence base

More general 'mental health' apps that purport to assist with anxiety, depression and other conditions have been used with varying levels of success. Leigh and Flatt characterise the wide range of mental health apps as suffering from a frequent lack of an underlying evidence base, a lack of scientific credibility and subsequent limited clinical effectiveness. There are clear risks with hyping technology, especially for disadvantaged people and without a commensurate evidence base to justify the enthusiasm. These risks appear at both the individual and population levels, from shaping individual users' preferences and expectations about service provision to altering how national research funding is distributed. An insufficient evidence base for the deployment of chatbots creates risks that are even more acute for users already suffering various mental health problems.
At present, the evidence base for various mental health chatbots is just getting established. Consequently, there can be uncertainty over whether existing chatbots meet the requirements of beneficence. Furthermore, rolling out chatbots may deflect people from essential mental health services and encourage governments and other providers to substitute automated for human care. When such chatbots lack a strong evidence base, this may lead to avoidable harm to people with mental health concerns and thus fail to meet the principle of non-maleficence.

We should stress that it is not possible to say precisely when beneficence and non-maleficence will support or oppose the use of a mental health chatbot, for that will depend on the context and circumstances. It will depend, for example, on the relative degree of harm and benefit involved and our knowledge of their probabilities. Uncertainties, after all, are commonplace in healthcare and in regard to emerging technologies. What the principles tell us to do, however, is to make the best judgement we can of the degrees and probabilities of harms and benefits from interventions and to exercise judgement in how we weigh them up to reach conclusions. For instance, if it is likely that a chatbot carries risks of considerable rather than minor harm, the principles will suggest that it is necessary, before chatbot implementation, to have a firmer evidence base that the technology can bring benefits substantial enough to outweigh the risks.
In addition to beneficence and non-maleficence, the above considerations also bring into play the principle of justice. An insufficient evidence base, especially for higher-stakes interventions, amplifies risks for users with mental health problems. People with mental illness are already often worse off than others: not only do such people suffer from the effects of the illness, they may also have more trouble keeping and finding employment, find themselves subject to social stigmas and isolation, and so on. There is thus a risk of violating the principle of justice by exacerbating their problems with promising but poorly tested technologies. Such an amplification of inequity in society is prima facie unfair as well as maleficent. Furthermore, justice may be violated if chatbots which lack evidential support are used to replace investment in and access to mental healthcare provided by human professionals.
Data collection, storage and use

Some (though not all) chatbots collect large amounts of data about people, including data useful for commercial purposes or government intervention (which may sometimes be authorised, e.g. for people at risk of harming themselves or others). Chatbots are frequently trained on existing data, such as data arising from client interaction with service providers. The specific data used shapes those chatbots' responses. When data sets are not sufficiently comprehensive or representative of the target group, unintended biases may occur. Some AI applications have been severely criticised for producing biases that harm or discriminate against certain groups and individuals. High-profile examples include facial (mis)recognition and recidivism prediction based on ML models. Comparably, mental health chatbots trained on such data might reveal biases against people with certain features, such as when they fail to provide correct information to those particular individuals even though they are reliable overall. Clearly, this outcome may transgress the principle of justice.
Further questions concern what data is collected, how data is stored (e.g. on a company-based server like Amazon's versus more localised storage), where data is used and how it is linked to other data. Raw chat data, metadata and even client use behaviour can be tracked and linked with other online behavioural data. Anonymised data can be de-anonymised by data triangulation to reveal people's identities. Data security issues can arise from the risk of data related to mental health being leaked or hacked into by cybercriminals. Any resulting privacy loss can result in mental harm and reduced control over personal information.
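The triangulation risk can be illustrated with a toy linkage attack: an 'anonymised' chat-log export, from which names have been removed but quasi-identifiers retained, is joined to a second dataset that contains the same quasi-identifiers alongside names. All records below are invented for illustration:

```python
# Hypothetical "anonymised" chat records: names removed, but
# quasi-identifiers (postcode, birth year) retained.
chat_records = [
    {"user_id": "u1", "postcode": "3010", "birth_year": 1990, "topic": "anxiety"},
    {"user_id": "u2", "postcode": "3121", "birth_year": 1985, "topic": "insomnia"},
]

# A public or leaked dataset containing the same quasi-identifiers plus names.
public_records = [
    {"name": "A. Smith", "postcode": "3010", "birth_year": 1990},
    {"name": "B. Jones", "postcode": "3000", "birth_year": 1972},
]

def reidentify(chats, public):
    """Join the two datasets on (postcode, birth_year) to relink identities."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in public}
    matches = []
    for record in chats:
        name = index.get((record["postcode"], record["birth_year"]))
        if name:  # a matching quasi-identifier pair re-links name to topic
            matches.append((name, record["topic"]))
    return matches

print(reidentify(chat_records, public_records))
# → [('A. Smith', 'anxiety')]
```

The point of the sketch is that removing names alone is weak protection: any combination of retained attributes that is unique to a person can serve as a join key back to their identity.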
Preventing or not providing control over personal informa-
tion can breach the principle of respect for autonomy. Respect
for autonomy involves respect for a persons values (e.g. their
interests in privacy) and their ability to make decisions based
on those values. Obtaining personal and sensitive information
from clients with mental health issues will always be an ethic-
ally laden business. Respect for autonomy generally requires
gaining fully informed consent from individuals before such
information is taken and used.
Where a chatbot user is not given sufcient information
about what data is collected, how it is used and the risks that
such use may generate, the principle of explicability and its
sub-principle of transparency also come into play.
Transparency is ethically important because people in
general want to know how their data is being managed
and what its implications are, including the potential for
harm. However, many people are not aware of the ways
in which new technologies can harvest and recombine
data to make predictions about their identity and behaviour.
As noted, such predictions can sometimes be biased against
individuals from certain populations.
These technologies may thus require special explanations of the risks
and benefits of data collection and use, including clear
information about the likelihood that users'
data, anonymised or not, could be passed or sold to third
parties. These are important reasons why the principle of
explicability/transparency (a principle developed as a
result of the complex, unfamiliar and autonomous nature
of intelligent machines) is a useful addition to the ethical
framework for mental health chatbots.
Unexpected Disclosure of Crimes
The issue of unexpected personal disclosures is typically
overlooked in chatbot design, yet it too raises important questions.
Consider a user apparently disclosing crimes like child abuse
or domestic violence.
As we observed earlier, users can form
quasi-relationships with machines,
and this could promote
the revelation of information that, if disclosed to a human,
might entail ethical or legal duties to report.
It may sometimes be unclear whether such a duty applies
to unsupervised chatbots. In some jurisdictions, mental
health practitioners (and other professionals) may be
legally required to report suspected abuse. But it may be
less clear whether a company that provides the technology
or obtains and keeps the data has such a legal duty.
Principles of justice and beneficence suggest at least an
ethical duty of this kind where the disclosure is credible.
However, the issue is fraught since it may be unclear
whether reporting might exacerbate harms to people with
mental health issues and it can be hard to determine what
degree of certainty is required to justify it. Here, the principle
of justice may conflict with the principle of
non-maleficence. On the one hand, failures to report may lead
to serious harm to innocent victims; on the other hand, mis-
taken reports may effectively cause injustice. Mistaken
reports may also undermine trust in chatbot use, which
could reduce their overall benets.
The question of reporting apparent disclosure of crime
is, then, yet another occasion on which the ethical principles
need to be carefully considered, weighed and balanced to
determine the right or best course of action. Even so, the
principles give us direction on how to proceed in making
such decisions. Clarification about what the law requires
or might require could also help here, and future research
could beneficially explore the legal ramifications of criminal
disclosure across various jurisdictions.
We are now in a position to offer some recommendations
for the design and use of chatbots for mental health.
Whether and how a chatbot should be developed and imple-
mented requires an overall ethical evaluation that can be
made on the basis of conformity with our prima facie
ethical principles, suitably interpreted and weighed. In add-
ition to complying with existing law, those responsible for
chatbot design and deployment, we suggest, should meet
duties of non-maleficence, beneficence, autonomy, justice
and explicability (transparency and accountability).
The sub-principle of accountability, which we have not
yet discussed, refers to the roles and duties of responsible
parties to act ethically in their handling of technology. In
effect, this means being responsive to the other ethical princi-
ples in the framework and establishing appropriate mechan-
isms and procedures for upholding them. Accountability is
thus a means of ensuring that the design and use of chatbots
brings benefit, avoids or minimises harm, respects autonomy,
remains transparent and is just or fair. To meet the ethical principles,
relevant parties (e.g. mental health practitioners and
chatbot purveyors) should take the following steps.
Recommendation 1: weigh risk and benefit
In this step, relevant parties should clearly define the
problem they wish to solve and the purpose they want to
aim at to ensure the specific chatbot may justifiably be
developed in the first place.
Sometimes, the risks will
be too high or the benefits too low to justify (partially)
replacing human therapists with chatbots, which cannot
empathise or provide comprehensive mental healthcare
and which might deflect some people from seeking
human care that would be better for them.
If the use of a chatbot is presumptively justified, the above
ethical principles should be used to determine how best to
develop and implement it throughout the technology pipeline.
When existing systems are repurposed or retired, an
evaluative process of weighing risk and benet should be
repeated. It might also be worth considering patient and
public involvement in mental health chatbot development
and research
to anticipate and respond to risks, and to
maximise the benefits, of chatbots for end-users.
Recommendation 2: seek and disclose evidential support
As we saw, having a sufficient evidence base
for providing
services for disadvantaged people is required by principles of
non-maleficence, beneficence and justice. Although their
speed and scalability may be tempting, the use of chatbots
requires an evidence-based approach. Where the stakes are
particularly high (e.g. highly at-risk people with psychological
problems), this may require more substantial evidential
support, such as well-conducted clinical trials. In less risky
situations (e.g. when people are mildly unwell), less evidential
support may be acceptable. The degree of evidential support
should also be transparently disclosed to respect users' autonomy
to engage with or decline chatbot assistance. While a lack of
robust evidence does not imply that chatbots lack value, it
does require caution in recommending chatbots and warrants
further research into their benets and risks.
Recommendation 3: approach data collection/use carefully
Because collection, storage and use of personal data create
risks, the relevant facts must be made transparent to users in
order to promote their trust and respect their autonomy.
Special attention must be paid to ensuring transparency and
adequate understanding for users with mental health issues
Coghlan et al. 7
(or other vulnerabilities) that could impair their understanding.
Chatbot developers and owners should ensure that training
data is sufficiently representative to mitigate injustice against
individuals and groups. Data use also raises legal and ethical
questions about privacy. Systems must ensure the security
of data to avoid maleficence and disrespect for autonomy.
Here, experts in consumer protection, privacy protection and
security of data storage may offer important advice.
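One modest technical measure consistent with these principles is pseudonymising user identifiers before chat records are stored. The sketch below is an illustration under simplified assumptions (the salt handling in particular is minimal; a real system needs proper secret storage, rotation and governance), not a prescription from this paper:

```python
import hashlib
import secrets

# Illustrative pseudonymisation: replace user identifiers with salted
# hashes before chat records are stored. Salt/key management is
# deliberately simplified here for demonstration purposes.
SALT = secrets.token_bytes(16)  # held server-side, never stored with the data

def pseudonym(user_id: str) -> str:
    """Deterministic pseudonym: the same user always maps to the same token."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

# The stored record carries no direct identifier, but records from the
# same user remain linkable, which continuity of care may require.
record = {"user": pseudonym("alice@example.com"), "message": "I feel anxious"}
print(record["user"] == pseudonym("alice@example.com"))  # True
```

Pseudonymisation of this kind reduces the harm of a leak but, as discussed above, does not by itself prevent re-identification when quasi-identifiers are retained alongside the record.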
Data protection laws in many jurisdictions place strict limits
on what data (particularly sensitive personal data) can be col-
lected and how it is stored and re-used.
Data-related obligations are increasingly demanded by law, such as the EU's
General Data Protection Regulation (GDPR). These demands
may increase as societies recognise the implications of big
data and the power it lends organisations.
After chatbot retire-
ment, owners should determine how data will be safely stored or
destroyed and users should be adequately notified.
Recommendation 4: consider possible disclosure of crimes
A failure to report to authorities may create legal risks, and
reporting crimes when others may be at imminent risk is
also a prima facie ethical duty of beneficence and justice.
Nonetheless, reporting too carries dangers. Theoretically,
reporting could be done automatically or else with a
human in the loop. One option is to develop a system that
scans all user input for problematic content (e.g. using
keyword analysis or more sophisticated NLP detection
techniques for determining concerning terms/phrases). If a
portion of content were to signal an emergency situation
or were deemed to be beyond the chatbot's purview, then it
would be automatically passed on to a human content moderator
with sufficient experience who could then make the
decision to report or not based on an ethical assessment
of the situation. Either way, those utilising chatbots need
to be aware of possible legal implications and liabilities.
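The screening-and-escalation approach described above might be sketched as follows. The keyword list and function names are illustrative assumptions; a production system would rely on validated NLP risk-detection models rather than naive substring matching, and the final decision stays with a human moderator:

```python
# Minimal sketch of the screening approach described above: flag chat
# input containing concerning terms and route it to a human moderator
# rather than deciding automatically. The term list is illustrative only.
CONCERNING_TERMS = {"abuse", "hurt myself", "violence"}

def screen(message: str) -> str:
    """Return a routing decision for a single user message."""
    text = message.lower()
    if any(term in text for term in CONCERNING_TERMS):
        # A human with sufficient experience makes the reporting decision.
        return "escalate_to_human_moderator"
    return "continue_chat"

print(screen("I have trouble sleeping"))        # continue_chat
print(screen("My partner threatens violence"))  # escalate_to_human_moderator
```

Keeping the human in the loop at the escalation step reflects the earlier point that both failures to report and mistaken reports carry serious ethical costs.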
Accountability might require other steps to be taken.
According to Duggal and colleagues, a robust regulatory frame-
work in digital mental health contexts will only emerge when
service users, patients, practitioners and providers collaborate
to design a forward thinking, future proof, and credible regula-
tory framework that can be trusted by all parties.
Without such
accountability, there is a higher risk of costly technologies being
introduced without thoughtful regard for ethical principles like
beneficence, non-maleficence, transparency and respect for
autonomy. Poor user consultation also increases the likelihood
of wasted resources, which is not only a pragmatic consideration
for developers but sometimes also a matter of justice.
Deliberative, participatory development may also be
important since services and technology tend to emerge
from a concentration of power, such as through government
agencies, venture capital and Big Tech, universities with
large-scale infrastructure for tech development and sizable
professional associations. To ensure greater justice and benefit in
design, development and regulation, some writers have called
for interdisciplinary empirical research on the implications of
these technologies that centres the experiences and knowledge
of those who will be most affected.
Such research should
preferably accommodate diversity amongst end-users in terms
of age, race, gender, socioeconomic status and so forth, as
such factors can shape how users experience technologies.
Undertaking genuinely participatory, community-engaged
and inclusive development is not straightforward. Design of
technology like chatbots that have the potential to both
benefit and harm vulnerable groups should be done via
careful consultation with the relevant experts and target
users and always with the key ethical principles in mind.
This paper identied and discussed ethical questions raised by
emerging mental health chatbots. Chatbots can probably
provide benefits for people with mental health concerns, but
they also create risks and challenges. The ethical issues we identified
involved the replacement of expert humans, having an
adequate evidence base, data use and security, and the apparent
disclosure of crimes. We discussed how these ethical challenges
can be understood and addressed through the five principles of
beneficence, non-maleficence, respect for autonomy, justice and
explicability (transparency and accountability), noting that the
application of such principles, including where they come
into apparent conflict with each other, requires contextual
judgment. Based on our discussion, we offered several ethical
recommendations for those parties who design and deploy chat-
bots. While we focused on chatbots for mental health, the
ethical considerations we discussed also have broad application
to chatbots in other situations and contexts, especially where the
end-users are particularly vulnerable.
Acknowledgements: We thank two anonymous reviewers for
very helpful feedback and advice.
Contributorship: SC, SD and KL conceptualised and wrote drafts
of the paper. SS provided a literature review and edited drafts. PG
and MC reviewed drafts and made important additions and edits.
All authors reviewed and approved the version submitted.
Declaration of conflicting interests: The authors declared no
potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Funding: The authors received no financial support for the
research, authorship, and/or publication of this article.
Guarantor: SC.
ORCID iD: Simon D'Alfonso
a See: '6 ways Head to Health can help you', Australian
Government Department of Health; Woebot Health; Wysa - 'Your 4 am friend
and AI life coach'; Joyable; Talkspace. Accessed
17 December 2022.
1. Brandtzaeg PB and Følstad A. Chatbots: changing user needs
and motivations. Interactions 2018; 25: 38-43.
2. Lederman R, D'Alfonso S, Rice S, et al. Ethical issues in
online mental health interventions. 28th European
Conference on Information Systems (ECIS) 2020: 1-9.
3. Floridi L, Cowls J, Beltrametti M, et al. AI4People - an
ethical framework for a good AI society: opportunities,
risks, principles, and recommendations. Minds & Machines
2018; 28: 689-707.
4. Dale R. The return of the chatbots. Nat Lang Eng 2016; 22:
5. Parviainen J and Rantala J. Chatbot breakthrough in the
2020s? An ethical reflection on the trend of automated consultations
in health care. Med Health Care and Philos 2022; 25:
6. Fiske A, Henningsen P and Buyx A. Your robot therapist will
see you now: ethical implications of embodied artificial intelligence
in psychiatry, psychology, and psychotherapy. J Med
Internet Res 2019; 21: e13216.
7. Shawar BA and Atwell E. Different measurements metrics to
evaluate a chatbot system. In: Proceedings of the workshop on
bridging the gap: academic and industrial research in dialog
technologies. Stroudsburg, PA, USA: Association for
Computational Linguistics, 2007, pp. 89-96.
8. Wykes T, Lipshitz J and Schueller SM. Towards the design of
ethical standards related to digital mental health and all its
applications. Curr Treat Options Psych 2019; 6: 232-242.
9. Luxton DD, Anderson SL and Anderson M. Chapter 11 -
ethical issues and artificial intelligence technologies in behavioral
and mental health care. In: Luxton DD (ed.) Artificial
intelligence in behavioral and mental health care. San
Diego: Academic Press, 2016, pp. 255-276.
10. Boucher EM, Harake NR, Ward HE, et al. Artificially intelligent
chatbots in digital mental health interventions: a review.
Expert Rev Med Devices 2021; 18: 37-49.
11. Almeida Rd and Silva Td. AI Chatbots in mental health: are
we there yet? In: Digital therapies in psychosocial rehabilita-
tion and mental health. Hershey, Pennsylvania: IGI Global,
2022, pp. 226-243.
12. Beauchamp TL and Childress JF. Principles of biomedical
ethics. 5th ed. Oxford, UK: Oxford University Press, 2001.
13. Powell JA and Menendian S. The problem of othering: towards
inclusiveness and belonging. Othering and Belonging, http://
(2017, accessed 8 November 2022).
14. Morris RR, Kouddous K, Kshirsagar R, et al. Towards an
artificially empathic conversational agent for mental health applications:
system design and user perceptions. J Med Internet
Res 2018; 20: e10148.
15. Ho A, Hancock J and Miner AS. Psychological, relational,
and emotional effects of self-disclosure after conversations
with a chatbot. J Commun 2018; 68: 712-733.
16. Nadkarni PM, Ohno-Machado L and Chapman WW. Natural
language processing: an introduction. J Am Med Inform Assoc
2011; 18: 544-551.
17. Weizenbaum J. Computer power and human reason: from
judgement to calculation. 1976.
18. Bassett C. The computational therapeutic: exploring
Weizenbaum's ELIZA as a history of the present. AI & Soc
2019; 34: 803-812.
19. Epstein J and Klinkenberg WD. From Eliza to internet: a brief
history of computerized assessment. Comput Human Behav
2001; 17: 295-314.
20. Dormehl L. Microsoft Xiaoice: AI that wants to be your friend
| Digital Trends. Digital Trends, 18 November 2018, https://
of-ai-assistants/ (accessed 8 November 2022).
21. Lewis S. Ultimate guide to chatbots 2020 - examples, best
practices & more,
to-chatbots-2020/ (2019, accessed 8 November 2022).
22. Stojanov M. Prospects for chatbots. Izvestia Journal of the Union
of Scientists - Varna Economic Sciences Series 2019; 8: 10-16.
23. Smith EM, Williamson M, Shuster K, et al. Can you put it all
together: evaluating conversational agents' ability to blend
skills. Epub ahead of print 17 April 2020. DOI: 10.48550/
24. Shum H, He X and Li D. From Eliza to XiaoIce: challenges
and opportunities with social chatbots. Frontiers Inf Technol
Electronic Eng 2018; 19: 10-26.
25. Turing AM. Computing machinery and intelligence. Mind
1950; LIX: 433-460.
26. Oppy G and Dowe D. The Turing Test. In: Zalta EN (ed.) The
Stanford Encyclopedia of Philosophy. Stanford, CA:
Metaphysics Research Lab, Stanford University, 2021. https:// (2021,
accessed 16 December 2022).
27. Lister K, Coughlan T, Iniesto F, et al. Accessible conversa-
tional user interfaces: considerations for design. In:
Proceedings of the 17th international web for all conference.
New York, NY, USA: Association for Computing Machinery,
2020, pp. 1-11.
28. Fitzpatrick KK, Darcy A and Vierhile M. Delivering cognitive
behavior therapy to young adults with symptoms of depres-
sion and anxiety using a fully automated conversational
agent (Woebot): a randomized controlled trial. JMIR Ment
Health 2017; 4: e7785.
29. Abd-alrazaq AA, Alajlani M, Alalwan AA, et al. An overview
of the features of chatbots in mental health: a scoping review.
Int J Med Inf 2019; 132: 103978.
30. Vaidyam AN, Wisniewski H, Halamka JD, et al. Chatbots and
conversational agents in mental health: a review of the psychi-
atric landscape. Can J Psychiatry 2019; 64: 456-464.
31. Replika. (accessed 8
November 2022).
32. D'Alfonso S. AI in mental health. Curr Opin Psychol 2020;
36: 112-117.
33. Headspace. Headspace,
phone-support/ (accessed 8 November 2022).
34. Calvo RA, Milne DN, Hussain MS, et al. Natural language
processing in mental health applications using non-clinical
texts. Nat Lang Eng 2017; 23: 649-685.
35. Sheridan Rains L, Johnson S, Barnett P, et al. Early impacts of
the COVID-19 pandemic on mental health care and on people
with mental health conditions: framework synthesis of inter-
national experiences and responses. Soc Psychiatry
Psychiatr Epidemiol 2021; 56: 13-24.
36. Wainberg ML, Scorza P, Shultz JM, et al. Challenges and
opportunities in global mental health: a research-to-practice
perspective. Curr Psychiatry Rep 2017; 19: 28.
37. Sadavoy J, Meier R and Ong AYM. Barriers to access to
mental health services for ethnic seniors: the Toronto study.
Can J Psychiatry 2004; 49: 192-199.
38. Nadarzynski T, Miles O, Cowie A, et al. Acceptability of
artificial intelligence (AI)-led chatbot services in healthcare: a mixed-
methods study. DIGITAL HEALTH 2019; 5: 205520761987180.
39. Roter D. The enduring and evolving nature of the patient-physician
relationship. Patient Educ Couns 2000; 39: 5-15.
40. Srivastava B, Rossi F, Usmani S, et al. Personalized chatbot
trustworthiness ratings. IEEE Trans Technol Soc 2020; 1:
41. Han X. 'Am I asking it properly?': designing and evaluating interview
chatbots to improve elicitation in an ethical way. In:
Proceedings of the 25th international conference on intelligent
user interfaces companion. Cagliari, Italy: ACM, 2020, pp. 33
42. Crutzen R, Bosma H, Havas J, et al. What can we learn from a
failed trial: insight into non-participation in a chat-based inter-
vention trial for adolescents with psychosocial problems.
BMC Res Notes 2014; 7: 824.
43. Gulliver A, Griffiths KM and Christensen H. Perceived barriers
and facilitators to mental health help-seeking in young
people: a systematic review. BMC Psychiatry 2010; 10: 113.
44. Parker L, Bero L, Gillies D, et al. Mental health messages in
prominent mental health apps. Ann Fam Med 2018; 16: 338-342.
45. Egger FN. Trust me, I'm an online vendor: towards a model of
trust for e-commerce system design. In: CHI '00 extended
abstracts on human factors in computing systems.NewYork,
NY, USA: Association for Computing Machinery, 2000,
46. Floridi L and Cowls J. A unified framework of five principles
for AI in society. Harvard Data Science Review 2019; 1,
Epub ahead of print 1 July 2019. DOI: 10.1162/
47. Jobin A, Ienca M and Vayena E. The global landscape of AI
ethics guidelines. Nat Mach Intell 2019; 1: 389-399.
48. Gillon R. Medical ethics: four principles plus attention to
scope. Br Med J 1994; 309: 184.
49. Emanuel EJ, Grady CC, Crouch RA, et al. The Oxford textbook
of clinical research ethics. Oxford, UK: Oxford University Press,
50. Tremain H, McEnery C, Fletcher K, et al. The therapeutic alli-
ance in digital mental health interventions for serious mental ill-
nesses: narrative review. JMIR Ment Health 2020; 7: e17204.
51. D'Alfonso S, Lederman R, Bucci S, et al. The digital therapeutic
alliance and human-computer interaction. JMIR Ment
Health 2020; 7: e21895.
52. Kretzschmar K, Tyroll H, Pavarini G, et al. Can your phone be
your therapist? Young people's ethical perspectives on the use
of fully automated conversational agents (chatbots) in mental
health support. Biomed Inform Insights 2019; 11:
53. Kittay EF. The ethics of care, dependence, and disability.
Ratio Juris 2011; 24: 49-58.
54. Picard RW. Affective computing. Cambridge: MIT Press, 1997.
55. Calvo RA, D'Mello S, Gratch JM, et al. The Oxford handbook
of affective computing. Oxford, UK: Oxford University Press,
56. Leigh S and Flatt S. App-based psychological interventions: friend
or foe? Evid Based Mental Health 2015; 18: 97-99.
57. Anthes E. Mental health: there's an app for that. Nature 2016;
532: 20-23.
58. Eubanks V. Automating inequality: how high-tech tools
profile, police, and punish the poor. New York: St. Martin's
Publishing Group, 2018.
59. Asaro PM. AI ethics in predictive policing: from models of threat
to an ethics of care. IEEE Technol Soc Mag 2019; 38: 40-53.
60. Raji ID, Gebru T, Mitchell M, et al. Saving face: investigating
the ethical concerns of facial recognition auditing. In:
Proceedings of the AAAI/ACM conference on AI, ethics, and
society. New York, NY, USA: Association for Computing
Machinery, 2020, pp. 145-151.
61. Jurkiewicz CL. Big data, big concerns: ethics in the digital
age. Public Integrity 2018; 20: S46-S59.
62. White G. Child advice chatbots fail to spot sexual abuse. BBC News,
11 December 2018,
46507900 (11 December 2018, accessed 15 December 2022).
63. Ischen C, Araujo T, Voorveld H, et al. Privacy concerns in
chatbot interactions. In: Følstad A, Araujo T, Papadopoulos
S, et al. (eds) Chatbot research and design. Cham: Springer
International Publishing, 2020, pp. 34-48.
64. Waycott J, Davis H, Warr D, et al. Co-constructing meaning
and negotiating participation: ethical tensions when giving
'voice' through digital storytelling. Interact Comput 2016;
29(2).
65. Leins K. AI for better or for worse, or AI at all? Future
pdf/Articial-Intelligence/Kobi-Leins.pdf (2019).
66. D'Alfonso S. Patients as 'domain experts' in artificial
intelligence mental health research, https://www.nationalelfservice.
67. Duggal R, Brindle I and Bagenal J. Digital healthcare: regulat-
ing the revolution. Br Med J 2018: k6.
68. Midkiff DM and Joseph Wyatt W. Ethical issues in the provi-
sion of online mental health services (etherapy). J Technol
Hum Serv 2008; 26: 310-332.
69. Martinez-Martin N and Kreitmair K. Ethical issues for
direct-to-consumer digital psychotherapy apps: addressing
accountability, data protection, and consent. JMIR Ment
Health 2018; 5: e32.
70. Commonwealth of Australia. Privacy Act 1988, https://www. (1988, accessed 15
December 2022).
71. European Union. General data protection regulation (GDPR),
data-protection-regulation-gdpr.html (2016, accessed 16
December 2022).
72. Magalhães JC and Couldry N. Giving by taking away: big
tech, data colonialism, and the reconfiguration of social
good. International Journal of Communication 2021; 15: 20.
73. Guta A, Voronka J and Gagnon M. Resisting the digital medi-
cine panopticon: toward a bioethics of the oppressed. Am J
Bioeth 2018; 18: 6264.
... Future work can be expanded to include healthcare professionals and their roles. Additionally, it is worth acknowledging that prior research identified a range of ethical implications, including privacy, security, and trust [11,16]. While these aspects were not within the scope of this study, future work should delve deeper into these ethical dimensions. ...
Conference Paper
Full-text available
Type 2 diabetes (T2D) has emerged as a significant catalyst for various health conditions. The availability and cost of health professionals pose challenges, limiting access to personalised lifestyle support. To address this issue, utilising Conversational Agents (CAs) presents an opportunity to improve scalability and adoption by improving efficiency and engagement. Thus, this study focuses on the potential impact and response to T2D. This paper aims to develop a framework for designing CA functions that support individuals at risk of T2D to be more active and raise their awareness. The study involves a mixed methods approach, including a survey conducted among 30 participants in Sydney and Jeddah, followed by semi-structured interviews conducted with 10 participants. While descriptive statistics were used to analyse the survey, the interviews were analysed using thematic analysis. Drawing upon the interviews and relevant literature, this study proposes developing a preliminary framework to help design CAs that support individuals with prediabetes to adopt a more active lifestyle.
... Moreover, it is important to thoroughly contemplate the ethical ramifications associated with the implementation of NLP within the realm of mental health. The incorporation of chatbots into therapeutic contexts gives rise to inquiries concerning the ethical parameters of professional conduct, the extent of human engagement necessary, and the possible hazards associated with excessive dependence on automated systems (41). Achieving an optimal equilibrium between human engagement and automation is of utmost importance in safeguarding the welfare of those seeking assistance for their mental health. ...
Full-text available
The domains of mental health and artificial intelligence (AI) are undergoing rapid advancements, exhibiting the capacity to mutually influence one another in significant ways. The increasing prevalence of mental health illnesses has prompted the exploration of potential remedies in the field of AI, which show promise in the areas of early detection, prevention, and therapy. Sophisticated machine learning algorithms possess the capability to evaluate extensive volumes of data, including social media posts and voice patterns, with the objective of detecting patterns and symptoms associated with mental illness. This facilitates the implementation of more focused interventions and individualized treatment strategies. Furthermore, chatbots utilizing AI have the capability to deliver round-the-clock assistance to those undergoing acute distress or grant them access to therapy in cases where waiting lists are extensive. Nevertheless, it is of utmost importance to guarantee the incorporation of ethical issues throughout the use of AI in the field of mental healthcare. In order to achieve successful integration, it is imperative to address many concerns, including but not limited to privacy, bias, and accurate diagnosis. However, the convergence of mental health and AI offers a distinct prospect to transform our approach to mental disease and improve the availability of care for countless individuals globally.
... Yet, bias can be mitigated through thorough analysis of input data and modifying training algorithms. Detecting and eliminating bias via data examination allows chatbots to provide balanced and unbiased responses (Bradley & Alhajjar, n.d.;Coghlan et al., 2023). Diversifying training data, involving stakeholders in the design process, and using explainable AI techniques can also help mitigate bias. ...
Full-text available
This study aimed to investigate the differences between responses generated by ChatGPT and those produced by humans in terms of authenticity, professionalism, and practicality. It involved 140 participants of the age group ranging from 18 to 43 (101 females, 37 males, and 2 preferring not to disclose their gender). The participants were presented with the 10 solution statements against the 10 problem statements, generated by human participants and ChatGPT3.5 (gpt-3.5-turbo) and asked to rate the responses on a 5-point Likert scale, with higher scores indicating higher levels of authenticity, professionalism, and practicality. Paired sample t-test was conducted to compare the scores of the ChatGPT-generated responses and the human-generated responses. The results of the study indicated that there was significant difference between the two types of responses in the given dimensions. These findings suggest that ChatGPT-generated responses can be considered a reliable alternative to human-generated responses in certain applications. Additionally, the use of ChatGPT-generated responses can reduce the response time and workload of human responders, as well as the associated costs.
... Furthermore, an ethical concern goes along with these bots because of the intrinsic generative AI component. The component can generate false information or inference upon personally identifiable information, thus sacrificing user privacy (Coghlan et al., 2023). Transparency can be achieved by either augmenting or incorporating external knowledge. ...
Full-text available
Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One of the gaps is their inability to explain their decisions to patients and MHPs, making conversations less trustworthy. Additionally, VMHAs can be vulnerable in providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs on the grounds of user-level explainability and safety, a set of desired properties for the broader adoption of VMHAs. This includes the examination of ChatGPT, a conversation agent developed on AI-driven models: GPT3.5 and GPT-4, that has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of AI, natural language processing, and the mental health professionals (MHPs) community, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
... Another important aspect of the use of AI-driven technologies, for example, chatbots in health behavior change, deals with the ethics and ethical usage surrounding its application. Explicability, a specialized term which involves transparency and accountability, deals with explaining the workings of intelligent tools as well as understanding who is ethically and legally responsible for the interactions between chatbots and humans [6]. Thus, we see that there are additional components to keep in mind when designing and implementing intelligent technologies in health behavior interventions apart from the standard ethical framework of respect for persons, beneficence, non-maleficence, bias and justice [7]. ...
Full-text available
The call for articles for the special section entitled ‘Innovations in Health Behavior Change’ is currently open and is gaining interest from editors and authors worldwide [...]
... Beyond technical limitations, it remains to be decided whether complete automation is an appropriate end goal for behavioral healthcare, due to safety, legal, philosophical, and ethical concerns (e.g., Coghlan et al., 2023). While some evidence indicates that humans can develop a therapeutic alliance with chatbots (e.g., Beatty et al., 2022), the long-term viability of such alliance building, and whether or not it produces undesirable downstream effects (e.g., altering an individual's existing relationships or social skills) remains to be seen. ...
Large language models (LLMs) such as ChatGPT and GPT-3/4, built on artificial intelligence, hold immense potential to support, augment, or even replace psychotherapy. Enthusiasm about such applications is mounting in the field as well as industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, potential applications of clinical LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Second, stages of integrating LLMs into psychotherapy (via assistive, collaborative, and fully autonomous LLM applications) are presented, analogous to the development of autonomous vehicle technology. Third, recommendations for the responsible development of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Fourth, recommendations are made for the critical evaluation of clinical LLMs, which psychologists are uniquely positioned to scope and guide. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.
The article provides material on monitoring a person's mental health using modern psychological approaches and information technologies. IT products in the field of mental health can be implemented as Telegram bots, mobile applications, desktop applications, websites, social networks, etc. An analysis was carried out of available software for collecting and analyzing data on a person's psychological state, including mood, sleep patterns, and signs of depression or other disorders. The main drawback of existing software products was found to be the lack of certified tests and of confirmation that scientifically grounded methods are used to interpret the results. The work presents its own software for monitoring a person's mental health, based on the methods of cognitive behavioral therapy. The product is implemented as a desktop application written in Python using the standard GUI library Tkinter together with the additional libraries CustomTkinter and ttkbootstrap. The main purpose of the developed software is to let users monitor their own psychological health, including tracking mood, anxiety level, emotional state, stress level, sleep quality, etc. The app also provides helpful tips and advice on maintaining mental health and reducing stress. The application's functionality allows a person to conduct preliminary self-screening for depression, SAD, OCD, PTSD, anxiety, and cognitive disorders, as well as to use SMER tables and a notebook for further self-analysis. The program was built using an object-oriented approach, with algorithms for optimal button generation and frame-based survey generation. To test the developed software and the mathematical model that underlies it, experimental studies were carried out in real time.
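The mood-tracking core of an application like the one described could be sketched in Python roughly as follows. This is an illustrative assumption, not the article's actual code: the names `MoodEntry`, `MoodJournal`, and `flag_low_mood` are invented here, and in the described app such logic would sit behind a Tkinter/CustomTkinter interface rather than run standalone.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean


@dataclass
class MoodEntry:
    """One daily self-report (names and scales are illustrative assumptions)."""
    day: date
    mood: int          # 1 (very low) .. 10 (very high)
    anxiety: int       # 1 (calm) .. 10 (severe)
    sleep_hours: float


@dataclass
class MoodJournal:
    """Collects entries and offers simple summaries for the GUI layer."""
    entries: list = field(default_factory=list)

    def log(self, entry: MoodEntry) -> None:
        self.entries.append(entry)

    def average_mood(self) -> float:
        return mean(e.mood for e in self.entries)

    def flag_low_mood(self, threshold: int = 3, run: int = 3) -> bool:
        # A simple heuristic: if the last `run` entries are all at or
        # below `threshold`, prompt the user toward further self-screening.
        recent = self.entries[-run:]
        return len(recent) == run and all(e.mood <= threshold for e in recent)
```

A GUI front end would call `log` when the user submits a daily survey and display `average_mood` and the `flag_low_mood` prompt; the heuristic shown is a placeholder, not a validated clinical rule.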
In recent times, technology has increasingly become a central force in shaping the landscape of mental health care. The integration of various technological advancements, such as teletherapy, virtual care platforms, mental health apps, and wearable devices, holds great promise in improving access to mental health services and enhancing overall care. Technology’s impact on mental health care is multi-faceted. Teletherapy and virtual care have brought about a revolution in service delivery, eliminating geographical barriers and offering individuals convenient and flexible access to therapy. Mobile mental health apps empower users to monitor their emotional wellbeing, practice mindfulness, and access self-help resources on the move. Furthermore, wearable devices equipped with biometric data can provide valuable insights into stress levels and sleep patterns, potentially serving as valuable indicators of mental health status. However, integrating technology into mental health care comes with several challenges and ethical considerations. Bridging the digital divide is a concern, as not everyone has equal access to technology or the necessary digital literacy. Ensuring privacy and data security is crucial to safeguard sensitive client information. The rapid proliferation of mental health apps calls for careful assessment and regulation to promote evidence-based practices and ensure the delivery of quality interventions. Looking ahead, it is vital to consider future implications and adopt relevant recommendations to fully harness technology’s potential in mental health care.
Introduction Increasing demand for mental health services and the expanding capabilities of artificial intelligence (AI) in recent years has driven the development of digital mental health interventions (DMHIs). To date, AI-based chatbots have been integrated into DMHIs to support diagnostics and screening, symptom management and behavior change, and content delivery. Areas covered We summarize the current landscape of DMHIs, with a focus on AI-based chatbots. Happify Health’s AI chatbot, Anna, serves as a case study for discussion of potential challenges and how these might be addressed, and demonstrates the promise of chatbots as effective, usable, and adoptable within DMHIs. Finally, we discuss ways in which future research can advance the field, addressing topics including perceptions of AI, the impact of individual differences, and implications for privacy and ethics. Expert opinion Our discussion concludes with a speculative viewpoint on the future of AI in DMHIs, including the use of chatbots, the evolution of AI, dynamic mental health systems, hyper-personalization, and human-like intervention delivery.
Many experts have emphasised that chatbots are not sufficiently mature to be able to technically diagnose patient conditions or replace the judgements of health professionals. The COVID-19 pandemic, however, has significantly increased the utilisation of health-oriented chatbots, for instance, as a conversational interface to answer questions, recommend care options, check symptoms and complete tasks such as booking appointments. In this paper, we take a proactive approach and consider how the emergence of task-oriented chatbots as partially automated consulting systems can influence clinical practices and expert–client relationships. We suggest the need for new approaches in professional ethics as the large-scale deployment of artificial intelligence may revolutionise professional decision-making and client–expert interaction in healthcare organisations. We argue that the implementation of chatbots amplifies the project of rationality and automation in clinical practice and alters traditional decision-making practices based on epistemic probability and prudence. This article contributes to the discussion on the ethical challenges posed by chatbots from the perspective of healthcare professional ethics.
The therapeutic alliance (TA), the relationship that develops between a therapist and a client/patient, is a critical factor in the outcome of psychological therapy. As mental health care is increasingly adopting digital technologies and offering therapeutic interventions that may not involve human therapists, the notion of a TA in digital mental health care requires exploration. To date, there has been some incipient work on developing measures to assess the conceptualization of a digital TA for mental health apps. However, the few measures that have been proposed have more or less been derivatives of measures from psychology used to assess the TA in traditional face-to-face therapy. This conceptual paper explores one such instrument that has been proposed in the literature, the Mobile Agnew Relationship Measure, and examines it through a human-computer interaction (HCI) lens. Through this process, we show how theories from HCI can play a role in shaping or generating a more suitable, purpose-built measure of the digital therapeutic alliance (DTA), and we contribute suggestions on how HCI methods and knowledge can be used to foster the DTA in mental health apps.
Mind design is the endeavor to understand mind (thinking, intellect) in terms of its design (how it is built, how it works). Unlike traditional empirical psychology, it is more oriented toward the "how" than the "what." An experiment in mind design is more likely to be an attempt to build something and make it work—as in artificial intelligence—than to observe or analyze what already exists. Mind design is psychology by reverse engineering. When Mind Design was first published in 1981, it became a classic in the then-nascent fields of cognitive science and AI. This second edition retains four landmark essays from the first, adding to them one earlier milestone (Turing's "Computing Machinery and Intelligence") and eleven more recent articles about connectionism, dynamical systems, and symbolic versus nonsymbolic models. The contributors are divided about evenly between philosophers and scientists. Yet all are "philosophical" in that they address fundamental issues and concepts; and all are "scientific" in that they are technically sophisticated and concerned with concrete empirical research. Contributors Rodney A. Brooks, Paul M. Churchland, Andy Clark, Daniel C. Dennett, Hubert L. Dreyfus, Jerry A. Fodor, Joseph Garon, John Haugeland, Marvin Minsky, Allen Newell, Zenon W. Pylyshyn, William Ramsey, Jay F. Rosenberg, David E. Rumelhart, John R. Searle, Herbert A. Simon, Paul Smolensky, Stephen Stich, A.M. Turing, Timothy van Gelder
Artificial intelligence (AI) is already having a major impact on society. In this chapter, the authors report the results of a fine‐grained analysis of several of the highest‐profile sets of ethical principles for AI. They assess whether these principles are convergent, with a set of agreed‐upon principles, or divergent, with significant disagreement over what constitutes ‘ethical AI’. The authors then identify an overarching framework consisting of five core principles for ethical AI. In the ensuing discussion, they note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, standards, and best practices for ethical AI in a wide range of contexts. The development and use of AI hold the potential for both positive and negative impact on society, to alleviate or to amplify existing inequalities, to cure old problems, or to cause new ones.
People with mental health problems often struggle to obtain suitable treatment, not only because of the types of intervention available but also because of the conditions a proper treatment requires, chiefly cost, location, and frequency. The use of AI chatbots with this population is a new trend and can narrow the gap in access to mental health care by making support available in a cost-effective way. Although chatbots are not a substitute for formal treatment, they are sometimes used in tandem with other treatments, with positive results. This chapter provides a review of the subject, presenting several chatbots for mental health problems and addressing concerns such as privacy, data security, AI limitations, and ethical implications. Future research directions are also discussed.
Big Tech companies have recently led and financed projects that claim to use datafication for the “social good”. This article explores what kind of social good it is that this sort of datafication engenders. Through the analysis of corporate public communications and patent applications, it finds that these initiatives hinge on the reconfiguration of social good as datafied, probabilistic, and profitable. These features, the article argues, are better understood within the framework of data colonialism. Rethinking “doing good” as a facet of data colonialism illuminates the inherent harm to freedom these projects produce and why, in order to “give”, Big Tech must often take away.
The adoption of data-driven organizational management - which includes big data, machine learning, and artificial intelligence (AI) techniques - is growing rapidly across all sectors of the knowledge economy. There is little doubt that the collection, dissemination, analysis, and use of data in government policy formation, strategic planning, decision execution, and the daily performance of duties can improve the functioning of government and the performance of public services. This is as true for law enforcement as any other government service.