Theory & Psychology, 2019. doi: 10.1177/0959354319853045
Artificial intelligence and counseling: Four levels of implementation
Russell Fulmer
Northwestern University
Abstract
Artificial Intelligence (AI) is increasingly prominent in public, academic, and clinical domains. A widening research base is extending AI’s reach, including into the counseling profession. This article defines AI and its relevant subfields, provides a brief history of psychological AI, and proposes four levels of implementation in counseling, corresponding to time orientation and influence. The implications of AI apply to counseling ethics, existentialism, clinical practice, and public policy.
Keywords
artificial intelligence, artificial intelligence and ethics, artificial intelligence and existentialism,
chatbots, psychological artificial intelligence
Artificial Intelligence (AI) is expected to play an influential role in the mental health care of the future (Luxton, 2014, 2016). Many theorists and researchers predict that AI will shape the existential future of life on earth (Barrat, 2015; Bostrom, 2014; Kurzweil, 2014; Müller, 2016), with special implications for jobs and careers (Ross, 2017). The late physicist Stephen Hawking discussed AI potentially bringing about the end of humanity, stressing the importance of enacting safety measures, including raising awareness and deepening understanding of the risks, challenges, and short- and long-term impacts of AI development (Hawking, Russell, Tegmark, & Wilczek, 2014). In 2016, some of the world’s largest companies formed an alliance to help ensure that AI develops in a beneficent manner. Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft are founding partners in the “Partnership on Artificial Intelligence To Benefit People and Society,” a collaboration that promotes interdisciplinary inclusiveness in AI and its societal impact
(Gaggioli, 2017a). This partnership aims to bring together activists and experts in other
fields including psychology to discuss AI’s current and future role and impact on society.
Efforts are thus being made to approach AI as a societal shift with multidisciplinary
implications. Specifically, the developers of AI are prudently seeking input from mental
health professionals, as the psychological sciences have played a central role in AI devel-
opment since its formal inception (Frankish & Ramsey, 2014).
Counselors have forecast for some time that AI would infiltrate their profession (Illovsky, 1994; Sharf, 1985). But only within the past decade have improvements in computer
processing power and natural language processing ability—along with advancements in
artificial neural networks—brought about a new wave of AI ability (Hirschberg &
Manning, 2015; Kurzweil, 2006; Russell & Norvig, 2003). These advancements have
positioned AI in the spotlight. The Artificial Intelligence Index (2017) Annual Report
states, “Artificial Intelligence has leapt to the forefront of global discourse, garnering
increased attention from practitioners, industry leaders, policymakers, and the general
public” (p. 5). AI research is advancing extremely fast. According to the AI Index Annual
Report, “even experts have a hard time understanding and tracking progress across the
field” (p. 5). AI applications already assist health-care professionals with clinical train-
ing, treatment, assessment, and clinical decision-making (Hamet & Tremblay, 2017;
Luxton, 2014). AI has become a vast, interdisciplinary field that often intersects with
counseling. One purpose of this article is to review AI progress in domains relevant to
clinical counseling.
What AI actually is stands as a deceptively complex question largely because defining
intelligence alone is challenging (Gardner, 2017; Monnier, 2015). Before explaining cur-
rent implementations and future implications for the counseling profession, I will define
and explain relevant terms and concepts associated with AI. Next, I will review the past,
present, and future of AI in relation to counseling. Finally, I will propose four metalevels of AI implementation in the counseling profession: one historical, one current, one possible in the near future, and one conceivable in the long term. Each successive level reflects greater relevance, capability, and influence of AI on the counseling profession.
Artificial intelligence: Description and explanation
Understanding how AI has affected and will affect the counseling profession begins with establishing reliable definitions. Breaking the term down into its component parts means
defining the terms “artificial” and “intelligence.” Artificial implies the synthetic or
human-designed rather than the naturally derived. The “artificial” of AI involves mechan-
ics, electronics, or computers. The concept of intelligence—specifically defining and
measuring it as a variable, combined with its connotations—has been long debated in the
literature (Cherniss, Extein, Goleman, & Weissberg, 2006; Davies, 2002; Fagan, 2000;
Schroeder, 2017; Sternberg, 1985). The confusion also exists within the AI community
(Legg & Hutter, 2007).
Intelligence is thought to extend beyond a strict cognitive capacity into the emotional
realm (Goleman, 2005) and is theorized to have multiple extensions (Gardner, 2006). A
useful synthesis of the myriad conceptions of intelligence is offered by artificial
intelligence researcher Max Tegmark (2017), who states that intelligence is the “ability
to accomplish complex goals” (p. 39). Accordingly, I offer the following definition of AI: the ability of non-biological mechanisms to accomplish goals. The qualifier “complex” is dropped from Tegmark’s definition because intelligence is not dichotomous; both simple and complex goals can be attained. Rudimentary and advanced intelligence occupy different points on the same continuum, differing quantitatively rather than categorically. AI is akin to an operating system, much like the human brain. Indeed, neuroscience has informed a substantial portion of prevailing
AI research (Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Lecun, Bengio, &
Hinton, 2015). The embodiment of AI can take various forms, from a computer screen
avatar to a robot.
Machine learning and algorithms
Artificial intelligence brings big-picture, philosophical ramifications, raising ontological
and epistemological questions (Copeland, 1998). Yet, AI begins within the purview of
the diminutive and precise, requiring mathematics and formal logic as demonstrated by
the AI subfield of machine learning. For an AI to progress to the level of a functioning
counselor, it must have the capacity to learn. Machines that learn are paradigmatically
dissimilar from their traditional predecessors. A major point of divergence is agency, or the locus of control. A human who builds a standard machine retains control over
the machine. Accidents occur with machinery—an automobile accident, for example—
but even then the accident is not caused by the vehicle’s agency. Human error in naviga-
tion, human error in construction, or inclement weather may be culprits, but accidents do
not occur because the automobile makes a wrong decision.
Conversely, a machine that learns through its own experiences may possess skills and
abilities unknown to its human originators. One example is AlphaGo, a computer pro-
gram designed to play the board game Go (Gibney, 2016). AlphaGo learned by playing
thousands of games against human competitors and fellow computers, improving to the
point that, in 2016, the program beat world champion Lee Sedol four games to one.
During the match with Sedol, the developers of AlphaGo did not know which move it would play next; their best guesses would likely have been wrong, for otherwise one of the programmers could have beaten the world champion himself. The victory of AlphaGo is considered a milestone in the history of machine learning because Go demands not only rote memorization but strategy and intuition. AlphaGo showed autonomy, acting independently of
human input (albeit in a narrow fashion). Nonetheless, this example of machine learning
demonstrates that “smart” machines can act in unforeseen ways and outperform humans
in tactical proficiency.
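To make the learning principle concrete, consider a deliberately small sketch. This is not AlphaGo’s actual method (which combines deep neural networks with Monte Carlo tree search); it is a minimal epsilon-greedy agent, with invented win rates, illustrating how a machine can discover through repeated play which move works best without ever being told:

```python
import random

# A toy illustration of learning from experience: the agent is never told the
# value of each move; it estimates values from repeated play. The win rates
# below are invented and hidden from the agent.
TRUE_WIN_RATES = {"corner": 0.40, "edge": 0.30, "center": 0.60}

def play(move: str) -> int:
    """Simulate one game against a fixed opponent; 1 = win, 0 = loss."""
    return 1 if random.random() < TRUE_WIN_RATES[move] else 0

def learn(episodes: int = 10_000, epsilon: float = 0.1) -> dict:
    wins = {m: 0 for m in TRUE_WIN_RATES}
    plays = {m: 0 for m in TRUE_WIN_RATES}
    for _ in range(episodes):
        if random.random() < epsilon:  # occasionally explore at random
            move = random.choice(list(TRUE_WIN_RATES))
        else:  # otherwise exploit the best estimate so far
            move = max(plays, key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)
        plays[move] += 1
        wins[move] += play(move)
    return {m: round(wins[m] / plays[m], 2) for m in TRUE_WIN_RATES if plays[m]}

print(learn())  # the agent converges on "center" without being told to
```

The agent’s final policy is a product of its own play history rather than of any rule its programmer wrote, which is the sense in which such systems can act in ways their originators did not foresee.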
Considering that machine learning may only be in its infancy in terms of potential
(Arel, Rose, & Karnowski, 2010), it raises numerous questions for the counseling profes-
sion. For example, if counselors-in-training can learn, improve upon their mistakes, and
eventually cross the threshold to independent practice—and an AI shows the same skill-
set but learns much more quickly—how might autonomous AIs influence the field? Like
Go, counseling too involves intuition and strategy. Would an advanced AI, functioning
as a counselor, make moves questionable to even experienced counselors, but that pay
dividends in the end?
If AI one day advances to the level of competent counseling practice, it will be through algorithms, the underlying mechanisms that drive machine learning. What culmi-
nates in a computer program besting a world champion Go player or, potentially, an AI
employing a counseling technique, begins with a set of logic-driven instructions detail-
ing how a task should be performed. The notion of an algorithm does not lend itself well
to a rigorous definition (Gurevich, 2012); however, Pedro Domingos (2015) provides a
constitutive explanation of an algorithm as “a sequence of instructions telling a computer
what to do” (p. 1). AI is a broad area, machine learning is a subfield, and algorithms are
specific operations—like written communications that can both therapeutically inform
and give conversational voice to the AI.
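Domingos’s definition can be made concrete. The sketch below is a hypothetical illustration, not a clinical tool: it encodes the widely used PHQ-9 severity bands as a literal sequence of instructions telling a computer what to do:

```python
# An algorithm in Domingos's (2015) sense: a fixed sequence of instructions.
# The severity bands follow standard PHQ-9 scoring; using them this way is
# purely illustrative, not a substitute for clinical judgment.

def phq9_severity(total_score: int) -> str:
    if not 0 <= total_score <= 27:
        raise ValueError("PHQ-9 totals range from 0 to 27")
    if total_score <= 4:
        return "minimal"
    if total_score <= 9:
        return "mild"
    if total_score <= 14:
        return "moderate"
    if total_score <= 19:
        return "moderately severe"
    return "severe"

print(phq9_severity(12))  # -> "moderate"
```

Machine learning differs from such a hand-written rule in that the programmer supplies instructions for improving the mapping rather than the mapping itself.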
The road to counseling
The term “artificial intelligence” was devised by mathematics professor John McCarthy,
who helped to organize a summer conference at Dartmouth College in 1956 about
whether machines could be made to think (Copeland, 1998). McCarthy’s proposal laid
out the basic premise of AI research: that if a feature of intelligence, such as learning,
could be broken down into its component parts and operationally defined with precision,
then a machine could be made to simulate it (McCarthy, Minsky, Rochester, & Shannon,
2006). The conference attendees set out to discover how to make machines use language
(see McCarthy et al., 2006, for a complete discussion).
That conference is remembered as one of the modern era’s AI milestones and, in a sense, as the first meeting of AI and counseling. In many respects, counselors are in the business of communication and depend on its various forms: oral, written, and non-verbal communication, as well as art and music therapy. In 1956, those AI researchers set out to learn how machines could be made to communicate. Ten years after the Dartmouth conference, the first chatterbot capable of communicating in a way reminiscent of a human counselor appeared. Also known as chatbots or virtual agents, chatterbots are computer programs designed to simulate human conversation (Deryugina, 2010). This debut bot, named Eliza, was finalized in 1966 (Weizenbaum, 1966). Designed to replicate a Rogerian therapist, Eliza was known for answering questions with questions (Mauldin, 1994). In their output, machines capable of communication give the appearance of cognitive ability. At
present, chatbots do not literally think, but rather give the illusion of intelligent conversa-
tion by imitating it (Abdul-Kader & Woods, 2015; Mauldin, 1994; Warwick & Shah,
2014).
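The core of Weizenbaum’s technique is simple enough to sketch in a few lines. The rules and reflections below are invented stand-ins for his original script, but they show how pattern matching plus pronoun “reflection” yields the question-for-question style described above:

```python
import re

# A minimal ELIZA-style exchange: pattern matching plus pronoun "reflection"
# creates the illusion of understanding without any model of meaning.
# A sketch of Weizenbaum's (1966) technique, not his original script.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # Rogerian-style default when nothing matches

print(respond("I feel anxious about my exams"))
# -> "Why do you feel anxious about your exams?"
```

No representation of meaning is involved anywhere, which is precisely why such output is an illusion of intelligent conversation rather than thinking.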
While the metaphysics of AI may be of only indirect interest to counselors, a question proposed by AI founding father Alan Turing is directly relevant. Turing (1950) posed a scientific research question: How well can a machine imitate human conversation? The question, paradoxically, brought the debate into both the empirical and the subjective realms, for the Turing test pits a computer system against human subjective experience. Known as the Imitation Game, the test asks human participants to interact through text with an unknown entity (Saygin, Cicekli, & Akman, 2000). The entity could be a computer program or a human being typing responses. If the participant guesses that he or she is conversing with a computer, the computer program fails. If the computer imitates human conversation well enough to convince the participant, the program passes. In a
field heavily invested in human conversation, the Turing test may prove pivotal when
considering counseling implementation, ethics, working conditions, and accessibility.
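As a measurement protocol, the imitation game is easy to sketch. The harness below stubs out the conversation and invents the judges’ accuracy rates, so the numbers are purely illustrative; the point is the structure: random routing to machine or human, a guess, and a pass rate for the machine:

```python
import random

# A sketch of the Turing test as a measurement protocol (Saygin et al., 2000).
# The chat itself is stubbed out; guess_partner is a placeholder for real
# human judgments, so the rates below are purely illustrative.

def guess_partner(is_machine: bool) -> str:
    if is_machine:
        return "human" if random.random() < 0.30 else "machine"  # assumed fool rate
    return "human" if random.random() < 0.85 else "machine"      # assumed accuracy

def run_trials(n: int = 1000) -> float:
    fooled = 0
    machine_trials = 0
    for _ in range(n):
        is_machine = random.random() < 0.5  # random routing to bot or human
        guess = guess_partner(is_machine)
        if is_machine:
            machine_trials += 1
            fooled += guess == "human"      # the machine "passes" this trial
    return fooled / machine_trials if machine_trials else 0.0

print(f"Machine pass rate: {run_trials():.2%}")
```

A counseling analogue would replace guess_partner with real client judgments collected after text-based sessions.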
Perception is reality to many people. Counselors would be well served to monitor
public perception about psychological artificial intelligence. In doing so, counselors
could decide that using psychological AI as a supplement to traditional counseling may
benefit clients and the profession alike. To a small degree, chatbots like Eliza have mimicked counseling skills for some time; counselors themselves may disagree that this constitutes counseling. However, if or when the public comes to view psychological AI as roughly synonymous with counseling, counselors would be wise to pay heed.
Four levels of implementation in counseling
The American Counseling Association (ACA) defines counseling as “a professional
relationship that empowers diverse individuals, families, and groups to accomplish men-
tal health, wellness, education, and career goals” (Kaplan, Tarvydas, & Gladding, 2014,
p. 366). The definition can be broken down into three pillars of counseling: (a) forming
a professional relationship, (b) empowering, and (c) accomplishing goals.
The act of counseling requires the fulfillment of all three pillars. However, we might
say that if one or two of the requirements are met by an AI, then that AI is getting closer
to functioning as, if not being, a counselor. For example, an AI capable of empowering
an individual towards accomplishing a wellness goal is partially functioning as a coun-
selor because two of the three requirements are met. If AI takes on a more prominent role
in counseling, we should expect to see the functions of a counselor met—or potentially
exceeded—by artificial intelligence.
Based on the premise that AI has been and will continue to be applicable to counseling,
I describe four levels of implementation: historical, contemporary, near future, and long-
term. The levels are intended to help counselors navigate an AI-infused reality by relating each level to time orientation and to influence on the field, and by comparing each against the ACA-sanctioned definition of counseling. Where the first level, historical, shows that AI’s past
involvement with counseling was minimal, the final level has yet to happen but is marked
by AI showing sophisticated and highly influential involvement in the field.
Level 1: Historical
Historical AI implementations in counseling did not establish a professional relationship
and likely neither empowered nor helped people accomplish their goals to any signifi-
cant degree. Traditionally, counselors have made little use of artificial intelligence.
Connections drawn between the two fields are indistinct and indirect. First-level interac-
tion involved chatbots showcasing rudimentary applications of natural language process-
ing (NLP), a field of AI concerned with understanding and modeling human language
(Tanana, Hallgren, Imel, Atkins, & Srikumar, 2016). The field of NLP has advanced
from its 1960s inception in that now complex models can be applied via powerful com-
puter-generated statistical processors to assess statistical probabilities of sequences of
words, inflection, and semantics in large samples of natural language (Tanana et al.,
2016). These progressions have led to AI-assisted programs designed for therapeutic use,
in which AIs have been programmed to simulate mental health patients, for example.
Though imperfect, these programs show some therapeutic efficacy and warrant further research (D’Alfonso et al., 2017; Luxton, 2014).
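The statistical core of the NLP advances Tanana et al. (2016) describe can be illustrated at its simplest: estimating the probability of each next word from counts over a corpus. The three-utterance corpus below is invented and microscopic; real systems learn from large samples of natural language with far richer models:

```python
from collections import Counter, defaultdict

# The simplest statistical language model: estimate P(next word | current word)
# by counting bigrams in a corpus. Modern clinical NLP uses vastly larger
# corpora and neural models; this tiny corpus is invented for illustration.

corpus = [
    "i feel sad today",
    "i feel hopeful today",
    "i feel sad and tired",
]

counts = defaultdict(Counter)
for utterance in corpus:
    words = utterance.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

def next_word_probs(word: str) -> dict:
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

print(next_word_probs("feel"))  # -> {'sad': 0.666..., 'hopeful': 0.333...}
```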
Level 2: Contemporary
Modern AI implementations in counseling do not establish a professional relationship and
empower to an unknown degree, but likely help clients accomplish their goals to some
degree. Level two is marked by AI-assisted implementations in counseling backed by
research. Contemporary implementations take two major forms. The first is through text-
based bots like Woebot, a text-based agent that employs Cognitive Behavioral Therapy
(CBT) by conveying CBT self-help techniques in conversation-like interactions with users.
Woebot has been shown to alleviate symptoms of depression and anxiety in young adults
(Fitzpatrick, Darcy, & Vierhile, 2017). Another example is Tess, a psychological AI with an integrative theoretical orientation that includes conversational, informational, and CBT-like approaches. Research suggests that Tess can reduce depression and anxiety in college
students by providing interventions applicable to real life through AI-generated conversations
(Fulmer, Joerin, Gentile, Lakerink, & Rauws, 2018). The second form is through virtual real-
ity. Ellie, termed a virtual human interviewer, combines virtual reality with affective comput-
ing (Gaggioli, 2017b). Appearing on a screen as a virtual human, Ellie is capable of analyzing
a client’s verbal responses, facial expressions, and vocal intonations (Darcy, Louie, & Roberts,
2016). In many respects, Ellie represents the higher end of today’s therapeutic AI applica-
tions. Noteworthy are Ellie’s abilities in assessment, as her capacity to identify distress indi-
cators may prove beneficial in the diagnosis and treatment of Posttraumatic Stress Disorder
(PTSD), in addition to depression and anxiety (DeVault et al., 2014).
Today’s AI implementations show the utility of a wide range of counseling theories,
with CBT being most prominent. There is movement beyond strictly text-based com-
munication into visual and auditory domains as well as AI-based assessments that may
lead to greater reliability in diagnosis (DeVault et al., 2014; Hahn, Nierenberg, &
Whitfield-Gabrieli, 2016). Research is leading to improvements in data sensors, NLP,
and general machine learning by applying more complex models when computing com-
municative and behavioral input and output, and continuing to elucidate the processes
underlying human sensory and perception systems as well as learning paradigms so that
they may be implemented in computers. Coupled with research attesting to therapeutic-
AI efficacy, AI may play a greater role in the counseling of the future. Levels three and
four represent how that future may come to fruition.
Level 3: The medium to distant future, i.e., the dawn of artificial general
intelligence
Level three is characterized by the onset of Artificial General Intelligence (AGI). AIs
at this level may possess the expertise necessary to form professional relationships
with clients. Additionally, an AGI would have the capability of empowering and help-
ing clients accomplish their goals. Modern AI is known as having narrow intelligence
because it is designed to accomplish singular goals, like providing psychoeducation. In
contrast, an AGI would be versatile, able to reach many goals and complete tasks in a
way reminiscent of, or superior to, a human being (Yampolskiy & Fox, 2012). AGI has
not been developed yet and experts differ on their predictions of when it will happen,
with some suggesting we are a few decades away while others predict a century or
longer (Tegmark, 2017). Consequently, level two may encompass an extended period.
There is a stark difference between second and third level AI implementations to
counseling. Computers typically learn much more quickly than humans, and an AGI built for the purpose of counseling would likely learn the art and science of the profession in its totality, and swiftly. With a high-level skillset and the capability of seeing
a vast range of clientele, “AGI Counselors” would incite a host of ethical, legal, and
philosophical questions. A prominent question will be whether the AGI Counselor is indeed
establishing a professional relationship, with all the responsibilities and protections that
implies. To practicing counselors, this may sound implausible. Nevertheless, there is
already copious discussion in the literature about the moral rights of conscious robots,
including what constitutes consciousness and the moral responsibilities tied to it, and
whether AIs can be developed to represent evaluative diversity (Gerdes, 2016; Lin,
Abney, & Bekey, 2014; MacDorman & Kahn, 2007; Malle, 2015; Santos-Lang, 2015;
Tavani, 2018; Wallach & Allen, 2010).
There may be tension between a body of research suggesting that AGIs are effective at counseling (sometimes more so than human counselors) and practitioners who fear a takeover and job loss from the AGI. The fear of job loss from automation and, eventually, AI is growing in many fields (Kaplan, 2015; Ross, 2017). It is conceivable that the same fear would exist among counselors, who may feel that their AGI counterparts have attained communicative and empathic skills sufficient to replace them completely.
Level three implementations of AI in counseling will constitute a fundamental change to
the profession. For the first time, counselors may be more than human.
Level 4: The age of superintelligence
Level four is characterized by “superintelligence.” Such an AI would easily meet all
three counseling criteria—relationship, empowerment, and goal accomplishment—
along with other, possibly more helpful and effective criteria not yet established by
humans. The idea of a superintelligence was proposed by philosopher Nick Bostrom
(2014) and refers to a high-level AI that far surpasses human-level intelligence.
Superintelligence represents the time when AGI learns to the point of accomplishing
goals of a caliber impossible for human beings. The proficiency of such an AI is
unfathomable at this point. Some suggest the onset of high-level intelligence will usher
in the next stage of human evolution (Reese, 2018), others fear its consequences for
humanity (Bostrom, 2014), while others believe these fears to be unfounded (Agar,
2016).
The age of superintelligence remains conjecture. Nonetheless, Müller and Bostrom (2016), working with the Future of Humanity Institute at Oxford University, surveyed theorists and researchers doing technical work on AI and found:
The median estimate of respondents was for a one in two chance that high-level machine
intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075.
Experts expect that systems will move on to superintelligence in less than 30 years thereafter.
They estimate the chance is about one in three that this development turns out to be “bad” or
“extremely bad” for humanity. (p. 555)
If or when such developments occur, the field of counseling—and indeed civilization—
will be transformed.
Summary
Each implementation level sees AI growing more into the fabric of counseling (see
Table 1). The past saw nominal AI implementation to the counseling field, but the
present has seen an AI resurgence. There are strong indications of more AI research in the future as the European Commission, the United States, and China devote billions of dollars to funding such endeavors (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2018; Kelly,
2018; Larson, 2018). Whether the research surge brings about levels three and four
remains to be seen.

Table 1. Impact of AI level implementation on pillars of counseling process.

Pillar of counseling | Level 1: Historical | Level 2: Contemporary | Level 3: Artificial General Intelligence | Level 4: Superintelligence
Professional relationship | No | No | Central ethical question | Yes
Empowers | Likely no | Unknown | Yes | Yes
Helps accomplish goals | Likely no | Likely yes | Yes | Yes
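Read programmatically, Table 1 is a small lookup structure. The sketch below encodes it as data; the helper function is a hypothetical convenience for querying the taxonomy, not part of the article’s argument:

```python
# Table 1 rendered as a lookup structure. Values mirror the table's hedged
# wording ("likely no", "unknown") rather than forcing booleans; the helper
# is a hypothetical convenience, not part of the article's taxonomy.

PILLARS = ("professional relationship", "empowers", "helps accomplish goals")

LEVELS = {
    1: ("no", "likely no", "likely no"),
    2: ("no", "unknown", "likely yes"),
    3: ("central ethical question", "yes", "yes"),
    4: ("yes", "yes", "yes"),
}

def pillar_profile(level: int) -> dict:
    """Return the pillar-by-pillar assessment for an implementation level."""
    return dict(zip(PILLARS, LEVELS[level]))

print(pillar_profile(2))
# {'professional relationship': 'no', 'empowers': 'unknown',
#  'helps accomplish goals': 'likely yes'}
```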
Discussion
This article intended to define and explain AI concepts, to discuss how AI pertains to
clinical counseling, and to present AI-in-counseling implementation levels from a theo-
retical viewpoint. Four metalevels of implementation were presented. The levels corre-
spond to time orientation, with level one relating to historical and level four to future
implementations affecting humanity in the long term. I acknowledge that the future is unknowable to some degree, but just as climate scientists forecast a hotter world from patterns in the data, so too are AI prognostications grounded in current research (Hulme, 2016).
Artificial intelligence and counseling already interface. In the future, the extent to
which they interweave will depend largely on AI’s rate of growth, which, if current trends continue, will fall somewhere between linear and exponential. With exponen-
tial growth, for example, an AI capable only of posing elementary questions one day
could learn advanced assessment, diagnosis, and ways to embody the ethical, cognitive,
emotional, and relational characteristics of expert therapists (Jennings, Sovereign,
Bottorff, Mussell, & Vye, 2005; Skovholt & Jennings, 2004) essentially overnight.
Exponential growth is not certain, but explosive growth is certainly plausible (Pratt,
2015; see Kurzweil, 2006, for a technical explanation of how this might occur).
The presence of AI and high technology in counseling looks set to continue, and even current-level AI implementations raise a host of practice-oriented and ethical questions: how and when is AI use appropriate or effective; to what degree can it be used in place of a human counselor; how might it affect a person seeking human connection through counseling; can data produced during AI use be stored securely; and are counselors and clients adequately trained and informed about AI practices?
At present, the counseling literature contains a paucity of articles addressing AI on a descriptive, correlational, or experimental basis. More research could inform clinical
practice if clinicians employ AI-assisted supplements, such as the psychological AI Tess,
to help their clients. Research could also inform thought-leadership if a need arises for
the ACA to address AI at a public policy level. Perhaps the most immediate need for
research is in counseling ethics.
Using Green’s (2018) outline of ethical concerns surrounding AI as a guide, research
must focus on the ways in which AI counseling services can avoid negative side effects,
overgeneralizations, and potentially harmful exploration in strategies and techniques.
Further, attention must be dedicated to AI functional transparency: ensuring that AI actions can be understood by those designing, manufacturing, implementing, and interacting with it. Another ethical concern revolves around data security and privacy
practices when implementing AI services. Finally, investigations should seek to deter-
mine the extent to which both counselors and clients need to be versed in AI technology
and implementation to ensure fairness, beneficence, and non-maleficence in practice and
counselor and client safety and wellbeing (Green, 2018).
The counseling community needs further information about the effect AI services
could have on people specifically seeking out human interactions because they feel
unheard, unseen, and unworthy of the care of others. The shift from human to human-like interactions in counseling, as in other fields, may bring about a plethora of uncharted existential questions. Coupled with the prospect of automation-induced unemployment, socioeconomic inequality, growing technological dependency, and human de-skilling, these existential questions may warrant closer attention and preparation by researchers and those who specialize in human emotion and crisis, such as counselors (Green, 2018). AI
brings power and influence that can be abused. Research helps prepare the profession to
address ethical questions when they arise.
More research is needed on psychological artificial intelligence. Considering the field’s rapid growth, there is a dearth of research on the topic, and the absence of literature on its ethical ramifications is especially noteworthy. This article fills a research gap at the theoretical
level, offering a taxonomy with the proposed levels of implementation and providing
structure for forthcoming literature. For example, the nature of a clinical ethical dilemma
will look different at level one compared to level four. Theoretical pieces carry inherent
advantages and limitations. Advantages include providing constitutive definitions to
guide future inquiry and high-level context to frame AI implementation and influence on
the field. A limitation is the lack of specificity and clinical examples found in an abstract,
categorical offering. Further, as AI develops into a vast interdisciplinary field with weekly or even daily advances, no single article can capture its actual reach and consequence. Examining AI’s impact on a diverse clientele in clinical counseling, and identifying ways to prevent bias and discrimination from creeping into AI, is a necessary but as yet unexplored focus of research. The intersection of AI and counseling is growing,
and a corresponding body of research is needed to match.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this
article.
ORCID iD
Russell Fulmer https://orcid.org/0000-0002-4582-5167
References
Abdul-Kader, S. A., & Woods, J. (2015). Survey on chatbot design techniques in speech conversa-
tion systems. International Journal of Advanced Computer Science and Applications, 6(7),
72–80.
Agar, N. (2016). Don’t worry about superintelligence. Journal of Evolution & Technology, 26(1),
73–82.
Arel, I., Rose, D. C., & Karnowski, T. P. (2010). Deep machine learning—A new frontier in arti-
ficial intelligence research [research frontier]. IEEE Computational Intelligence Magazine,
5(4), 13–18. doi: 10.1109/mci.2010.938364
Artificial Intelligence Index. (2017). 2017 Annual Report. Stanford, CA: Author.
Barrat, J. (2015). Our final invention: Artificial intelligence and the end of the human era. New
York, NY: Thomas Dunne Books.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford, UK: Oxford University
Press.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and
the “good society”: The US, EU, and UK approach. Science and Engineering Ethics, 24(2),
505–528.
Cherniss, C., Extein, M., Goleman, D., & Weissberg, R. P. (2006). Emotional intelligence: What
does the research really indicate? Educational Psychologist, 41(4), 239–245. doi: 10.1207/
s15326985ep4104_4
Copeland, B. J. (1998). Artificial intelligence: A philosophical introduction. Malden, MA:
Blackwell.
D’Alfonso, S., Santesteban-Echarri, O., Rice, S., Wadley, G., Lederman, R., Miles, C., . . . Alvarez-
Jimenez, M. (2017). Artificial intelligence-assisted online social therapy for youth mental
health. Frontiers in Psychology, 8(796). doi: 10.3389/fpsyg.2017.00796
Darcy, A. M., Louie, A. K., & Roberts, L. W. (2016). Machine learning and the profession of
medicine. JAMA, 315(6), 551–552. doi: 10.1001/jama.2015.18421
Davies, P. H. (2002). Ideas of intelligence. Harvard International Review, 24(3), 62–66.
Deryugina, O. V. (2010). Chatterbots. Scientific and Technical Information Processing, 37(2),
143–147.
DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., . . . Lucas, G. (2014, May).
Simsensei kiosk: A virtual human interviewer for healthcare decision support. In A.
Lomuscio, P. Scerri, A. Bazzan, & M. Huhns (Eds.), Proceedings of the 13th international
conference on autonomous agents and multiagent systems (AAMAS 2014) (pp. 1061–
1068). Richland, SC: International Foundation for Autonomous Agents and Multiagent
Systems.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will
remake our world. New York, NY: Basic Books.
Fagan, J. F. (2000). A theory of intelligence as processing: Implications for society. Psychology,
Public Policy, and Law, 6(1), 168–179. doi: 10.1037//1076-8971.6.1.168
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young
adults with symptoms of depression and anxiety using a fully automated conversational agent
(Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. doi: 10.2196/mental.7785
Frankish, K., & Ramsey, W. M. (2014). The Cambridge handbook of artificial intelligence.
Cambridge, UK: Cambridge University Press.
Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological arti-
ficial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized con-
trolled trial. JMIR Mental Health, 5(4). doi: 10.2196/mental.9782
Gaggioli, A. (2017a). Bringing more transparency to artificial intelligence. Cyberpsychology,
Behavior, and Social Networking, 20(1), 68.
Gaggioli, A. (2017b). Artificial intelligence: The future of cybertherapy? Cyberpsychology,
Behavior, and Social Networking, 20(6), 402–403. doi: 10.1089/cyber.2017.29075.csi
Gardner, H. E. (2006). Multiple intelligences: New horizons in theory and practice. New York,
NY: Basic Books.
Gardner, H. (2017). Taking a multiple intelligences (MI) perspective. Behavioral and Brain
Sciences, 40(e203). doi: 10.1017/S0140525X16001631
Gerdes, A. (2016). The issue of moral consideration in robot ethics. ACM SIGCAS Computers and
Society, 45(3), 274–279. doi: 10.1145/2874239.2874278
Gibney, E. (2016). Google AI algorithm masters ancient game of Go. Nature News, 529(7587),
445–446.
Goleman, D. (2005). Emotional intelligence. New York, NY: Bantam Dell.
Green, B. P. (2018). Ethical reflections on artificial intelligence. Scientia et Fides, 6(2), 9–31.
Gurevich, Y. (2012). What is an algorithm? In M. Bieliková, G. Friedrich, G. Gottlob, S.
Katzenbeisser, & G. Turán (Eds.), SOFSEM 2012: Theory and practice of computer science.
Lecture notes in computer science: Vol. 7147 (pp. 31–42). Berlin, Germany: Springer. doi:
10.1007/978-3-642-27660-6_3
Hahn, T., Nierenberg, A. A., & Whitfield-Gabrieli, S. (2016). Predictive analytics in mental health:
Applications, guidelines, challenges and perspectives. Molecular Psychiatry, 22(1), 37–43.
doi: 10.1038/mp.2016.201
Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism, 69, S36–S40.
doi: 10.1016/j.metabol.2017.01.011
Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired arti-
ficial intelligence. Neuron, 95(2), 245–258. doi: 10.1016/j.neuron.2017.06.011
Hawking, S., Russell, S., Tegmark, M., & Wilczek, F. (2014, May 1). Stephen Hawking:
“Transcendence looks at the implications of artificial intelligence—but are we taking AI
seriously enough?”. The Independent. Retrieved from https://www.independent.co.uk/news/
science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-
but-are-we-taking-9313474.html
Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science,
349(6245), 261–266.
Hulme, M. (2016). 1.5 C and climate research after the Paris Agreement. Nature Climate Change,
6(3), 222–224.
Illovsky, M. E. (1994). Counseling, artificial intelligence, and expert systems. Simulation &
Gaming, 25(1), 88–98. doi: 10.1177/1046878194251009
Jennings, L., Sovereign, A., Bottorff, N., Mussell, M. P., & Vye, C. (2005). Nine ethical values of
master therapists. Journal of Mental Health Counseling, 27(1), 32–47.
Kaplan, D. M., Tarvydas, V. M., & Gladding, S. T. (2014). 20/20: A vision for the future of coun-
seling: The new consensus definition of counseling. Journal of Counseling & Development,
92(3), 366–372. doi: 10.1002/j.1556-6676.2014.00164.x
Kaplan, J. (2015). Humans need not apply: A guide to wealth and work in the age of artificial intel-
ligence. New Haven, CT: Yale University Press.
Kelly, É. (2018, April 26). EU to boost artificial intelligence research spend to €1.5B. Science
Business. Retrieved from https://sciencebusiness.net/framework-programmes/news/eu-
boost-artificial-intelligence-research-spend-eu15b
Kurzweil, R. (2006). The singularity is near: When humans transcend biology. London, UK: Penguin.
Kurzweil, R. (2014). How to create a mind: The secret of human thought revealed. New York,
NY: Penguin Books.
Larson, C. (2018, February 8). China’s massive investment in artificial intelligence has an insidi-
ous downside. Science. Retrieved from http://www.sciencemag.org/news/2018/02/china-s-
massive-investment-artificial-intelligence-has-insidious-downside
Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. doi:
10.1038/nature14539
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds
and Machines, 17(4), 391–444. doi: 10.1007/s11023-007-9079-x
Lin, P., Abney, K., & Bekey, G. A. (2014). Robot ethics: The ethical and social implications of
robotics. Cambridge, MA: MIT Press.
Luxton, D. D. (2014). Artificial intelligence in psychological practice: Current and future applica-
tions and implications. Professional Psychology: Research and Practice, 45(5), 332–339.
Luxton, D. D. (2016). Artificial intelligence in behavioral and mental health care. Amsterdam, the
Netherlands: Elsevier.
MacDorman, K. F., & Kahn, P. J. (2007). Introduction to the special issue on psychological bench-
marks of human-robot interaction. Interaction Studies: Social Behaviour and Communication
in Biological and Artificial Systems, 8(3), 359–362. doi: 10.1075/is.8.3.02mac
Malle, B. F. (2015). Integrating robot ethics and machine morality: The study and design of moral
competence in robots. Ethics and Information Technology, 18(4), 243–256. doi: 10.1007/
s10676-015-9367-8
Mauldin, M. L. (1994, August). ChatterBots, TinyMuds, and the Turing test: Entering the Loebner
prize competition. Proceedings of the twelfth national conference on artificial intelligence
(AAAI-94) (pp. 16–21). Menlo Park, CA: AAAI Press. Retrieved from https://pdfs.semanticscholar.org/bdd4/9b4a0b7de03b00412e3b807a855504e1d3af.pdf
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth
summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12.
doi: 10.1609/aimag.v27i4.1904
Monnier, M. (2015). Difficulties in defining social-emotional intelligence, competences and
skills—A theoretical analysis and structural suggestion. International Journal of Research
for Vocational Education and Training, 2(1), 59–84.
Müller, V. C. (2016). Risks of artificial intelligence. Boca Raton, FL: Chapman & Hall.
Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert
opinion. Fundamental Issues of Artificial Intelligence, Synthese Library 376, 555–572. doi: 10.1007/978-3-319-26485-1_33
Pratt, G. A. (2015). Is a Cambrian explosion coming for robotics? Journal of Economic
Perspectives, 29(3), 51–60. doi: 10.1257/jep.29.3.51
Reese, B. (2018). The fourth age: Smart robots, conscious computers, and the future of humanity.
New York, NY: Atria Books.
Ross, A. (2017). The industries of the future. London, UK: Simon & Schuster.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper
Saddle River, NJ: Prentice Hall.
Santos-Lang, C. C. (2015). Moral ecology approaches to machine ethics. In S. P. van Rysewyk
& M. Pontier (Eds.), Machine medical ethics (pp. 111–127). Cham, Switzerland: Springer
International. doi: 10.1007/978-3-319-08108-3_8
Saygin, A. P., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines,
10(4), 463–518.
Schroeder, M. J. (2017). The case of artificial vs. natural intelligence: Philosophy of information as a
witness, prosecutor, attorney, or judge? Proceedings, 1(3), 111. doi: 10.3390/is4si-2017-03972
Sharf, R. S. (1985). Artificial intelligence: Implications for the future of counseling. Journal of
Counseling & Development, 64(1), 34–37. doi: 10.1002/j.1556-6676.1985.tb00999.x
Skovholt, T. M., & Jennings, L. (2004). Master therapists exploring expertise in therapy and
counseling. Boston, MA: Pearson/Allyn & Bacon.
Sternberg, R. J. (1985). Implicit theories of intelligence, creativity, and wisdom. Journal of
Personality and Social Psychology, 49(3), 607–627. doi: 10.1037//0022-3514.49.3.607
Tanana, M., Hallgren, K. A., Imel, Z. E., Atkins, D. C., & Srikumar, V. (2016). A comparison
of natural language processing methods for automated coding of motivational interviewing.
Journal of Substance Abuse Treatment, 65, 43–50. doi: 10.1016/j.jsat.2016.01.006
Tavani, H. (2018). Can social robots qualify for moral consideration? Reframing the question
about robot rights. Information, 9(4), 73. doi: 10.3390/info9040073
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. New York, NY:
Random House.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford,
UK: Oxford University Press.
Warwick, K., & Shah, H. (2014). Good machine performance in Turing’s imitation game. IEEE
Transactions on Computational Intelligence and AI in Games, 6(3), 289–299.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language commu-
nication between man and machine. Communications of the ACM, 9(1), 36–45.
Yampolskiy, R. V., & Fox, J. (2012). Artificial general intelligence and the human mental model.
In A. H. Eden, J. H. Soraker, & E. Steinhart (Eds.), Singularity hypotheses (pp. 129–145).
Berlin, Germany: Springer.
Author biography
Russell Fulmer is a faculty member with the Counseling@Northwestern program through The
Family Institute at Northwestern University. His central research interests involve psychological
artificial intelligence (AI) and the psychodynamic system. He recently published a randomized
controlled trial that showed the efficacy of an AI mental health support agent (Tess) to help college
students battle anxiety and depression. His current work examines ethical issues faced by clini-
cians when using psychological AI in practice.
... This article intends to make an original contribution to the debate in the interconnection of cognitive human bias and advancements of emerging technologies such as artificial intelligence and its subset of machine learning (Fulmer 2019;McCarthy 2007). The article's purpose concerns opening the artificial intelligence black box, understanding the controversies of artificial intelligence mechanisms, and analysing the risks involved in the interest of national security and intelligence analysis. ...
... Since perception is a reality to many individuals (Fulmer 2019), the main argument is that humans induce the results in the development of 'thinking' computer systems connected to their circumstances and perceptions. Artificial intelligence mechanisms programmed by human hands in the creation and end process incorporate human bias in artificial intelligence machine exports. ...
Article
Full-text available
Technology advances neither enhances nor detracts from cognitive human bias since its existence over thousands of years. However, skewness in conventional artificial intelligence mechanisms leads to algorithmic mishaps. Understanding the interconnection between human and machine, that algorithms are subject to bias through interactions, and the output of algorithmic bias forms a source of inspiration to the U.S. Intelligence Community (IC). As a result of emerging technologies such as artificial intelligence (AI) and its subset of machine learning, complex cybercrimes raises security concerns and highlights pertinent considerations for national security; infectious diseases can be a national security concern and facilitate bioterrorism and the production of biological weapons; and automation systems in the form of unmanned aerial systems (UAS) can have national security implications as an attractive tool for criminals and nefarious actors. This article uses grounded theory to analyse congressional hearing reports and related official documents. Increasing our understanding of the process leads to output attainment since AI operates at the speed of light and more data than we can humanly manage accessible. The present research intends to contribute to international relations, international studies, security and strategic studies, and science and technology studies.
... Recently, online mental health interventions for university students showed rapid and promising developments (Abd-Alrazaq et al., 2019). AI-enhanced chatbots, in particular, can offer personalised experiences and support (Fulmer, 2019). Most interventions aimed at improving mental health of students, however, bear a stigma and do not reach the right students in time (Clement et al., 2015). ...
... health chatbot(Abd-alrazaq et al., 2019;Provoost et al., 2017; Vaidyam et al., 2019). A chatbot is a computer program designed to simulate human conversation and is able to create the illusion of intelligent conversation(Abdul-Kader & Woods, 2015;Warwick & Shah, 2014) (for a review, seeFulmer, 2019). ...
Thesis
Full-text available
Academic thriving stands for a combination of academic outcomes as well as success in other relevant domains, such as well-being or finding the right job. What causes students to thrive academically? The studies in this dissertation contributed to this question with the use of experimental, interdisciplinary and longitudinal studies, and a critical theoretical examination of the arguments against evidence-based education. A large-scale field experiment showed that first-year students who reflected on their desired future, prioritized goals, and wrote detailed plans on how to reach these goals, performed significantly better (in study credits and retention) than students who made a control assignment. This low-cost and scalable goal-setting assignment was made at the start of college and only took the students two hours to complete. Personalized follow-up feedback delivered by an AI-enhanced chatbot could further improve benefits to study outcomes as well as well-being. The final study in this dissertation tracked the effects of different types of work on the study progress of teacher education students over a four-year span. This longitudinal study showed that student who had a paid job in education gained more study credits than students with other types of work or without a job. Additionally it showed that working 8 hours per week relates with the most study progress in the first and third semester of college.
... O využití chatbotů v oblasti duševního zdraví se mluví stále častěji, ale studie mají spíše pilotní charakter, a i když výsledky s ohledem na praktickou využitelnost, proveditelnost a přijetí chatbotů pro podporu duševního zdraví jsou slibné, zatím nejsou přímo přenositelné do psychoterapeutického kontextu (Bendig et al., 2019). To, do jaké míry se budou tyto oblasti prolínat, bude do značné míry záviset na rychlosti vývoje umělé inteligence (Fulmer, 2019). ...
Thesis
Full-text available
The pace of technology development is accelerating and AI-based programs are increasingly a part of our lives and they are finding a place in the mental health care industry. Our thesis focuses on chatbots and explores the impact of an individual's characteristics on their willingness to interact with a chatbot to resolve a crisis. We chose a quantitative design to provide a basic overview. We had 610 people completed a questionnaire mapping the basic characteristics that might influence an individual's willingness to engage with a chatbot, including basic personality traits measured by the BFI-2. For gender, age, or area of interest, we did not prove an association, but for educational attainment we did. We found highly significant relationships for individuals who described more inhibitions in sharing, as well as for those who scored higher on the negative emotionality scale or were currently experiencing loneliness or suicidal ideations. The willingness was also influenced by variables such as relationship with technology, higher mean scores on the open-mindedness scale, or lower mean scores on the conscientiousness scale. At the same time, respondents willing to use a chatbot were more likely to have sought some form of professional help previously, supporting our hypothesis that individuals willing to use a chatbot cannot rely entirely on the social relationships within their environment. We consider this an important issue for further research, as it is essential that chatbots do not become a mere substitute, but a guide into the world of deeper and more intimate relationships, and strengthen the user's ability to build such relationships.
... Die Kommunikation und der Informationsaustausch, den mobile Integrationsanwendungen ermöglichen, stärkten zudem auch Vertrauen, Freude, Beruhigung und ein Gefühl der Sicherheit von Neuzugewanderten im Aufnahmeland. Der Einsatz von KI-Systemen wird deshalb weiter an Bedeutung in der migrationsspezifischen Beratung gewinnen, wie es sich auch in anderen Handlungsfeldern der Sozialen Arbeit bereits zeigt (Fulmer, 2019). Hier gilt es zukünftig, sozialarbeiterische Kompetenzen im Arbeitsfeld Migration (vgl. ...
Book
Full-text available
Diese Publikation wurde im Rahmen des Projektes Fem.OS von Minor - Projektkontor für Bildung und Forschung gemeinnützige GmbH veröffentlicht und ist online verfügbar unter: https://minor-kontor.de/kuenstliche-intelligenz-in-der-migrationsberatung/ Das Projekt wird von Beauftragte der Bundesregierung für Migration, Flüchtlinge und Integration und Beauftragte der Bundesregierung für Antirassismus gefördert und findet in Kooperation mit der Bundesagentur für Arbeit statt.
... Like human coaches, AI can ask questions to initiate clients' self-reflection on what may be the core problem. However, present AI chatbots do not actually think, but rather imitate and create the illusion of an intelligent conversation (Abdul-Kader & Woods, 2015;Fulmer, 2019). So, AI currently cannot understand the clients' intention and explore what a client's core problem could be. ...
Article
Artificial intelligence (AI) has brought rapid innovations in recent years, transforming both business and society. This paper offers a new perspective on whether, and how, AI can be employed in coaching as a key HRD tool. We provide a definition of the concept of AI coaching and differentiate it from related concepts. We also challenge the assumption that AI coaching is feasible by challenging its capability to lead through a systematic coaching process and to establish a working alliance to clients. Based on these evaluations, AI coaching seems to encounter the greatest difficulties in the clients’ problem identification and in delivering individual feedback, which may limit its effectiveness. However, AI generally appears capable of guiding clients through many steps in the coaching process and establishing working alliances. We offer specific recommendations for HRD professionals and organizations, coaches, and developers of AI coaching programs on how AI coaching can contribute to enhance coaching practice. Combined with its lower costs and wider target group, AI coaching will likely transform the coaching profession and provide a future HRD tool.
Article
Full-text available
Açık ve uzaktan öğrenme (AUÖ) ortamlarında öğrenme sürecinin verimli bir şekilde yürütülmesi için sistemin öğrenen ihtiyaçlarına yönelik destek hizmetlerini sağlaması beklenir. Yapay zekâ teknolojileri ve eğitim alanındaki uygulamaları sayesinde AUÖ ortamlarında öğrenenlerin destek hizmetlerine yönelik ihtiyaçlarının daha etkili ve sürdürülebilir şekilde karşılanması mümkündür. Bu doğrultuda bu çalışmanın amacı, AUÖ ortamlarında yapay zekâ teknolojisinin öğrenen destek hizmetleri kapsamında nasıl kullanılabileceğini incelemektir. Bu amaca yönelik olarak çalışmada destek hizmetleri; akademik destek, idari destek, danışmanlık desteği, teknik destek ve kütüphane desteği olmak üzere beş başlık altında ele alınmıştır. Araştırmada geleneksel literatür taraması tercih edilmiş ve var olan çalışmalar ve uygulamalar incelenerek destek hizmetleri bağlamında bütüncül bir bakış açısı sunulmuştur. Öğrenme süreciyle ve öğrenenin öğrenme deneyimiyle doğrudan ilişkilendiren akademik destek, yapay zekâ teknolojileri ve sağladığı olanaklar kapsamında oldukça önemli bir paya sahiptir. Bununla birlikte AUÖ’de öğrenenlerin kitlesel olması ve öğrenenlerin bireysel beklenti ve ihtiyaçlarında görülen çeşitlilik öğrenme sürecinde kişiye özel ve zamanında geribildirim alınmasını zorlaştırmış, dolayısıyla yapay zekanın AUÖ’de akademik destek hizmetlerinin yanında diğer destek hizmetlerinde kullanımını gerekli kılmıştır. Araştırmada, gelecek çalışmalar için uygulamaya yönelik çalışmaların yapılması, destek hizmetlerinin her bir destek türünde ayrıntılı incelenmesi, teknoloji entegrasyonu bağlamında kurumsal çerçevelerin oluşturulması önerilmektedir.
Article
Full-text available
This article explores how AI might impact upon the coaching relationship. It considers how the relationship might be impacted by technology and considers some of the ethical issues that need to be considered as AI developers start to increase offerings in this domain. For example, if there is a negative outcome from an AI coaching engagement, where does the liability for this sit? Whilst noting there is a role for technology the article also questions if the coaching profession needs to resolve some of the 'fault lines' in current working practices before we complicate matters further with AI.
Chapter
The extended mind thesis maintains that the functional contributions of tools and artifacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally “extend” into the tools. Several extended mind theorists have argued that this “extended” view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter, we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other “artificially intelligent” tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that can develop differently by the use of AI extenders for people with cognitive disorders and then discuss some of the related opportunities and challenges.KeywordsCognitive extensionExtended mindEnhancementAI ethicsMental healthCognitive disorderCognitive capabilityAlzheimer’s diseaseMemoryFunction
Article
Full-text available
Psychological artificial intelligence has a growing research base but often overlooks ethical considerations. Drawing from a review of the literature, the experiences of three counselor educators, and an industry insider, this article names six issues relevant to psychological artificial intelligence. The clinical implications of each issue are discussed, some at the national level, some at the practice level. Suggestions for the counseling profession, with an emphasis on prevention, are offered.
Chapter
The rise of artificial intelligence (AI) and related new technologies have been widely discussed in recent years, especially on its benefits and threats to society. This study aims to explore the impact of AI on work and human value at different levels of AI capability. Adopting a qualitative approach, we conducted a focus group discussion with six scholars who have AI research experience in different social disciplines and in various global regions. Four main topics were discussed in the focus group, which are: (1) attitude toward AI, (2) types of businesses, industries, or workers that benefit or are threatened the most by AI, (3) willingness to work with a robot with different levels of intelligence, and (4) how to find human’s value and get prepared for future workplace. The discussion was recorded under the consent of participants and transcribed verbatim. ATLAS.ti version 8 software was used for the textual data analysis. The findings reveal that scholars: (1) are optimistic toward AI in general, (2) believe that most industries will benefit from AI, (3) are divided in attitude toward robots with empathetic intelligence, (4) argue that humans need to get prepared for the future workplace. Implications and future research suggestions are provided.
Article
Artificial Intelligence (AI) technology presents a multitude of ethical concerns, many of which are being actively considered by organizations ranging from small groups in civil society to large corporations and governments. However, it also presents ethical concerns that are not being actively considered. This paper presents a broad overview of twelve topics in AI ethics: function, transparency, evil use, good use, bias, unemployment, socioeconomic inequality, moral automation and human de-skilling, robot consciousness and rights, dependency, social-psychological effects, and spiritual effects. Each topic is given a brief discussion, though each deserves much deeper consideration.
Article
Background: Students in need of mental health care face many barriers, including cost, location, availability, and stigma. Studies show that computer-assisted therapy and one conversational chatbot delivering cognitive behavioral therapy (CBT) offer a less intensive and more cost-effective alternative for treating depression and anxiety. Although CBT is one of the most effective treatment methods, an integrative approach has been linked to equally effective posttreatment improvement. Integrative psychological artificial intelligence (AI) offers a scalable solution as the demand for affordable, convenient, lasting, and secure support grows.
Objective: This study aimed to assess the feasibility and efficacy of using an integrative psychological AI, Tess, to reduce self-identified symptoms of depression and anxiety in college students.
Methods: In this randomized controlled trial, 75 participants were recruited from 15 universities across the United States. All participants completed Web-based surveys, including the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder Scale (GAD-7), and the Positive and Negative Affect Scale (PANAS), at baseline and 2 to 4 weeks later (T2). The two test groups comprised 50 participants in total and were randomized to receive unlimited access to Tess for either 2 weeks (n=24) or 4 weeks (n=26). The information-only control group (n=24) received an electronic link to the National Institute of Mental Health's (NIMH) eBook on depression among college students and was granted access to Tess only after completion of the study.
Results: A sample of 74 participants completed the study, with 0% attrition from the test groups and the loss of only one control-group participant (1/24). The average participant age was 22.9 years; 70% were female (52/74), and most identified as Asian (37/74, 51%) or white (32/74, 41%). Group 1 received unlimited access to Tess with daily check-ins for 2 weeks; group 2 received unlimited access with biweekly check-ins for 4 weeks. A multivariate analysis of covariance was conducted, with an alpha level of .05 for all statistical tests. Results revealed a statistically significant difference between the control group and group 1: group 1 reported a significant reduction in symptoms of depression as measured by the PHQ-9 (P=.03), whereas the control group did not. Statistically significant differences were also found between the control group and both test groups for symptoms of anxiety as measured by the GAD-7: group 1 (P=.045) and group 2 (P=.02) reported significant reductions, whereas the control group did not. A statistically significant difference on the PANAS between the control group and group 1 (P=.03) suggests that Tess influenced affect scores.
Conclusions: This study offers evidence that AI can serve as a cost-effective and accessible therapeutic agent. Although not designed to assume the role of a trained therapist, integrative psychological AI emerges as a feasible option for delivering support.
Trial Registration: International Standard Randomized Controlled Trial Number ISRCTN61214172; https://doi.org/10.1186/ISRCTN61214172.
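For readers unfamiliar with the analysis the abstract describes, the following is a minimal sketch of a multivariate pre/post group comparison of the kind reported there. It is not the study's actual analysis code: all data are synthetic, all column names are illustrative assumptions, and a statsmodels MANOVA with baseline scores entered as covariates is used here as an approximation of the multivariate analysis of covariance.

```python
# Minimal sketch (synthetic data, assumed column names) of a multivariate
# comparison of symptom change across trial arms, evaluated at alpha = .05.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(42)
n = 74  # completers, mirroring the sample size reported above

df = pd.DataFrame({
    "group": rng.choice(["control", "tess_2wk", "tess_4wk"], size=n),
    "phq9_t1": rng.integers(5, 20, size=n),  # baseline depression scores
    "gad7_t1": rng.integers(4, 18, size=n),  # baseline anxiety scores
})
# Simulated follow-up (T2) scores with random change
df["phq9_t2"] = df["phq9_t1"] + rng.integers(-6, 3, size=n)
df["gad7_t2"] = df["gad7_t1"] + rng.integers(-5, 3, size=n)

# Change scores serve as the dependent variables
df["phq9_change"] = df["phq9_t2"] - df["phq9_t1"]
df["gad7_change"] = df["gad7_t2"] - df["gad7_t1"]

# Multivariate test of the group effect, with baseline scores as covariates
mv = MANOVA.from_formula(
    "phq9_change + gad7_change ~ group + phq9_t1 + gad7_t1", data=df
)
print(mv.mv_test())  # Wilks' lambda and related multivariate statistics
```

A real analysis would of course use the trial's actual data, include the PANAS as a third outcome, and follow up significant multivariate effects with the per-group univariate comparisons reported in the abstract.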
Article
A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet a review of the recent literature in that field suggests that this seemingly straightforward question is neither clear nor unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible, nor have they been consistent about which kinds of rights (civil, legal, moral, etc.) should be granted to qualifying robots. There has also been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or for (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question and instead ask whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer is "yes," I draw on insights from the writings of Hans Jonas to defend my position.
Book
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI, or otherwise engineer initial conditions, so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer, the book works through a fascinating landscape of topics and considerations: oracles, genies, and singletons; boxing methods, tripwires, and mind crime; humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.
Article
The theory of multiple intelligences (MI) seeks to describe and encompass the range of human cognitive capacities. In challenging the concept of general intelligence, we can apply an MI perspective that may provide a more useful approach to cognitive differences within and across species.
Article
The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields have become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key to advancing future research in both fields.