This is the author’s version of a work that was published in the following source:
Kuhlmeier, Florian Onur, Gnewuch, Ulrich, Lüttke, Stefan, Brakemeier, Eva-Lotta,
Mädche, Alexander. ‘A Personalized Conversational Agent to Treat Depression in Youth
and Young Adults – A Transdisciplinary Design Science Research Project’. International
Conference on Design Science Research in Information Systems and Technology.
Springer, Cham, 2022.
DOI: 10.1007/978-3-031-06516-3_3
Please note: Copyright is owned by the author and / or the publisher. Commercial
use is not allowed.
© 2017. This manuscript version is made available under
the CC-BY-NC-ND 4.0 license
http://creativecommons.org/licenses/by-nc-nd/4.0/
A Personalized Conversational Agent to Treat Depression
in Youth and Young Adults – A Transdisciplinary
Design Science Research Project
Florian Onur Kuhlmeier1,2, Ulrich Gnewuch2, Stefan Lüttke1, Eva-Lotta Brakemeier1,
Alexander Mädche2
1 Department of Psychology, University of Greifswald, Germany
{stefan.luettke, eva-lotta.brakemeier}@uni-greifswald.de
2 Institute of Information Systems and Marketing, Karlsruhe Institute of Technology, Germany
{florian.kuhlmeier, ulrich.gnewuch, alexander.maedche}@kit.edu
Abstract. Depression is a large-scale and consequential problem in youth and
young adults. Conversational agents (CAs) can contribute to addressing current bar-
riers to seeking treatment, such as long waiting lists, and reduce the high dropout
rates reported for other digital health interventions. However, existing CAs have
not considered differences between youth and adults and are primarily designed
based on a ‘one-size-fits-all’ approach that neglects individual symptoms and pref-
erences. Therefore, we propose a theory-driven design for personalized CAs to treat
depression in youth and young adults. Based on interviews with patients (i.e., peo-
ple diagnosed with depression), we derive two design principles to personalize the
character of the CA and its therapeutic content. These principles are instantiated in
prototypes and evaluated in interviews with experts experienced in delivering psy-
chotherapy and potential nondiagnosed users. Personalization was perceived as cru-
cial for treatment success, and autonomy and transparency emerged as important
themes for personalization. We contribute by providing design principles for per-
sonalized CAs for mental health that extend previous CA research in the context
of mental health.
Keywords: Conversational Agent, Mental Health, Personalization, Transdiscipli-
nary Research.
1 Introduction
Depression is one of the most common mental disorders in adolescence and early adult-
hood. Approximately 5.6% of young people worldwide are affected by depression [1].
The individual and social consequences are enormous. Affected individuals are more
likely to exhibit physical impairment and substance abuse, have poorer academic results,
and have an elevated risk of suicide [2–4]. Furthermore, depression causes high health
economic costs [3]. Psychotherapy, delivered by human therapists, is an effective treat-
ment and often the first choice to mitigate the individual and social consequences associ-
ated with depression [5, 6]. However, treatment resources are scarce: On average, people
seeking help have to wait almost five months to start psychotherapy treatment [7]. In
addition, young people experience two additional barriers when seeking treatment: First,
they are significantly less likely to use professional support [8] due to feelings of shame,
insecurity, and a greater desire to solve problems themselves [8]. Second, weekly in-per-
son sessions with an adult therapist may not match the technology-driven lifestyle of
youth and young adults. Although digital health interventions (DHI) are available and
effective, studies have shown high dropout rates [5, 9]. Using a conversational agent (CA)
may have great potential to tackle this problem. CAs are software systems that mimic
human conversational behavior [10]. In contrast to other DHI, CAs can not only realize
(1) the specific effects of therapy [11] by delivering therapeutic content, such as providing
information on depression and working through exercises but also (2) the common factors
of therapy [11], such as the alliance between patient and therapist, because CAs offer an
interactive, conversational format that mimics human-delivered therapy [12–14]. By add-
ing the realization of common factors, CAs thus seem promising to increase engagement
and reduce dropout rates to match human-delivered therapy and ultimately improve treat-
ment success. CAs in the context of mental health, such as the highly cited [13, 14] and
successful commercial apps Woebot (woebothealth.com) and Wysa (wysa.io), provide
self-guided therapy based on the principles of cognitive behavioral therapy (CBT), inter-
personal therapy (IPT), or dialectical therapy and have shown promising effectiveness in
reducing symptoms of depression [13, 14]. Moreover, users of mental health CAs report
experiencing relationship building [15] and feelings of social support [16], which sup-
ports the argument that mental health CAs can also realize common factors of therapy
and may thus be better suited than other DHI to treat mental health problems. Although
preliminary evidence shows promising potential for CAs to reduce depressive symptoms,
there are several limitations. First, the majority were tested in pilot studies with a focus
on adults. However, youth differ from adults in terms of cognitive and emotional devel-
opment, social relationships, and problem behavior [17]. In addition, neither the develop-
ment nor the evaluation included participants diagnosed with clinical depression. Thus,
the development and evaluation of CAs for youth (13-17 years) and young adults (18-25
years) must consider these aspects. Second, existing CAs are designed primarily based on
a ‘one-size-fits-all’ approach that neglects individual symptoms and preferences [18].
This is particularly important for youth and young adults because they are used to person-
alizing the content and appearance of digital applications according to their own needs
and preferences. Therefore, it is necessary to consider how CAs can be designed in a way
that allows for personalization.
Against this backdrop, our research focuses on the question of how to design a person-
alized CA to treat depression in youth and young adults. To address this research question,
we are conducting a comprehensive transdisciplinary design science research (DSR) pro-
ject [19, 20]. In the first cycle, we first conducted interviews with youth suffering from
depression to gain an in-depth understanding of the problem, their needs, and preferences.
Based on the interviews, CBT and IPT, and theories of personalization [18, 21], we de-
rived two initial design principles (DPs) for personalized CAs to treat depression. Next,
we instantiated these two initial design principles in four prototypes, which were evalu-
ated in interviews with five experts and five potential users. Our results suggest that per-
sonalizing character and content is crucial to designing effective CAs to treat depression.
In addition, transparency and agency are the most important aspects to consider when
implementing personalization.
2 Related Work
2.1 Conversational Agents for Mental Health
The use of CAs to provide self-help psychotherapy interventions has been explored in
several studies [22]. For example, a 2-week use of Woebot, a CA developed based on the
theoretical foundations of CBT to work on depression-typical dysfunctional thoughts and
behaviors, significantly reduced symptoms of depression [13]. Symptom
reduction was also shown after using Wysa [14]. Recent reviews of mental health CAs
reported high user satisfaction, sufficient effectiveness, and safety to conduct research
with clinical populations [22]. In summary, CAs seem more suitable than other DHI, as
users have reported experiencing social support [16] and a stronger working alliance [15].
2.2 Personalization
In the context of information technology, personalization has been defined as a ‘process
that changes the functionality, interface, information access, and content, or distinctive-
ness of a system to increase its relevance to an individual or a category of individuals’
[12, p. 183]. Users appreciate personalization features because they can improve ease of
use and efficiency and provide users with a feeling of being in control [23]. Our work draws on
the frameworks of personalization approaches of Fan and Poole [21] and Kocaballi et al.
[18]. Depending on the specific field of research and discipline, personalization is often
used synonymously with adaptation, customization, and tailoring [21]. We decided to use
the term personalization because it is commonly used in the medical and health literature
[17]. Fan and Poole [21] conceptualize personalization along three dimensions: (1) what
is personalized, i.e. the elements of the system that are being changed, (2) for whom is
the personalization, i.e., the target: individual vs. group, and (3) who is in control of per-
sonalization, i.e. the user or the system. Within dimension (3), the authors differentiate
between implicit (i.e., executed by the system) and explicit personalization (i.e., executed
by the user). Kocaballi et al. [18] extended Fan and Poole’s framework with (4) the pur-
pose of personalization. Table 1 below illustrates the dimensions of personalization that
serve as the basis for our proposed design.
Table 1. Dimensions of Personalization (based on [18, 21])
Dimension | Question | Values (examples)
Purpose   | What is the purpose of personalization?  | Increased user motivation
Elements  | What is personalized?                    | Content; functionality
Target    | For whom is it personalized?             | Single user vs. group of users
Agency    | Who is in control of personalization?    | System: implicit/adaptive; user: explicit/adaptable; mixed initiative
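Read as a lightweight specification, these dimensions can be encoded in a small data structure. The following Python sketch is our own illustrative rendering of the framework, not an artifact from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class Agency(Enum):
    """Who is in control of personalization (the agency dimension)."""
    SYSTEM = "implicit/adaptive"   # executed by the system
    USER = "explicit/adaptable"    # executed by the user
    MIXED = "mixed initiative"

@dataclass
class PersonalizationFeature:
    """One personalization feature, described along the four dimensions
    of Fan and Poole [21] as extended by Kocaballi et al. [18]."""
    purpose: str    # why personalize (e.g., increased user motivation)
    element: str    # what is personalized (e.g., content, functionality)
    target: str     # for whom (single user vs. group of users)
    agency: Agency  # who is in control

# Example: letting users pick the CA's avatar is explicit (user-controlled)
avatar_choice = PersonalizationFeature(
    purpose="relationship building",
    element="character (avatar)",
    target="single user",
    agency=Agency.USER,
)
print(avatar_choice.agency.value)  # explicit/adaptable
```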
In their review of personalization features in health CAs, Kocaballi et al. [18] pointed out
that several CAs implemented personalization, such as tailoring content or interaction
styles to individuals. However, they also identified a lack of investigating personalization
within a theoretically grounded and evidence-based framework [18]. In our work, we
mainly focus on the dimensions of purpose, elements, and agency.
3 Methodology
Our research project follows a DSR approach [19] to solve an important real-world prob-
lem and design a personalized conversational agent to treat depression in youth and young
adults. We chose this research approach because it allows iterative design [19, 25] and
the participation of users and experts in the design and evaluation phases [19]. We con-
duct a transdisciplinary project due to (1) the focus on a complex problem, (2) the inclu-
sion of an interdisciplinary team of researchers from information systems and clinical
psychology together with practicing psychotherapists, and (3) the involvement of societal actors (i.e., patients) as
process participants [20]. A transdisciplinary approach is particularly important given that
poorly designed mental health interventions can have fatal consequences. The DSR pro-
ject is based on the well-established approach suggested by Kuechler and Vaishnavi [25]
and divided into three design cycles to incrementally improve the functionality and im-
pact of our artifact. In this paper, we report the results of the first design cycle, which
focused on understanding the problem space (i.e., treating depression in youth and young
adults using CAs) and exploring personalization to improve treatment success.
Table 2. Overview of our DSR Approach
DSR Project Phase    | 1st Design Cycle | 2nd Design Cycle | 3rd Design Cycle
Awareness of Problem | Interviews with patients | Analysis of initial evaluation | Analysis of prior evaluations
Suggestion           | Formulation of initial design principles | Refinement of DPs | Refinement of DPs
Development          | Implementation of first prototype | Implementation of a fully functional prototype | Implementation of final software artifact
Evaluation           | Interviews with experts and potential users (N=10) | Online experiment with potential users | Field experiment with patients
Conclusion           | Reflection of initial design and evaluation results | Reflection of fully functional prototype and evaluation results | Formulation of nascent design theory
In the problem awareness phase, we reviewed the literature on mental health CAs in
clinical psychology and conducted interviews with 15 youth diagnosed with depression,
which we analyzed by first creating a coding scheme and then deriving higher-order
themes. In the suggestion phase, we drew upon frameworks of personalization approaches
[18, 21] as well as CBT and IPT to propose two design principles on how to personalize
mental health CAs for the treatment of depression. Subsequently, we instantiated design
principles in four different prototypes of text-based mental health CAs (i.e., chatbots) de-
veloped with Figma (figma.com) and Botsociety (botsociety.io). These prototypes were
evaluated in interviews with five experts, experienced in clinical psychology and psycho-
therapy, and five potential users. For the evaluation, we selected the technical risk and
efficacy strategy [26] due to the sensitive context of depression: We decided to first
evaluate the proposed DPs with a group of experts and potential users to get feedback and
improve our design before evaluating a fully functional prototype in a more naturalistic
setting.
As shown in Table 2, we plan two more design cycles. We will first use the open-
source conversational AI framework Rasa to develop a fully functional prototype. Subse-
quently, we will refine the DPs and improve the prototype based on studies in an online
and naturalistic setting.
4 Design Science Research Project
4.1 Problem Awareness
To improve our understanding of the problem space, we first conducted interviews with
youth diagnosed with depression. We recruited 15 participants between 14 and 17 years
of age, all female, through local clinical psychologists and psychiatrists. The previous
experience of the participants with psychotherapy varied. In line with the literature [7],
all participants previously struggled to find professional treatment due to long waiting
lists. Some participants were frustrated by the lack of interventions to bridge the waiting
time. One participant stated: ‘[I] signed up for this study because there were no other
forms of treatment when I was on a waiting list. So, [I] wanted to help create one.’
Another participant expressed her dissatisfaction with a self-help book she had tried. Add-
ing to the literature [8], multiple participants reported feelings of insecurity, stigma, and
the desire to solve their problems on their own as barriers to seeking treatment. The par-
ticipants also identified several advantages of CAs compared to face-to-face psychother-
apy. For example, participants mentioned that CAs would be neutral, non-judgmental,
and anonymous, which facilitates sharing sensitive information. In addition, they appre-
ciated that they could rely on CAs being continuously available and not limited to a single
therapy session per week. In summary, there is evidence that CAs can address some of
the issues raised in the introduction, particularly bridging waiting times.
Regarding the design, the participants expressed a wide variety of needs and prefer-
ences, revealing the importance of personalization. Some participants desired CAs to be
like a friend that uses similar language. Yet others wanted the CA to resemble a human
therapist due to the distant, professional relationship, which facilitates conversations
about sensitive topics. Another frequently mentioned topic was the usage of emojis.
While some participants wanted the mental health CA to include emojis (and gifs) in its
messages, others stated that this would look unprofessional and counteract the seriousness
of depression. While some preferred to access the CA through instant messaging apps
such as WhatsApp, others suggested a standalone app. For a standalone app, the design
preferences ranged from a very colorful appearance to a ‘professional’ black-grey-white
appearance. Yet, current mental health CAs
do not accommodate the wide-ranging needs and preferences mentioned by our partici-
pants [18]. In addition, our participants explicitly requested personalization features re-
garding the character and the content: ‘I would like to choose a name, change the avatar,
and select the topics I want to work on.’ One participant wanted the CA to automatically
adapt to her therapeutic needs and language style. Taken together, our findings suggest
that a 'one-size-fits-all' approach to designing CAs to treat depression may not be able to
reach its full potential. Although our interviews revealed potential advantages of CAs
compared to human therapists and other interventions, they also emphasized the crucial
role of personalization to improve the user experience and subsequently improve therapy
outcomes.
4.2 Suggestion
From the interviews, we obtained substantial evidence for the importance of personaliza-
tion. However, personalization is complex due to its elusive and multifaceted nature and
the variety of definitions assigned to it by scholars from different fields (e.g., information
systems, health, computer science). To guide our design, we therefore drew upon estab-
lished frameworks of personalization [18, 21] that were introduced in Section 2.1. Ac-
cording to these frameworks, the fundamental dimension of personalization is the element
of personalization (i.e. what is being personalized). In the context of CAs, these elements
primarily include the CA’s character (i.e., gender, age, social role etc.) and the content
(i.e., the content of the messages, knowledge base, etc.) [27]. In the interviews, 8 out of
15 participants expressed the desire to personalize the name, gender, and social role of a
CA, suggesting that personalizing the character should represent a major design principle
(DP). Therefore, we propose DP1: To improve treatment outcomes for depressed youth
and young adults, provide the conversational agent with the capability to personalize its
character to match user needs and preferences because a personalized character helps
users to form a stronger relationship with the CA. The second key element of personali-
zation is the CA’s (therapeutic) content. According to the health literature, personalized
content improves the use [28] and the perceived helpfulness of DHI [29]. Thus, we pro-
pose DP2: To improve treatment outcomes for depressed youth and young adults, provide
the conversational agent with the capability to personalize the therapeutic content to
match user needs and preferences because personalized content increases the relevance
and efficiency of the CA. As introduced above, the second dimension of personalization
is agency (i.e., who controls the personalization). As our participants expressed their in-
terest in both adaptable CAs, in which they are in control of personalization, and adaptive
CAs, in which CAs control personalization, we integrate adaptable, adaptive, and mixed-
initiative personalization into our DPs. By instantiating prototypes that demonstrate all
these approaches, we aimed to evaluate and prioritize these approaches and then refine
the DPs accordingly.
4.3 Development
To instantiate our initial DPs, we developed four prototypes. As the participants’ prefer-
ences varied substantially, we aimed to explore different elements and degrees of agency
of personalization in our prototypes. Based on the evaluation results, we aim to find the
most important features and refine the DPs accordingly. The first two prototypes instan-
tiated the personalization of the CA’s character (DP1). The first prototype provided the
user with the opportunity to personalize the name, gender, typing speed, avatar, and social
role. These characteristics were selected based on our findings from the interviews with
patients. The second prototype showcased the possibility for the CA to automatically
adapt to the users’ use of emojis, since the use of emojis emerged as a polarizing element
during the interviews.
Fig. 1. DP1 Personalization of Character: Prototypes 1 (left) and 2 (center and right).
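To make prototype 2's adaptive behavior concrete, the following minimal sketch tracks how often a user's messages contain emojis and mirrors that style in the CA's replies. The class name, emoji ranges, and threshold are our own illustrative assumptions, not details from the paper:

```python
import re

# Rough emoji detection via two common Unicode emoji blocks (illustrative only)
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF]")

class EmojiMirror:
    """Tracks the share of user messages containing emojis and decides
    whether the CA's own replies should include emojis as well."""

    def __init__(self, threshold: float = 0.3):
        self.threshold = threshold  # hypothetical cut-off, not from the paper
        self.total = 0
        self.with_emoji = 0
        self.enabled = True  # simple off switch for the automated adaptation

    def observe(self, message: str) -> None:
        """Update the statistics with one incoming user message."""
        self.total += 1
        if EMOJI_PATTERN.search(message):
            self.with_emoji += 1

    def use_emojis(self) -> bool:
        """Mirror the user's style once enough messages contain emojis."""
        if not self.enabled or self.total == 0:
            return False
        return self.with_emoji / self.total >= self.threshold

mirror = EmojiMirror()
mirror.observe("Today was hard 😞")
mirror.observe("But the walk helped 🙂")
print(mirror.use_emojis())  # True: the CA mirrors the emoji-heavy style
```

The explicit `enabled` flag reflects a design choice rather than the paper's implementation: keeping automated adaptation switchable gives users control over the feature.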
The other two prototypes instantiated the personalization of the content (DP2). In CBT and
IPT, content comes in the form of modules (e.g., behavioral activation, sleep hygiene). We in-
stantiated two prototypes that reflect the personalization of these modules in different ways.
Fig. 2. DP2 Personalization of Content: Prototypes 3 (left) and 4 (center and right).
Prototype three contained the task of responding to items from a depression scale,
and the relevant modules were selected based on these responses.
For instance, the module on sleep improvement is only integrated if a user reports sleep
problems. Prototype four instantiated a more flexible version of the second design prin-
ciple. Here, instead of personalizing the content once in the beginning, a matching mod-
ule is suggested when users report specific issues on a particular day. For example,
CADY suggests the module sleep hygiene if users report sleep problems during daily
check-in.
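The content-personalization logic of prototypes 3 and 4 could be sketched roughly as follows; the item names, the item-to-module mapping, and the score threshold are hypothetical illustrations, not details taken from the paper:

```python
# Hypothetical mapping from depression-scale items to therapy modules;
# the actual items and modules used in the prototypes are not specified.
ITEM_TO_MODULE = {
    "sleep_problems": "sleep hygiene",
    "low_activity": "behavioral activation",
    "negative_thoughts": "cognitive restructuring",
}

def select_modules(item_scores, threshold=2):
    """Prototype 3: pick the relevant modules once, based on responses to
    a depression scale completed at the start (scores 0-3 per item)."""
    return [
        ITEM_TO_MODULE[item]
        for item, score in item_scores.items()
        if item in ITEM_TO_MODULE and score >= threshold
    ]

def suggest_module(reported_issue):
    """Prototype 4: suggest a matching module whenever the user reports
    a specific issue during the daily check-in."""
    return ITEM_TO_MODULE.get(reported_issue)

# A user reporting severe sleep problems receives the sleep hygiene module
print(select_modules({"sleep_problems": 3, "low_activity": 0}))  # ['sleep hygiene']
print(suggest_module("sleep_problems"))  # sleep hygiene
```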
4.4 Evaluation
To evaluate our prototypes, we conducted interviews with five independent experts with
experience in delivering psychotherapy (3 female, M_age = 29) and five potential users (3
female, M_age = 24). By including experts, our objective was to understand whether our pro-
posed design is consistent with established principles of psychotherapy. We decided to
recruit non-diagnosed individuals as potential users to first ensure the safety of the proto-
types before including young people diagnosed with depression. In each interview, we
first explained the concept of CAs and introduced our research project. Subsequently, we
explained the DPs and demonstrated their instantiations. During the presentation and af-
terwards, participants were asked to evaluate the prototypes and to provide ideas for fur-
ther personalization. The interviews lasted 40 minutes on average. All interviews were
recorded and transcribed. To analyze the feedback from the participants, we used a bot-
tom-up approach to synthesize the interviews into higher-order themes.
Results and Discussion.
All participants appreciated the personalization of the CA to suit their own needs and
preferences (or those of their clients), providing evidence of the utility of both DPs.
Moreover, all participants emphasized personalization as a crucial feature for the suc-
cess of mental health CAs. In terms of DP1 and prototype 1, every participant supported
the idea of personalizing the agent’s name, gender, and avatar as a mechanism for rela-
tionship building. Gender in particular was identified as an important characteristic for
users to feel safe and comfortable in case they had had negative experiences with
one gender in the past. Using a robot or an animal avatar was suggested as an additional
gender-neutral and nonhuman version to satisfy users who prefer to talk with a robot
instead of a human. The participants also suggested adding age as a variable to choose
from. Instead of personalizing each aspect separately, multiple participants suggested
combining variations of gender, avatar, age, and social role into 3-4 different characters,
from which users can choose. They argued that presenting a few characters instead of
each characteristic separately would decrease the variables to choose from, which could
otherwise be overwhelming and result in annoyance or dropout. In addition, partici-
pants suggested comprehensive information (e.g., brief introductory videos) about each
character, so users can imagine what interacting with them would feel like. In terms of
the specific social role, participants expressed interest in a non-human, agender robot,
an older therapist-like role and a younger coach-like role. Most experts advised against
implementing a friend-like role (like in prototype 1) as they feared that the lack of a
professional relationship could endanger the therapeutic process. Therefore, they sug-
gested that one should be able to choose between professional roles that encompass
different personality traits: ‘For example, I would suggest that social roles differ be-
tween warm, understanding, empathic versus rather cool, rational, direct.’ Regarding
prototype 2, experts and users generally valued the idea of providing the CA with the
agency to adapt to their use of emojis and language more generally, as experts explained
that adapting to the clients’ language resembles therapist-client relationship building in
the context of psychotherapy. In addition, potential users indicated that they regularly
adapt their emoji and language use to their friends and that this could improve the human-
chatbot relationship. However, some participants were concerned with implementing
the feature before it had reached sufficient accuracy. They stated that an insufficient
automated adaptation would be worse than a non-adaptive system. Participants also
requested a feature to turn off the automated adaptation, as well as information on how the
CA adapts to them. Instead of automatically regulating emoji and language usage, one
participant suggested integrating different language styles and emoji use into the dif-
ferent characters to give users control and counter potential technical limitations.
In terms of DP2, experts and potential users perceived the personalization of the
therapeutic content, i.e. the purpose of the personalization, to be crucial for the success
of a CA to treat depression and more important than DP1. Regarding prototype 3, ex-
perts and potential users liked the idea of personalizing content at the beginning based
on responses to a depression scale: I think it is important that the agent asks about the
symptoms of depression. And it's also important that it's highly structured because most
of the time it's very, very difficult for my clients to verbalize their issues. One expert
suggested an extension of prototype 3: In addition to the depression scale, it should be
possible for a user to openly state the most pressing issue. If users feel that the agent
listens and prioritizes this issue, it will increase their motivation, which is crucial for
the treatment success.
When evaluating prototypes 3 and 4, a trade-off between flexible personalization
and a structured plan emerged. On the one hand, experts and potential users emphasized
the need for autonomy, i.e., the ability to flexibly choose or change a module instead of
a fixed schedule, and its potential to increase motivation and engagement. On the other
hand, experts emphasized the importance of a plan with compulsory modules and a
fixed sequence. The fixed sequence was deemed important because some modules can
be tiring and difficult but play a crucial role in achieving treatment success and there-
fore need to be completed. Experts mentioned that a structured plan also provides users
with certainty and transparency, which makes CAs more reliable and the treatment
goals more visible. However, an inflexible plan, which does not sufficiently integrate
individual needs and preferences, could reduce motivation and user engagement and thus
lead to dropout. Consequently, the challenge is to find a compromise between personalizing
therapeutic content flexibly and maintaining a structured program, which one expert
summarized: ‘Some content should be fixed, but users should still feel that they can
decide for themselves. But not only depending on the momentary mood. If users only
choose based on the momentary mood, then there will probably not be much change.
You will have to build some feature that makes sure users are also doing the exercises
and consume the information no matter what their mood is like.’ A possible solution
emerged from combining prototypes 3 and 4: Experts suggested keeping the personal-
ization of the therapy modules in the beginning based on psychometric data and pre-
senting these results as a personalized structured program while being able to deviate
when a specific issue (like sleep problems or low energy) arises. However, when devi-
ating, it should be explicitly framed as a deviation from the personalized structured
treatment plan. In prototype 4, the CA suggested a module because it recognized sleep
problems in the users’ text messages during daily check-in. Although participants ap-
preciated that the CA was able to handle an acute problem, experts reiterated that young
people often cannot verbally express their problems. Therefore, one expert suggested
personalizing the daily check-in: ‘Maybe it is helpful to ask "how are you today" in
different ways because there are people who just never know an answer to this question.
You could work with something like a thermometer or emojis. So, the agent could first
ask "I would like to know how you are doing, in what way do you want to tell me today?"
and then the user can select a thermometer, choose an emotion from a list, or select to
write a text message.’
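A multi-modal check-in of this kind could look roughly like the following sketch; the mode names and the mood mappings are our own illustrative assumptions:

```python
def checkin_prompt():
    """Open the daily check-in by letting users pick how to answer."""
    return ("I would like to know how you are doing. "
            "In what way do you want to tell me today? "
            "(thermometer / emoji / text)")

def parse_mood(mode, answer):
    """Normalize the three input modes to a 0-10 mood score
    (the mappings are chosen purely for illustration)."""
    if mode == "thermometer":
        return max(0, min(10, int(answer)))  # e.g. "7" -> 7
    if mode == "emoji":
        scale = {"😞": 2, "😐": 5, "🙂": 8}   # hypothetical emoji-to-score map
        return scale.get(answer, 5)
    # free text: a real CA would use NLU; here a trivial keyword check
    return 3 if "bad" in answer.lower() else 6

print(parse_mood("thermometer", "7"))  # 7
print(parse_mood("emoji", "🙂"))       # 8
```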
Based on feedback from our participants, we identified several opportunities to im-
prove the prototypes. While both DPs received positive feedback, the feedback also
revealed that the automatic personalization of the character may be less promising than
initially expected. Combining this feedback with the technical challenges of making the
CA’s character adaptive, we have decided to no longer pursue automatic adaptation.
Regarding DP1, we will focus on user-controlled personalization of the mental health
CA’s character; regarding DP2, we will implement explicit and mixed-initiative
personalization. This refinement and the suggested improvements for the prototypes
serve as the entry point into the second cycle. In general, participants discussed two
themes the most: (1) autonomy, i.e., giving users control over personalization features,
and (2) transparency, i.e., being transparent about what is being personalized and how
it is done.
5 Conclusion
This paper presents insights from our ongoing transdisciplinary DSR project to design a personalized CA to treat depression in youth. Based on interviews with our target group, we corroborated the need to integrate personalization features into the design process. We proposed two DPs to guide the design of a personalized CA and instantiated the DPs in four prototypes. We evaluated the prototypes in interviews with experts and potential users. Overall, the feedback was positive, and the importance of personalization was confirmed. However, participants also expressed concerns about automated personalization performed by a CA since they were sceptical of its technical feasibility and emphasized the loss of control. In general, autonomy and transparency emerged as important themes guiding the design of personalization efforts. Finally, our participants gave valuable feedback for (1) refining and extending the proposed personalization features and (2) suggesting additional personalization features (e.g., personalized reminders), which we will incorporate into our next DSR cycle. In summary, our results show that personalized mental health CAs are a promising approach to accommodate users' symptoms and preferences. However, to comprehensively evaluate the impact of personalization, more research is needed that compares CAs with and without personalization features. Although our research follows established guidelines for conducting DSR [19, 25], we need to highlight some limitations. First, the samples for the problem awareness and the evaluation interviews were relatively small. In addition, the evaluation interviews included only nondiagnosed individuals. Consequently, for the results to be more comprehensive and generalizable, larger sample sizes are necessary. Second, we used an interactive prototype and brief prototype videos to demonstrate our proposed design. Although we argue that this approach is appropriate for a first DSR cycle, further research based on a fully functional prototype is crucial. Therefore, in our second DSR cycle, we will implement the most important personalization features in a fully functional prototype. Evaluating our DPs again in the second DSR cycle will also contribute to further refining and validating them, which is a crucial next step. With the research presented in this article, we contribute valuable design knowledge that serves as a starting point for future research on the design of personalized mental health CAs.
References
1. Jane Costello, E., Erkanli, A., Angold, A.: Is there an epidemic of child or adolescent depression? J. Child Psychol. Psychiatry. 47, 1263–1271 (2006).
2. Ellsäßer, G.: Unfälle, Gewalt, Selbstverletzung bei Kindern und Jugendlichen 2017. Ergebnisse der amtlichen Statistik zum Verletzungsgeschehen 2014. Fachbericht. (2017).
3. Greiner, W., Batram, M., Witte, J.: Kinder- und Jugendreport 2019. Gesundheitsversorgung von Kindern und Jugendlichen in Deutschland. Schwerpunkt: Ängste und Depressionen bei Schulkindern. Beiträge zur Gesundheitsökonomie und Versorgungsforschung, Bielefeld und Hamburg (2019).
4. Thapar, A., Collishaw, S., Pine, D.S., Thapar, A.K.: Depression in adolescence. Lancet. 379, 1056–1067 (2012).
5. Oud, M., de Winter, L., Vermeulen-Smit, E., Bodden, D., Nauta, M., Stone, L., van den Heuvel, M., Taher, R.A., de Graaf, I., Kendall, T., Engels, R., Stikkelbroek, Y.: Effectiveness of CBT for children and adolescents with depression: A systematic review and meta-regression analysis. Eur. Psychiatry J. Assoc. Eur. Psychiatr. 57, 33–45 (2019).
6. Cuijpers, P., Noma, H., Karyotaki, E., Vinkers, C.H., Cipriani, A., Furukawa, T.A.: A network meta-analysis of the effects of psychotherapies, pharmacotherapies and their combination in the treatment of adult depression. World Psychiatry. 19, 92–107 (2020).
7. Bundespsychotherapeutenkammer: Ein Jahr nach der Reform der Psychotherapie-Richtlinie. (2018).
8. Gulliver, A., Griffiths, K.M., Christensen, H.: Perceived barriers and facilitators to mental health help-seeking in young people: a systematic review. BMC Psychiatry. 10, 113 (2010).
9. Leech, T., Dorstyn, D., Taylor, A., Li, W.: Mental health apps for adolescents and young adults: A systematic review of randomised controlled trials. Child. Youth Serv. Rev. 127, 106073 (2021).
10. Dale, R.: The return of the chatbots. Nat. Lang. Eng. 22, 811–817 (2016).
11. Cuijpers, P., Reijnders, M., Huibers, M.J.H.: The Role of Common Factors in Psychotherapy Outcomes. Annu. Rev. Clin. Psychol. 15, 207–231 (2019).
12. Ahmad, R., Siemon, D., Gnewuch, U., Robra-Bissantz, S.: Designing Personality-Adaptive Conversational Agents for Mental Health Care. Inf. Syst. Front. (2022).
13. Fitzpatrick, K.K., Darcy, A., Vierhile, M.: Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment. Health. 4, e19 (2017).
14. Inkster, B., Sarda, S., Subramanian, V.: An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR MHealth UHealth. 6, e12106 (2018).
15. Darcy, A., Daniels, J., Salinger, D., Wicks, P., Robinson, A.: Evidence of Human-Level Bonds Established With a Digital Conversational Agent: Cross-sectional, Retrospective Observational Study. JMIR Form. Res. 5, e27868 (2021).
16. Brandtzaeg, P., Skjuve, M., Dysthe, K., Følstad, A.: When the Social Becomes Non-Human: Young People's Perception of Social Support in Chatbots. Presented at the April 3 (2021).
17. Lohaus, A. (ed.): Entwicklungspsychologie des Jugendalters. Springer-Verlag, Berlin Heidelberg (2018).
18. Kocaballi, A.B., Berkovsky, S., Quiroz, J.C., Laranjo, L., Tong, H.L., Rezazadegan, D., Briatore, A., Coiera, E.: The Personalization of Conversational Agents in Health Care: Systematic Review. J. Med. Internet Res. 21, e15360 (2019).
19. Hevner, A.R., March, S.T., Park, J., Ram, S.: Design science in information systems research. MIS Q. 75–105 (2004).
20. Lawrence, M.G., Williams, S., Nanz, P., Renn, O.: Characteristics, potentials, and challenges of transdisciplinary research. One Earth. 5, 44–61 (2022).
21. Fan, H., Poole, M.S.: What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems. J. Organ. Comput. Electron. Commer. 16, 179–202 (2006).
22. Vaidyam, A.N., Linggonegoro, D., Torous, J.: Changes to the Psychiatric Chatbot Landscape: A Systematic Review of Conversational Agents in Serious Mental Illness. Can. J. Psychiatry. 0706743720966429 (2020).
23. Blom, J.O., Monk, A.F.: Theory of Personalization of Appearance: Why Users Personalize Their PCs and Mobile Phones. Human–Computer Interact. 18, 193–228 (2003).
24. Huibers, M.J.H., Lorenzo-Luaces, L., Cuijpers, P., Kazantzis, N.: On the Road to Personalized Psychotherapy: A Research Agenda Based on Cognitive Behavior Therapy for Depression. Front. Psychiatry. 11 (2021).
25. Kuechler, B., Vaishnavi, V.: On theory development in design science research: anatomy of a research project. Eur. J. Inf. Syst. 17, 489–504 (2008).
26. Venable, J., Pries-Heje, J., Baskerville, R.: FEDS: a framework for evaluation in design science research. Eur. J. Inf. Syst. 25, 77–89 (2016).
27. Diederich, S., Brendel, A., Morana, S., Kolbe, L.: On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research. J. Assoc. Inf. Syst. 23, 96–138 (2022).
28. Radomski, A.D., Wozney, L., McGrath, P., Huguet, A., Hartling, L., Dyson, M.P., Bennett, K., Newton, A.S.: Design and Delivery Features That May Improve the Use of Internet-Based Cognitive Behavioral Therapy for Children and Adolescents With Anxiety: A Realist Literature Synthesis With a Persuasive Systems Design Perspective. J. Med. Internet Res. 21, e11128 (2019).
29. Garrido, S., Cheers, D., Boydell, K., Nguyen, Q.V., Schubert, E., Dunne, L., Meade, T.: Young People's Response to Six Smartphone Apps for Anxiety and Depression: Focus Group Study. JMIR Ment. Health. 6, e14385 (2019).