Article

The Next Generation: Chatbots in Clinical Psychology and Psychotherapy to Foster Mental Health – A Scoping Review

Authors:
  • Bendig et al., University of Ulm, Institute of Psychology and Education

Abstract

Background and Purpose: The present age of digitalization brings with it progress and new possibilities for health care in general and clinical psychology/psychotherapy in particular. Internet- and mobile-based interventions (IMIs) have often been evaluated. Chatbots are a fully automated form of IMI: automated computer programs that are able to hold, e.g., a script-based conversation with a human being. Chatbots could contribute to the extension of health care services. The aim of this review is to conceptualize the scope and to work out the current state of the art of chatbots fostering mental health. Methods: The present article is a scoping review on chatbots in clinical psychology and psychotherapy. Studies that utilized chatbots to foster mental health were included. Results: The technology of chatbots is still experimental in nature, and most studies are pilot studies. The field lacks high-quality evidence derived from randomized controlled trials. Results with regard to practicability, feasibility, and acceptance of chatbots to foster mental health are promising but not yet directly transferable to psychotherapeutic contexts. Discussion: The rapidly increasing research on chatbots in the field of clinical psychology and psychotherapy requires corrective measures. Issues like effectiveness, sustainability, and especially safety, along with subsequent tests of the technology, should be instituted as correctives in future funding programs for chatbots in clinical psychology and psychotherapy.


... Agent-based systems/conversational agents: Also known as chatbots, these are automated computer programs that are able to hold, e.g., a script-based conversation with a human being [19]. Such systems can converse and interact with human users using natural language, whether written or spoken, as well as visual language. ...
... Chatbots have become popular in health care, and specifically in mental health, over the past five years. Reviews of chatbot use in mental health care generally [7], [21] and in clinical psychology and psychotherapy specifically [19] have identified chatbots for purposes such as therapy, training, education, counseling, and screening. They have also been effective in improving symptoms of conditions such as depression, stress, and acrophobia. ...
... Using natural language processing rules, it generates appropriate textual responses to users' typed inputs through questions and answers. Despite its technical simplicity, ELIZA can generate convincing dialogues, and there is evidence of therapeutic effectiveness [19], [22]. However, there has been no significant attempt to fully automate its approach for treating mental health problems. ...
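The pattern-matching approach behind ELIZA-style scripts can be sketched in a few lines. The rules and responses below are illustrative inventions, not ELIZA's actual script: each regular-expression pattern maps to a response template, and captured text is reflected back to the user.

```python
import re

# Hypothetical ELIZA-style rules: each pattern maps to a response template.
# "%1" is filled with the text captured by the pattern's group.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel %1?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been %1?"),
    (re.compile(r".*mother.*", re.I), "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(user_input: str) -> str:
    """Return the first matching rule's response, reflecting captured text."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            reply = template
            if match.groups():
                reply = reply.replace("%1", match.group(1).rstrip(".!?"))
            return reply
    return DEFAULT

print(respond("I feel anxious about exams"))  # Why do you feel anxious about exams?
print(respond("Nothing much"))                # Please go on.
```

The fallback response illustrates why such scripts can feel conversational despite their simplicity: any unmatched input still yields a plausible, open-ended prompt.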
Article
Full-text available
Mental health disorders affect people's everyday lives globally and are growing rapidly. Effective detection, diagnosis, and treatment of mental health disorders can draw on increasingly substantial amounts of health data available from diverse sources. However, there are many challenges in developing effective treatment models for these conditions. The challenges are further complicated by the volume, heterogeneity, interoperability, propagation, and complexity of data, especially with the emergence of big data. Knowledge management and knowledge-based systems have significantly impacted healthcare quality and delivery, especially patient self-management. In this work, we review knowledge-based applications for mental health self-management. The research efforts are synthesized, discussing shortcomings and future research directions.
... This human-computer interaction technology is believed to be intelligent enough to comprehend the conversation between a patient and a chatbot therapist based on machine learning (ML) algorithms [4,10]. Applications could help prevent and treat behavioral and psychiatric issues and prevent relapses [11]. According to Bendig et al. [11], AI chatbots, also known as conversational or relational agents, are machine conversation systems that interact with human users using various AI technologies. ...
... Applications could help prevent and treat behavioral and psychiatric issues and prevent relapses [11]. According to Bendig et al. [11], AI chatbots, also known as conversational or relational agents, are machine conversation systems that interact with human users using various AI technologies. Responses can be generated using a rule-based model (predefined rules or a decision tree), natural language processing (NLP), or ML through text-based or speech-enabled conversations [4,7]. ...
... Responses can be generated using a rule-based model (predefined rules or a decision tree), natural language processing (NLP), or ML through text-based or speech-enabled conversations [4,7]. AI chatbots try to talk like humans, including the emotional, social, and relational parts of natural conversation [11]. They do this to imitate a therapeutic conversational style that can help convey therapeutic content to users and mirror therapeutic processes [7,12]. ...
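The rule-based (decision-tree) response model mentioned above can be sketched as a tree of prompts with keyword-routed branches. The node names, prompts, and keywords here are invented for illustration, not taken from any particular system:

```python
# Hypothetical scripted dialogue tree: each node maps to a prompt and a
# dict of keyword -> next-node branches (an empty dict ends the script).
TREE = {
    "start": ("How are you feeling today?",
              {"good": "good_mood", "bad": "low_mood"}),
    "good_mood": ("Great! Would you like to try a short gratitude exercise?", {}),
    "low_mood": ("I'm sorry to hear that. Would you like a breathing exercise?", {}),
}

def step(node: str, user_input: str) -> str:
    """Advance to the branch whose keyword appears in the user's reply."""
    _, branches = TREE[node]
    for keyword, next_node in branches.items():
        if keyword in user_input.lower():
            return next_node
    return node  # no keyword matched: stay on the node and re-prompt

node = step("start", "Pretty bad, honestly")
print(TREE[node][0])  # I'm sorry to hear that. Would you like a breathing exercise?
```

This kind of predefined routing is what makes rule-based chatbots predictable and safe to script, at the cost of the flexibility that NLP- or ML-based response generation provides.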
Article
Full-text available
Background: Artificial intelligence (AI)–based psychotherapeutic interventions may bring a new and viable approach to expanding psychiatric care. However, evidence of their effectiveness remains scarce. We evaluated the efficacy of AI-based psychotherapeutic interventions on depressive, anxiety, and stress symptoms at postintervention and follow-up assessments. Methods: A three-step comprehensive search of nine electronic databases (PubMed, Embase, CINAHL, Cochrane Library, Scopus, IEEE Xplore, Web of Science, PsycINFO, and ProQuest Dissertations and Theses) was performed. Results: Thirty randomized controlled trials (RCTs) in 31 publications involving 6100 participants from nine countries were included. The majority (79.1%) of trials with intention-to-treat analysis, but less than half (48.6%) of trials with per-protocol analysis, were graded as low risk. Meta-analyses showed that the interventions significantly reduced depressive symptoms at the postintervention assessment (t = −4.40, p = 0.001), with a medium effect size (g = −0.54, 95% CI: −0.79 to −0.29), and at the 6–12-month follow-up assessment (t = −3.14, p < 0.016), with a small effect size (g = −0.23, 95% CI: −0.40 to −0.06), in comparison with comparators. Our subgroup analyses revealed that depressed participants showed a significantly larger effect size for reducing depressive symptoms than participants with stress and other conditions. At the postintervention and follow-up assessments, AI-based psychotherapeutic interventions did not significantly alter anxiety, stress, or the total scores of depressive, anxiety, and stress symptoms in comparison to comparators. The random-effects univariate meta-regression did not identify any significant covariates for depressive and anxiety symptoms at postintervention. The certainty of evidence ranged from moderate to very low. Conclusions: AI-based psychotherapeutic interventions can be used in addition to usual treatments for reducing depressive symptoms.
Well-designed RCTs with long-term follow-up data are warranted. Trial Registration: CRD42022330228
... Emerging disciplines such as neuro-symbolic AI, retrieval-based AI, and affective computing [5] can enhance the reasoning capabilities of ED chatbots and minimize unintended, unethical harm. To address the risks around data protection, it is crucial to have strong regulatory frameworks [19], inform users about how their data are handled, and develop procedures for storing and/or destroying user data once a chatbot is no longer in use [5]. In addition, an exploratory idea that may enhance user protection is establishing labels or certifications to differentiate secure digital interventions from those that are non-therapeutic or low in security [19]. ...
... To address the risks around data protection, it is crucial to have strong regulatory frameworks [19], inform users about how their data are handled, and develop procedures for storing and/or destroying user data once a chatbot is no longer in use [5]. In addition, an exploratory idea that may enhance user protection is establishing labels or certifications to differentiate secure digital interventions from those that are non-therapeutic or low in security [19]. Designers should anticipate and manage risk throughout a chatbot's lifespan to prevent unethical behaviour [20]. ...
... For instance, these chatbots can be valuable tools outside of the clinic, helping to gather continuous assessments from patients and aiding with therapeutic homework [7,23]. Researchers posit that conversational agents could carry out multiple aspects of psychotherapy for therapists, saving time and enabling greater productivity [19]. Given the limited capabilities of ED chatbots, they are being proposed as potential supplements, not replacements, to traditional medical action [7,19]. ...
Article
Full-text available
Eating disorders (ED) have the highest case mortality rate among all mental health issues, and their prevalence has increased significantly in recent years. However, fewer than 20% of sufferers receive treatment. Chatbots, or conversational agents, that target EDs are a promising way to bridge this gap and supplement existing support by being cost-effective, accessible, and non-human. The technical compositions behind these chatbots vary, each with its own strengths and weaknesses. Generally, the field of ED chatbots is emerging, and only a handful of platforms have come out, including one designed to replace ED helpline support and one to support youth body image. While many ethical and practical challenges affect ED chatbots, there are ways to make progress, and collaboration between developers, clinicians, ethicists, and the public will be vital in this pursuit. Integrating ED chatbots into healthcare and social media platforms is a future direction that could significantly advance ED healing.
... The conceptual articles, however, included: exploration of empathetic robots based on the theory and processes of empathy observed in humans; a summary of the preceding twelve months of AI healthcare research across several clinical specialties, highlighting present capabilities, deficiencies, and challenges associated with this innovation (Loh, 2018); an exploration of the competencies of three chatbots for mental health assistance (Kretzschmar et al., 2019); a focus on three of the most common ways AI is used in psychological health: natural language processing of medical texts and social media data, personal sensing or digital phenotyping, and AI chatbots (D'Alfonso, 2020); a case-based method drawing on the global health care sector for attaining trustworthy AI (Baerøe et al., 2020b); a sentiment analysis based on video-supported and voice-based emotion identification (Devaram, 2020); a scoping review of AI chatbots in healthcare psychology and psychotherapy (Bendig et al., 2019); and in-depth interviews with 25 medical practitioners (Liu et al., 2021). Ortony-Clore-Collins theory: The idea proposed by Ortony, Clore, and Collins posits that emotions arise from specific cognitions and perceptions. ...
... The cluster defined conversational agents, such as chatbots and digital/virtual assistants, as technological tools that facilitate user interactions using natural language. Chatbots equipped with natural language processing can comprehend the patterns in patient conversations and assess the underlying mood of the message by using contextual cues from voice, video, or text input (Fitzpatrick et al., 2017; Bendig et al., 2019; Koulouri et al., 2022). A study conducted by Koulouri et al. (2022) involved interviews with counsellors who specialize in working with young adults. ...
... In regard to future research avenues, the cluster illustrates that it would be beneficial for researchers to create a survey that can collect detailed information on many aspects of mental health and mental health technology (Koulouri et al., 2022). Additional research needs to be conducted to enhance the psychotherapeutic content of chatbots and to assess their effectiveness via clinical trials (Bendig et al., 2019, 2022). Needs assessments might significantly improve the study literature on the creation of chatbots for treatment. ...
Article
Full-text available
Purpose The current research elucidates the role of empathy in the design of artificial intelligence (AI) systems in the healthcare context, through a structured review, analysis, and synthesis of academic literature published between 1990 and 2024. Design/methodology/approach This study aims to advance the domain of empathy in AI by adopting a theory-constructs-context-method approach using the PRISMA 2020 framework. Findings The study presents a state-of-the-art review of the literature on the connections between empathy and AI, identifying four clusters that show the emerging trajectories of the field in the healthcare setting. Originality/value Despite a rise in empirical research, the potential pathways for enhancing AI accountability by incorporating empathy remain unclear. The research contributes to the existing literature on AI and empathy in the healthcare sector by carving out four distinct clusters depicting future research avenues.
... [5] Despite the need for further high-quality research, initial studies on mental health chatbots have demonstrated positive outcomes, including favorable acceptance, satisfaction, feasibility, and low risk of harm. [6,7] Research suggests that chatbots have the potential to benefit patients with conditions such as anxiety, distress, and depression in terms of diagnosis, therapy, and adherence. [8,9] However, the existing body of research has some limitations, including small sample sizes and short durations, which weaken the generalizability of the findings. ...
... While chatbots pose a low risk of harm, there are potential risks of misinterpretation of user responses, technical errors leading to harmful advice, repetitive interactions making it feel less human-like, and overreliance on chatbots. [6,7,13,21] Legal frameworks that protect chatbot users, particularly in terms of data privacy, are crucial. ...
... This result is important because global mental health research indicates the effectiveness of chatbots in preventing and reducing symptoms of mental health conditions, including psychological distress, anxiety, and depression. [6,7,9,22,23] According to therapists, chatbots have the potential to enhance patient engagement in therapy goals and assist in completing assigned tasks, leading to faster progress. [24] Chatbots have been employed across multiple settings to help in diagnosing and screening patients, thereby identifying at-risk individuals and reducing the workload on medical workers. ...
Article
Background: Sudan’s political and economic challenges have increased mental health issues among university students, but access to mental healthcare is limited. Digital health interventions, such as chatbots, could provide a potential solution to inadequate care. This study aimed to evaluate the level of acceptance of a mental health chatbot prototype among university students in Khartoum, Sudan. Materials and Methods: This qualitative study investigated the perspectives of university students regarding a mental health chatbot prototype designed specifically for this research and deployed on Telegram. Twenty participants aged 18+, owning smartphones, and not receiving mental health treatment, tested the prototype. Data was collected through individual, face-to-face, in-depth, semi-structured interviews. The data was analysed using both deductive and inductive content analysis methods. Results: Most of the participants acknowledged the importance of mental health but felt that it was an overlooked issue in Sudan. Participants considered the chatbot to be a unique and innovative concept, offering valuable features. They viewed the chatbot as a user-friendly and accessible tool, with advantages such as convenience, anonymity, and accessibility, and potential cost and time savings. However, most participants agreed that the chatbot has many limitations and should not be seen as a substitute for seeing a doctor or therapist. Conclusion: The mental health chatbot was viewed positively by participants in the study. Chatbots can be promising tools for providing accessible and confidential mental health support for university students in countries like Sudan. Long-term studies are required to assess chatbots’ mental health benefits and risks. Keywords: mental health, chatbots, university students, Sudan, young adults
... This is achieved through the use of AI-based services such as Yoga, which employs motivational interviewing and micro-actions to help users improve their mental resilience skills and feel better (Bendig et al. 2022). Furthermore, they have the potential to provide additional therapeutic interventions (Bendig et al. 2022). ...
Article
Full-text available
With the development of artificial intelligence technologies, changes have begun to appear in psychotherapy. Although artificial intelligence does not currently have a major impact on the therapy field, it raises major questions about the nature of therapy and the value of the future relationship between people and therapists. Understanding how artificial intelligence can be included in the therapy process, foreseeing the future, and being proactive are gaining importance. This article examines current artificial intelligence applications used in psychotherapy through a literature review. Artificial intelligence can be used to increase the effectiveness of psychotherapy. However, it should not be forgotten that excessive reliance on artificial intelligence can overshadow the human aspect of psychotherapy, and that the human factor remains important. Although there are still uncertainties about how the profession will be affected by the use of artificial intelligence in psychotherapy and how it will be incorporated into therapy processes, it is envisaged that artificial intelligence can play a versatile role in psychotherapy.
... These chatbot psychologists leverage AI technologies and CBT techniques to provide accessible and effective mental health support to individuals experiencing various psychological challenges [24]. Additionally, using chatbots in clinical psychology and psychotherapy has shown promise in fostering mental health and well-being [25]. Chatbot interventions have been effective in combating depression and promoting mental health through innovative technological approaches [25]. ...
... Additionally, using chatbots in clinical psychology and psychotherapy has shown promise in fostering mental health and well-being [25]. Chatbot interventions have been effective in combating depression and promoting mental health through innovative technological approaches [25]. Neuropsychological aspects have been increasingly integrated into health promotion strategies to enhance outcomes in diverse populations [26]. ...
Article
Full-text available
Background and Objectives: This systematic review examines the integration of gamified health promotion strategies in school settings, with a focus on their potential to positively influence health behaviors and promote well-being among adolescents. This study explores the incorporation of cognitive behavioral therapy (CBT), artificial intelligence, and neuropsychological principles in gamified interventions, aiming to enhance engagement and effectiveness. Materials and Methods: A narrative synthesis of 56 studies, following PRISMA guidelines, underscores the significant impact of these gamified interventions on mental health outcomes, emphasizing reductions in anxiety, depression, and burnout while improving coping skills and lifestyle habits. Key areas (mental health outcomes, emotional regulation, cognitive flexibility, and adherence mechanisms) are explored through quantitative and qualitative syntheses to underscore intervention effectiveness and design principles. Results: This review highlights the high-quality evidence supporting the use of gamification in educational settings and calls for further research to optimize design elements and address implementation barriers. The findings propose that well-designed gamified health interventions can effectively engage students, promote healthy behaviors, and improve mental well-being while acknowledging the need for further studies to explore underlying mechanisms and long-term effects. Conclusions: Gamified health interventions that embed CBT and neuropsychological principles are promising for promoting the mental well-being of schoolchildren. Although the evidence indicates that they are effective in improving psychological and behavioral outcomes, further research is needed to optimize design features and overcome implementation challenges to ensure wider and more sustainable application.
... Fig. 1 shows a list of potential applications for LLMs in the daily practice of mental health researchers and practitioners. We draw some inspiration from the recent review of Bendig et al. [24], which covers existing applications of chatbots, and extend it with tasks that can be performed by LLMs, going beyond it to showcase a greater variety of possible applications. We broadly distinguish between therapist-facing and patient-facing tasks. ...
... While their recent introduction has not allowed for large-scale studies, the impressive conversational capabilities of LLMs point to a future where they will be utilised as autonomous 'psychotherapists', whether on the initiative of the patients themselves or after consultation with their therapist. Nevertheless, the recent systematic review of Li et al. [18] shows a significant reduction in symptoms [24]. Psychologists may use an LLM to offload mundane tasks, such as note or report summarisation, or to obtain feedback regarding a particular therapy plan or even a single therapy session. ...
Preprint
Digital technologies have long been explored as a complement to standard procedure in mental health research and practice, ranging from the management of electronic health records to app-based interventions. The recent emergence of large language models (LLMs), both proprietary and open-source ones, represents a major new opportunity on that front. Yet there is still a divide between the community developing LLMs and the one which may benefit from them, thus hindering the beneficial translation of the technology into clinical use. This divide largely stems from the lack of a common language and understanding regarding the technology's inner workings, capabilities, and risks. Our narrative review attempts to bridge this gap by providing intuitive explanations behind the basic concepts related to contemporary LLMs.
... The use of AI chatbots in various real-life applications is rapidly increasing due to their ability to reduce reliance on humans, lower costs, improve efficiency, and streamline service experiences (Adamopoulou & Moussiades, 2020;Bendig et al., 2019;Chan & Li, 2023;Greer et al., 2019;He et al., 2022;Liu et al., 2022;Omarov et al., 2023;Shah et al., 2017;Tamayo et al., 2020;Tanana et al., 2019;Xu & Zhuang, 2022). ...
... Various studies have demonstrated this capability well (Brown et al., 2020;Wei et al., 2021;Wu et al., 2021). GPT-based chatbots, for example ChatGPT, can understand natural language inputs and produce responses that are contextually appropriate and coherent, thereby improving interactivity and efficiency (Bendig et al., 2019;Nath et al., 2021;Shah et al., 2017;Xu & Zhuang, 2022). ...
Article
Full-text available
Background Researchers are leading the development of AI designed to conduct interviews. These developments imply that AI's role is expanding from mere data analysis to becoming a tool for social researchers to interact with and comprehend their subjects. Yet, academic discussions have not addressed the potential impacts of AI on narrative interviews. In narrative interviews, the method of collecting data is a collaborative effort. The interviewer also contributes to exploring and shaping the interviewee's story. A compelling narrative interviewer has to display critical skills, such as maintaining a specific questioning order, showing empathy, and helping participants delve into and build their own stories. Methods This case study configured an OpenAI Assistant on WhatsApp to conduct narrative interviews with a human participant. The participant shared the same story in two distinct conversations: first, following a standard cycle and answering questions earnestly, and second, deliberately sidetracking the assistant from the main interview path as instructed by the researcher, to test how well the metrics could reflect the deliberate differences between different conversations. The AI's performance was evaluated through conversation analysis and specific narrative indicators, focusing on its adherence to the interview structure, empathy, narrative coherence, complexity, and support for human participant agency. The study sought to answer these questions: 1) How can the proposed metrics help us, as social researchers without a technical background, understand the quality of the AI-driven interviews in this study? 2) What do these findings contribute to our discussion on using AI in narrative interviews for social research? 3) What further research could these results inspire? 
Results The findings show to what extent the AI maintained structure and adaptability in conversations, illustrating its potential to support personalized, flexible narrative interviews based on specific needs. Conclusions These results suggest that social researchers without a technical background can use observation-based metrics to gauge how well an AI assistant conducts narrative interviews. They also prompt reflection on AI's role in narrative interviews and spark further research.
... However, a recent review has pointed out that the evidence supporting the effectiveness of chatbots in improving conditions such as depression, distress, and stress remains weak (Abd-Alrazaq et al., 2020). Furthermore, a considerable amount of chatbot technology remains in the development or experimental stage, with a notable presence of pilot studies within the research field (Bendig et al., 2022). As such, it is imperative for ongoing research to thoroughly evaluate and summarize the evidence concerning their effectiveness and acceptability (Abd-Alrazaq et al., 2019). ...
... These findings tentatively support the idea of developing AI chatbots that can offer initial help to clients experiencing mild to moderate psychological problems. However, this study remains a pilot study as chatbot technology in mental health care is still in the developmental or experimental phase (Bendig et al., 2022). ...
... Automatic question generation (AQG) is a subdomain of Natural Language Processing (NLP) that aims to automatically generate relevant and meaningful questions based on a given text and an answer. Over the last decade, AQG has been utilized in a variety of applications such as education [1], health [2], psychology [3], chatbots [4], and customer service [5]. In all of these applications, certain types of information are questioned, such as location, time, symptoms, and feedback. ...
... Previous analyses have shown that BLEU fails to detect semantic similarity because it relies on the exact occurrence of words and excludes synonyms [49]. Figure 7 shows the distribution of BLEU n-gram scores (n ∈ {1, 2, 3, 4}) between the generated questions and reference questions crafted by humans. It demonstrates slightly higher 1-gram and 2-gram lexical overlap with the reference questions for flan-T5 compared to GPT, meaning the fine-tuned flan-T5 models generated questions with a higher number of exactly matching words than GPT did. ...
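The lexical-overlap behaviour described above can be illustrated with the modified n-gram precision at the core of BLEU. This is a simplified single-reference sketch (full BLEU additionally combines several n-gram orders geometrically and applies a brevity penalty); the example sentences are invented:

```python
from collections import Counter

def ngram_precision(candidate: list[str], reference: list[str], n: int) -> float:
    """Modified n-gram precision: clipped candidate n-gram counts / total."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    # Each candidate n-gram is credited at most as often as it occurs in the reference.
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

cand = "what do you think about this topic".split()
ref = "what do you feel about this topic".split()
print(ngram_precision(cand, ref, 1))  # 6 of 7 unigrams overlap
print(ngram_precision(cand, ref, 2))  # 4 of 6 bigrams overlap
```

Swapping a single word ("think" vs. "feel") costs one unigram but two bigrams, which is why higher-order overlap drops faster, and why exact-match metrics like this miss synonymy entirely.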
Article
Full-text available
This paper presents a comprehensive study on generating subjective inquiries for news media posts to empower public engagement with trending media topics. While previous studies primarily focused on factual and objective questions with explicit or implicit answers in the text, this research concentrates on automatically generating subjective questions to directly elicit personal preference from individuals based on a given text. The research methodology involves the application of fine-tuning techniques across multiple iterations of flan-T5 and GPT3 architectures for the task of Seq2Seq generation. This approach is meticulously evaluated using a custom dataset comprising 40,000 news articles along with human-generated questions. Furthermore, a comparative analysis is conducted using zero-shot prompting via ChatGPT, juxtaposing the performance of fine-tuned models against a significantly larger language model. The study grapples with the inherent challenges tied to evaluating opinion-based question generation due to its subjective nature and the inherent uncertainty in determining answers. A thorough investigation and comparison of two transformer architectures are undertaken utilizing conventional lexical overlap metrics such as BLEU, ROUGE, and METEOR, alongside semantic similarity metrics encompassing BERTScore, BLEURT, and answerability scores such as QAScore, and RQUGE. The findings underscore the marked superiority of the flan-T5 model over GPT3, substantiated not only by quantitative metrics but also through human evaluations. The paper introduces Opinerium based on the open-source flan-T5-Large model, identified as the pacesetter in generating subjective questions. Additionally, we challenge and assess all aforementioned metrics thoroughly by investigating the pairwise Spearman correlation analysis to identify robust metrics.
... In the literature on therapy robots, there are robots with a physical form embodied with artificial intelligence and chatbots in the form of artificial intelligence-supported software and applications. It is known that the use of chatbots has increased in recent years, and they have widespread use in the field of mental health (Abd-Alrazaq et al. 2019, Bendig et al. 2019. Chatbots are used in mental health interventions for many conditions (e.g., depression, autism, anxiety) based on various counseling theories/approaches, particularly Cognitive Behavioral Therapy (Abd-Alrazaq et al. 2019). ...
... Studies indicated that Woebot achieved effective results on depression and anxiety symptoms (Fitzpatrick et al. 2017). Bendig et al. (2019) conducted a review to examine the current status of chatbots. Although they demonstrated the effectiveness of chatbots for outcomes such as well-being, depression, and stress, the studies in the relevant literature are mostly pilot studies. ...
Article
Full-text available
Robots are becoming increasingly common in many areas of human life as technology advances. Their areas of use range widely, from entertainment to psychotherapy. In addition to their role in facilitating human life, their use in the health field has recently become quite remarkable. In this study, interactive robots are evaluated in general and their use in the mental health field is discussed on a large scale. Accordingly, the primary purpose of this study is to examine the need for the development of interactive and therapy robots, their areas of use, and studies on their effectiveness, as well as the therapy robots generally accepted in the relevant literature. The results of the examination show that interactive robots are classified into six groups: social, entertainment, educational, rehabilitation, sex, and therapy robots. In the related literature, Eliza, Woebot, Youper, Wysa, Simsensei Kiosk, Paro, NeCoRo, Kaspar, Bandit, and Pepper have generally been accepted as therapy robots. The results of the studies demonstrate the effectiveness and usage of interactive therapy robots for different groups and needs, especially for disadvantaged individuals. On the other hand, more research on the effectiveness of robots is still needed. Considering the effects on mental health and quality of life, the use of robots in therapy is important, and its widespread adoption is expected to have a significant positive effect in the field.
... The potential of AI chatbots in mental health interventions has been further supported by systematic reviews, such as the one conducted by Abd-Alrazaq et al., which highlighted the effectiveness and safety of using chatbots to improve mental health outcomes (Abd-Alrazaq et al., 2020). Moreover, Bendig et al. discussed the emerging role of chatbots in clinical psychology and psychotherapy, indicating their potential to foster mental health through innovative therapeutic approaches (Bendig et al., 2022). These studies collectively underscore the importance of integrating AI technologies into mental health care while also recognizing the limitations and the need for ongoing professional support to ensure effective outcomes (Daley et al., 2020). ...
Article
This study aims to investigate the potential benefits and efficacy of incorporating Artificial Intelligence into mental health interventions for individuals battling social anxiety. The objective is to evaluate how different AI interventions influence the reduction of symptoms, social functioning, and overall quality of life. The research will examine the feasibility of AI as a means of delivering Cognitive-Behavioral Therapy (CBT) and compare its effectiveness with established therapeutic approaches. The methodology involves a systematic literature review and comprehensive database searches. The findings suggest that AI therapy chatbots, which use machine learning to deliver individualized interventions, present an accessible and scalable alternative for mental health assistance, with the potential to relieve anxiety and depression symptoms. Virtual Reality Exposure Therapy (VRET) efficiently tackles social anxiety, although further study on AI's effectiveness in this setting is needed.
... The chatbot sent a notification message twice a day to collect in situ information on participants' well-being or discomfort. During such interactions, our chatbot aimed to identify underlying emotions and suggested exercises based on positive psychology and cognitive-behavioral therapy, selected from psychological manuals and research articles reporting their effectiveness (Bendig et al., 2019; Gabrielli et al., 2021). ...
Article
Full-text available
Background Digital technologies, including smartphones, hold great promise for expanding mental health services and improving access to care. Digital phenotyping, which involves the collection of behavioral and physiological data using smartphones, offers a novel way to understand and monitor mental health. This study examines the feasibility of a psychological well-being program using a Telegram-integrated chatbot for digital phenotyping. Methods A one-month randomized non-clinical trial was conducted with 81 young adults aged 18–35 from Italy and the canton of Ticino, a region in southern Switzerland. Participants were randomized to an experimental group that interacted with a chatbot, or to a control group that received general information on psychological well-being. The chatbot collected real-time data on participants' well-being, such as user-chatbot interactions, responses to exercises, and emotional and behavioral metrics. A clustering algorithm created a user profile and content recommendation system to provide personalized exercises based on users' responses. Results Four distinct clusters of participants emerged, based on factors such as online alerts, social media use, insomnia, attention, and energy levels. Participants in the experimental group reported improvements in well-being and found the personalized exercises recommended by the clustering algorithm useful. Conclusion The study demonstrates the feasibility of a digital phenotyping-based well-being program using a chatbot. Despite limitations such as a small sample size and short study duration, the findings suggest that digital phenotyping and personalized recommendation systems could improve mental health care. Future research should include larger samples and longer follow-up periods to validate these findings and explore clinical applications.
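The clustering-based recommendation step can be sketched as a nearest-centroid assignment: a user's feature vector is matched to the closest cluster, and the cluster index selects the recommended content. The feature names, centroid values, and exercise labels below are hypothetical stand-ins; the paper's actual algorithm and features are not reproduced here.

```python
def assign_cluster(user, centroids):
    """Return the index of the nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(user, centroids[i]))

# Hypothetical centroids over (social_media_use, insomnia, energy) features,
# each scaled to 0..1 — illustrative only.
CENTROIDS = [
    (0.8, 0.7, 0.3),  # heavy media use, poor sleep, low energy
    (0.2, 0.2, 0.8),  # light media use, good sleep, high energy
]
EXERCISES = {
    0: "sleep-hygiene and digital-detox exercise",
    1: "behavioural-activation exercise",
}

def recommend(user_features):
    return EXERCISES[assign_cluster(user_features, CENTROIDS)]
```

In practice, the centroids would come from clustering the collected digital-phenotyping data rather than being fixed by hand, and assignments would be refreshed as new responses arrive.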
... Through interactive contact, users develop a conversation with a technical system, gaining access to the functions and data of the application itself. Chatbots are used in customer communication in e-shopping, teaching, mental health promotion, the game industry, and especially the financial sector [2]. It is worth noting that chatbot design aims to facilitate humans in their work and their interaction with computers, using natural language, without undermining the human factor [41]. ...
Article
Full-text available
Background/Objectives: The evolution of digital technology enhances the broadening of a person's intellectual growth. Research points out that implementing innovative applications of the digital world improves human social, cognitive, and metacognitive behavior. Artificial intelligence chatbots are yet another innovative human-made construct. These are forms of software that simulate human conversation, understand and process user input, and provide personalized responses. Executive function includes a set of higher mental processes necessary for formulating, planning, and achieving a goal. The present study aims to investigate executive function reinforcement through artificial intelligence chatbots, outlining potentials, limitations, and future research suggestions. Specifically, the study examined three research questions: the use of conversational chatbots in executive functioning training, their impact on executive-cognitive skills, and the duration of any improvements. Methods: The assessment of the existing literature was implemented using the systematic review method, according to the PRISMA 2020 principles. The avalanche search method was employed to conduct a source search in the following databases: Scopus, Web of Science, PubMed, and, complementarily, Google Scholar. This systematic review included studies from 2021 to the present using experimental, observational, or mixed methods. It included studies using AI-based chatbots or conversational agents to support executive functions and related outcomes, such as anxiety, stress, depression, memory, attention, cognitive load, and behavioral changes. In addition, this study included both general populations and those with specific neurological conditions; all studies were peer-reviewed, written in English, and available in full text.
However, the study excluded studies published before 2021, literature reviews, systematic reviews, non-AI-based chatbots or conversational agents, studies not targeting the range of executive skills and abilities, studies not written in English, and studies without open access. The criteria aligned with the study objectives, ensuring a focus on AI chatbots and the impact of conversational agents on executive function. The initial collection totaled n = 115 articles; however, the eligibility requirements led to the final selection of n = 10 studies. Results: The findings of the studies suggested positive effects of using AI chatbots to enhance and improve executive skills. However, several limitations were identified, making it difficult to generalize and reproduce their effects. Conclusions: AI chatbots are an innovative artificial intelligence tool that can function as a digital assistant for learning and expanding executive skills, contributing to the cognitive, metacognitive, and social development of the individual. However, their use in executive skills training is at an early stage. The findings highlighted the need for a unified reference framework and for future studies with better designs, diverse populations, larger sample sizes, and longitudinal observation of the long-term effects of their use.
... In their opinion, the sector's competitive advantage was supported by the combination of a low level of conflict, open communication, organizational flexibility, and the integration of IT planning within each company. According to Schwab (20), we are living in the fourth industrial revolution (IR4.0), an era characterized by breakthroughs in emerging technologies in areas such as robotics, artificial intelligence (AI), nanotechnology, quantum computing, the Internet of Things (IoT), fifth-generation wireless technologies, and self-driving vehicles, all of which will impact how we create and distribute value and change the way we live, work, and interact (21). ...
Article
This paper seeks to examine the theoretical paradox presented by the simultaneity of the concepts of sustainable tourism and smart tourism in the era of digital transformation. Drawing on the theoretical concepts of industrial innovation and strategic necessity, it argues that new information technology tools do not necessarily make tourism's smart performance more efficient, but they inevitably broaden the conceptual framework of sustainability. Digital technology in tourism is expanding to provide convenience and to turn the unreachable into reality, while increasing productivity; promoting sustainability and improving quality of life have also played a role.
... Although CBT has been shown to be effective, its accessibility is limited since some individuals are embarrassed to enter a therapist's office, admit they need treatment, or pay for its cost. In response to these limitations, in 1966, the first chatbot psychotherapist, ELIZA, was created using pattern matching (Bendig et al., 2022) and template-based answers to implement a therapy approach called Socratic questioning. Today, Woebot (Woebot Health, Inc.), Youper (Youper, Inc.), Wysa (Wysa, Ltd), Replika (Luka, Inc.), Unmind (Unmind, Inc.), and Shim (Shim, Inc.) are a few examples of the health-focused chatbots publicly available as mobile app features (Xu et al., 2021). ...
Article
Full-text available
According to the World Health Organization, approximately 280 million people have depression. However, mental health diagnostic and intervention tools remain largely inaccessible, unaffordable, and stigmatized. To address this global need, a novel, accessible, and nonintrusive diagnostic and assistive system was created through the development of two machine learning (ML) models and a web app. Psych2Go uses two ML models to nonintrusively detect depression (model 1) and emotion (model 2). Both achieve their respective goals by analyzing prosodic features in speech rather than the content. The first ML model achieves a depression detection accuracy of 75.54% and the second achieves an emotion detection accuracy of 77.60%. The assistive system is powered by the GPT-3.5 Turbo API. The API, using a custom prompt template, tailors responses and therapy techniques to the user-provided demographic information (name, gender, age) and the detected emotion from the second ML model. The prompt enables the GPT-3.5 Turbo API to apply cognitive behavioral therapy principles, identifying and addressing depression-related negative thoughts. Adhering to strict privacy standards, the chatbot eschews storage of personal conversations, focusing instead on session-specific data (the user’s time of a session, depression score, emotion, name, age, and gender). Psych2Go was deemed successful in providing privacy-focused, personalized emotional support and depression and emotion detection. The chatbot’s unprecedented privacy-focused approach and personalization allows for it to act as an aid for therapists to monitor progress and a support system for the user between therapy sessions.
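The personalization step, in which demographic information and the detected emotion are folded into a custom prompt before calling the GPT-3.5 Turbo API, can be sketched as prompt construction alone (no API call is made here). The wording, score threshold, and field names below are assumptions for illustration; the actual Psych2Go template is not public.

```python
def build_system_prompt(name, age, gender, emotion, depression_score):
    """Compose a CBT-oriented system prompt from per-session user metadata.

    The phrasing and the 0.5 threshold are illustrative assumptions,
    not the template used by Psych2Go.
    """
    tone = ("gently challenge negative automatic thoughts"
            if depression_score >= 0.5
            else "reinforce helpful thinking patterns")
    return (
        f"You are a supportive CBT-informed assistant talking to {name}, "
        f"a {age}-year-old {gender} user whose detected emotion is {emotion}. "
        f"Apply cognitive behavioral therapy principles and {tone}. "
        "Do not store or reference past conversations."
    )

# Session-specific data only (no conversation history is retained):
prompt = build_system_prompt("Alex", 21, "male", "sadness", 0.76)
```

The resulting string would be passed as the system message of a chat-completion request, so each session's responses are tailored without persisting personal conversations.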
... For instance, Vaidyam et al. (2019) reviewed the growing field of conversational agents in psychiatry, noting their potential for providing on-demand support and psychoeducation. Similarly, Bendig et al. (2022) found that chatbots can effectively deliver cognitive-behavioral therapy techniques, which appears consistent with ChatGPT's ability to remind the user of therapeutic strategies. ...
... Technology-enabled assisting approaches, such as chatbots and dialogue systems, have been proposed as ways to support and augment therapeutic labor to scale and improve mental health support (Althoff et al., 2016;Bendig et al., 2019;Caceres Najarro et al., 2023;Cameron et al., 2017;Hua et al., 2024;Jin et al., 2023;Lai et al., 2023;Lee et al., 2020;Li et al., 2023;Maddela et al., 2023;Malgaroli et al., 2023;Vaidyam et al., 2019;van Heerden et al., 2023). Such approaches may be particularly useful in cultures in which seeking professional help is still stigmatized or otherwise inaccessible. ...
Preprint
Full-text available
We introduce a general-purpose, human-in-the-loop dual dialogue system to support mental health care professionals. The system, co-designed with care providers, is conceptualized to assist them in interacting with care seekers rather than functioning as a fully automated dialogue system solution. The AI assistant within the system reduces the cognitive load of mental health care providers by proposing responses, analyzing conversations to extract pertinent themes, summarizing dialogues, and recommending localized relevant content and internet-based cognitive behavioral therapy exercises. These functionalities are achieved through a multi-agent system design, where each specialized, supportive agent is characterized by a large language model. In evaluating the multi-agent system, we focused specifically on the proposal of responses to emotionally distressed care seekers. We found that the proposed responses matched a reasonable human quality in demonstrating empathy, showing its appropriateness for augmenting the work of mental health care providers.
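The multi-agent, human-in-the-loop design described above can be sketched as a simple fan-out: each specialised agent produces a suggestion for the same care-seeker message, and a human provider reviews the suggestions rather than having them sent automatically. The stub functions below stand in for LLM-backed agents; all names and outputs are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Suggestion:
    agent: str
    text: str

def run_assistants(message: str,
                   agents: Dict[str, Callable[[str], str]]) -> List[Suggestion]:
    """Fan a care-seeker message out to specialised agents.

    The returned suggestions are proposals for a human provider to review,
    edit, or discard — not messages delivered to the care seeker."""
    return [Suggestion(name, fn(message)) for name, fn in agents.items()]

# Stub agents standing in for LLM-backed specialists (hypothetical):
agents = {
    "responder": lambda m: f"Proposed reply to: {m!r}",
    "themes": lambda m: "themes: distress, isolation",
    "summary": lambda m: f"summary ({len(m.split())} words)",
}
suggestions = run_assistants("I feel alone lately", agents)
```

Keeping the provider in the loop means the system reduces cognitive load (drafting, theme extraction, summarisation) while the final clinical judgement stays with the human.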
... Conversational artificial agents (e.g., chatbots) are employed regularly in behavioural studies, aimed at engaging users in social interactions and eliciting affective responses. Such interactions with human users across various contexts have shown how these agents can change users' perceptions of and behaviours towards technology [e.g., 2,35,36,38], while also demonstrating the impact that engagement with these agents can have on users' emotions and well-being [10,29,58]. The introduction of Large Language Models (LLMs) into conversational artificial agents, ranging from chatbots [6] to robots [57], represents a meaningful step in the research and development of human-agent interactions (HAI). ...
... Additionally, chatbots can analyze user inputs in order to identify indications of distress, anxiety, or depression (Bendig et al., 2019). AI chatbots can potentially overcome obstacles in seeking mental healthcare by providing personalized, easily accessible, cost-effective, and stigma-free confidential support. ...
Article
Full-text available
Anxiety disorders are psychiatric conditions characterized by prolonged and generalized anxiety experienced by individuals in response to various events or situations. At present, anxiety disorders are regarded as the most widespread psychiatric disorders globally. Medication and different types of psychotherapies are employed as the primary therapeutic modalities in clinical practice for the treatment of anxiety disorders. However, combining these two approaches is known to yield more significant benefits than medication alone. Nevertheless, there is a lack of resources and a limited availability of psychotherapy options in underdeveloped areas. Psychotherapy methods encompass relaxation techniques, controlled breathing exercises, visualization exercises, controlled exposure exercises, and cognitive interventions such as challenging negative thoughts. These methods are vital in the treatment of anxiety disorders, but executing them proficiently can be demanding. Moreover, individuals with distinct anxiety disorders are prescribed medications that may cause withdrawal symptoms in some instances. Additionally, there is inadequate availability of face-to-face psychotherapy and a restricted capacity to predict and monitor the health, behavioral, and environmental aspects of individuals with anxiety disorders during the initial phases. In recent years, there has been notable progress in developing and utilizing artificial intelligence (AI) based applications and environments to improve the precision and sensitivity of diagnosing and treating various categories of anxiety disorders. As a result, this study aims to establish the efficacy of AI-enabled environments in addressing the existing challenges in managing anxiety disorders, reducing reliance on medication, and investigating the potential advantages, issues, and opportunities of integrating AI-assisted healthcare for anxiety disorders and enabling personalized therapy.
... However, similarly to self-disclosures between humans [see 25,31], people might also engage in self-disclosure with artificial agents and social robots for social and emotional reasons, not only for economic ones. There is a substantial body of literature using embodied and disembodied artificial agents for eliciting self-disclosure in a variety of settings, reporting that self-disclosing to artificial agents positively affects people's feelings and emotional well-being [see reviews and meta-analysis 94,95,96,97]. For example, in a recent study, 115 participants shared emotional experiences with an artificial agent who provided either emotional or cognitive support messages. ...
Article
Full-text available
Self-disclosure and the social sharing of emotions facilitate social relationships and can positively affect people's well-being. Nevertheless, individuals might refrain from engaging in these interpersonal communication behaviours with other people, due to socio-emotional barriers, such as shame and stigma. Social robots, free from these human-centric judgements, could encourage openness and overcome these barriers. Accordingly, this paper reviews the role of self-disclosure and social sharing of emotion in human-robot interactions (HRIs), particularly its implications for emotional well-being and the dynamics of social relationship building between humans and robots. We investigate the transition of self-disclosure dynamics from traditional human-to-human interactions to HRI, revealing the potential of social robots to bridge socio-emotional barriers and provide unique forms of emotional support. This review not only highlights the therapeutic potential of social robots but also raises critical ethical considerations and potential drawbacks of these interactions, emphasising the importance of a balanced approach to integrating robots into emotional support roles. The review underscores a complex but promising frontier at the intersection of technology and emotional well-being, advocating for careful consideration of ethical standards and the intrinsic human need for connection as we advance in the development and application of social robots.
... Picked up in particular by the positive psychology approach, identifying and especially mobilizing people's resources are seen as important vectors for change and well-being [71]. These principles are also echoed in internet therapies and chatbots [72]. ...
Chapter
Full-text available
mHealth psychological interventions have gained popularity among both researchers and the general public as a means to address a variety of psychological problems or disorders. However, despite the increasing use of these interventions, there is a lack of clear guidelines on how to implement them successfully. This chapter focuses on LIVIA 2.0, a mHealth psychological intervention developed to address prolonged grief symptoms experienced after bereavement or romantic dissolution. Drawing on empirical sources, the program included several innovations aimed at improving engagement and outcomes compared to its former version, LIVIA-FR. These innovations included providing guidance on demand, sending automated reminders, tailoring the intervention to the specific needs of each user, assessing and promoting personal resources, and targeting autobiographical memory and identity adjustment. This chapter describes each innovation and presents the descriptive results regarding the usefulness of each strategy that were obtained within a randomized controlled trial. The chapter concludes by examining the outcomes of these innovations and provides practical recommendations for researchers looking to develop mHealth psychological interventions.
... There have been increasing numbers of commercially available AI digital applications for mental health care [19,47]. AI has been used in various areas of mental health, such as psychiatric diagnosis [8,12,32,44], self-monitoring [23], relevant content delivery about therapeutic techniques [8], and psychotherapy [5,8,13,26,39,56]. The characteristics of Gen AI are relevant in conversational psychotherapy since it enables personalized conversations tailored to the patient's needs and symptoms [19,24,44,46]. ...
Conference Paper
Full-text available
Artificial intelligence (AI) mental health chatbots have become popular alternative tools in mental care by enhancing the accessibility and scalability of psychotherapy. However, integrating human-centric AI technology and mental health services poses challenges, as safety, reliability, and trust issues have not been fully addressed. This study conducted a case study with two AI mental health apps, Wysa and Youper, to explore the implications of using AI mental health chatbots as therapeutic tools to provide accessible, immediate, and personalized mental health support. The study emphasizes the importance of a human-centric AI approach by examining whether these apps are ethically aligned and reliable enough to address diverse symptoms effectively. Through a comprehensive analysis, this paper outlines design suggestions for developing human-centric AI mental health chatbots while also examining potential challenges and limitations, such as maintaining partnerships with human therapists, ensuring privacy, and the necessity for continuous improvement through feedback loops. With careful design and ethical consideration, AI mental health chatbots can significantly contribute to the mental health field by offering solutions to reduce the burdens of manual tasks of human therapists and providing accessibility to patients.
... In civil aviation passenger service, chatbots provide services such as flight inquiries, baggage tracking, and inquiries in emergency situations [11]. In the medical field, chatbots are generally applied in the prevention, treatment, follow-up, and relapse prevention of psychological problems and mental disorders [12]. These application cases provide valuable experience and reference points for the application of chatbots. ...
Article
Full-text available
With the continuous development and popularization of artificial intelligence technology, chatbots, as an innovative service tool, are gradually demonstrating their significant application potential in the field of civil aviation. Chatbots can engage in real-time interactions with passengers through natural language processing technology, providing personalized services. This scoping review aims to provide a preliminary analysis of the current status, advantages, challenges, and future prospects of chatbot applications in the field of civil aviation passenger services. This review was conducted in accordance with the PRISMA extension for scoping reviews guidelines. Studies were identified by searching five prominent databases - Scopus, ACM, IEEE Xplore, Springer Link, and Web of Science - covering the period from 2015 to 2024. Chatbots have demonstrated significant potential in various applications within civil aviation passenger service, including flight inquiries, ticket booking, customer support, and baggage tracking. Existing literature indicates that chatbots can significantly enhance passenger satisfaction and loyalty by providing personalized and convenient service experiences. Additionally, they can reduce operating costs and improve service efficiency for civil aviation participants. However, challenges such as technical limitations in natural language processing and privacy protection issues remain to be addressed.
... Chatbots may also deliver psychotherapeutic content using a set of persuasive techniques, mimicking therapist-patient interaction (Smith et al., 2019). The effectiveness of these therapeutic properties was evaluated in 86% of studies, showing that chatbots may offer useful tools for depression or anxiety disorders based on therapeutic conversations (Bendig et al., 2022). A meta-analysis conducted by Lim and colleagues showed that chatbot-delivered psychotherapy greatly improved depressive symptoms among adults with a diagnosis of depression or anxiety disorders (Lim et al., 2022). ...
Article
Modern psychiatry aims to adopt precision models and promote personalized treatment within mental health care. However, the complexity of factors underpinning mental disorders and the variety of expressions of clinical conditions make this task arduous for clinicians. Globally, major depression is a common mental disorder and encompasses a constellation of clinical manifestations and a variety of etiological factors. In this context, the use of Artificial Intelligence might help clinicians in the screening and diagnosis of depression on a wider scale and could also facilitate their task in predicting disease outcomes by considering complex interactions between prodromal and clinical symptoms, neuroimaging data, genetics, or biomarkers. In this narrative review, we report on the most significant evidence from current international literature regarding the use of Artificial Intelligence in the diagnosis and treatment of major depression, specifically focusing on the use of Natural Language Processing, Chatbots, Machine Learning, and Deep Learning.
... Most currently available mental health chatbots aim to reduce symptoms of psychological distress (stress, depression, anxiety) or promote wellbeing through improved self-awareness and coping skills. They also have the potential to provide higher-level therapeutic interventions (Bendig et al., 2022). ...
Preprint
Full-text available
The increasing rates, severity, and complexity of mental health problems are putting immense strain on Australia’s mental healthcare system. The rapidly advancing field of Machine Learning (ML) offers a promising pathway to more efficient and effective mental healthcare. Currently, however, there are multiple ethical and practical barriers to real-world implementation of ML-based tools. This report aims to introduce mental health clinicians to the opportunities and challenges involved with bringing ML into practice.
... Conversational artificial agents (e.g., chatbots) are employed regularly in behavioural studies, aimed at engaging users in social interactions and eliciting affective responses. Such interactions with human users across various contexts have shown how these agents can change users' perceptions of and behaviours towards technology [e.g., 2,34,35,37], while also demonstrating the impact that engagement with these agents can have on users' emotions and well-being [10,29,56]. The introduction of Large Language Models (LLMs) into conversational artificial agents, ranging from chatbots [6] to robots [55], represents a meaningful step in the research and development of human-agent interactions (HAI). ...
Preprint
Full-text available
The recent developments in Large Language Models (LLMs) mark a significant moment in the research and development of social interactions with artificial agents. These agents are widely deployed in a variety of settings, with potential impact on users. However, the study of social interactions with agents powered by LLMs is still emerging, limited by access to the technology and to data, the absence of standardised interfaces, and challenges in establishing controlled experimental setups using the currently available business-oriented platforms. To address these gaps, we developed LEXI (LLMs Experimentation Interface), an open-source tool enabling the deployment of artificial agents powered by LLMs in social interaction behavioural experiments. Using a graphical interface, LEXI allows researchers to build agents and deploy them in experimental setups along with forms and questionnaires while collecting interaction logs and self-reported data. The outcomes of usability testing indicate LEXI's broad utility, high usability, and minimal mental workload requirement, with distinctive benefits observed across disciplines. A proof-of-concept study exploring the tool's efficacy in evaluating social HAIs was conducted, resulting in high-quality data. A comparison of empathetic versus neutral agents indicated that people perceive empathetic agents as more social and write longer and more positive messages towards them.
... This will require staying abreast of research in this area, which is evolving rapidly. A recent scoping review of mental health chatbots revealed a dearth of randomized controlled trials (RCTs) in this area, serving as a reminder that, while promising, this technology is in its infancy and much more research is needed to demonstrate the feasibility, acceptability, and transferability of chatbots to the therapeutic context (Bendig et al., 2022). In the educational domain, there is growing recognition of the importance of genAI competence among educators and students, which has resulted in the introduction of several genAI competency frameworks (e.g., Ng et al., 2023;Su & Yang, 2023;United Nations Educational, Scientific and Cultural Organization, 2023). ...
Article
Full-text available
Psychology has changed considerably over the past several decades in response to technological advances. Changes to the profession accelerated during COVID-19, a time during which there was a rapid increase in the use of cloud-based virtual collaboration platforms (e.g., Zoom, Microsoft Teams) for telehealth services as well as virtual teaching and research activities. Technology continues to advance swiftly, and the introduction of artificial intelligence (AI), particularly generative AI (genAI), is creating profound changes in all aspects of society, including the psychology profession. With these changes come questions about how to practice ethically as a clinician, as a teacher, and as a researcher. In this article, we explore the ethical principles and standards of most relevance to genAI use in the activities of psychologists (clinical, teaching, research) across settings (private practice, hospitals, colleges/universities, research centers). Ethical issues and questions are presented within the context of the current American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct (2017), which is under revision. Recommendations are provided for approaching ethical concerns amidst the rapid technological advances that are changing the way psychologists do their work.
... In recent years, conversational artificial intelligence (AI), exemplified by Open AI's ChatGPT, has ushered in a new era of human-machine interaction (Brown et al., 2020; Wei et al., 2021). The transformative potential of these technologies spans a diverse range of applications and capabilities (Adamopoulou & Moussiades, 2020; Bendig et al., 2019; Chan & Li, 2023; Shah et al., 2017; Xu & Zhuang, 2022). ...
Article
Background Researchers are leading the development of AI designed to conduct interviews. These developments imply that AI's role is expanding from mere data analysis to becoming a tool for social researchers to interact with and comprehend their subjects. Yet, academic discussions have not addressed the potential impacts of AI on narrative interviews. In narrative interviews, the method of collecting data is a collaborative effort. The interviewer also contributes to exploring and shaping the interviewee's story. A compelling narrative interviewer has to display critical skills, such as maintaining a specific questioning order, showing empathy, and helping participants delve into and build their own stories. Methods This case study configured an OpenAI Assistant on WhatsApp to conduct narrative interviews with a human participant. The participant shared the same story in two distinct conversations: first, following a standard cycle and answering questions earnestly, and second, deliberately sidetracking the assistant from the main interview path as instructed by the researcher, to test how well the metrics could reflect the deliberate differences between different conversations. The AI's performance was evaluated through conversation analysis and specific narrative indicators, focusing on its adherence to the interview structure, empathy, narrative coherence, complexity, and support for human participant agency. The study sought to answer these questions: 1) How can the proposed metrics help us, as social researchers without a technical background, understand the quality of the AI-driven interviews in this study? 2) What do these findings contribute to our discussion on using AI in narrative interviews for social research? 3) What further research could these results inspire? 
Results The findings show to what extent the AI maintained structure and adaptability in conversations, illustrating its potential to support personalized, flexible narrative interviews based on specific needs. Conclusions These results suggest that social researchers without a technical background can use observation-based metrics to gauge how well an AI assistant conducts narrative interviews. They also prompt reflection on AI's role in narrative interviews and spark further research.
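The observation-based metrics described above can be made concrete with a small sketch. The two indicators below (the share of interviewer turns phrased as questions, and adherence to a planned question order) are illustrative assumptions chosen for this sketch, not the study's actual metric definitions:

```python
# Sketch of two observation-based interview-quality metrics. Both
# indicators are invented for illustration; the study's own metrics
# (structure adherence, empathy, coherence, etc.) are richer.

def question_ratio(interviewer_turns):
    """Fraction of interviewer turns that end in a question mark."""
    if not interviewer_turns:
        return 0.0
    asked = sum(1 for t in interviewer_turns if t.strip().endswith("?"))
    return asked / len(interviewer_turns)

def order_adherence(planned_topics, observed_topics):
    """Fraction of planned topics covered in the planned relative order
    (longest common subsequence length / number of planned topics)."""
    m, n = len(planned_topics), len(observed_topics)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if planned_topics[i] == observed_topics[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return lcs[m][n] / m if m else 0.0

turns = ["Could you tell me how it began?", "I see.", "What happened next?"]
print(question_ratio(turns))  # 2 of the 3 turns are questions
print(order_adherence(["opening", "complication", "resolution"],
                      ["opening", "aside", "complication", "resolution"]))
```

Because both metrics are computed purely from transcripts, a researcher without a technical background could apply them to either of the two conversations and compare the scores directly.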
... Therapists can also remotely monitor short-term as well as the long-term training progress. AI can be a significant solution for populations who deal with geographical constraints, limitations due to physical disabilities, or psychological factors (i.e., disliking in-person interactions) [70]. It can also support group training or community training. ...
Article
Full-text available
Breathing is one of the most vital functions for being mentally and emotionally healthy. A growing number of studies confirm that breathing, although unconscious, can be under voluntary control. However, it requires systematic practice to acquire relevant experience and skillfulness to consciously utilize breathing as a tool for self-regulation. After the COVID-19 pandemic, a global discussion has begun about the potential role of emerging technologies in breath-control interventions. Emerging technologies refer to a wide range of advanced technologies that have already entered the race for mental health training. Artificial intelligence, immersive technologies, biofeedback, non-invasive neurofeedback, and other wearable devices provide new, but yet underexplored, opportunities in breathing training. Thus, the current systematic review examines the synergy between emerging technologies and breathing techniques for improving mental and emotional health through the lens of skills development. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology is utilized to respond to the objectives and research questions. The potential benefits, possible risks, ethical concerns, future directions, and implications are also discussed. The results indicated that digitally assisted breathing can improve various aspects of mental health (i.e., attentional control, emotional regulation, mental flexibility, stress management, and self-regulation). A significant finding of this review indicated that the blending of different technologies may maximize training outcomes. Thus, future research should focus on the proper design and evaluation of different digital designs in breathing training to improve health in different populations.
This study aspires to provide positive feedback in the discussion about the role of digital technologies in assisting mental and emotional health-promoting interventions among populations with different needs (i.e., employees, students, and people with disabilities).
... The graphical representation of their proposed chatbot is shown in Fig. 4. They also noted several limitations, including the small sample size, lack of statistical power, and quality criteria (40). ...
Article
Identifying the most widely used AI applications and methods in the mental health sector and suggesting appropriate directions for advanced research is the primary objective of this study. To this end, the author reviewed thirty-one articles. The review found neuroimaging and recognition technologies to be the most widely used in practice for checking brain abnormalities, and chatbots to be the most common AI assistants in digital care. As the ultimate goal of this study was to observe the mental health of Bangladeshi youths, the researcher surveyed people aged 19-29 years and generated questions for the general population on mental disorders such as anxiety, depression, and PTSD. The author used Python to analyze the dataset, find correlations, and apply machine learning classification algorithms (e.g., decision tree, support vector machine (SVM), random forest) to obtain the best accuracy. The paper also outlines future research directions and identifies existing knowledge gaps.
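The classifier comparison mentioned above can be sketched as follows. The synthetic dataset, feature count, and train/test split below are invented stand-ins for the (non-public) survey data; only the three algorithm choices come from the abstract:

```python
# Sketch: comparing the three classifiers named in the abstract on a
# synthetic stand-in dataset, then reporting the best test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: 300 respondents, 8 numeric features, binary label.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "best:", best)
```

On real survey responses the features would need encoding and scaling first, and cross-validation would give a more honest accuracy estimate than a single split.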
... These problems have been addressed in different research domains. For instance, Bender et al [38] focus on the difference between synthetic language produced by LLM and human natural language by arguing that LLMs are "stochastic parrots" producing language, but not understanding it. Felin and Holweg [39] similarly argue by reporting differences in human cognition and computation processes of AI. ...
Article
Full-text available
Large language model (LLM)–powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on the reflection of what it means to simulate “human-like” features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualization of the robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.
... Virtual therapists and AI-powered chatbots represent a significant trend in enhancing the accessibility of mental health resources [96,97]. These digital entities provide around-the-clock support to individuals with mental health concerns, irrespective of geographical or time constraints. ...
Article
Full-text available
Artificial Intelligence (AI) has emerged as a transformative force in various fields, and its application in mental healthcare is no exception. Hence, this review explores the integration of AI into mental healthcare, elucidating current trends, ethical considerations, and future directions in this dynamic field. This review encompassed recent studies, examples of AI applications, and ethical considerations shaping the field. Additionally, regulatory frameworks and trends in research and development were analyzed. We comprehensively searched four databases (PubMed, IEEE Xplore, PsycINFO, and Google Scholar). The inclusion criteria were papers published in peer-reviewed journals, conference proceedings, or reputable online databases, papers that specifically focus on the application of AI in the field of mental healthcare, and review papers that offer a comprehensive overview, analysis, or integration of existing literature published in the English language. Current trends reveal AI's transformative potential, with applications such as the early detection of mental health disorders, personalized treatment plans, and AI-driven virtual therapists. However, these advancements are accompanied by ethical challenges concerning privacy, bias mitigation, and the preservation of the human element in therapy. Future directions emphasize the need for clear regulatory frameworks, transparent validation of AI models, and continuous research and development efforts. Integrating AI into mental healthcare and mental health therapy represents a promising frontier in healthcare. While AI holds the potential to revolutionize mental healthcare, responsible and ethical implementation is essential. By addressing current challenges and shaping future directions thoughtfully, we may effectively utilize the potential of AI to enhance the accessibility, efficacy, and ethicality of mental healthcare, thereby helping both individuals and communities.
... To evaluate individual studies, this tool applies 16 items. [Per-study AMSTAR-2 item ratings for references [5], [35], and [38]-[64] appeared here as a grid of asterisks; the item-by-item marks are not recoverable from the extraction. One item read: "10. Did the review authors report on the sources of funding for the studies included in the review?"] ...
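The garbled grid above recorded, for each included study, which of the 16 AMSTAR-2 items were marked as satisfied. A minimal sketch of tallying such marks follows; the counts are invented placeholders, and the percentage cutoffs are an illustrative simplification (AMSTAR-2's actual overall confidence rating hinges on weaknesses in critical items, not a simple share):

```python
# Sketch: tallying satisfied quality-assessment items per study.
# The counts below are invented placeholders, and mapping a count to a
# confidence label via percentage cutoffs is an assumption made for
# this sketch, not AMSTAR-2's actual rating rule.

TOTAL_ITEMS = 16  # AMSTAR-2 applies 16 items per review

satisfied_items = {
    "[53]": 11,  # number of items marked "*" for each study (placeholder)
    "[51]": 4,
    "[62]": 12,
}

def confidence_label(satisfied, total=TOTAL_ITEMS):
    """Illustrative cutoff rule: >=75% high, >=50% moderate, else low."""
    share = satisfied / total
    if share >= 0.75:
        return "high"
    if share >= 0.5:
        return "moderate"
    return "low"

for study, n in satisfied_items.items():
    print(study, f"{n}/{TOTAL_ITEMS}", confidence_label(n))
```

Such a tally makes it easy to see at a glance which included reviews were rated weakest, which is the purpose the original asterisk grid served.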
Article
Full-text available
Introduction: Chatbots, computer programs emulating natural language conversations, have gained attention in healthcare. Recent advances address issues like obesity, dementia, oncology, and insomnia. A comprehensive assessment of their utility is essential for widespread adoption. This study aims to summarize chatbots' role in healthcare. Material and Methods: The methodology involved a systematic review of English-language literature up to May 8, 2023, from databases of Embase, PubMed, Web of Science, and Scopus. Selection followed a two-step process based on inclusion/exclusion criteria. The PRISMA checklist and AMSTAR-2 tool ensured quality. Results: The review encompassed 38 articles. Findings reveal chatbots primarily promote healthy lifestyles, improving mental well-being. They are widely used for treatment, education, and screening due to their accessibility. Conclusion: Chatbots hold transformative potential in healthcare, especially in mental health, cancer management, and public health. They are poised to revolutionize the industry, offering innovative solutions and improving patient outcomes.
Article
In response to the increasing pressure of social competition, a psychological phenomenon termed “lying flat” has gained prominence among college students, reflecting a passive attitude characterized by disengagement, avoidance, and a lack of proactive coping mechanisms in the face of challenges. This study undertakes a comprehensive examination of the “lying flat” phenomenon, encompassing its definition, sociocultural origins, and developmental trends. To address this growing concern, an integrated intervention strategy rooted in positive psychology is proposed, focusing on enhancing resilience, fostering positive emotions, and cultivating self-efficacy. The strategy leverages artificial intelligence (AI) algorithms to innovate the intervention framework, including optimizing data collection, refining intervention personalization, and improving predictive capabilities. An empirical study, employing a mixed-method design, validates the effectiveness of this approach, revealing substantial improvements in psychological resilience, emotional positivity, and behavioural engagement among participants. The findings underscore the dual benefits of combining positive psychology and AI in mental health interventions: not only mitigating the psychological impact of the “lying flat” phenomenon but also paving the way for future applications of AI in the psychological support ecosystem. This research contributes to understanding the mechanisms of disengagement behaviours in young adults while offering a replicable, scalable intervention model for psychological resilience and societal integration in high-pressure contexts.
Article
Full-text available
Chatbot technology, a rapidly growing field, uses Natural Language Processing (NLP) methodologies to create conversational AI bots. Contextual understanding is essential for chatbots to provide meaningful interactions; still, to date, chatbots often struggle to accurately interpret user input due to the complexity of natural language and the diversity of application fields, hence the need for a Systematic Literature Review (SLR) to investigate the motivation behind the creation of chatbots, their development procedures and methods, notable achievements, challenges, and emerging trends. Through the application of the PRISMA method, this paper helps reveal the rapid and dynamic progress of chatbot technology with NLP learning models, which enables sophisticated, human-like interactions, based on the trends observed in chatbots over the past decade. The results, drawn from fields such as healthcare, organizations and business, virtual personalities, and education, do not rule out development in further fields, such as chatbots for cultural preservation, while suggesting the need for oversight regarding bias in language comprehension and the ethics of chatbot use. In the end, the insights gained from the SLR have the potential to contribute significantly to the advancement of NLP-based chatbots as a comprehensive field.
Article
Background Prevention of suicide is a global health priority. Approximately 800,000 individuals die by suicide yearly, and for every suicide death, there are another 20 estimated suicide attempts. Large language models (LLMs) hold the potential to enhance scalable, accessible, and affordable digital services for suicide prevention and self-harm interventions. However, their use also raises clinical and ethical questions that require careful consideration. Objective This scoping review aims to identify emergent trends in LLM applications in the field of suicide prevention and self-harm research. In addition, it summarizes key clinical and ethical considerations relevant to this nascent area of research. Methods Searches were conducted in 4 databases (PsycINFO, Embase, PubMed, and IEEE Xplore) in February 2024. Eligible studies described the application of LLMs for suicide or self-harm prevention, detection, or management. English-language peer-reviewed articles and conference proceedings were included, without date restrictions. Narrative synthesis was used to synthesize study characteristics, objectives, models, data sources, proposed clinical applications, and ethical considerations. This review adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) standards. Results Of the 533 studies identified, 36 (6.8%) met the inclusion criteria. An additional 7 studies were identified through citation chaining, resulting in 43 studies for review. The studies showed a bifurcation of publication fields, with varying publication norms between computer science and mental health. While most of the studies (33/43, 77%) focused on identifying suicide risk, newer applications leveraging generative functions (eg, support, education, and training) are emerging. Social media was the most common source of LLM training data. 
Bidirectional Encoder Representations from Transformers (BERT) was the predominant model used, although generative pretrained transformers (GPTs) featured prominently in generative applications. Clinical LLM applications were reported in 60% (26/43) of the studies, often for suicide risk detection or as clinical assistance tools. Ethical considerations were reported in 33% (14/43) of the studies, with privacy, confidentiality, and consent strongly represented. Conclusions This evolving research area, bridging computer science and mental health, demands a multidisciplinary approach. While open access models and datasets will likely shape the field of suicide prevention, documenting their limitations and potential biases is crucial. High-quality training data are essential for refining these models and mitigating unwanted biases. Policies that address ethical concerns—particularly those related to privacy and security when using social media data—are imperative. Limitations include high variability across disciplines in how LLMs and study methodology are reported. The emergence of generative artificial intelligence signals a shift in approach, particularly in applications related to care, support, and education, such as improved crisis care and gatekeeper training methods, clinician copilot models, and improved educational practices. Ongoing human oversight—through human-in-the-loop testing or expert external validation—is essential for responsible development and use. Trial Registration OSF Registries osf.io/nckq7; https://osf.io/nckq7
Article
The proliferation of Generative Artificial Intelligence (Generative AI) has led to an increased reliance on AI‐generated content for designing and deploying digital health interventions. While generative AI has the potential to facilitate and automate healthcare, there are concerns that AI‐generated content and AI‐generated health advice could trigger, perpetuate, or exacerbate prior traumatic experiences among vulnerable populations. In this discussion article, I examined how generative‐AI‐powered digital health interventions could trigger, perpetuate, or exacerbate emotional trauma among vulnerable populations who rely on digital health interventions as complementary or alternative sources of seeking health services or information. I then proposed actionable strategies for mitigating AI‐generated trauma in the context of digital health interventions. The arguments raised in this article are expected to shift the focus of AI practitioners against prioritizing dominant narratives in AI algorithms into seriously considering the needs of vulnerable minority groups who are at the greatest risk for trauma but are often invisible in AI data sets, AI algorithms, and their resultant technologies.
Article
Background Mental health chatbots have emerged as a promising tool for providing accessible and convenient support to individuals in need. Building on our previous research on digital interventions for loneliness and depression among Korean college students, this study addresses the limitations identified and explores more advanced artificial intelligence–driven solutions. Objective This study aimed to develop and evaluate the performance of HoMemeTown Dr. CareSam, an advanced cross-lingual chatbot using ChatGPT 4.0 (OpenAI) to provide seamless support in both English and Korean contexts. The chatbot was designed to address the need for more personalized and culturally sensitive mental health support identified in our previous work while providing an accessible and user-friendly interface for Korean young adults. Methods We conducted a mixed methods pilot study with 20 Korean young adults aged 18 to 27 (mean 23.3, SD 1.96) years. The HoMemeTown Dr CareSam chatbot was developed using the GPT application programming interface, incorporating features such as a gratitude journal and risk detection. User satisfaction and chatbot performance were evaluated using quantitative surveys and qualitative feedback, with triangulation used to ensure the validity and robustness of findings through cross-verification of data sources. Comparative analyses were conducted with other large language model chatbots and existing digital therapy tools (Woebot [Woebot Health Inc] and Happify [Twill Inc]). Results Users generally expressed positive views towards the chatbot, with positivity and support receiving the highest score on a 10-point scale (mean 9.0, SD 1.2), followed by empathy (mean 8.7, SD 1.6) and active listening (mean 8.0, SD 1.8). However, areas for improvement were noted in professionalism (mean 7.0, SD 2.0), complexity of content (mean 7.4, SD 2.0), and personalization (mean 7.4, SD 2.4).
The chatbot demonstrated statistically significant performance differences compared with other large language model chatbots (F=3.27; P=.047), with more pronounced differences compared with Woebot and Happify (F=12.94; P<.001). Qualitative feedback highlighted the chatbot’s strengths in providing empathetic responses and a user-friendly interface, while areas for improvement included response speed and the naturalness of Korean language responses. Conclusions The HoMemeTown Dr CareSam chatbot shows potential as a cross-lingual mental health support tool, achieving high user satisfaction and demonstrating comparative advantages over existing digital interventions. However, the study’s limited sample size and short-term nature necessitate further research. Future studies should include larger-scale clinical trials, enhanced risk detection features, and integration with existing health care systems to fully realize its potential in supporting mental well-being across different linguistic and cultural contexts.
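The between-chatbot comparisons reported above rest on F-tests, i.e., one-way analysis of variance across rating groups. A minimal sketch with SciPy follows; the satisfaction ratings are invented for illustration and are not the study's data:

```python
# Sketch: one-way ANOVA across three chatbots' satisfaction ratings.
# The rating vectors below are invented placeholders, not study data.
from scipy import stats

caresam = [9, 8, 9, 10, 8, 9]
woebot = [7, 6, 8, 7, 6, 7]
happify = [6, 7, 6, 5, 7, 6]

f_stat, p_value = stats.f_oneway(caresam, woebot, happify)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
```

A significant F only indicates that at least one group mean differs; pinpointing which chatbot differs from which would require post hoc pairwise tests with a multiple-comparison correction.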
Article
Autism Spectrum Disorder is a significant neurodevelopmental behavioral disorder. Children with autism display a wide array of ambiguous symptoms, making the diagnosis quite challenging and resulting in delayed management. Traditionally, its diagnosis and management require the collaboration of services from the three P's, namely the pediatrician, psychiatrist, and child psychologist. Management requires an intensive multi-disciplinary approach to help minimize the disease symptoms and facilitate development and learning during childhood. Recently, with the widespread testing and use of artificial intelligence tools across various domains, AI tools such as chatbots are being incorporated into medical treatments, especially in behavioral therapy. Considering the increasing use of AI, we believe that the natural language processing techniques employed by ChatGPT algorithms have the potential to identify speech and linguistic patterns in children with ASD. Therefore, through this letter, we have tried to explore the scope of artificial intelligence (ChatGPT) for behavioral therapy in children affected by autism spectrum disorder.
Article
Full-text available
This project is an innovative mobile application designed to address the mental health needs of children in the digital age. This app combines parental control features, activity tracking, mental health assessments, and a supportive chatbot powered by the GPT API to create a holistic solution for parents and children. The app’s primary objectives include:
● Comprehensive mental health assessment
● Real-time text sentiment tracking
● Task tracking
● Safe browser
● Personalized guidance and resources
● Secure and confidential platform
● User-friendly design
Article
How do generative AI technologies influence (online) counseling practice? The article examines the potentials and challenges arising from a possible integration of AI and offers an overview of current developments. The focus is on the question of what role AI assumes in the constellation of counselors and counselees, and which competencies professionals need in order to develop a sensible and prudent approach to these technologies.
Article
Artificial intelligence (AI)-enabled chatbots are increasingly being used to help people manage their mental health. Chatbots for mental health and particularly 'wellness' applications currently exist in a regulatory 'gray area'. Indeed, most generative AI-powered wellness apps will not be reviewed by health regulators. However, recent findings suggest that users of these apps sometimes use them to share mental health problems and even to seek support during crises, and that the apps sometimes respond in a manner that increases the risk of harm to the user, a challenge that the current US regulatory structure is not well equipped to address. In this Perspective, we discuss the regulatory landscape and potential health risks of AI-enabled wellness apps. Although we focus on the United States, there are similar challenges for regulators across the globe. We discuss the problems that arise when AI-based wellness apps cross into medical territory and the implications for app developers and regulatory bodies, and we outline outstanding priorities for the field.
Book
Full-text available
The book presents the current state of knowledge on the use of chatbots in psychiatric practice and offers practical advice on how to use them in everyday work.
Chapter
Full-text available
By knowing how ChatGPT works, we can understand how it generates answers and why it sometimes gives incorrect, imprecise, outdated, or unclear responses. We can also understand why it sometimes produces completely fabricated, nonexistent data, and we will know how to recognize when this occurs. Ultimately, the responsibility for the use of any tool, including chatbots, lies with the users, not with the tool itself. Every answer should be examined carefully to assess how much to trust it and whether to use it. If an error results from the use of the data, the responsibility lies with the user of the data, not with ChatGPT. ChatGPT is a virtual answer provider; it is artificial, imitative (not real) intelligence. It is a machine that attempts to imitate human responses. It has no common sense, no consciousness, no understanding of what it offers. It is a highly sophisticated search engine, and nothing more.
Article
Full-text available
Background Internet- and mobile-based interventions (IMIs) for mental disorders are seen by some authors as a step forward to narrow the treatment gap in mental health; however, especially in Germany, professionals voice ethical concerns against the implementation of IMIs. The fact that there is broad evidence in favor of IMIs and that IMIs have already been implemented in several countries requires an ethical analysis to answer these concerns. Objective The objective is to tackle ethical issues connected to a possible implementation of IMIs for mental disorders in Germany and to point out possible solutions. Material and methods We conducted an ethical analysis using the criteria of well-being of patients, non-maleficence, justice, and patient autonomy, based on the empirical evidence. Results and conclusion The ethical analysis showed that IMIs for mental disorders principally have a positive effect on the well-being of patients and carry a low risk of impairment. Additionally, IMIs can minimize risk, improve justice, and strengthen the autonomy of mentally ill patients. Despite the broad evidence, there are still research desiderata with respect to ethical aspects, e.g., patient information for mentally ill patients.
Article
Full-text available
Background: This protocol describes a study that will test the effectiveness of a 10-week non-clinical psychological coaching intervention for intentional personality change using a smartphone application. The goal of the intervention is to coach individuals who are willing and motivated to change some aspects of their personality, i.e., the Big Five personality traits. The intervention is based on empirically derived general change mechanisms from psychotherapy process-outcome research. It uses the smartphone application PEACH (PErsonality coACH) to allow for a scalable assessment and tailored interventions in the everyday life of participants. A conversational agent will be used as a digital coach to support participants to achieve their personality change goals. The goal of the study is to examine the effectiveness of the intervention at post-test assessment and three-month follow-up. Methods/Design: A 2x2 factorial between-subject randomized, wait-list controlled trial with intensive longitudinal methods will be conducted to examine the effectiveness of the intervention. Participants will be randomized to one of four conditions. One experimental condition includes a conversational agent with high self-awareness to deliver the coaching program. The other experimental condition includes a conversational agent with low self-awareness. Two wait-list conditions refer to the same two experimental conditions, albeit with four weeks without intervention at the beginning of the study. The 10-week intervention includes different types of micro-interventions: (a) individualized implementation intentions, (b) psychoeducation, (c) behavioral activation tasks, (d) self-reflection, (e) resource activation, and (f) individualized progress feedback. 
Study participants will be at least 900 German-speaking adults (18 years and older) who install the PEACH application on their smartphones, give their informed consent, pass the screening assessment, take part in the pre-test assessment and are motivated to change or modify some aspects of their personality. Discussion: This is the first study testing the effectiveness of a smartphone- and conversational agent-based coaching intervention for intended personality change. Given that this novel intervention approach proves effective, it could be implemented in various non-clinical settings and could reach large numbers of people due to its low-threshold character and technical scalability.
Chapter
Full-text available
Psychology research reports that people tend to seek companionship with those who have a similar level of extraversion, and markers in dialogue show the speaker’s extraversion. Work in human-computer interaction seeks to understand how to create and maintain rapport between humans and embodied conversational agents (ECAs). This study examines whether humans report greater rapport when interacting with an agent whose extraversion/introversion profile is similar to their own. ECAs representing an extrovert and an introvert were created by manipulating three dialogue features. In an informal, task-oriented setting, participants interacted with one of the agents in an immersive environment. Results suggest that subjects did not report the greatest rapport when interacting with the agent most similar to their own level of extraversion.
Article
Full-text available
Background: Recent years have seen an increase in the use of internet-based cognitive behavioral therapy in the area of mental health. Although the lower effectiveness and higher dropout rates of unguided compared with guided internet-based cognitive behavioral therapy remain critical issues, not requiring ongoing human clinical resources makes the unguided format highly advantageous. Objective: Drawing on current psychotherapy research, which acknowledges the importance of the therapeutic alliance, this study aims to evaluate the feasibility and acceptability, in terms of mental health, of an application embodied with a conversational agent. This application was enabled for use as an internet-based cognitive behavioral therapy preventative mental health measure. Methods: Data from the 191 participants of the experimental group, with a mean age of 38.07 (SD 10.75) years, and the 263 participants of the control group, with a mean age of 38.05 (SD 13.45) years, were analyzed using a 2-way factorial analysis of variance (group × time). Results: There was a significant main effect (P=.02) and interaction for time on the variable of positive mental health (P=.02), and for the treatment group, a significant simple main effect was also found (P=.002). In addition, there was a significant main effect (P=.02) and interaction for time on the variable of negative mental health (P=.005), and for the treatment group, a significant simple main effect was also found (P=.001). Conclusions: This research represents a certain level of evidence for the mental health application developed herein, indicating empirically that internet-based cognitive behavioral therapy with an embodied conversational agent can be used in mental health care. Given the issues related to feasibility and acceptability in this pilot trial, it is necessary to pursue higher-quality evidence while continuing to improve the application, based on the findings of the current research.
Article
Full-text available
Background Due to limited resources, waiting periods for psychotherapy are often long and burdening for those in need of treatment and the health care system. In order to bridge the gap between initial contact and the beginning of psychotherapy, web-based interventions can be applied. The implementation of a web-based depression intervention during waiting periods has the potential to reduce depressive symptoms and enhance well-being in depressive individuals waiting for psychotherapy. Methods In a two-arm randomized controlled trial, effectiveness and acceptance of a guided web-based intervention for depressive individuals on a waitlist for psychotherapy are evaluated. Participants are recruited in several German outpatient clinics. All those contacting the outpatient clinics with the wish to enter psychotherapy receive study information and a depression screening. Those adults (age ≥ 18) with depressive symptoms above cut-off (CES-D scale > 22) and internet access are randomized to either intervention condition (treatment as usual and immediate access to the web-based intervention) or waiting control condition (treatment as usual and delayed access to the web-based intervention). At three points of assessment (baseline, post-treatment, 3-months-follow-up) depressive symptoms and secondary outcomes, such as quality of life, attitudes towards psychotherapy and web-based interventions and adverse events are assessed. Additionally, participants’ acceptance of the web-based intervention is evaluated, using measures of intervention adherence and satisfaction. Discussion This study investigates a relevant setting for the implementation of web-based interventions, potentially improving the provision of psychological health care. The results of this study contribute to the evaluation of innovative and resource-preserving health care models for outpatient psychological treatment. 
Trial registration This trial has been registered on 13 February 2017 in the German clinical trials register (DRKS); registration number DRKS00010282.
Article
Full-text available
The majority of mental health disorders remain untreated. Many limitations of traditional psychological interventions such as limited availability of evidence-based interventions and clinicians could potentially be overcome by providing Internet- and mobile-based psychological interventions (IMIs). This paper is a report of the Taskforce E-Health of the European Federation of Psychologists' Association; it provides an introduction to the subject, discusses areas of application, and reviews the current evidence regarding the efficacy of IMIs for the prevention and treatment of mental disorders. Meta-analyses based on randomized trials clearly indicate that therapist-guided stand-alone IMIs can result in meaningful benefits for a range of indications including, for example, depression, anxiety, insomnia, or posttraumatic stress disorders. The clinical significance of results of purely self-guided interventions is less clear for many disorders, especially with regard to effects under routine care conditions. Studies on the prevention of mental health disorders (MHD) are promising. Blended concepts, combining traditional face-to-face approaches with Internet- and mobile-based elements, might have the potential to increase the effects of psychological interventions on the one hand or to reduce the costs of mental health treatments on the other. We also discuss mechanisms of change and the role of the therapist in such approaches, contraindications, potential limitations, and risks involved with IMIs, briefly review the status of their implementation into routine health care across Europe, and discuss confidentiality as well as ethical aspects that need to be taken into account when implementing IMIs. Internet- and mobile-based psychological interventions have high potential for improving mental health and should be implemented more widely in routine care.
Article
Full-text available
Background: The internet offers major opportunities in supporting mental health care, and a variety of technology-mediated mental and behavioral health services have been developed. Yet, despite growing evidence for the effectiveness of these services, their acceptance and use in clinical practice remains low. So far, the current literature still lacks a structured insight into the experienced drivers and barriers to the adoption of electronic mental health (eMental health) from the perspective of clinical psychologists. Objective: The aim of this study was to gain an in-depth and comprehensive understanding of the drivers and barriers for psychologists in adopting eMental health tools, adding to previous work by also assessing drivers and analyzing relationships among these factors, and subsequently by developing a structured representation of the obtained findings. Methods: The study adopted a qualitative descriptive approach consisting of in-depth semistructured interviews with clinical psychologists working in the Netherlands (N=12). On the basis of the findings, a model was constructed that was then examined through a communicative validation. Results: In general, a key driver for psychologists to adopt eMental health is the belief and experience that it can be beneficial to them or their clients. Perceived advantages that are novel to literature include the acceleration of the treatment process, increased intimacy of the therapeutic relationship, and new treatment possibilities due to eMental health. More importantly, a relation was found between the extent to which psychologists have adopted eMental health and the particular drivers and barriers they experience. This differentiation is incorporated in the Levels of Adoption of eMental Health (LAMH) model that was developed during this study to provide a structured representation of the factors that influence the adoption of eMental health. 
Conclusions: The study identified both barriers and drivers, several of which are new to the literature and found a relationship between the nature and importance of the various drivers and barriers perceived by psychologists and the extent to which they have adopted eMental health. These findings were structured in a conceptual model to further enhance the current understanding. The LAMH model facilitates further research on the process of adopting eMental health, which will subsequently enable targeted recommendations with respect to technology, training, and clinical practice to ensure that mental health care professionals as well as their clients will benefit optimally from the current (and future) range of available eMental health options.
Article
Full-text available
Objectives: This narrative review article provides an overview of current psychotherapeutic approaches specific for adjustment disorders (ADs) and outlines future directions for theoretically-based treatments for this common mental disorder within a framework of stepped care. Methods: Studies on psychological interventions for ADs were retrieved by using an electronic database search within PubMed and PsycINFO, as well as by scanning the reference lists of relevant articles and previous reviews. Results: The evidence base for psychotherapies specifically targeting the symptoms of AD is currently rather weak, but is evolving given several ongoing trials. Psychological interventions range from self-help approaches, relaxation techniques, e-mental-health interventions, behavioural activation to talking therapies such as psychodynamic and cognitive behavioural therapy. Conclusions: The innovations in DSM-5 and upcoming ICD-11, conceptualising AD as a stress-response syndrome, will hopefully stimulate more research in regard to specific psychotherapeutic interventions for AD. Low intensive psychological interventions such as e-mental-health interventions for ADs may be a promising approach to address the high mental health care needs associated with AD and the limited mental health care resources in most countries around the world.
Article
Full-text available
Background: Conversational agents cannot yet express empathy in nuanced ways that account for the unique circumstances of the user. Agents that possess this faculty could be used to enhance digital mental health interventions. Objective: We sought to design a conversational agent that could express empathic support in ways that might approach, or even match, human capabilities. Another aim was to assess how users might appraise such a system. Methods: Our system used a corpus-based approach to simulate expressed empathy. Responses from an existing pool of online peer support data were repurposed by the agent and presented to the user. Information retrieval techniques and word embeddings were used to select historical responses that best matched a user's concerns. We collected ratings from 37,169 users to evaluate the system. Additionally, we conducted a controlled experiment (N=1284) to test whether the alleged source of a response (human or machine) might change user perceptions. Results: The majority of responses created by the agent (2986/3770, 79.20%) were deemed acceptable by users. However, users significantly preferred the efforts of their peers (P<.001). This effect was maintained in a controlled study (P=.02), even when the only difference in responses was whether they were framed as coming from a human or a machine. Conclusions: Our system illustrates a novel way for machines to construct nuanced and personalized empathic utterances. However, the design had significant limitations and further research is needed to make this approach viable. Our controlled study suggests that even in ideal conditions, nonhuman agents may struggle to express empathy as well as humans. The ethical implications of empathic agents, as well as their potential iatrogenic effects, are also discussed.
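The retrieval step described above — matching a user's concern against a pool of historical peer responses — can be sketched in a few lines. This is an illustrative toy, not the authors' system: the mini-corpus is invented, and simple bag-of-words term-frequency vectors stand in for the word embeddings and information retrieval techniques the paper mentions.

```python
from collections import Counter
import math

# Hypothetical mini-corpus of peer support responses, indexed by the
# concern each response originally answered (an invented stand-in for
# the paper's pool of online peer support data).
CORPUS = {
    "I feel anxious before every exam": "Exam nerves are common; try a short breathing exercise first.",
    "I can't sleep because of work stress": "Work stress often disrupts sleep; a wind-down routine can help.",
    "I feel lonely since moving to a new city": "Moving is hard; joining a local group helped me feel connected.",
}

def vectorize(text):
    """Bag-of-words term-frequency vector (a toy stand-in for word embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_response(user_concern):
    """Return the historical response whose original concern best matches the user's."""
    query = vectorize(user_concern)
    best = max(CORPUS, key=lambda c: cosine(query, vectorize(c)))
    return CORPUS[best]

print(retrieve_response("so anxious about my exam tomorrow"))
# → Exam nerves are common; try a short breathing exercise first.
```

In a real system the embedding model, corpus curation, and safety filtering carry most of the weight; the retrieval skeleton itself stays this simple.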
Conference Paper
Full-text available
Online mental health treatment has the potential to meet the increasing demand for mental health treatment. But low adherence to the treatment remains a problem that endangers treatment outcomes and their cost-effectiveness. This literature review compares predictors of adherence and outcome for clinical and online treatment of mental disorders to identify ways to improve the efficacy of online treatment and increase clients' adherence. Personalization of treatment and client improvement tracking appears to provide the most potential to improve clients' outcome and increase the cost-effectiveness of online treatment. Overall, it was noticed that decision support tools to improve online treatment are commonly not utilized and that their influence on treatment is unknown. However, integration of statistical methods into online treatment and research of their influence on the client has begun. Decision support systems derived from predictors of adherence might be required for personalization of online treatments and to improve outcome and cost-effectiveness to ease the burden of mental disorders.
Article
Full-text available
Medication adherence is of utmost importance for many chronic conditions, regardless of the disease type. Engaging patients in self-tracking their medication is a big challenge. One way to potentially reduce this burden is to use reminders to promote wellness throughout all stages of life and improve medication adherence. Chatbots have proven effective in triggering users to engage in certain activities, such as medication adherence. In this paper, we discuss "Roborto", a chatbot that creates an engaging, interactive and intelligent environment for patients and assists in positive lifestyle modification. We introduce a way for healthcare providers to track patients' adherence and intervene whenever necessary. We describe the health, technical and behavioural approaches to the problem of medication non-adherence and propose a diagnostic and decision support tool. The proposed study will be implemented and validated through a pilot experiment with users to measure the efficacy of the proposed approach.
Article
Full-text available
People with neurological conditions such as Parkinson's disease and dementia are known to have difficulties in language and communication. This paper presents initial testing of an artificial conversational agent, called Harlie. Harlie runs on a smartphone and is able to converse with the user on a variety of topics. A description of the application and a sample dialog are provided to illustrate the various roles chatbots can play in the management of neurological conditions. Harlie can be used for measuring voice and communication outcomes during the daily life of the user, and for gaining information about challenges encountered. Moreover, it is anticipated that she may also have an educational and support role.
Article
Full-text available
During the last two decades, Internet-delivered cognitive behavior therapy (ICBT) has been tested in hundreds of randomized controlled trials, often with promising results. However, the control conditions were often wait-list, care-as-usual, or attention control. Hence, little is known about the relative efficacy of ICBT as compared to face-to-face cognitive behavior therapy (CBT). The present systematic review and meta-analysis, which included 1418 participants, directly compared guided ICBT for psychiatric and somatic conditions to face-to-face CBT within the same trial. Out of the 2078 articles screened, a total of 20 studies met all inclusion criteria. Results showed a pooled effect size at post-treatment of Hedges' g = 0.05 (95% CI: -0.09 to 0.20), indicating that ICBT and face-to-face treatment produced equivalent overall effects. Study quality did not affect outcomes. While the overall results indicate equivalence, there have been few studies of the individual psychiatric and somatic conditions so far, and for the majority, guided ICBT has not been compared against face-to-face treatment. Thus, more research, preferably with larger sample sizes, is needed to establish the general equivalence of the two treatment formats.
Article
Full-text available
Fully automated self-help interventions can serve as highly cost-effective mental health promotion tools for massive numbers of people. However, these interventions are often characterised by poor adherence. One way to address this problem is to mimic therapy support with a conversational agent. The objectives of this study were to assess the effectiveness and adherence of a smartphone app delivering strategies used in positive psychology and CBT interventions via an automated chatbot (Shim) for a non-clinical population, and to explore participants' views and experiences of interacting with this chatbot. A total of 28 participants were randomized to either receive the chatbot intervention (n = 14) or to a wait-list control group (n = 14). Findings revealed that participants who adhered to the intervention (n = 13) showed significant interaction effects of group and time on psychological well-being (FS) and perceived stress (PSS-10) compared to the wait-list control group, with small to large between-group effect sizes (Cohen's d range 0.14–1.06). The participants also showed high engagement during the 2-week intervention, with an average open-app ratio of 17.71 times for the whole period. This is higher than in other studies on fully automated interventions claiming to be highly engaging, such as Woebot and the Panoply app. The qualitative data revealed sub-themes which, to our knowledge, have not been found previously, such as the moderating format of the chatbot. The results of this study, in particular the good adherence rate, validate the usefulness of replicating this study in the future with a larger sample size and an active control group. This is important, as the search for fully automated, yet highly engaging and effective, digital self-help interventions for promoting mental health is crucial for public health.
Conference Paper
Full-text available
Health professionals have limited resources and are not able to personally monitor and support patients in their everyday life. Against this background, and due to the increasing number of self-service channels and digital health interventions, we investigate how text-based healthcare chatbots (THCB) can be designed to effectively support patients and health professionals in therapeutic settings beyond on-site consultations. We present an open source THCB system and describe how the THCB was designed for a childhood obesity intervention. Preliminary results with 15 patients are promising with respect to intervention adherence (ca. 13,000 conversational turns over the course of 4 months, or ca. 8 per day and patient), scalability of the THCB approach (ca. 99.5% of all conversational turns were THCB-driven) and above-average scores on perceived enjoyment and attachment bond between patient and THCB. Future work is discussed.
Article
Full-text available
Background and objective: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important that patients adhere in the sense that they perform the tasks, but also that they adhere to the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation, information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally by a (virtual) embodied conversational agent or just via text on the screen. The aim of this research is to study which presentation mode is preferable for improving adherence. Methods: This study takes the approach of evaluating a specific part of a therapy, namely psychoeducation. This was done in a non-clinical sample, to first test the general constructs of the human-computer interaction. We performed an experimental study on the effect of presentation mode of psychoeducation on adherence. In this study, we took into account the moderating effects of attitude towards the virtual agent and recollection of the information. Within the paradigm of expressive writing, we asked participants (n = 46) to pick one of their worst memories to describe in a digital diary after receiving verbal or textual psychoeducation. Results and conclusion: We found that both the attitude towards the virtual agent and how well the psychoeducation was recollected were positively related to adherence in the form of task execution. Moreover, after controlling for the attitude to the agent and recollection, presentation of psychoeducation via text resulted in higher adherence than verbal presentation by the virtual agent did.
Conference Paper
Full-text available
There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people’s motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is “productivity”; chatbots help users to obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.
Article
Full-text available
Embodied conversational agents (ECAs) are advanced computational interactive interfaces designed to engage users in the continuous and long-term use of a background application. The advantages and benefits of these agents have been exploited in several e-health systems. One medical domain where ECAs have recently been applied is supporting the detection of symptoms, prevention and treatment of mental health disorders. As ECA-based applications are increasingly used in clinical psychology, and because one fatal consequence of mental health problems is suicide, it is necessary to analyse how current ECAs in this clinical domain support the early detection and prevention of risk situations associated with suicidality. The present work provides an overview of the main features implemented in ECAs to detect and prevent suicidal behaviours through two scenarios: ECAs acting as virtual counsellors offering immediate help to individuals at risk, and ECAs acting as virtual patients for learning/training in the identification of suicidal behaviours. A literature review was performed to identify relevant studies in this domain from the last decade, describing the main characteristics of the implemented ECAs and how they have been evaluated. A total of six studies fulfilling the defined search criteria were included in the review. Most of the experimental studies indicate promising results, though these types of ECAs are not yet commonly used in routine practice. Some open challenges for the further development of ECAs within this domain are also discussed.
Conference Paper
Full-text available
This work documents the recent rise in popularity of messaging bots: chatterbot-like agents with simple, textual interfaces that allow users to access information, make use of services, or provide entertainment through online messaging platforms. Conversational interfaces have been often studied in their many facets, including natural language processing, artificial intelligence, human-computer interaction, and usability. In this work we analyze the recent trends in chatterbots and provide a survey of major messaging platforms, reviewing their support for bots and their distinguishing features. We then argue for what we call "Botplication", a bot interface paradigm that makes use of context, history, and structured conversation elements for input and output in order to provide a conversational user experience while overcoming the limitations of text-only interfaces.
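The "Botplication" idea of structured conversation elements — offering fixed reply options instead of parsing free text — can be illustrated with a minimal script-based state machine. All states, prompts, and options below are hypothetical examples for illustration, not taken from the survey.

```python
# Minimal sketch of a "Botplication"-style structured conversation:
# each state presents fixed reply options, so the bot never has to
# interpret arbitrary natural language. States and texts are invented.
SCRIPT = {
    "start": {"text": "Hi! What would you like to do?",
              "options": {"check in": "mood", "get a tip": "tip"}},
    "mood": {"text": "How are you feeling today?",
             "options": {"good": "start", "stressed": "tip"}},
    "tip": {"text": "Try a two-minute breathing exercise.",
            "options": {"thanks": "start"}},
}

def step(state, choice):
    """Advance the conversation: map the chosen option to the next state and its prompt."""
    next_state = SCRIPT[state]["options"][choice]
    return next_state, SCRIPT[next_state]["text"]

state, prompt = step("start", "check in")
print(prompt)  # → How are you feeling today?
```

Because input is constrained to the offered options, context and conversation history reduce to the current state name — which is exactly the trade-off the Botplication paradigm makes against free-text chatterbots.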
Article
Full-text available
Background Web-based cognitive-behavioral therapeutic (CBT) apps have demonstrated efficacy but are characterized by poor adherence. Conversational agents may offer a convenient, engaging way of getting support at any time. Objective The objective of the study was to determine the feasibility, acceptability, and preliminary efficacy of a fully automated conversational agent to deliver a self-help program for college students who self-identify as having symptoms of anxiety and depression. Methods In an unblinded trial, 70 individuals age 18-28 years were recruited online from a university community social media site and were randomized to receive either 2 weeks (up to 20 sessions) of self-help content derived from CBT principles in a conversational format with a text-based conversational agent (Woebot) (n=34) or were directed to the National Institute of Mental Health ebook, “Depression in College Students,” as an information-only control group (n=36). All participants completed Web-based versions of the 9-item Patient Health Questionnaire (PHQ-9), the 7-item Generalized Anxiety Disorder scale (GAD-7), and the Positive and Negative Affect Scale at baseline and 2-3 weeks later (T2). Results Participants were on average 22.2 years old (SD 2.33), 67% female (47/70), mostly non-Hispanic (93%, 54/58), and Caucasian (79%, 46/58). Participants in the Woebot group engaged with the conversational agent an average of 12.14 (SD 2.23) times over the study period. No significant differences existed between the groups at baseline, and 83% (58/70) of participants provided data at T2 (17% attrition). Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; P=.01) while those in the information control group did not. 
In an analysis of completers, participants in both groups significantly reduced anxiety as measured by the GAD-7 (F(1,54)=9.24; P=.004). Participants’ comments suggest that process factors were more influential on their acceptability of the program than content factors mirroring traditional therapy. Conclusions Conversational agents appear to be a feasible, engaging, and effective way to deliver CBT.
Article
Full-text available
Introduction: Benefits from mental health early interventions may not be sustained over time, and longer-term intervention programs may be required to maintain early clinical gains. However, due to the high intensity of face-to-face early intervention treatments, this may not be feasible. Adjunctive internet-based interventions specifically designed for youth may provide a cost-effective and engaging alternative to prevent loss of intervention benefits. However, until now online interventions have relied on human moderators to deliver therapeutic content. More sophisticated models responsive to user data are critical to inform tailored online therapy. Thus, integration of user experience with a sophisticated and cutting-edge technology to deliver content is necessary to redefine online interventions in youth mental health. This paper discusses the development of the moderated online social therapy (MOST) web application, which provides an interactive social media-based platform for recovery in mental health. We provide an overview of the system's main features and discuss our current work regarding the incorporation of advanced computational and artificial intelligence methods to enhance user engagement and improve the discovery and delivery of therapy content. Methods: Our case study is the ongoing Horyzons site (5-year randomized controlled trial for youth recovering from early psychosis), which is powered by MOST. We outline the motivation underlying the project and the web application's foundational features and interface. We discuss system innovations, including the incorporation of pertinent usage patterns as well as identifying certain limitations of the system. This leads to our current motivations and focus on using computational and artificial intelligence methods to enhance user engagement, and to further improve the system with novel mechanisms for the delivery of therapy content to users. 
In particular, we cover our usage of natural language analysis and chatbot technologies as strategies to tailor interventions and scale up the system. Conclusions: To date, the innovative MOST system has demonstrated viability in a series of clinical research trials. Given the data-driven opportunities afforded by the software system, observed usage patterns, and the aim to deploy it on a greater scale, an important next step in its evolution is the incorporation of advanced and automated content delivery mechanisms.
Article
Full-text available
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what's new, and what's different this time? And is there an opportunity here to improve how our industry does technology transfer?
Article
Full-text available
Internet- and mobile-based interventions (IMIs) for mental disorders are mostly self-help programs whose fields of application range from prevention through treatment to aftercare. The efficacy of IMIs has been demonstrated in numerous studies, including direct comparisons with psychotherapy in the classical setting. This article gives an overview of the subject area, the state of the evidence, the potential uses of IMIs, and the opportunities for and barriers to their implementation. Sarah Paganini, Jiaxi Lin (Freiburg); David Daniel Ebert (Erlangen-Nürnberg, Lüneburg); Harald Baumeister (Ulm). Internet- and mobile-based interventions can reach affected individuals who have not yet made use of traditional services.
Article
Full-text available
Major depression and depressive symptoms are highly prevalent, and there is a need for different forms of psychological treatment that can be delivered from a distance at a low cost. In the present review the authors contrast face-to-face and internet-delivered cognitive behavior therapy (ICBT) for depression. A total of five studies are reviewed in which guided ICBT was directly compared against face-to-face CBT. Meta-analytic summary statistics were calculated for the five studies involving a total of 429 participants. The average effect size difference was Hedges' g = 0.12 (95% CI: -0.06 to 0.30) in the direction of favoring guided ICBT. The small difference in effect has no implication for clinical practice. The overall empirical status of clinician-guided ICBT for depression is commented on and future challenges are highlighted. Among these are developing treatments for patients with more severe and longstanding depression and for children, adolescents and the elderly. Also, there is a need to investigate mechanisms of change.
Conference Paper
Full-text available
We present SimSensei Kiosk, an implemented virtual human interviewer designed to create an engaging face-to-face interaction where the user feels comfortable talking and sharing information. SimSensei Kiosk is also designed to create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety or post-traumatic stress disorder (PTSD). In this paper, we summarize the design methodology, performed over the past two years, which is based on three main development cycles: (1) analysis of face-to-face human interactions to identify potential distress indicators, dialogue policies and virtual human gestures, (2) development and analysis of a Wizard-of-Oz prototype system where two human operators were deciding the spoken and gestural responses, and (3) development of a fully automatic virtual interviewer able to engage users in 15-25 minute interactions. We show the potential of our fully automatic virtual human interviewer in a user study, and situate its performance in relation to the Wizard-of-Oz prototype.
Article
Full-text available
Online interventions are increasingly seen as having the potential to meet the growing demand for mental health services. However, with the burgeoning of services provided online by psychologists, counselors, and social workers, it is becoming critical to ensure that the interventions provided are supported by research evidence. This article reviews evidence for the effectiveness of individual synchronous online chat counseling and therapy (referred to as "online chat"). Despite using inclusive review criteria, only six relevant studies were found. They showed that although there is emerging evidence supporting the use of online chat, the overall quality of the studies is poor, with few randomized controlled trials (RCTs). There is an urgent need for further research to support the widespread implementation of this form of mental health service delivery.
Article
Full-text available
Internet-delivered cognitive behavior therapy (ICBT) has been tested in many research trials, but to a lesser extent directly compared to face-to-face delivered cognitive behavior therapy (CBT). We conducted a systematic review and meta-analysis of trials in which guided ICBT was directly compared to face-to-face CBT. Studies on psychiatric and somatic conditions were included. Systematic searches resulted in 13 studies (total N=1053) that met all criteria and were included in the review. There were three studies on social anxiety disorder, three on panic disorder, two on depressive symptoms, two on body dissatisfaction, one on tinnitus, one on male sexual dysfunction, and one on spider phobia. Face-to-face CBT was either in the individual format (n=6) or in the group format (n=7). We also assessed quality and risk of bias. Results showed a pooled effect size (Hedges' g) at post-treatment of -0.01 (95% CI: -0.13 to 0.12), indicating that guided ICBT and face-to-face treatment produce equivalent overall effects. Study quality did not affect outcomes. While the overall results indicate equivalence, there are still few studies for each psychiatric and somatic condition and many conditions for which guided ICBT has not been compared to face-to-face treatment. Thus, more research is needed to establish equivalence of the two treatment formats.
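The pooled effect sizes reported in the two meta-analyses above follow standard meta-analytic conventions. As a minimal illustrative sketch (not the authors' exact computation, which is not reproduced here), Hedges' g for a single two-group comparison and a fixed-effect inverse-variance pooling with a 95% CI might look like this; all function names and example numbers are hypothetical:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Hedges' g for two independent groups, with its approximate sampling variance."""
    # pooled standard deviation
    s_p = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_p
    # small-sample correction factor J turns Cohen's d into Hedges' g
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    g = j * d
    # approximate sampling variance of g
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def pooled_effect(effects):
    """Fixed-effect inverse-variance pooling of (g, variance) pairs, with 95% CI."""
    weights = [1 / var for _, var in effects]
    g_bar = sum(w * g for w, (g, _) in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return g_bar, (g_bar - 1.96 * se, g_bar + 1.96 * se)
```

A CI that straddles zero, as in both reviews above, is what supports the authors' conclusion that guided ICBT and face-to-face CBT produce equivalent overall effects.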
Article
Full-text available
The "German Health Interview and Examination Survey for Adults" (DEGS1) and its supplementary module "Mental Health" (DEGS1-MH) provide, for the first time since the German National Health Survey (BGS98) 15 years ago, up-to-date estimates of morbidity, impairment profiles, and health service utilization among German adults. The key findings on the prevalence of mental disorders, their associated impairments, and contact rates with health services are reported. The study is based on a population-representative adult sample (18-79 years, n = 5,317), most of whom were examined in person with comprehensive clinical interviews (Composite International Diagnostic Interview; CIDI). The overall 12-month prevalence of mental disorders is 27.7%, with large differences between groups (e.g., sex, age, social status). Mental disorders proved to be particularly impairing (elevated number of disability days). Fewer than half of those affected report currently being in treatment for mental health problems (10-40%, depending on the number of diagnoses). Mental disorders are common. Beyond the individual suffering of those affected, the markedly elevated number of disability days compared with persons without a current mental diagnosis signals a large societal disease burden, also in comparison with many somatic diseases. Despite Germany's comparatively well-developed care system for mental disorders, there is likely room for improvement in treatment rates.
Article
Full-text available
The German health interview and examination survey for adults (DEGS1) with the mental health module (DEGS1-MH) is the successor to the last survey of mental disorders in the general German population 15 years ago (GHS-MHS). This paper reports the basic findings on the 12-month prevalence of mental disorders, associated disabilities and self-reported healthcare utilization. A representative national cohort (age range 18-79 years, n = 5,317) was selected and individuals were personally examined (87.5% face to face and 12.5% via telephone) by a comprehensive clinical interview using the composite international diagnostic interview (CIDI) questionnaire. The overall 12-month prevalence of mental disorders was 27.7%, with substantial differences between subgroups (e.g. sex, age, socioeconomic status). Mental disorders were found to be particularly impairing (elevated number of disability days). Less than 50% of those affected reported being in contact with health services due to mental health problems within the last 12 months (range 10-40%, depending on the number of diagnoses). Mental disorders were found to be commonplace, with a prevalence level comparable to that found in the 1998 predecessor study, although several further adjustments will have to be made for a sound methodological comparison between the studies. Apart from individual distress, elevated self-reported disability indicated a high societal disease burden of mental disorders (also in comparison with many somatic diseases). Despite a relatively comprehensive and well-developed mental healthcare system in Germany, there is still a need to improve treatment rates.
Technical Report
This paper is a survey of modern chatbot platforms, Natural Language Processing tools, and their application and design. A chatbot is proposed for the GA Tech. OMSCS program to answer prospective students' questions immediately, 24/7. http://hdl.handle.net/1853/58516
Article
Psychotherapy and digitalization: two seemingly incompatible terms that are nevertheless increasingly mentioned in the same breath. From the "one size fits all" prophecy to the predicted demise of care for people with mental illness, the debate is at times highly emotional and often lacks a scientific foundation. Blended psychotherapy ("verzahnte Psychotherapie") can be an approach that connects both worlds.