Conference Paper

Generative Transformer Chatbots for Mental Health Support: A Study on Depression and Anxiety

References
Article
Generative pretrained transformer models have been popular recently due to their enhanced capabilities and performance. In contrast to many existing artificial intelligence models, generative pretrained transformer models can perform with very limited training data. Generative pretrained transformer 3 (GPT-3) is one of the latest releases in this pipeline, demonstrating human-like logical and intellectual responses to prompts. Some examples include writing essays, answering complex questions, matching pronouns to their nouns, and conducting sentiment analyses. However, questions remain with regard to its implementation in health care, specifically in terms of operationalization and its use in clinical practice and research. In this viewpoint paper, we briefly introduce GPT-3 and its capabilities and outline considerations for its implementation and operationalization in clinical practice through a use case. The implementation considerations include (1) processing needs and information systems infrastructure, (2) operating costs, (3) model biases, and (4) evaluation metrics. In addition, we outline the following three major operational factors that drive the adoption of GPT-3 in the US health care system: (1) ensuring Health Insurance Portability and Accountability Act compliance, (2) building trust with health care providers, and (3) establishing broader access to the GPT-3 tools. This viewpoint can inform health care practitioners, developers, clinicians, and decision makers toward understanding the use of the powerful artificial intelligence tools integrated into hospital systems and health care.
Chapter
Chatbots potentially address deficits in the availability of the traditional health workforce and could help to stem concerning rates of youth mental health issues, including high suicide rates. While chatbots have shown some positive results in helping people cope with mental health issues, deep concerns remain about their ability to identify emergency situations and act accordingly. Risk of suicide or self-harm is one such concern, which we address in this project. A chatbot decides its response based on the user's text input and must correctly recognize the significance of a given input. We have designed a self-harm classifier that takes the user's response to the chatbot and predicts whether it indicates intent for self-harm. Because confidential counselling data are difficult to access, we looked for alternative data sources and found that Twitter and Reddit provide data similar to what we would expect from a chatbot user. We trained a sentiment analysis classifier on the Twitter data and a self-harm classifier on the Reddit data, and combined the results of the two models to improve performance. The best results came from an LSTM-RNN classifier using BERT encodings, with a best model accuracy of 92.13%. We then tested the model on new data from Reddit and achieved an accuracy of 97%. Such a model is promising for future embedding in mental health chatbots to improve their safety through accurate detection of self-harm talk by users.
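The two-model design described above, a sentiment classifier and a self-harm classifier whose outputs are combined, can be sketched as follows. This is a minimal illustration using scikit-learn TF-IDF and logistic-regression stand-ins rather than the paper's BERT-encoded LSTM-RNN; the toy corpora and the `self_harm_risk` helper are assumptions for demonstration only.

```python
# Sketch of combining a sentiment model and a self-harm model into one risk
# score; scikit-learn stand-ins, not the paper's BERT-encoded LSTM-RNN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpora standing in for the Twitter (sentiment) and Reddit (self-harm) data.
sentiment_texts = ["i feel great today", "this is wonderful",
                   "i am so sad", "everything is awful"]
sentiment_labels = [0, 0, 1, 1]  # 1 = negative sentiment
harm_texts = ["i want to hurt myself", "thinking about self harm",
              "went for a nice walk", "had lunch with friends"]
harm_labels = [1, 1, 0, 0]       # 1 = self-harm intent

sentiment_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
    sentiment_texts, sentiment_labels)
harm_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
    harm_texts, harm_labels)

def self_harm_risk(message: str) -> float:
    """Average the two models' positive-class probabilities into one score."""
    p_negative = sentiment_clf.predict_proba([message])[0][1]
    p_harm = harm_clf.predict_proba([message])[0][1]
    return (p_negative + p_harm) / 2

print(self_harm_risk("i am so sad i want to hurt myself"))
```

In a production chatbot the score would gate an escalation path (e.g. surfacing crisis resources) rather than just being printed.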
Article
Aim The prevalence of mental health difficulties and the demand for psychological support for students in higher education (HE) appear to be increasing. Online therapy is a widely accessible resource that could provide effective support; however, little is known about such provision. The aim of this study was therefore to answer the research question ‘What factors serve to influence higher education students' levels of engagement with online therapy?’ Method A systematic review of qualitative scholarly and peer‐reviewed literature was conducted across 10 databases. Six papers met the inclusion criteria, were assessed for quality and were analysed using thematic synthesis. Findings Factors that serve to motivate HE students to engage with online therapy included the perception that it might enhance the quality of the therapeutic relationship, that it would facilitate more autonomy in the work, and that it might enable them to be anonymous and avoid face‐to‐face contact. In contrast, demotivating factors were primarily practical in nature. Fitting therapeutic work into their busy lives, technological challenges and persisting mental health stigma proved important factors. Conclusion This review synthesises the reasons why HE students might engage with or withdraw from online therapy. It highlights that students appear to view online therapy positively, but they can be inhibited by both personal and practical issues. Therapeutic services therefore need to ensure that information about the work they offer online is clear and transparent and that the platforms they work on are secure and stable. Finally, the need for further research, to keep abreast of technological developments, is recommended.
Conference Paper
We address the problem of automatic detection of psychiatric disorders from the linguistic content of social media posts. We build a large-scale dataset of Reddit posts from users with eight disorders and a control user group. We extract and analyze linguistic characteristics of posts and identify differences between diagnostic groups. We build strong classification models based on deep contextualized word representations and show that they outperform previously applied statistical models with simple linguistic features by large margins. We compare user-level and post-level classification performance, as well as an ensembled multiclass model.
Article
Accessibility of medical knowledge and healthcare costs are two major impediments for the common man. Conversational agents such as medical chatbots, designed with medical applications in mind, can potentially address these issues. Chatbots can be either generic or disease-specific in nature. Diabetes is a non-communicable disease, and early detection can make people aware of the serious consequences of this disorder and help save lives. In this paper, we have developed 'Diabot', a generic text-to-text DIAgnostic chatBOT that engages patients in conversation using advanced Natural Language Understanding (NLU) techniques to provide personalized predictions from a general health dataset, based on the symptoms elicited from the patient. The design is further extended as a DIAbetes chatBOT for specialized diabetes prediction, using the Pima Indian diabetes dataset to suggest proactive preventive measures. Multiple classification algorithms exist in machine learning and can be chosen for prediction based on their accuracy. However, rather than relying on a single model and hoping it is the best or most accurate predictor, the novelty of this paper lies in ensemble learning, a meta-algorithm that combines many weaker models and averages them to produce one final, balanced, and accurate model. Literature reviews show that very little research has explored ensemble methods for increasing prediction accuracy. The paper presents a state-of-the-art Diabot design with a simple front-end interface for the common man built with React UI, RASA-NLU-based text preprocessing, a quantitative performance comparison of various machine learning algorithms as standalone classifiers, and a majority-voting ensemble combining them all. We observe that the chatbot interacts seamlessly with patients based on the symptoms elicited. The accuracy of the ensemble model is balanced for general health prediction and highest for diabetes prediction among all the weak learners considered, which motivates further exploration of ensemble techniques in this domain.
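The majority-voting ensemble described above can be sketched with scikit-learn's `VotingClassifier`. The toy symptom data and the particular base learners below are illustrative assumptions, not the paper's actual setup or datasets.

```python
# Sketch of a majority-voting ensemble over several weak learners,
# trained on toy symptom vectors (not the Pima Indian diabetes dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Toy data: 4 symptom features; label is 1 when the feature sum is high.
X = rng.random((200, 4))
y = (X.sum(axis=1) > 2.0).astype(int)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # each learner votes; the majority class wins
).fit(X, y)

print(ensemble.predict([[0.9, 0.9, 0.9, 0.9], [0.1, 0.1, 0.1, 0.1]]))
```

Hard voting averages out the individual learners' mistakes, which is the balancing effect the abstract attributes to its ensemble.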
Article
The goal of this project is to develop a chatbot using deep learning models. Chatbot research aims to produce agents that appear as human as possible, and most current models, which use RNNs and related sequential learning architectures, struggle to relate information over long dependencies. In addition, NLP tasks require a lot of data, which can be hard to collect for smaller projects, and this motivated us to try a sequence-to-sequence learning model using LSTMs. We used a movie-dialogue corpus of 220,579 conversation exchanges, of which only about 50,000 were used as the training corpus for our model, since training on more exchanges requires more computational power than we have available.
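The LSTM encoder-decoder (sequence-to-sequence) architecture described above can be sketched in PyTorch. The vocabulary size, layer dimensions, and random token batches below are illustrative assumptions, not the movie-dialogue setup itself.

```python
# Sketch of an LSTM encoder-decoder (seq2seq) model for dialogue:
# the encoder summarizes the prompt, the decoder generates the reply.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 20, 16, 32  # toy sizes for illustration

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt):
        # Encode the prompt; keep only the final (hidden, cell) state.
        _, state = self.encoder(self.emb(src))
        # Decode the reply tokens, conditioned on the encoder state.
        dec_out, _ = self.decoder(self.emb(tgt), state)
        return self.out(dec_out)  # logits over the vocabulary

torch.manual_seed(0)
model = Seq2Seq()
src = torch.randint(0, VOCAB, (8, 5))  # batch of token-id prompts
tgt = torch.randint(0, VOCAB, (8, 6))  # shifted reply tokens
logits = model(src, tgt)
print(logits.shape)  # one distribution per reply position
```

Training would minimize cross-entropy between these logits and the next reply token at each position; at inference time the decoder is run one token at a time.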
Article
Background: Chatbots are systems that are able to converse and interact with human users using spoken, written, and visual languages. Chatbots have the potential to be useful tools for individuals with mental disorders, especially those who are reluctant to seek mental health advice due to stigmatization. While numerous studies have been conducted about using chatbots for mental health, there is a need to systematically bring this evidence together in order to inform mental health providers and potential users about the main features of chatbots and their potential uses, and to inform future research about the main gaps of the previous literature. Objective: We aimed to provide an overview of the features of chatbots used by individuals for their mental health as reported in the empirical literature. Methods: Seven bibliographic databases (Medline, Embase, PsycINFO, Cochrane Central Register of Controlled Trials, IEEE Xplore, ACM Digital Library, and Google Scholar) were used in our search. In addition, backward and forward reference list checking of the included studies and relevant reviews was conducted. Study selection and data extraction were carried out by two reviewers independently. Extracted data were synthesised using a narrative approach. Chatbots were classified according to their purposes, platforms, response generation, dialogue initiative, input and output modalities, embodiment, and targeted disorders. Results: Of 1039 citations retrieved, 53 unique studies were included in this review. The included studies assessed 41 different chatbots. Common uses of chatbots were: therapy (n = 17), training (n = 12), and screening (n = 10). Chatbots in most studies were rule-based (n = 49) and implemented in stand-alone software (n = 37). In 46 studies, chatbots controlled and led the conversations. 
While the most frequently used input modality was written language only (n = 26), the most frequently used output modality was a combination of written, spoken and visual languages (n = 28). In the majority of studies, chatbots included virtual representations (n = 44). The most common focus of chatbots was depression (n = 16) or autism (n = 10). Conclusion: Research regarding chatbots in mental health is nascent. There are numerous chatbots that are used for various mental disorders and purposes. Healthcare providers should compare chatbots found in this review to help guide potential users to the most appropriate chatbot to support their mental health needs. More reviews are needed to summarise the evidence regarding the effectiveness and acceptability of chatbots in mental health.
Chapter
With the rising popularity of chatbots, research on their underlying technology has expanded to provide increased support to users. One such sphere has been mental health support. As we train chatbots to better understand human emotions, we can also employ them to assist users in dealing with their emotions and improving their mental well-being. This paper presents a novel approach to building a chatbot framework that can converse with users and also provide therapeutic advice based on an assessment of the user's mood. The framework employs sentiment analysis of user input to classify the user's behavior and, depending on the classification, selects one of two trained chatbot models, both based on a self-attention mechanism, to engage the user in generic or therapy-oriented conversation. The framework is thus designed with an emphasis on using natural language processing and machine learning techniques to mitigate the onset of mental health disorders.
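The mood-based routing described above can be sketched as follows. The negative-word lexicon and the canned responses are illustrative stand-ins for the paper's trained sentiment classifier and its two self-attention chatbot models.

```python
# Sketch of mood-based routing: a sentiment check decides whether the
# generic or the therapy model handles the message. The lexicon and the
# two "models" are illustrative stand-ins for trained components.
NEGATIVE_WORDS = {"sad", "anxious", "depressed", "hopeless", "worried"}

def mood_score(message: str) -> int:
    """Count negative-lexicon hits as a crude stand-in for sentiment analysis."""
    return sum(word in NEGATIVE_WORDS for word in message.lower().split())

def respond(message: str) -> str:
    """Route to the therapy model when the mood check flags the message."""
    if mood_score(message) > 0:
        return "therapy-model: It sounds like you're going through a lot."
    return "generic-model: Tell me more!"

print(respond("i feel sad and anxious"))
print(respond("what did you do today"))
```

The design point is the routing itself: classification happens before generation, so the therapy model only ever sees messages the mood check has flagged.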
Article
Open-Domain Question Answering (ODQA) is the task of finding an answer to a given query in a large set of documents. In this paper, we present an experimental study comparing ODQA candidate solutions in the context of troubleshooting documents. We focus mainly on a well-known open-source framework called Haystack, which comprises two key components: the Retriever and the Reader. Haystack ships with several Retriever-Reader combinations, and which of them is best remains an open question. We therefore conduct an experimental study comparing different Retriever-Reader combinations, aiming to identify the best combination with respect to speed and processing power in the context of troubleshooting queries.
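The Retriever-Reader pattern that Haystack implements can be illustrated with a minimal sketch: a cheap retriever narrows the corpus to candidate passages, and a reader then extracts the answer. This sketch uses a TF-IDF retriever over toy troubleshooting documents and returns the top passage in place of a trained Reader; it does not use Haystack's actual API.

```python
# Sketch of the Retriever half of a Retriever-Reader ODQA pipeline:
# rank documents by TF-IDF cosine similarity to the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "To reset the router, hold the reset button for ten seconds.",
    "If the printer jams, open the rear tray and remove the paper.",
    "Update the firmware from the admin panel to fix Wi-Fi drops.",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)

def retrieve(query: str, top_k: int = 1) -> list:
    """Retriever: return the top_k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [docs[i] for i in ranked]

# A real Reader would run extractive QA over the retrieved passages;
# here the best passage itself stands in for the "answer".
print(retrieve("how do I reset my router")[0])
```

The Retriever-Reader split is what the paper benchmarks: a faster retriever shrinks the reader's workload, so the choice of combination trades answer quality against speed and processing power.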
Alaa A Abd-Alrazaq, Mohannad Alajlani, Ali Abdallah Alalwan, Bridgette M Bewick, Peter Gardner, and Mowafa Househ. 2019. An overview of the features of chatbots in mental health: A scoping review.
Manish Bali, Samahit Mohanty, Subarna Chatterjee, Manash Sarma, and Rajesh Puravankara. 2019. Diabot: a predictive medical chatbot using ensemble learning.
Himanshu Bansal and Rizwan Khan. 2018. A review paper on human computer interaction.
Jacob Devlin and Ming-Wei Chang. 2018. Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing. Google AI Blog. Available from: https://ai.googleblog.com/2018/11/open-sourcing-bertstate-of-art-pre.html [Accessed 4 December 2019].
Kolla Bhanu Prakash, Y Nagapawan, N Lakshmi Kalyani, and V Pradeep Kumar. 2020. Chatterbot implementation using transfer learning and LSTM encoder-decoder architecture.
Zeeshan Haque Syed, Asma Trabelsi, Emmanuel Helbert, Vincent Bailleau, and Christian Muths. 2021. Question answering chatbot for troubleshooting queries based on transfer learning.
Alaa A Abd-Alrazaq, Mohannad Alajlani, Nashva Ali, Kerstin Denecke, Bridgette M Bewick, and Mowafa Househ. 2021. Perceptions and opinions of patients about mental health chatbots: scoping review. Journal of medical Internet research 23, 1 (2021), e17828.
Ebtesam Hussain Almansor, Farookh Khadeer Hussain, and Omar Khadeer Hussain. 2021. Supervised ensemble sentiment-based framework to measure chatbot quality of services. Computing 103 (2021), 491-507.
Himanshu Bansal and Rizwan Khan. 2018. A review paper on human computer interaction. Int. J. Adv. Res. Comput. Sci. Softw. Eng 8, 4 (2018), 53.
Jordan J Bird, Anikó Ekárt, and Diego R Faria. 2021. Chatbot Interaction with Artificial Intelligence: human data augmentation with T5 and language transformer ensemble for text classification. Journal of Ambient Intelligence and Humanized Computing (2021), 1-16.
Reuben Crasto, Lance Dias, Dominic Miranda, and Deepali Kayande. 2021. Care-Bot: A Mental Health ChatBot. In 2021 2nd International Conference for Emerging Technology (INCET). IEEE, 1-5.
Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Yongjin Cho, Sungja Choi, Satish Indurthi, Seunghak Yu, Hyungtak Choi, Inchul Hwang, and Jihie Kim. 2019. Ensemble-based deep reinforcement learning for chatbots. Neurocomputing 366 (2019), 118-130.
Richard G Frank and Sherry A Glied. 2006. Better but not well: Mental health policy in the United States since 1950. JHU Press.
Chaitanya Joglekar. 2022. WOzBot: A Wizard of Oz Based Method for Chatbot Response Improvement. Master's thesis. Trinity College Dublin.
Edgar Jones and Simon Wessely. 2005. Shell shock to PTSD: Military psychiatry from 1900 to the Gulf War. Psychology Press.
Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. 2022. A survey of transformers. AI Open (2022).
Jianfeng Liu, Feiyang Pan, and Ling Luo. 2020. Gochat: Goal-oriented chatbots with hierarchical reinforcement learning. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1793-1796.
Denis Lukovnikov, Asja Fischer, and Jens Lehmann. 2019. Pretrained transformers for simple question answering over knowledge graphs. In International Semantic Web Conference. Springer, 470-486.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019).
Taihua Shao, Yupu Guo, Honghui Chen, and Zepeng Hao. 2019. Transformer-based neural network for answer selection in question answering. IEEE Access 7 (2019), 26146-26156.
Amy E Sickel, Jason D Seacat, and Nina A Nabors. 2014. Mental health stigma update: A review of consequences. Advances in Mental Health 12, 3 (2014), 202-215.
Sinarwati Mohamad Suhaili, Naomie Salim, and Mohamad Nazim Jambli. 2021. Service chatbots: A systematic review. Expert Systems with Applications 184 (2021), 115461.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998-6008.
WHO. 2021. Depression. https://www.who.int/news-room/fact-sheets/detail/depression