Figure - uploaded by Eduardo Bunge
Sociodemographic characteristics of baseline sample (N = 170).


Source publication
Article
Full-text available
Background The use of chatbots to address mental health conditions has become increasingly popular in recent years. However, few studies have aimed to teach parenting skills through chatbots, and there are no reports on parental user experience. Aim: This study aimed to assess the user experience of a parenting chatbot micro intervention to teach how t...

Contexts in source publication

Context 1
... average age of the parents' children was 5.69 years old (SD = 2.93), and there was relative gender homogeneity (51.2% girls and 48.8% boys). See Table 1. A total of 89 parents were randomly assigned to the experimental group and 81 to the control group. ...

Similar publications

Article
Full-text available
The research aims to study the factors which affect baby boomers’ attitudes towards the acceptance of mobile network providers’ Artificial Intelligence (AI) Chatbots in Thailand and their influences. The research sample consists of 400 people who were born from 1946 to 1964 and had experience in using AI chatbots. The proposed concept of model of t...
Article
Full-text available
A chatbot or conversational agent is software that can interact or "chat" with a human user using a natural language such as English. Since the first chatbot was developed, many have been created, but most of their problems still persist, such as providing the right answer to the user and user acceptance itself. Considering such facts, in t...
Article
Full-text available
Introduction Within the technological development path, chatbots are considered an important tool for economic and social entities to become more efficient and to develop customer-centric experiences that mimic human behavior. Although artificial intelligence is increasingly used, there is a lack of empirical studies that aim to understand consumer...
Preprint
Full-text available
Natural Language Understanding (NLU) is important in today's technology as it enables machines to comprehend and process human language, leading to improved human-computer interactions and advancements in fields such as virtual assistants, chatbots, and language-based AI systems. This paper highlights the significance of advancing the field of NLU...
Article
Full-text available
Existing persona-based dialogue generation models focus on the semantic consistency between personas and responses. However, various influential factors can cause persona inconsistency, such as the speaking style in the context. Existing models perform inflexibly in speaking styles on various-persona-distribution datasets, resulting in persona styl...

Citations

... Prior work suggested that AI-driven systems can support parenting education [61] and provide evidence-based advice through applications and chatbots, delivering micro-interventions such as teaching parents how to praise their children effectively [11,17] or offering strategies to teach complex concepts [55,69]. Many parents also prefer using GAI to create educational materials tailored to their children's needs, rather than granting children direct access to these tools [27]. ...
Conference Paper
Full-text available
AI-assisted learning companion robots are increasingly used in early education. Many parents express concerns about content appropriateness, while they also value how AI and robots could supplement their limited skill, time, and energy to support their children's learning. We designed a card-based kit, SET, to systematically capture scenarios that have different extents of parental involvement. We developed a prototype interface, PAiREd, with a learning companion robot to deliver LLM-generated educational content that can be reviewed and revised by parents. Parents can flexibly adjust their involvement in the activity by determining what they want the robot to help with. We conducted an in-home field study involving 20 families with children aged 3-5. Our work contributes to an empirical understanding of the level of support parents with different expectations may need from AI and robots and a prototype that demonstrates an innovative interaction paradigm for flexibly including parents in supporting their children.
... Similarly, Beatty et al [25], He et al [38], and Jeong et al [39] highlighted the importance of measuring goal alignment using the WAI-SR subscale, which evaluates how users and the chatbot collaborate to set mental health-related goals. Entenberg et al [40] as well as Martinengo et al [41] reported in the case of their study that therapeutic goals were established by the chatbot at the beginning of the interaction, ensuring clarity from the outset. Forman-Hoffman et al [42] reported that in their context, therapeutic goal alignment was evaluated via the therapeutic alliance score, highlighting a structured measurement approach. ...
... Several studies explored how engagement was measured or facilitated. Entenberg et al [40] described engagement as determined by the number of messages and characters sent. Escobar Viera et al [56] found that participants sent an average of 49.3 messages to the chatbot, highlighting message frequency as an indicator of engagement. ...
... Chan et al [62] discussed issues with inappropriate chatbot reinforcements, context misunderstandings, and technical errors, which undermined their reliability. Entenberg et al [40] noted that customizable and human-like features significantly enhanced user satisfaction, while Forman-Hoffman et al [42] highlighted design elements like responsiveness and inclusivity (eg, avoiding past names for trans users) as facilitators. However, limited conversational content and inability to handle complex interactions were key barriers [56]. ...
Article
Full-text available
Background: Mental health disorders significantly impact global populations, prompting the rise of digital mental health interventions, such as artificial intelligence (AI)-powered chatbots, to address gaps in access to care. This review explores the potential for a "digital therapeutic alliance (DTA)," emphasizing empathy, engagement, and alignment with traditional therapeutic principles to enhance user outcomes. Objective: The primary objective of this review was to identify key concepts underlying the DTA in AI-driven psychotherapeutic interventions for mental health. The secondary objective was to propose an initial definition of the DTA based on these identified concepts. Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for scoping reviews and Tavares de Souza's integrative review methodology were followed, encompassing systematic literature searches in Medline, Web of Science, PsycNet, and Google Scholar. Data from eligible studies were extracted and analyzed using Horvath et al's conceptual framework on a therapeutic alliance, focusing on goal alignment, task agreement, and the therapeutic bond, with quality assessed using the Newcastle-Ottawa Scale and Cochrane Risk of Bias Tool. Results: A total of 28 studies were identified from an initial pool of 1294 articles after excluding duplicates and ineligible studies. These studies informed the development of a conceptual framework for a DTA, encompassing key elements such as goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers affecting therapeutic outcomes. The interventions primarily focused on AI-powered chatbots, digital psychotherapy, and other digital tools. Conclusions: The findings of this integrative review provide a foundational framework for the concept of a DTA and report its potential to replicate key therapeutic mechanisms such as empathy, trust, and collaboration in AI-driven psychotherapeutic tools.
While the DTA shows promise in enhancing accessibility and engagement in mental health care, further research and innovation are needed to address challenges such as personalization, ethical concerns, and long-term impact.
... As can be seen in Table 1, a considerable part of the research on the UX and usability of chatbots has been carried out in the health domain (e.g., [47–53]) and related areas such as mental health (e.g., [10,54–57]), digital health [58], and well-being [59]. Several chatbot UX and usability studies have also been conducted in the context of education (e.g., [14,51,60,61]) and higher education [62]. ...
... Mulia, for example, evaluated usability and user satisfaction when using ChatGPT for text generation [65]. Existing research on chatbot usability and UX covers a broad spectrum of applications, with studies examining factors like task completion, user satisfaction, and effectiveness, as well as specific usability instruments such as the System Usability Scale (SUS) (e.g., [9,14,49,51,54,56,58,60–62,64–66]), the Chatbot Usability Questionnaire (CUQ) (e.g., [48,50,53,56,64]), and the User Experience Questionnaire (UEQ) (e.g., [47,55,57,60,64]). The Usability Metric for User Experience LITE (UMUX-LITE) [67] offers a quick and reliable measure of usability with a strong correlation to the SUS. ...
... We found only one study in which the authors built a measurement instrument based on the International Positive and Negative Affect Schedule Short-Form (I-PANAS-SF) [10]. Studies evaluating factors that affect chatbot UX and usability also include other metrics, such as attractiveness [55], cognitive load [13], customer satisfaction [63], ease of completing tasks [59], ease of use [9,52], learnability [62], perceived autonomy [13], perceived competence [13], satisfaction with operation [13], responsiveness [63], satisfaction with the system [13], satisfaction [57], system reliability [62], task performance [59], utility [9,52,62], utility errors [59], and others. ...
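For reference, the SUS cited throughout these excerpts is scored with a fixed arithmetic rule (Brooke's standard formula): each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that calculation, not tied to any of the cited studies:

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score.

    `responses` is a list of 10 answers on a 1-5 Likert scale, in
    questionnaire order. Odd-numbered items are positively worded,
    even-numbered items negatively worded. Returns a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        # Odd items: response - 1; even items: 5 - response.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

By this formula, the best possible answers (5 on every odd item, 1 on every even item) yield 100, so the SUS = 79.03 reported for ChatGPT in the study below sits well above the commonly cited benchmark average of 68.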
Article
Full-text available
Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers’ perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard, December 2023 version). Participants (N = 85), Master’s students from the Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment analysis and data annotation tasks using these tools, enabling a comparative evaluation. The results show that AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration and workload, especially during cognitively demanding tasks. Among the tools, ChatGPT achieved the highest usability score (SUS = 79.03) and was rated positively for emotional engagement. Trust levels varied, with Taguette preferred for task accuracy and ChatGPT rated highest in user confidence. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven tools can enhance researchers’ experiences in QDA while emphasizing the need to align tool selection with specific tasks and user preferences.
... Similarly, Beatty and colleagues as well as other teams highlighted the importance of measuring goal alignment using the Working Alliance Inventory – Short Revised (WAI-SR) subscale, which evaluates how users and the chatbot collaborate to set mental health-related goals [38–40]. Entenberg and team as well as Martinengo reported that in their studies therapeutic goals were established by the chatbot at the beginning of the interaction, ensuring clarity from the outset [41,42]. Forman-Hoffman and colleagues reported that in their context, therapeutic goal alignment was evaluated via the therapeutic alliance score, highlighting a structured measurement approach [43]. ...
... Several studies explored how engagement was measured or facilitated. Entenberg et al. described engagement as determined by the number of messages and characters sent [41]. Escobar Viera et al. found that participants sent an average of 49.3 messages to the chatbot, highlighting message frequency as an indicator of engagement [54]. ...
... Chan et al. discussed issues with inappropriate chatbot reinforcements, context misunderstandings, and technical errors, which undermined their reliability [58]. Easton et al. noted that customizable and human-like features significantly enhanced user satisfaction, while Escobar Viera et al. highlighted design elements like responsiveness and inclusivity (e.g., avoiding past names for trans users) as facilitators [41,43]. However, limited conversational content and inability to handle complex interactions were key barriers [54]. ...
Preprint
Full-text available
Background: Mental health disorders significantly impact global populations, prompting the rise of digital mental health interventions like artificial intelligence-powered chatbots to address gaps in access to care. This review explores the potential for a "digital therapeutic alliance," emphasizing empathy, engagement, and alignment with traditional therapeutic principles to enhance user outcomes. Objective: The primary objective of this review was to identify key concepts underlying the digital therapeutic alliance in AI-driven psychotherapeutic interventions for mental health. The secondary objective was to propose an initial definition of the digital therapeutic alliance based on these identified concepts. Methods: The PRISMA for Scoping Reviews and Tavares de Souza's integrative review methodology were followed, encompassing systematic literature searches in Medline, Web of Science, PsycNet, and Google Scholar. Data from eligible studies were extracted and analyzed using Horvath et al.'s conceptual framework on therapeutic alliance, focusing on goal alignment, task agreement, and the therapeutic bond, with quality assessed using the Newcastle-Ottawa Scale and Cochrane Risk of Bias Tool. Results: A total of 23 studies were identified from an initial pool of 1,227 articles after excluding duplicates and ineligible studies. These studies informed the development of a conceptual framework for Digital Therapeutic Alliance, encompassing key elements such as goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers affecting therapeutic outcomes. The interventions primarily focused on AI-powered chatbots, digital psychotherapy, and other digital tools. 
Conclusions: The findings of this integrative review provide a foundational framework for the concept of a Digital Therapeutic Alliance and report its potential to replicate key therapeutic mechanisms such as empathy, trust, and collaboration in artificial intelligence-driven psychotherapeutic tools. While promising for enhancing accessibility and engagement in mental health care, further research and innovation are needed to address challenges like personalization, ethical concerns, and long-term impact.
... In the past few years, various researchers have explored the use of AI tools for parenting, such as using AI-enabled systems to assist the parent-child storytelling experience [44] and to promote effective parenting skills such as praising [8–11]. Despite the extensive adoption of LLMs for language and academic learning, only a few studies have explored the usefulness of LLMs in parenting. Luo et al. [21] interviewed early childhood education experts from China and the United States (U.S.) and found that one role of ChatGPT in parenting is to serve as an on-call facilitator for educators and caregivers, which leads to new opportunities and challenges related to AI literacy and AI social interaction [20,46]. ...
... Consequently, existing clinical tests addressing depression evolution are used as a gold standard. This is the case of [SIGH-D, IDS-C, Zarit Test, SAD PERSONS, RSIS, HAM-A, MBI, PANAS] [35,36,48–52,54,56–59,37,61,63–68,38,39,42–44,46,47]. ...
... In the case of contributions addressing both male and female individuals, a further classification can be identified according to target audience. 38.9% of studies are targeted at the general population [35,37,58,62–64,38,40–42,44,49,51,54], whereas 33.3% are aimed at depression patients [36,46,67,69,47,48,50,56,59–61,16], students (11.1%) [57,65,66,68], or caregivers (8.3%) [55,43,39]. The early detection of depression in caregivers, and depression follow-up, are key given the role that caregivers play from a public health standpoint and their social impact. ...
Article
Full-text available
ABSTRACT Objective This work explores the advances in conversational agents aimed at the detection of mental health disorders, specifically the screening of depression. The focus is on those based on voice interaction, but other approaches are also tackled, such as text-based interaction or embodied avatars. Methods PRISMA was selected as the systematic methodology for the analysis of existing literature, which was retrieved from Scopus, PubMed, IEEE Xplore, APA PsycINFO, Cochrane, and Web of Science. Relevant research addresses the detection of depression using conversational agents, and the selection criteria utilized include their effectiveness, usability, personalization, and psychometric properties. Results Of the 993 references initially retrieved, 36 were finally included in our work. The analysis of these studies allowed us to identify 30 conversational agents that claim to detect depression, specifically or in combination with other disorders such as anxiety or stress disorders. As a general approach, screening was implemented in the conversational agents taking standardized or psychometrically validated clinical tests as a reference, which were also utilized as a gold standard for their validation. Questionnaires such as the Patient Health Questionnaire or the Beck Depression Inventory, used in 65% of the articles analyzed, stand out. Conclusions The usefulness of intelligent conversational agents allows screening to be administered to different types of profiles, such as patients (33% of relevant proposals) and caregivers (11%), although in many cases a target profile is not clearly defined (66% of solutions analyzed). This study found 30 standalone conversational agents, but some proposals combine several approaches for a more enriching data acquisition.
The interaction implemented in most relevant conversational agents is text-based, although the evolution is clearly towards voice integration, which in turn enhances their psychometric characteristics, as voice interaction is perceived as more natural and less invasive.
... Therefore, these findings show the relevance of this intervention as it provides new parenting information in an acceptable format. This is aligned with previous reports from the same participants stating that the intervention was useful for everyday life (32). Moreover, these findings can guide further considerations when developing these interventions. ...
Article
Full-text available
Introduction Mental health issues have been on the rise among children and adolescents, and digital parenting programs have shown promising outcomes. However, there is limited research on the potential efficacy of utilizing chatbots to promote parental skills. This study aimed to understand whether parents learn from a parenting chatbot micro intervention, to assess the overall efficacy of the intervention, and to explore the user characteristics of the participants, including parental busyness, assumptions about parenting, and qualitative engagement with the chatbot. Methods A sample of 170 parents with at least one child between 2–11 years old were recruited. A randomized control trial was conducted. Participants in the experimental group accessed a 15-min intervention that taught how to utilize positive attention and praise to promote positive behaviors in their children, while the control group remained on a waiting list. Results Results showed that participants engaged with a brief AI-based chatbot intervention and were able to learn effective praising skills. Although scores moved in the expected direction, there were no significant differences by condition in the praising knowledge reported by parents, perceived changes in disruptive behaviors, or parenting self-efficacy, from pre-intervention to 24-hour follow-up. Discussion The results provided insight to understand how parents engaged with the chatbot and suggests that, in general, brief, self-guided, digital interventions can promote learning in parents. It is possible that a higher dose of intervention may be needed to obtain a therapeutic change in parents. Further research implications on chatbots for parenting skills are discussed.
Article
Objective We aim to describe the development of a conversational agent (CA) for parenting, termed PAT (Parenting Assistant platform), to demonstrate how artificial intelligence (AI) can enhance parenting skills. Background Behavioral problems are the most common issues in childhood mental health. Developing and disseminating scalable interventions to address early‐stage behavioral problems are of high priority. Artificial intelligence (AI)‐based CAs can offer innovative methods to deliver parenting interventions to reduce behavioral problems. CAs have the capability to interact through text or voice conversations and can undergo training using evidence‐based parenting programs. However, research on CAs for parenting and behavioral problems is limited. Experience The development of PAT consisted of three phases: Phase 1 was purely rule‐based, Phase 2 was hybrid (rule‐based format plus large language models), and Phase 3 featured an agentic architecture. The latest version of PAT includes prompt engineering, guardrails, retrieval‐augmented generation, few‐shot learning, context, and memory management through agentic architecture. Although comprehensive empirical results are pending, the iterative development and enhancement of PAT indicate the potential for effective digital intervention. The agentic architecture of the latest version of PAT aims to provide robust, context‐aware interactions to support parenting challenges. Implications CAs have the potential to reach a broader population of parents and deliver personalized interventions tailored to their specific needs. Moreover, CAs are structured to provide timely support, which can enhance family dynamics and contribute to improved long‐term outcomes for both parents and children. Conclusion AI‐based CAs can be used as alternatives to waitlists; as digital cotherapists; and implemented in health care, mental health, and school settings.
The potential benefits and risks of the different types of CA and features are discussed.