Conference Paper

The Impact of Interpersonal Closeness Cues in Text-based Healthcare Chatbots on Attachment Bond and the Desire to Continue Interacting: An Experimental Design

Abstract

Working alliance describes an important relationship quality between health professionals and patients and is robustly linked to treatment success. However, due to limited resources of health professionals, working alliance cannot always be promoted just-in-time in a ubiquitous fashion. To address this scalability problem, we investigate the direct effect of interpersonal closeness cues of text-based healthcare chatbots (THCBs) on attachment bond from the working alliance construct and the indirect effect on the desire to continue interacting with THCBs. The underlying research model and hypotheses are informed by counselling psychology and research on conversational agents. In order to investigate the hypothesized effects, we first develop a THCB codebook with 12 design dimensions on interpersonal closeness cues that are categorized into visual cues (i.e. avatar), verbal cues (i.e. greetings, address, jargon, T-V-distinction), quasi-nonverbal cues (i.e. emoticons) and relational cues (i.e. small talk, self-disclosure, empathy, humor, meta-relational talk and continuity). In a second step, four distinct THCB designs are developed along the continuum of interpersonal closeness (i.e. institutional-like, expert-like, peer-like and myself-like THCBs) and a corresponding study design for an interactive THCB-based online experiment is presented to test our hypotheses. We conclude this work-in-progress by outlining our future work.
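For illustration only, the codebook's cue categories and the four persona designs could be represented as a simple configuration structure. The sketch below (Python) reuses the dimension names from the abstract, while the concrete per-persona cue values are assumptions rather than the published codebook entries.

```python
# Hypothetical sketch of the interpersonal-closeness codebook as a data structure.
# Dimension names follow the abstract; the per-persona cue values are illustrative
# assumptions, not the published codebook entries.
from dataclasses import dataclass, field

CUE_CATEGORIES = {
    "visual": ["avatar"],
    "verbal": ["greetings", "address", "jargon", "t_v_distinction"],
    "quasi_nonverbal": ["emoticons"],
    "relational": ["small_talk", "self_disclosure", "empathy",
                   "humor", "meta_relational_talk", "continuity"],
}

@dataclass
class THCBPersona:
    name: str                                  # position on the closeness continuum
    cues: dict = field(default_factory=dict)   # design dimension -> cue setting

# Illustrative configurations along the interpersonal closeness continuum.
institution = THCBPersona("institutional-like", {"avatar": "logo", "address": "formal",
                                                 "emoticons": False, "small_talk": False})
expert      = THCBPersona("expert-like",        {"avatar": "professional photo", "address": "formal",
                                                 "emoticons": False, "small_talk": False})
peer        = THCBPersona("peer-like",          {"avatar": "casual photo", "address": "informal",
                                                 "emoticons": True, "small_talk": True})
myself      = THCBPersona("myself-like",        {"avatar": "user-chosen image", "address": "informal",
                                                 "emoticons": True, "small_talk": True})
```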
... Comprising task, bonds and goals (34) shared between coach and coachee, it is a key predictor of health behavior and attitude change (35). In digital contexts it can be boosted by creating interactions that adhere to principles outlined in positive psychology coaching and motivational interviewing (18, 35-37) such as leveraging interpersonal cues (38), expressing empathy (39), and eliciting change talk (29,37,40,41). For Elena+ we utilize past findings such as blending both social and task-oriented dialogue (42), depiction of a pictorial avatar representation for the agent (38,43,44), and utilization of some backstory for the CA (45, 46) (i.e., it is a digital representation of a real nurse like Elena that helps fight . ...
... In this vein, to help nurture a coaching atmosphere, terms such as "menu" or "next time you use the app" are avoided, and terms relevant to face-to-face communication such as "coaching choices" or "in your next coaching session" are utilized by the CA. ...
Article
Full-text available
Background: The current COVID-19 coronavirus pandemic is an emergency on a global scale, with huge swathes of the population required to remain indoors for prolonged periods to tackle the virus. In this new context, individuals' health-promoting routines are under greater strain, contributing to poorer mental and physical health. Additionally, individuals are required to keep up to date with latest health guidelines about the virus, which may be confusing in an age of social-media disinformation and shifting guidelines. To tackle these factors, we developed Elena+, a smartphone-based and conversational agent (CA) delivered pandemic lifestyle care intervention. Methods: Elena+ utilizes varied intervention components to deliver a psychoeducation-focused coaching program on the topics of: COVID-19 information, physical activity, mental health (anxiety, loneliness, mental resources), sleep, and diet and nutrition. Over 43 subtopics, a CA guides individuals through content and tracks progress over time, such as changes in health outcome assessments per topic, alongside user-set behavioral intentions and user-reported actual behaviors. Ratings of the usage experience, social demographics and the user profile are also captured. Elena+ is available for public download on iOS and Android devices in English, European Spanish and Latin American Spanish with future languages and launch countries planned, and no limits on planned recruitment. Panel data methods will be used to track user progress over time in subsequent analyses. The Elena+ intervention is open-source under the Apache 2 license (MobileCoach software) and the Creative Commons 4.0 license CC BY-NC-SA (intervention logic and content), allowing future collaborations, such as cultural adaptations, integration of new sensor-related features or the development of new topics. Discussion: Digital health applications offer a low-cost and scalable route to meet challenges to public health. As Elena+ was developed by an international and interdisciplinary team in a short time frame to meet the COVID-19 pandemic, empirical data are required to discern how effective such solutions can be in meeting real world, emergent health crises. Additionally, clustering Elena+ users based on characteristics and usage behaviors could help public health practitioners understand how population-level digital health interventions can reach at-risk and sub-populations.
... Social communication between an agent and a user is suggested to contribute to the trust and working alliance between an agent and the user (Bickmore, 2010). Kowatsch et al. (2018) explain that they focus on behaviours for relational agents that can be employed by a computer and provide literature that motivates their effects in human-human interaction. ...
... The resulting topic model provides a set of topics that are relevant to include in CAs for health coaching. These topics include a set of 'social' topics, for which the actions were deduced from literature on relational agents (e.g., Kowatsch et al., 2018), as social interaction is essential for building up a personal bond and beneficial for interactions spanning a longer period of time (Bickmore, 2010). They also include a set of 'meta' topics, since it may be useful to let the CA (as the coach or expert) explain how to use various functionalities of the system. ...
Thesis
Full-text available
A healthy lifestyle is important for our well-being and can prevent illnesses, but changing our behaviour to follow a lifestyle can be difficult. Digital health (eHealth) applications can support people in this process. However, their use decreases rapidly when the novelty effect wears off. Two causes for this are the lack of human involvement and the lack of personalised content. Therefore, conversational agents are added to eHealth applications in the role of health coaches to provide a social incentive, and adjustment of their dialogue content to users is investigated to increase personal relevance (tailoring). Such tailoring has previously been found to be effective in eHealth applications but has not extensively been investigated for interactive two-way communication over a longer period of time, such as in coaching conversations. The objective of this thesis research was therefore to address the following question: “How can we tailor users’ coaching conversations with conversational agents to improve engagement?” To that end, this thesis describes the development of dialogue authoring tools that support cooperation between health coaching experts and system developers; investigates tailoring of coaching strategies that determine the underlying long-term course of coaching conversations; introduces a five-step tailoring process and a topic model to support automatically tailoring coaching conversations on the topic level; and evaluates a proof-of-concept implementation of such automatic topic selection in a micro-randomized trial. The final chapter discusses the main findings and conclusions following three themes: 1) content and system design, 2) tailoring content and conversations, and 3) evaluation of tailored content. It concludes that content should be carefully constructed and reported, that tailoring approaches should be combined, and that evaluation of tailored applications remains complicated due to the many options for tailoring and factors of influence in long-term daily life situations. The second part of the chapter then discusses considerations for future work for five themes: safety, verification and validation; autonomy and trust; social interaction; input and output modalities; and open science.
... The results showed that mental healthcare chatbots can simulate empathetic expressions by extracting responses from large corpora. Moreover, the effects of interpersonal closeness cues of healthcare chatbots were investigated in [20]. Multiple cues were used for the investigation, including visual cues such as avatars, verbal cues such as greetings, non-verbal cues such as emojis, and relational cues such as small talk, empathy, and humor. ...
... The responses were extracted from the "EmpatheticDialogues" dataset which contains around 25,000 conversations [21]. The approach of extracting sentences from personality-tagged datasets to personalize a chatbot's responses was proven in [20]. Visual elements such as emojis, and static and animated images (GIFs) were added to the informal character responses to create a human-like feeling. ...
Conference Paper
Full-text available
Chatbots are becoming an attractive tool for people who seek medical advice due to their constant availability. Multiple healthcare chatbots were developed for different purposes such as delivering advice, booking appointments, and accessing medical records. Additionally, it was found that personalized healthcare chatbots affected the user experience positively, due to the addition of human empathy. Therefore, we propose building a character-based chatbot named "Chasey" for COVID-19, to combat the risk of misinformation amplification during the pandemic. Chasey provides users with various COVID-19 information such as tracking the cases per country, giving advice, answering frequently asked questions, and performing symptom checking. According to the selected chatbot character, users will receive personalized responses to their inquiries from verified sources. Moreover, we investigate how our chatbot implementation overcomes some of the challenges and limitations of healthcare and COVID-19 chatbots. Finally, an experiment was conducted to evaluate the chatbot's usability, as well as the likability and trustworthiness of the chatbot characters. Overall, the participants were satisfied with the chatbot features and character change option. Moreover, significant differences were found in the likability of the chatbot characters.
... To answer our first research question and as a prerequisite to empirically assess the effects of chatbots' impersonated social roles on perceived interpersonal closeness, the affective bond, and the intention to use them, we reviewed literature from social psychology, communication, and human-computer interaction research to develop a design codebook for chatbots with different social roles. A prior version of the design codebook and the study design have been presented at the European Conference on Information Systems 2018 (ECIS 2018) and published as research-in-progress work in the conference proceedings [79]. ...
Article
Full-text available
Background: The working alliance refers to an important relationship quality between health professionals and clients that robustly links to treatment success. Recent research shows that clients can develop an affective bond with chatbots. However, few research studies have investigated whether this perceived relationship is affected by the social roles of differing closeness a chatbot can impersonate and by allowing users to choose the social role of a chatbot. Objective: This study aimed at understanding how the social role of a chatbot can be expressed using a set of interpersonal closeness cues and examining how these social roles affect clients' experiences and the development of an affective bond with the chatbot, depending on clients' characteristics (ie, age and gender) and whether they can freely choose a chatbot's social role. Methods: Informed by the social role theory and the social response theory, we developed a design codebook for chatbots with different social roles along an interpersonal closeness continuum. Based on this codebook, we manipulated a fictitious health care chatbot to impersonate one of four distinct social roles common in health care settings (institution, expert, peer, and dialogical self) and examined effects on perceived affective bond and usage intentions in a web-based lab study. The study included a total of 251 participants, whose mean age was 41.15 (SD 13.87) years; 57.0% (143/251) of the participants were female. Participants were either randomly assigned to one of the chatbot conditions (no choice: n=202, 80.5%) or could freely choose to interact with one of these chatbot personas (free choice: n=49, 19.5%). Separate multivariate analyses of variance were performed to analyze differences (1) between the chatbot personas within the no-choice group and (2) between the no-choice and the free-choice groups. Results: While the main effect of the chatbot persona on affective bond and usage intentions was insignificant (P=.87), we found differences based on participants' demographic profiles: main effects for gender (P=.04, ηp2=0.115) and age (P<.001, ηp2=0.192) and a significant interaction effect of persona and age (P=.01, ηp2=0.102). Participants younger than 40 years reported higher scores for affective bond and usage intentions for the interpersonally more distant expert and institution chatbots; participants 40 years or older reported higher outcomes for the closer peer and dialogical-self chatbots. The option to freely choose a persona significantly benefited perceptions of the peer chatbot further (eg, free-choice group affective bond: mean 5.28, SD 0.89; no-choice group affective bond: mean 4.54, SD 1.10; P=.003, ηp2=0.117). Conclusions: Manipulating a chatbot's social role is a possible avenue for health care chatbot designers to tailor clients' chatbot experiences using user-specific demographic factors and to improve clients' perceptions and behavioral intentions toward the chatbot. Our results also emphasize the benefits of letting clients freely choose between chatbots.
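As a rough sketch of how the reported analysis strategy could be reproduced (not the authors' actual script; the file and column names bond, usage_intention, persona, age_group, gender, and condition are assumptions), a multivariate analysis of variance over the no-choice group might look as follows:

```python
# Hedged sketch: MANOVA on affective bond and usage intentions with persona,
# age group, and gender as factors, roughly mirroring analysis (1) described above.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("chatbot_study.csv")            # hypothetical data file
no_choice = df[df["condition"] == "no_choice"]   # restrict to the randomized (no-choice) group

model = MANOVA.from_formula(
    "bond + usage_intention ~ persona * age_group * gender",
    data=no_choice,
)
print(model.mv_test())  # multivariate tests (Wilks' lambda, Pillai's trace, ...) per effect
```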
... Indeed, in pursuit of scaling-up health interventions, rapid implementation is often encouraged to address the digital divide between developed and developing nations (89), whilst at the same time, greater personalization of healthcare is known to be both more beneficial and a strength readily delivered by digital tools such as CAs (13). While not wishing to dissuade the important rollout of new technologies, our research suggests that considering unique linguistic-cultural features within a linguaculture (such as T/V distinction) and optimizing CA dialogues accordingly would be a worthwhile step: Particularly as relevant demographic user characteristics can be readily elicited early in dialogues with the CA and subsequently utilized for personalization (90). ...
Article
Full-text available
Background: Conversational agents (CAs) are a novel approach to delivering digital health interventions. In human interactions, terms of address often change depending on the context or relationship between interlocutors. In many languages, this encompasses T/V distinction (formal and informal forms of the second-person pronoun “You”) that conveys different levels of familiarity. Yet, few research articles have examined whether CAs' use of T/V distinction across language contexts affects users' evaluations of digital health applications. Methods: In an online experiment (N = 284), we manipulated a public health CA prototype to use either informal or formal T/V distinction forms in French (“tu” vs. “vous”) and German (“du” vs. “Sie”) language settings. A MANCOVA and post-hoc tests were performed to examine the effects of the independent variables (i.e., T/V distinction and Language) and the moderating role of users' demographic profile (i.e., Age and Gender) on eleven user evaluation variables. These were related to four themes: (i) Sociability, (ii) CA-User Collaboration, (iii) Service Evaluation, and (iv) Behavioral Intentions. Results: Results showed a four-way interaction between T/V Distinction, Language, Age, and Gender, influencing user evaluations across all outcome themes. For French speakers, when the informal “T form” (“Tu”) was used, higher user evaluation scores were generated for younger women and older men (e.g., the CA felt more humanlike or individuals were more likely to recommend the CA), whereas when the formal “V form” (“Vous”) was used, higher user evaluation scores were generated for younger men and older women. For German speakers, when the informal T form (“Du”) was used, younger users' evaluations were comparable regardless of Gender, however, as individuals' Age increased, the use of “Du” resulted in lower user evaluation scores, with this effect more pronounced in men. When using the formal V form (“Sie”), user evaluation scores were relatively stable, regardless of Gender, and only increasing slightly with Age. Conclusions: Results highlight how user CA evaluations vary based on the T/V distinction used and language setting, however, that even within a culturally homogenous language group, evaluations vary based on user demographics, thus highlighting the importance of personalizing CA language.
... Humanlike appearance is proposed to increase perception of social presence, credibility and competence (Nass and Moon, 2000; Westerman et al., 2015). A humanlike appearance may boost perceptions of competence in a chatbot (Schurink, 2019; Araujo, 2018; Kowatsch et al., 2018), and perceived humanness of AI-chatbots supports enhanced perceptions of self-agency through chatbot assistance (cf. Gibbons and McCoy, 1991). ...
AI-chatbots as frontline agents promise innovative opportunities for shaping service offerings that benefit customers and retailers. Examining current practice through the lens of agency, as defined by Social Cognitive Theory, we present a 3-level classification of AI-chatbot design (anthropomorphic role, appearance and interactivity) and examine how the combination of these three aspects of chatbot design impacts on the complementarities of agency. Recognizing current implementation challenges, we advance that the complementarities of agency at each level are the lynchpin mechanism that translates AI-chatbot design into service relevant outcomes. We develop a research agenda focused on the emotion interface, resolution of the proxy agency dilemma and development of collective agency to support the implementation of AI-chatbots as frontline service agents.
Preprint
Full-text available
Background. Empowering people to decide on their health has proven to be beneficial and to enable the creation of a therapeutic alliance. This could be the same in an e-Mental health service. However, little is known about the degree of decision-making people should have when using such services and when they are seriously depressed or in a life-threatening situation. Method. The topic was explored through two studies. The first study was a quantitative study to investigate how much decision-making freedom the self-help e-Mental health service allowed and in what situation (serious or less serious mental complaints) the service could be used. Participants were randomly assigned to one of four prototypes of a self-help e-Mental health service (for elderly people) with a different degree of decision-making and level of gravity of the situation. Afterwards, they were asked to fill in a survey to measure autonomy, competence, relatedness, privacy, safety, patient-technology alliance and intention to use. To analyse the data, ANOVAs and regression analyses were performed. In a second, qualitative study, 10 (clinical) experts with different backgrounds were interviewed about the degree of decision-making elderly people should have when using an e-Mental health service. The interviews were analysed via open and axial coding. Results. For the first study, 72 elderly people were recruited. No significant effect of decision-making and level of gravity was found. Relatedness significantly influences patient-technology alliance and intention to use. Additionally, patient-technology alliance significantly influences intention to use. For the second study, it was found that control is central for users, even if it is more difficult for people who are seriously depressed or in a critical situation. Nonetheless, design and technical suggestions on how to support users of e-Mental health services who have more serious symptoms are presented, including personalization, a three-step approach to control, and setting goals. Conclusions. The results of this study can be applied to other self-help e-Mental health services with therapeutic purposes. Additionally, further research is needed to understand which other factors, together with relatedness, can influence the creation of a therapeutic alliance and how to foster intention to use.
Article
Full-text available
Conversational agents (CAs) are often included as virtual coaches in eHealth applications. Tailoring conversations with these coaches to the individual user can increase the effectiveness of the coaching. An improvement for this tailoring process could be to (automatically) tailor the conversation at the topic level. In this article, we describe the design and evaluation of a blueprint topic model for use in the implementation of such topic selection. First, we constructed a topic model by extracting actions from the literature that a CA as coach could perform. We divided these actions into groups and labeled them with topics. We included literature from the behavioral psychology, relational agents and persuasive technology domains. Second, we evaluated this topic model through an online closed card sort study with health coaching experts. The constructed topic model contains 30 topics and 115 actions. Overall, the sorting of actions into topics was validated by the 11 experts participating in the card sort. Cards with actions that were sorted incorrectly mostly missed an immediacy indicator in their description (e.g., the difference between “you could plan regular walks” as opposed to “let’s plan a walk”) and/or were based on behavior change techniques that were difficult to translate to a conversation. The blueprint topic model presented in this article is an important step towards more intelligent virtual coaches. Future research should focus on the implementation of automatic topic selection. Furthermore, tailoring of coaching dialogues with CAs in multiple steps could be further investigated, for example, from the technical or user interaction perspective.
Article
Full-text available
Users interact with chatbots for various purposes and motivations – and for different periods of time. However, since chatbots are considered social actors and given that time is an essential component of social interactions, the question arises as to how chatbots need to be designed depending on whether they aim to help individuals achieve short-, medium- or long-term goals. Following a taxonomy development approach, we compile 22 empirically and conceptually grounded design dimensions contingent on chatbots’ temporal profiles. Based upon the classification and analysis of 120 chatbots therein, we abstract three time-dependent chatbot design archetypes: Ad-hoc Supporters, Temporary Assistants, and Persistent Companions. While the taxonomy serves as a blueprint for chatbot researchers and designers developing and evaluating chatbots in general, our archetypes also offer practitioners and academics alike a shared understanding and naming convention to study and design chatbots with different temporal profiles.
Article
Full-text available
In this paper, a user interface paradigm, called Talk-and-Tools, is presented for automated e-coaching. The paradigm is based on the idea that people interact in two ways with their environment: symbolically and physically. The main goal is to show how the paradigm can be applied in the design of interactive systems that offer an acceptable coaching process. As a proof of concept, an e-coaching system is implemented that supports an insomnia therapy on a smartphone. A human coach was replaced by a cooperative virtual coach that is able to interact with a human coachee. In the interface of the system, we distinguish between a set of personalized conversations (“Talk”) and specialized modules that form a coherent structure of input and output facilities (“Tools”). Conversations contained a minimum of variation to exclude unpredictable behavior but included the necessary mechanisms for variation to offer personalized consults and support. A variety of system and user tests was conducted to validate the use of the system. After a 6-week therapy, some users spontaneously reported the experience of building a relationship with the e-coach. It is concluded that the addition of a conversational component fills an important gap in the design of current mobile systems.
Article
Full-text available
Background: Existing research postulates a variety of components that show an impact on utilization of technology-mediated mental health information systems (MHIS) and treatment outcome. Although researchers assessed the effect of isolated design elements on the results of Web-based interventions and the associations between symptom reduction and use of components across computer and mobile phone platforms, there remains uncertainty with regard to which components of technology-mediated interventions for mental health exert the greatest therapeutic gain. Until now, no studies have presented results on the therapeutic benefit associated with specific service components of technology-mediated MHIS for depression. Objective: This systematic review aims at identifying components of technology-mediated MHIS for patients with depression. Consequently, all randomized controlled trials comparing technology-mediated treatments for depression to either waiting-list control, treatment as usual, or any other form of treatment for depression were reviewed. Updating prior reviews, this study aims to (1) assess the effectiveness of technology-supported interventions for the treatment of depression and (2) add to the debate on what components in technology-mediated MHIS for the treatment of depression should be standard of care. Methods: Systematic searches in MEDLINE, PsycINFO, and the Cochrane Library were conducted. Effect sizes for each comparison between a technology-enabled intervention and a control condition were computed using the standard mean difference (SMD). Chi-square tests were used to test for heterogeneity. Using subgroup analysis, potential sources of heterogeneity were analyzed. Publication bias was examined using visual inspection of funnel plots and Begg's test. Qualitative data analysis was also used. In an explorative approach, a list of relevant components was extracted from the body of literature by consensus between two researchers. Results: Of 6387 studies initially identified, 45 met all inclusion criteria. Programs analyzed showed a significant trend toward reduced depressive symptoms (SMD -0.58, 95% CI -0.71 to -0.45, P<.001). Heterogeneity was large (I2≥76). A total of 15 components were identified. Conclusions: Technology-mediated MHIS for the treatment of depression has a consistent positive overall effect compared to controls. A total of 15 components have been identified. Further studies are needed to quantify the impact of individual components on treatment effects and to identify further components that are relevant for the design of future technology-mediated interventions for the treatment of depression and other mental disorders.
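For readers less familiar with the effect-size arithmetic behind such reviews, the following minimal sketch computes a standardized mean difference per study and Cochran's Q with I² for heterogeneity; the input numbers are invented for illustration and do not come from the review.

```python
# Minimal sketch of standard meta-analytic arithmetic: an SMD (Cohen's d) per study
# and Cochran's Q / I^2 under fixed-effect pooling. All inputs are illustrative.
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between intervention and control groups."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def heterogeneity(effects, variances):
    """Cochran's Q and I^2 (%) for a set of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Made-up study summaries (depression scores, lower = better):
d = smd(m1=10.2, sd1=4.1, n1=50, m2=13.0, sd2=4.5, n2=52)
q, i2 = heterogeneity(effects=[-0.6, -0.4, -0.8], variances=[0.04, 0.05, 0.03])
print(round(d, 2), round(q, 2), round(i2, 1))  # negative SMD = symptom reduction vs. control
```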
Article
Full-text available
Objectives: This review sought to determine what is currently known about the focus, form, and efficacy of web-based interventions that aim to support the well-being of workers and enable them to manage their work-related stress. Method: A scoping review of the literature as this relates to web-based interventions for the management of work-related stress and supporting the psychological well-being of workers was conducted. Results: Forty-eight web-based interventions were identified and reviewed, the majority of which (n = 37) were "individual"-focused and utilized cognitive-behavioral techniques, relaxation exercises, mindfulness, or cognitive behavior therapy. Most interventions identified were provided via a website (n = 34) and were atheoretical in nature. Conclusions: There is some low-to-moderate quality evidence that "individual"-focused interventions are effective for supporting employee well-being and managing their work-related stress. There are few web-based interventions that target "organizational" or "individual/organization" interface factors, and there is limited support for their efficacy. A clear gap appears to exist between work-stress theory and its application in the design and development of web-based interventions for the management of work-related stress.
Article
Full-text available
Background: Depression is a burdensome, recurring mental health disorder with high prevalence. Even in developed countries, patients have to wait for several months to receive treatment. In many parts of the world there is only one mental health professional for over 200 people. Smartphones are ubiquitous and have a large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms and providing context-sensitive intervention support. Objective: The objective of this study is 2-fold, first to explore the detection of daily-life behavior based on sensor information to identify subjects with a clinically meaningful depression level, second to explore the potential of context sensitive intervention delivery to provide in-situ support for people with depressive symptoms. Methods: A total of 126 adults (age 20-57) were recruited to use the smartphone app Mobile Sensing and Support (MOSS), collecting context-sensitive sensor information and providing just-in-time interventions derived from cognitive behavior therapy. Real-time learning-systems were deployed to adapt to each subject's preferences to optimize recommendations with respect to time, location, and personal preference. Biweekly, participants were asked to complete a self-reported depression survey (PHQ-9) to track symptom progression. Wilcoxon tests were conducted to compare scores before and after intervention. Correlation analysis was used to test the relationship between adherence and change in PHQ-9. One hundred twenty features were constructed based on smartphone usage and sensors including accelerometer, Wifi, and global positioning systems (GPS). Machine-learning models used these features to infer behavior and context for PHQ-9 level prediction and tailored intervention delivery. Results: A total of 36 subjects used MOSS for ≥2 weeks. For subjects with clinical depression (PHQ-9≥11) at baseline and adherence ≥8 weeks (n=12), a significant drop in PHQ-9 was observed (P=.01). This group showed a negative trend between adherence and change in PHQ-9 scores (rho=-.498, P=.099). Binary classification performance for biweekly PHQ-9 samples (n=143), with a cutoff of PHQ-9≥11, based on Random Forest and Support Vector Machine leave-one-out cross validation resulted in 60.1% and 59.1% accuracy, respectively. Conclusions: Proxies for social and physical behavior derived from smartphone sensor data were successfully deployed to deliver context-sensitive and personalized interventions to people with depressive symptoms. Subjects who used the app for an extended period of time showed significant reduction in self-reported symptom severity. Nonlinear classification models trained on features extracted from smartphone sensor data including Wifi, accelerometer, GPS, and phone use, demonstrated a proof of concept for the detection of depression superior to random classification. While findings of effectiveness must be reproduced in an RCT to prove causation, they pave the way for a new generation of digital health interventions leveraging smartphone sensors to provide context sensitive information for in-situ support and unobtrusive monitoring of critical mental health states.
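A hedged sketch of the reported classification setup (binary PHQ-9 cutoff at 11, Random Forest and SVM, leave-one-out cross-validation) is shown below; the feature file and column names are assumptions, not the MOSS study code.

```python
# Hedged sketch: binary classification of clinically relevant depression
# (PHQ-9 >= 11) from smartphone sensor features, evaluated with leave-one-out
# cross-validation as described above. File and column names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

data = pd.read_csv("sensor_features.csv")   # hypothetical feature table
X = data.drop(columns=["phq9"])             # e.g., GPS, Wifi, accelerometer, phone-use features
y = (data["phq9"] >= 11).astype(int)        # cutoff used in the study

loo = LeaveOneOut()
for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=loo, scoring="accuracy").mean()
    print(f"{name}: {acc:.1%} leave-one-out accuracy")
```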
Article
Importance The value of self-monitoring of blood glucose (SMBG) levels in patients with non–insulin-treated type 2 diabetes has been debated. Objective To compare 3 approaches of SMBG for effects on hemoglobin A1c levels and health-related quality of life (HRQOL) among people with non–insulin-treated type 2 diabetes in primary care practice. Design, Setting, and Participants The Monitor Trial study was a pragmatic, open-label randomized trial conducted in 15 primary care practices in central North Carolina. Participants were randomized between January 2014 and July 2015. Eligible patients with type 2 non–insulin-treated diabetes were: older than 30 years, established with a primary care physician at a participating practice, had glycemic control (hemoglobin A1c) levels higher than 6.5% but lower than 9.5% within the 6 months preceding screening, as obtained from the electronic medical record, and willing to comply with the results of random assignment into a study group. Of the 1032 assessed for eligibility, 450 were randomized. Interventions No SMBG, once-daily SMBG, and once-daily SMBG with enhanced patient feedback including automatic tailored messages delivered via the meter. Main Outcomes and Measures Coprimary outcomes included hemoglobin A1c levels and HRQOL at 52 weeks. Results A total of 450 patients were randomized and 418 (92.9%) completed the final visit. There were no significant differences in hemoglobin A1c levels across all 3 groups (P = .74; estimated adjusted mean hemoglobin A1c difference, SMBG with messaging vs no SMBG, −0.09%; 95% CI, −0.31% to 0.14%; SMBG vs no SMBG, −0.05%; 95% CI, −0.27% to 0.17%). There were also no significant differences found in HRQOL. There were no notable differences in key adverse events including hypoglycemia frequency, health care utilization, or insulin initiation. Conclusions and Relevance In patients with non–insulin-treated type 2 diabetes, we observed no clinically or statistically significant differences at 1 year in glycemic control or HRQOL between patients who performed SMBG compared with those who did not perform SMBG. The addition of this type of tailored feedback provided through messaging via a meter did not provide any advantage in glycemic control. Trial Registration clinicaltrials.gov Identifier: NCT02033499
Article
Background: Embodied conversational agents (ECAs) are computer-generated characters that simulate key properties of human face-to-face conversation, such as verbal and nonverbal behavior. In Internet-based eHealth interventions, ECAs may be used for the delivery of automated human support factors. Objective: We aim to provide an overview of the technological and clinical possibilities, as well as the evidence base for ECA applications in clinical psychology, to inform health professionals about the activity in this field of research. Methods: Given the large variety of applied methodologies, types of applications, and scientific disciplines involved in ECA research, we conducted a systematic scoping review. Scoping reviews aim to map key concepts and types of evidence underlying an area of research, and answer less-specific questions than traditional systematic reviews. Systematic searches for ECA applications in the treatment of mood, anxiety, psychotic, autism spectrum, and substance use disorders were conducted in databases in the fields of psychology and computer science, as well as in interdisciplinary databases. Studies were included if they conveyed primary research findings on an ECA application that targeted one of the disorders. We mapped each study's background information, how the different disorders were addressed, how ECAs and users could interact with one another, methodological aspects, and the study's aims and outcomes. Results: This study included N=54 publications (N=49 studies). More than half of the studies (n=26) focused on autism treatment, and ECAs were used most often for social skills training (n=23). Applications ranged from simple reinforcement of social behaviors through emotional expressions to sophisticated multimodal conversational systems. Most applications (n=43) were still in the development and piloting phase, that is, not yet ready for routine practice evaluation or application. Few studies conducted controlled research into clinical effects of ECAs, such as a reduction in symptom severity. Conclusions: ECAs for mental disorders are emerging. State-of-the-art techniques, involving, for example, communication through natural language or nonverbal behavior, are increasingly being considered and adopted for psychotherapeutic interventions in ECA research with promising results. However, evidence on their clinical application remains scarce. At present, their value to clinical practice lies mostly in the experimental determination of critical human support factors. In the context of using ECAs as an adjunct to existing interventions with the aim of supporting users, important questions remain with regard to the personalization of ECAs' interaction with users, and the optimal timing and manner of providing support. To increase the evidence base with regard to Internet interventions, we propose an additional focus on low-tech ECA solutions that can be rapidly developed, tested, and applied in routine practice.
Book
This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions. · Presents a comprehensive overview of the various technologies that underlie conversational user interfaces; · Combines descriptions of conversational user interface technologies with a guide to various toolkits and software that enable readers to implement and test their own solutions; · Provides a series of worked examples so readers can develop and implement different aspects of the technologies.
Article
This study investigates how user satisfaction and intention to use for an interactive movie recommendation system are determined by communication variables and the relationship between a conversational agent and the user. By adopting the Computers-Are-Social-Actors (CASA) paradigm and uncertainty reduction theory, this study examines the influence of self-disclosure and reciprocity as key communication variables on user satisfaction. A two-way ANOVA test was conducted to analyze the effects of self-disclosure and reciprocity on user satisfaction with a conversational agent. The interactional effect of self-disclosure and reciprocity on user satisfaction was not significant, but both main effects proved to be significant. PLS analysis results showed that perceived trust and interactional enjoyment are significant mediators in the relationship between communication variables and user satisfaction. In addition, reciprocity is a stronger variable than self-disclosure in predicting relationship building between an agent and a user. Finally, user satisfaction is an influential factor of intention to use. These findings have implications from both practical and theoretical perspectives.
Article
In order to improve the social capabilities of embodied conversational agents, we propose a computational model to enable agents to automatically select and display appropriate smiling behavior during human-machine interaction. A smile may convey different communicative intentions depending on subtle characteristics of the facial expression and contextual cues. To construct such a model, as a first step, we explore the morphological and dynamic characteristics of different types of smiles (polite, amused, and embarrassed smiles) that an embodied conversational agent may display. The resulting lexicon of smiles is based on a corpus of virtual agents' smiles directly created by users and analyzed through a machine-learning technique. Moreover, during an interaction, a smiling expression impacts on the observer's perception of the interpersonal stance of the speaker. As a second step, we propose a probabilistic model to automatically compute the user's potential perception of the embodied conversational agent's social stance depending on its smiling behavior and on its physical appearance. This model, based on a corpus of users' perceptions of smiling and nonsmiling virtual agents, enables a virtual agent to determine the appropriate smiling behavior to adopt given the interpersonal stance it wants to express. An experiment using real human-virtual agent interaction provided some validation of the proposed model.
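To make the selection step concrete, the toy sketch below inverts a purely invented table of perceived-stance probabilities to pick the smile type most likely to convey a target stance; it illustrates the idea only and is not the model learned in the paper.

```python
# Toy sketch only: the conditional probabilities below are invented placeholders,
# not the corpus-based model from the paper. It illustrates the selection step
# implied above: pick the smile type most likely to convey a target stance,
# given the agent's appearance.
P_STANCE = {  # (smile_type, appearance) -> {perceived stance: probability}
    ("polite",      "humanlike"): {"warm": 0.30, "friendly": 0.45, "dominant": 0.25},
    ("amused",      "humanlike"): {"warm": 0.55, "friendly": 0.35, "dominant": 0.10},
    ("embarrassed", "humanlike"): {"warm": 0.40, "friendly": 0.30, "dominant": 0.30},
}

def select_smile(target_stance: str, appearance: str) -> str:
    """Return the smile type maximizing the probability of the target stance."""
    candidates = {smile: probs[target_stance]
                  for (smile, app), probs in P_STANCE.items() if app == appearance}
    return max(candidates, key=candidates.get)

print(select_smile("warm", "humanlike"))  # -> "amused" with these placeholder values
```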
Article
Background: The just-in-time adaptive intervention (JITAI) is an intervention design aiming to provide the right type/amount of support, at the right time, by adapting to an individual's changing internal and contextual state. The availability of increasingly powerful mobile and sensing technologies underpins the use of JITAIs to support health behavior, as in such a setting an individual's state can change rapidly, unexpectedly, and in his/her natural environment. Purpose: Despite the increasing use and appeal of JITAIs, a major gap exists between the growing technological capabilities for delivering JITAIs and research on the development and evaluation of these interventions. Many JITAIs have been developed with minimal use of empirical evidence, theory, or accepted treatment guidelines. Here, we take an essential first step towards bridging this gap. Methods: Building on health behavior theories and the extant literature on JITAIs, we clarify the scientific motivation for JITAIs, define their fundamental components, and highlight design principles related to these components. Examples of JITAIs from various domains of health behavior research are used for illustration. Conclusion: As we enter a new era of technological capacity for delivering JITAIs, it is critical that researchers develop sophisticated and nuanced health behavior theories capable of guiding the construction of such interventions. Particular attention has to be given to better understanding the implications of providing timely and ecologically sound support for intervention adherence and retention.