Article

Abstract

Emotion-aware chatbots that can sense human emotions are becoming increasingly prevalent. However, when emotion-aware chatbots expose the emotions they sense, they can undermine human autonomy and users' trust. One way to ensure autonomy is through the provision of control. Offering too much control, in turn, may increase users' cognitive effort. To investigate the impact of control over emotion-aware chatbots on autonomy, trust, and cognitive effort, as well as on user behavior, we carried out an experimental study with 176 participants. The participants interacted with a chatbot that provided emotional feedback and were additionally able to control different chatbot dimensions (e.g., timing, appearance, and behavior). Our findings show, first, that higher control levels increase autonomy and trust in emotion-aware chatbots. Second, higher control levels do not significantly increase cognitive effort. Third, in a post hoc behavioral analysis, we identify four behavioral control strategies based on the timing and quantity of control feature usage and on cognitive effort. These findings shed light on individual preferences for user control over emotion-aware chatbots. Overall, our study contributes to the literature by showing the positive effect of control over emotion-aware chatbots and by identifying four behavioral control strategies. With our findings, we also provide practical implications for the future design of emotion-aware chatbots.
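For illustration, one way such behavioral strategies could be derived from interaction logs is by clustering participants on standardized usage features. The sketch below assumes k-means with four clusters and hypothetical feature definitions; the abstract does not disclose the actual analysis procedure.

```python
# Hypothetical sketch: deriving behavioral control strategies by clustering
# control-feature usage logs. The paper does not report its exact procedure;
# the feature names and the choice of k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# One row per participant: when control features were first used (timing),
# how many were used (quantity), and self-reported cognitive effort.
X = np.column_stack([
    rng.uniform(0, 10, 176),   # timing: minutes until first feature use
    rng.integers(0, 6, 176),   # quantity: number of control features used
    rng.uniform(1, 7, 176),    # cognitive effort: 7-point scale
])

X_std = StandardScaler().fit_transform(X)          # z-standardize features
strategies = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

for k in range(4):                                  # profile each strategy
    print(f"strategy {k}: n={np.sum(strategies == k)}, "
          f"centroid={X[strategies == k].mean(axis=0).round(2)}")
```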


... Research on the human-robot relationship mainly covers human-robot trust (trust), robot acceptance (acceptance), users' willingness to self-disclose (self-disclosure), user satisfaction (satisfaction), and user privacy concerns (privacy) [10,48]. This line of research is grounded in users' psychological cognition and relies on structural equation modeling and user experiments to conduct quantitative empirical research, in order to identify the factors influencing human-robot relationships and their underlying mechanisms [14,49,50]. From the large body of results accumulated in the current literature, the influencing factors (antecedent variables) of the human-robot relationship can be broadly categorized into three dimensions: "Human", "Robot", and "Environment" [51][52][53]. ...
... Verbal cues encompass the tone, intonation, and conversational style of social robots' speech, while nonverbal cues encompass facial expressions, gestures, body movements, and gaze, among others [2,50]. For example, robots can use gestures, natural language, and even "eye contact" to engage in positive emotional interactions with users [19,49,60]. As shown in Figure 3B, the average occurrence time of keywords in this cluster is relatively early, indicating that it forms part of the research foundation of human-computer interaction design for social robots. ...
... As shown in Figure 3B, the average occurrence time of keywords in this cluster is relatively early, indicating that it forms part of the research foundation of human-computer interaction design for social robots. From the perspective of overall keyword characteristics, this cluster primarily examines the impact of social cues and emotional factors on human-robot interaction, focusing on the recognition and expression of social robot emotions and their integration into specific designs and evaluations [2,49,58]. ...
Article
Full-text available
(1) Background: Social robot interaction design is crucial for determining user acceptance and experience. However, few studies have systematically discussed the current focus and future research directions of social robot interaction design from a bibliometric perspective. Therefore, we conducted this study in order to identify the latest research progress and evolution trajectory of research hotspots in social robot interaction design over the last decade. (2) Methods: We conducted a comprehensive review based on 2416 papers related to social robot interaction design obtained from the Web of Science (WOS) database. Our review utilized bibliometric techniques and integrated VOSviewer and CiteSpace to construct a knowledge map. (3) Conclusions: The current research hotspots of social robot interaction design mainly focus on #1 the study of human–robot relationships in social robots, #2 research on the emotional design of social robots, #3 research on social robots for children’s psychotherapy, #4 research on companion robots for elderly rehabilitation, and #5 research on educational social robots. The reference co-citation analysis identifies the classic literature that forms the basis of the current research, which provides theoretical guidance and methods for the current research. Finally, we discuss several future research directions and challenges in this field.
... The research content of the human-robot relationship mainly includes human-robot trust (Trust), robot acceptance (Acceptance), user self-disclosure willingness (Self-Disclosure), user satisfaction (Satisfaction), and user privacy concerns (Privacy) [10,20,[57][58][59]. This line of research is grounded in users' psychological cognition and relies on structural equation modeling and user experiments to conduct quantitative empirical research, in order to identify the factors influencing human-robot relationships and their underlying mechanisms [14,[60][61][62]. From the large body of results accumulated in the current literature, the influencing factors (antecedent variables) of the human-robot relationship can be broadly categorized into three dimensions: "Human," "Robot," and "Environment." ...
... Verbal cues encompass the tone, intonation, and conversational style of social robots' speech, while nonverbal cues encompass facial expressions, gestures, body movements, and gaze, among others [2,61,73]. For example, robots can use gestures, natural language, and even "eye contact" to engage in positive emotional interactions with users [26,60,74]. As shown in Figure 3B, the average occurrence time of keywords in this cluster is relatively early, indicating that it forms part of the research foundation of human-computer interaction design for social robots. ...
... As shown in Figure 3B, the average occurrence time of keywords in this cluster is relatively early, indicating that it forms part of the research foundation of human-computer interaction design for social robots. From the perspective of overall keyword characteristics, this cluster mainly discusses how the social cue design and emotional factors of social robots affect human-robot interaction, that is, the recognition and expression of social robot emotions and how to integrate them into specific designs and evaluations [2,60,71]. ...
Preprint
Full-text available
(1) Background: Social robot interaction design is critical in determining acceptance and user experience. However, few studies have systematically discussed the current focus and future research directions of social robot interaction design from a bibliometric perspective. We therefore set out to identify the latest research progress and the evolution trajectory of research hotspots in social robot interaction design over the last decade. (2) Methods: Based on 2416 papers related to social robot interaction design collected from the Web of Science (WOS) database, we conducted a comprehensive bibliometric review, drawing a knowledge map with VOSviewer and CiteSpace in an integrated manner. (3) Conclusions: The current research hotspots of social robot interaction design mainly focus on #1 the study of human-robot relationships in social robots, #2 research on the emotional design of social robots, #3 research on social robots for children's psychotherapy, #4 research on companion robots for elderly rehabilitation, and #5 research on educational social robots. The reference co-citation analysis identifies the classic literature that forms the basis of the current research, which provides theoretical guidance and methods for the current research. Finally, we discuss several future research directions and challenges in this field.
... A robot expressing a higher (vs. a lower) ability to infer the mental states of its users was demonstrated to be perceived as creepier [10]. More generally, in some studies, anthropomorphic robot features were demonstrated to be negatively related to user attitudes [12,45], and emotions displayed by chatbots may harm user autonomy [46]. ...
... The indirect effects of empathy/autonomy-support expression on CPI and VIS through PUA were negative for a high level of CF, while being nonsignificant for its mean and low levels. These results add to the growing literature on user responses to chatbots (e.g., [6,9,11,33,45,46,49]). Our results are in line with numerous previous studies indicating mixed or negative effects of chatbots' empathy [6,11], robots' capability to infer human mental states [10], robots' and conversational agents' anthropomorphic features [12,45], and chatbots' displayed emotions [46] on users' responses to chatbots. ...
... These results add to the growing literature on user responses to chatbots (e.g., [6,9,11,33,45,46,49]). Our results are in line with numerous previous studies indicating mixed or negative effects of chatbots' empathy [6,11], robots' capability to infer human mental states [10], robots' and conversational agents' anthropomorphic features [12,45], and chatbots' displayed emotions [46] on users' responses to chatbots. Importantly, unlike these previous studies, our study was based on measuring the actual quality of a user's conversation with a chatbot instead of presenting cues or scenarios to study participants. ...
Article
Full-text available
Background: Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the conversation-related context. Objective: To investigate the moderating role of conversation quality and chatbot expertise cues in the effects of expressing empathy/autonomy support by COVID-19 vaccine chatbots. Methods: An experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy-support expression: present vs. absent) × 2 (chatbot expertise cues: expert endorser vs. layperson endorser) between-subjects design. Chatbot conversation quality was assessed through the actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation, coded from 1 to 5 (PUA, CPI) and from -5 to 5 (VIS). Results: There was a negative interaction effect of chatbot empathy/autonomy-support expression and conversation fallback (the percentage of "I do not understand" chatbot answers in a conversation) on PUA (PROCESS, Model 1, B = -3.358, SE = 1.235, t(186) = 2.718, P = .007). Specifically, empathy/autonomy-support expression had a more negative effect on PUA when conversation fallback (CF) was higher (conditional effect of empathy/autonomy-support expression at the CF level of +1 SD: B = -.405, SE = .158, t(186) = 2.564, P = .011; conditional effects nonsignificant at the mean level (B = -.103, SE = .113, t(186) = .914, P = .36) and the -1 SD level (B = .031, SE = .123, t(186) = .252, P = .80)). Moreover, the indirect effect of empathy/autonomy-support expression on CPI via PUA was more negative when CF was higher (PROCESS, Model 7, 5000 bootstrap samples, moderated mediation index = -3.676, BootSE = 1.614, 95% CI [-6.697, -.102]; conditional indirect effect at the CF level of +1 SD: B = -.443, BootSE = .202, 95% CI [-.809, -.005]; conditional indirect effects nonsignificant at the mean level (B = -.113, BootSE = .124, 95% CI [-.346, .137]) and the -1 SD level (B = .034, BootSE = .132, 95% CI [-.224, .305])). Indirect effects of empathy/autonomy-support expression on VIS via PUA were marginally more negative when CF was higher. No effects of chatbot expertise cues were found. Conclusions: The findings suggest that expressing empathy/autonomy support by a chatbot may harm its evaluation and persuasiveness when the chatbot fails to answer its users' questions. The paper adds to the literature on vaccine chatbots by exploring the conditional effects of chatbot empathy/autonomy-support expression. The results can guide policymakers and chatbot developers dealing with vaccination promotion in designing the way chatbots express empathy and support for user autonomy.
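To make the reported conditional effects concrete, the sketch below probes a two-way interaction at the mean and ±1 SD of the moderator, mirroring the logic of PROCESS Model 1. The data and variable names are simulated stand-ins, not the study's data.

```python
# Illustrative sketch (not the study's data): estimating an interaction model
# and its conditional ("simple") effects at -1 SD, mean, and +1 SD of the
# moderator, in the spirit of PROCESS Model 1. Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 196
df = pd.DataFrame({
    "expression": rng.integers(0, 2, n),      # empathy/autonomy-support: 0/1
    "fallback": rng.uniform(0, 0.5, n),       # share of "I do not understand"
})
# Simulate a negative interaction: expression hurts PUA when fallback is high.
df["pua"] = (3.5 + 0.1 * df.expression - 1.0 * df.fallback
             - 3.0 * df.expression * df.fallback + rng.normal(0, 0.5, n))

m = smf.ols("pua ~ expression * fallback", data=df).fit()
b1, b3 = m.params["expression"], m.params["expression:fallback"]

for label, w in [("-1 SD", -1), ("mean", 0), ("+1 SD", 1)]:
    cf = df.fallback.mean() + w * df.fallback.std()
    # Conditional effect of expression at this fallback level: b1 + b3 * CF
    print(f"effect of expression at {label} of fallback: {b1 + b3 * cf:.3f}")
```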
... This flexibility is known to improve user perceptions and behaviors [18,19]. Cognitive load theory [cf., 20] suggests that task outcomes improve in response to increased control, as long as the cognitive load does not increase significantly [21,22]. Providing explanations of the system's reasoning is another technique widely considered to be a viable means for improving user perceptions and behaviors [12,23,24]. ...
... In contrast, when tasks are perceived as too difficult, people are less motivated to invest mental effort to complete the task [54]. Research has revealed that increased decision control improves user outcomes: For example, Benke et al. [21] show that, when users interact with chatbots, increased decision control improves trust and task performance, while cognitive load increases only slightly and not significantly. Further, Dietvorst et al. [18] show that giving users the option to at least slightly adjust forecasting outcomes improves task performance. ...
... Study 1 shows that high decision control improves user perceptions of trust and understanding, as well as intended and actual compliance. This finding is in line with Dietvorst et al. [18] and Benke et al. [21], who showed that giving people control over the task outcome improves perceptions as well as task performance. ...
Article
Full-text available
Human-AI collaboration has become common, integrating highly complex AI systems into the workplace. Still, it is often ineffective; impaired perceptions—such as low trust or limited understanding—reduce compliance with recommendations provided by the AI system. Drawing from cognitive load theory, we examine two techniques of human-AI collaboration as potential remedies. In three experimental studies, we grant users decision control by empowering them to adjust the system's recommendations, and we offer explanations for the system's reasoning. We find decision control positively affects user perceptions of trust and understanding, and improves user compliance with system recommendations. Next, we isolate different effects of providing explanations that may help explain inconsistent findings in recent literature: while explanations help reenact the system's reasoning, they also increase task complexity. Further, the effectiveness of providing an explanation depends on the specific user's cognitive ability to handle complex tasks. In summary, our study shows that users benefit from enhanced decision control, while explanations—unless appropriately designed for the specific user—may even harm user perceptions and compliance. This work bears both theoretical and practical implications for the management of human-AI collaboration.
... The latest studies have highlighted the potential of human-AI teamwork in amplifying results across various fields, like education and team learning (Sukhwal et al., 2023). For example, the adoption of CAs may foster group learning by enhancing knowledge transfer, improving team interaction (Ahmad et al., 2020), and even serving as a group moderator to identify members' feelings (Benke et al., 2022). ...
... For example, thanks to advances in Natural Language Processing (NLP), the responses of CAs and their human-like qualities are improving, positioning them as virtual group members (Diederich et al., 2022). AI tools such as CAs can facilitate information sharing, task coordination, and real-time feedback, enabling efficient communication within virtual teams (Benke et al., 2022). ...
Article
Full-text available
This study advances the understanding of the role of Artificial Intelligence (AI), particularly conversational agents like ChatGPT, in augmenting team-based knowledge acquisition in virtual learning settings. Drawing on human-AI teams and anthropomorphism theories and addressing the gap in the literature on human-AI collaboration within virtual teams, this study examines a multi-level, longitudinal model using a sample of 344 graduate students from 48 student project teams in online project-based learning environments. Our model investigates the direct and interactional effects of AI characteristics—autonomy and explainability—and team perceived virtuality (TPV) on the learners' knowledge-updating process. Findings indicate that embedding AI in learning teams supports knowledge acquisition and learning intentions. The results reveal that while AI explainability significantly enhances knowledge update perceptions, AI autonomy alone does not directly influence knowledge acquisition. Instead, the positive effect of AI autonomy on knowledge updating is contingent upon a high TPV within the team. These findings offer new theoretical insights into AI's empowering role in educational contexts and provide practical guidance for integrating AI into virtual team learning. This research underlines the importance of designing AI tools with a focus on explainability and leveraging the synergy between AI autonomy and TPV to maximize learning outcomes.
... A social chatbot acts as an artificial companion, satisfying the human need for communication, affection, and social belonging (Zhou et al., 2020). The primary purpose of a social chatbot is therapeutic, which is more critical for psychologically sensitive individuals who require emotional and social support (Benke et al., 2022). ...
... Evidence indicates that individuals who lack the ability to interact socially with real people are likely to connect with a social chatbot (Benke et al., 2022). Their frequent interactions with a social chatbot indicate a degree of compulsiveness in which individuals experience a loss of control and repeatedly perform the behavior (Liu et al., 2022). ...
Article
This study investigates the impact of social interaction anxiety on compulsive chat with a social chatbot named Xiaoice. To provide insights into the limited literature, the authors explore the role of fear of negative evaluation (FONE) and fear of rejection (FOR) as mediators in this relationship. By applying a variance-based structural equation modeling on a non-clinical sample of 366 Chinese university students who have interacted with Xiaoice, the authors find that social interaction anxiety increases compulsive chat with a social chatbot both directly and indirectly through fear of negative evaluation and rejection, with a more substantial effect of the former. The mediating effect of fear of negative evaluation transfers through fear of rejection, which establishes a serial link between social interaction anxiety and compulsive chat with a social chatbot. Further, frustration about unavailability (FAU) strengthens the relationship between FOR and compulsive chat with a social chatbot (CCSC). These findings offer theoretical and practical insights into our understanding of the process by which social interaction anxiety influences chat behavior with a social chatbot.
... Finding acceptable solutions will ensure the feasibility of e-health applications and lead to new tools and technologies for future healthcare applications. Emotion-aware AI identifies human emotions based on facial expressions [3]. Healthcare technology still has a long way to go before it can capture human emotions. ...
... The PHQ-9 technique distinguishes nine behaviors from those included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-V). PHQ-9 symptoms are then categorized into different disorders, including sleep, concentration, and eating disorders. ...
Article
Depression is a severe medical condition that substantially impacts people's daily lives. Recently, researchers have examined user-generated data from social media platforms to detect and diagnose this mental illness. In this paper, we therefore focus on phrases used in personal remarks to address the task of recognizing grief on social media. This research aims to develop generalized attention networks (GATs) that employ masked self-attention layers to overcome the depression text categorization problem. The networks assign a weight to each node in a neighborhood based on the neighbors' properties/emotions, without using expensive matrix operations such as similarity computations and without requiring knowledge of the network architecture. This study expands the emotional vocabulary through the use of hypernyms. As a result, our architecture outperforms the competition. Our experimental results show that the emotion lexicon combined with an attention network achieves a receiver operating characteristic (ROC) score of 0.87 while staying interpretable and transparent. After obtaining qualitative agreement from the psychiatrist, the learned embedding is used to show the contribution of each symptom to the activated word. By utilizing unlabeled forum text, the approach increases the rate of detecting depression symptoms from online data.
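For readers unfamiliar with the masked self-attention this abstract refers to, below is a minimal single-head graph attention layer in NumPy in the spirit of GAT (Veličković et al.); the random graph, dimensions, and weights are illustrative assumptions, not the paper's implementation.

```python
# Minimal single-head graph attention layer in NumPy, illustrating masked
# self-attention: each node weights only its neighbors, with non-edges masked
# out before the softmax, so no dense similarity structure is assumed.
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """H: (n, f) node features; A: (n, n) adjacency (nonzero = edge, incl.
    self-loops); W: (f, f') projection; a: (2*f',) attention vector."""
    Z = H @ W                                        # project node features
    f_out = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), computed as a_left.z_i + a_right.z_j
    e = (Z @ a[:f_out])[:, None] + (Z @ a[f_out:])[None, :]
    e = np.where(e > 0, e, alpha * e)                # LeakyReLU
    e = np.where(A > 0, e, -1e9)                     # mask non-neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))   # row-wise stable softmax
    att /= att.sum(axis=1, keepdims=True)
    return np.maximum(att @ Z, 0)                    # aggregate + ReLU

rng = np.random.default_rng(2)
n, f = 5, 8
H = rng.normal(size=(n, f))
A = np.eye(n) + (rng.random((n, n)) < 0.3)           # random graph + self-loops
print(gat_layer(H, A, rng.normal(size=(f, f)), rng.normal(size=(2 * f,))).shape)
```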
... The capacity to perceive emotions is a competitive edge of advanced AI applications over earlier information technology (Song et al., 2022). Programming designers incorporate feeling and emotional mechanisms into the chatbot design structure to reinforce users' emotional attachment to chatbots (Benke et al., 2022). In addition, such a capacity for emotion is attributed to the emotional arousal of the client in psychotherapy and counseling studies (Lane et al., 2015). ...
... Benke et al. [66] (emotion-aware chatbot; conditional experiment): control levels induced users' perceptions of autonomy and trust in emotion-aware chatbots but did not increase cognitive effort. ...
Article
Full-text available
In recent years, with the continuous expansion of artificial intelligence (AI) application forms and fields, users’ acceptance of AI applications has attracted increasing attention from scholars and business practitioners. Although extant studies have extensively explored user acceptance of different AI applications, there is still a lack of understanding of the roles played by different AI applications in human–AI interaction, which may limit the understanding of inconsistent findings about user acceptance of AI. This study addresses this issue by conducting a systematic literature review on AI acceptance research in leading journals of Information Systems and Marketing disciplines from 2020 to 2023. Based on a review of 80 papers, this study made contributions by (i) providing an overview of methodologies and theoretical frameworks utilized in AI acceptance research; (ii) summarizing the key factors, potential mechanisms, and theorization of users’ acceptance response to AI service providers and AI task substitutes, respectively; and (iii) proposing opinions on the limitations of extant research and providing guidance for future research.
... We believe that higher levels of learner control could increase the interactivity of the learning exercises. We know from existing research on conversational user interfaces that it can be beneficial for users to have control over the conversational flow (e.g., Benke et al., 2022). We base this hypothesis on the ICAP framework by Chi and Wylie (2014). ...
Article
Full-text available
Conversational tutoring systems (CTSs) offer a promising avenue for individualized learning support, especially in domains like persuasive writing. Although these systems have the potential to enhance the learning process, the specific role of learner control and interactivity within them remains underexplored. This paper introduces WritingTutor, a CTS designed to guide students through the process of crafting persuasive essays, with a focus on varying levels of learner control. In an experimental study involving 96 students, we evaluated the effects of high-level learner control, encompassing content navigation and interface appearance control, against a benchmark version of WritingTutor without these features and a static, non-interactive tutoring group. Preliminary findings suggest that tutoring and learner control might enhance the learning experience in terms of enjoyment, ease of use, and perceived autonomy. However, these differences are not significant after pair-wise comparison and appear not to translate into significant differences in learning outcomes. This research contributes to the understanding of learner control in CTSs, offering empirical insights into its influence on the learning experience.
... Additionally, considering and targeting the longer-term, relational nature of engagement (such as usage frequency and impact) by focusing on building trustworthy, empowering, and relational IS can also contribute to more impactful engagement experiences. Leveraging emotion-aware chatbots (Benke et al., 2022) equipped with the ability to perceive and potentially adapt to users' emotional states presents a strategic approach for fostering user engagement by cultivating meaningful relationships. All this seems even more important as we note that engagement considerations are not just important for users and customers but for a broad set of stakeholders (Hollebeek et al., 2020). ...
Article
Full-text available
Humans are considered “engaged” once they invest personal resources such as energy, time, or attention beyond a required level. This engagement state occurs when feeling connected to another actor or an object. In information systems (IS) research, engagement is recognized in the concept of ‘user engagement’, which has for many years successfully been employed to understand interaction patterns and user reactions. However, as IS artifacts evolve, we observe contemporary phenomena that are no longer covered by the existing concept: For one, IS have developed from passive resources (such as websites) to human-like actors (such as conversational, nowadays often large language model (LLM)-based, agents) so that a pure ‘user engagement’ perspective does not capture new affordances for engagement with active counterparts. Secondly, IS increasingly act as intermediaries into other engagement objects (e.g., organizations) that are the ultimate target of engagement. Thus, IS research may benefit from a broader perspective on engagement. Our work systematically draws on a structured literature review across adjacent academic disciplines and an in-depth qualitative analysis to develop a more comprehensive conceptual framework for engagement. With this framework, we contribute a refined and broadened conceptual base for engagement and discuss how it can inform future IS research.
... With the advances in NLP, both CAs' response capabilities and human-like characteristics are significantly improving, enhancing the potential for CAs to become artificial teammates and assist their human team members (Diederich et al., 2022). For example, CAs can act as team moderators to detect team members' emotions and coordinate communications among them (Benke et al., 2022). Thus, we suggest that future research could examine how CAs can be designed to assist knowledge transfer among team members and enhance team collaboration and performance. ...
... Recent research has revealed that human-AI collaboration can enhance outcomes in diverse domains, including team learning and education (Sukhwal et al., 2023). For instance, the integration of AI, such as Conversational AI (CA), can enhance team learning through improved knowledge transfer and team communication (Ahmad et al., 2020), and act as a team moderator to detect members' emotions (Benke et al., 2022). ...
Article
The paper examines the impact of artificial intelligence (AI) in the unexplored context of virtual project-based team learning. We built on relevant research and developed a framework grounded in shared mental models (SMM) of AI. A multi-level, multi-stage model relates AI SMM, AI interactivity, trust, and ethics to knowledge updates and learning intentions. A study of 344 graduate students in 48 online teams tested this model. Findings indicate that AI significantly improves learning outcomes in virtual teams, thus enriching the literature on human-agent teams (HATs) and virtual learning. The implications advocate for improved perceptions of AI attributes to maximize learning benefits.
... Therefore, empathic and socially oriented conversations should be designed to leverage favorable service perceptions, especially when considering service failure as in our example, where trust, satisfaction, and empathy were leveraged through social presence. This is also an enabler when considering the so-called "feeling AI", which is an important element of innovative service interaction processes (Benke et al., 2021; Huang & Rust, 2020). Third, our findings have several implications for policy makers who are concerned with the ethical and social aspects of chatbot design and use. ...
Article
Full-text available
Although chatbots are often used in customer service encounters, interactions are frequently perceived as unsatisfactory. One key aspect of designing chatbots is the use of anthropomorphic design elements. In this experimental study, we examine two anthropomorphic chatbot design elements: personification, which includes a human-like appearance, and social orientation of communication style, which means a more sensitive and extensive communication style. We tested the influence of the two design elements on social presence, satisfaction, trust, and empathy towards a chatbot. First, the results show a significant influence of both anthropomorphic design elements on social presence. Second, our findings illustrate that social presence influences trusting beliefs, empathy, and satisfaction. Third, social presence acts as a mediator of both anthropomorphic design elements for satisfaction with a chatbot. Our implications provide a better understanding of anthropomorphic chatbot design elements when designing chatbots for short-term interactions, and we offer actionable implications for practice that enable more effective chatbot implementations.
... Their findings suggest that both factors affect trust (e.g., through increased social presence) (Zierau et al., 2021). Furthermore, questions regarding the behavioral model of a person-oriented communication agent are discussed in Sajjadi et al. (2019), security-related questions in Udhani et al. (2019), questions regarding perceived social presence towards chatbots in Schuetzler et al. (2020), autonomy and trust levels in chatbots in terms of human sensibilities in Benke et al. (2022), and the intention to use AI-based chatbots in Pillai and Sivathanu (2020). ...
Article
Full-text available
In the present study, different trust factors are identified regarding customers' perceptions of their intention to interact with an artificial intelligence (AI)-based chatbot in customer service, with or without trust-supporting design elements as signals (stimuli). Based on 199 publications, a research model is derived for identifying and evaluating the variables that influence customers' intention to interact with such a chatbot. The research model includes the influencing variables of perceived security and traceability, perceived social presence, and trust. A survey with 158 participants is used to empirically evaluate the developed model. One of the main findings of this research study is that perceived security and comprehensibility have a significant influence on the intention to use an AI-based chatbot with trust-supporting design elements as signals (stimuli) in customer service.
... Chatbots can influence how we feel (M. Lee et al., 2019) and may become aware of our emotions, e.g., through sentiment analysis (Benke et al., 2022). Thus, we cover what emotions are before touching on gratitude as a specific moral emotion that relates to well-being. ...
Article
Full-text available
Gratitude is a moral emotion that demonstrates our appreciation of altruism. In psychology, feeling grateful is linked to an increase in well-being, yet there is a lack of HCI research on whether gratitude can be cultivated through and with conversational agents. We quantitatively studied whether a chatbot can increase people's gratitude (N = 133), as well as its influence on people's positive and negative emotions. Compared to the control condition, a chatbot that shared gratitude interventions significantly enhanced people's gratitude and positive emotions, while lowering negative emotions. Interestingly, people's experience of gratitude differed from other positive emotions: Simple positive emotions, like joy, can go up while reported gratitude decreases. We share qualitative observations on how gratitude can be a complex emotional experience, encompassing positive and negative emotions, such as finding relief in admitting to a chatbot one's sadness over friendship during the COVID-19 pandemic.
... Chatbots deliberately imitate human social communication and communicate with users through contextual emotion recognition [10]. However, limited by the current state of natural language processing, they often identify the wrong emotion and give the user a poor or ambiguous answer. ...
Article
Full-text available
With the popularization and development of the concept of artificial intelligence, AI applications have begun to permeate people's lives. While bringing convenience, this has also made some people worry about whether artificial intelligence will replace humans. Therefore, in order to give people a more intuitive understanding of the current development status and bottlenecks of artificial intelligence, as well as the differences between artificial intelligence and the human brain, this article examines four aspects of machine capability: speech recognition and natural language processing, human-computer dialogue, image recognition, and machine learning, that is, machines' listening, reading, and thinking. Finally, it summarizes why artificial intelligence cannot completely surpass humans.
... Another contentious area concerns the consequences of customer-AI relationships. For example, while engagement and relationships with conversational intelligent chatbots can result in improved social and psychological wellbeing (Xie & Pentina, 2022), the growing chatbot ability to sense and imitate human emotions can beget addiction, threaten user identity and autonomy (Benke et al., 2022; Złotowski et al., 2017), cause "an eerie sensation" (Mori et al., 2012), and lead to psychological reactance (Ghazali et al., 2018). Clearly, a comprehensive review of the empirical literature to systematically analyze the findings in the domain of customer-AI relationship research, propose an organizing framework, and suggest future research directions is in order. ...
Article
Full-text available
Recent advancements in artificial intelligence (AI) and the emergence of AI-based social applications in the market have propelled research on the possibility of consumers developing relationships with AI. Motivated by the diversity of approaches and inconsistent findings in this emerging research stream, this systematic literature review analyzes 37 peer-reviewed empirical studies focusing on human-AI relationships published between 2018 and 2023. We identify three major theoretical domains (social psychology, communication and media studies, and human-machine interactions) as foundations for conceptual development, and detail theories used in the reviewed papers. Given the radically new nature of social AI innovation, we recommend developing a novel theoretical approach that would synergistically utilize cross-disciplinary literature. Analysis of the methodology indicates that quantitative studies dominate this research stream, while qualitative, longitudinal, and mixed-method approaches are used infrequently. Examination of research models and variables used in the studies suggests the need to reconceptualize factors and processes of human-AI relationship, such as agency, autonomy, authenticity, reciprocity, and empathy, to better correspond to the social AI context. Based on our analysis, we propose an integrative conceptual framework and offer directions for future research that incorporate the need to develop a comprehensive theory of human-AI relationships, explore the nomological networks of its key constructs, and implement methodological variety and triangulation. Keywords: AI companion, AI friendship, AI relationship, artificial intelligence, chatbot, conversational agent, digital assistant, literature review, social AI
... In this respect, we need more research on how exactly to implement adaption and adaptability (Diederich et al., 2022). Novel AI developments could expand the possibilities for user adaptation, for example, when CAs are designed to be sensitive to the user's personality (Ahmad et al., 2021) or emotions (Benke et al., 2022). With this progress in the field of AI, the technical possibilities for implementing the individual design principles will evolve over the time. ...
Article
Full-text available
Due to significant technological progress in the field of artificial intelligence, conversational agents have the potential to become smarter, deepen the interaction with their users, and overcome a function of merely assisting. Since humans often treat computers as social actors, theories on interpersonal relationships can be applied to human-machine interaction. Taking these theories into account in designing conversational agents provides the basis for a collaborative and benevolent long-term relationship, which can result in virtual companionship. However, we lack prescriptive design knowledge for virtual companionship. We addressed this with a systematic and iterative design science research approach, deriving meta-requirements and five theoretically grounded design principles. We evaluated our prescriptive design knowledge by taking a two-way approach, first instantiating and evaluating the virtual classmate Sarah, and second analyzing Replika, an existing virtual companion. Our results show that with virtual companionship, conversational agents can incorporate the construct of companionship known from human-human relationships by addressing the need to belong, to build interpersonal trust, social exchange, and a reciprocal and benevolent interaction. The findings are summarized in a nascent design theory for virtual companionship, providing guidance on how our design prescriptions can be instantiated and adapted to different domains and applications of conversational agents.
Conference Paper
In today's educational landscape, students learn collaboratively, benefiting from both peer interactions and facilitator guidance. Prior research in Human-Computer Interaction (HCI) and Computer-Supported Collaborative Learning (CSCL) has explored chatbots and AI techniques to aid such collaboration. However, these methods often depend on predefined dialogues (which limits adaptability), are not based on collaborative learning theories, and do not fully recognize the learning context. In this paper, we introduce a Large Language Model (LLM)-powered conversational AI designed to enhance small-group learning through its advanced language understanding and generation capabilities. We detail the iterative design process, the final design, and the implementation. Our preliminary evaluation indicates that the bot performs as designed but points to considerations in the timing of interventions and the bot's role in discussions. The evaluation also reveals that learners perceive the bot's tone and behavior as important for engagement. We discuss design implications for chatbot integration in collaborative learning and future research directions.
Article
Since the introduction of OpenAI's ChatGPT-3 in late 2022, conversational chatbots have gained significant popularity. These chatbots are designed to offer a user-friendly interface for individuals to engage with technology using natural language in their daily interactions. However, these interactions raise user privacy concerns due to the data shared and the potential for misuse in these conversational information exchanges. Furthermore, there are no overarching laws and regulations governing such conversational interfaces in the United States. Thus, there is a need to investigate user privacy concerns. To understand these concerns in the existing literature, this paper presents a review and analysis of 38 papers (out of 894 retrieved) that focus on user privacy concerns arising from interactions with text-based conversational chatbots, through the lens of social informatics. The review indicates that the primary user privacy concern that has consistently been addressed is self-disclosure. This review contributes to the broader understanding of privacy concerns regarding chatbots and highlights the need for further exploration in this domain. As these chatbots continue to evolve, this paper acts as a foundation for future research endeavors and informs potential regulatory frameworks to safeguard user privacy in an increasingly digitized world.
Article
Labeling is critical in creating training datasets for supervised machine learning and is a common form of crowd work heteromation. It typically requires manual labor, is badly compensated, and not infrequently bores the workers involved. Although task variety is known to drive human autonomy and intrinsic motivation, there is little research in this regard in the labeling context. Against this backdrop, we manipulate the presentation sequence of a labeling task in an online experiment and use the theoretical lens of self-determination theory to explain psychological work outcomes and work performance. We rely on 176 crowd workers, with group comparisons between three presentation sequences (by label, by image, random) and a mediation path analysis along the phenomena studied. Surprising among our key findings is that task variety when sorting by label is perceived as higher than when sorting by image or randomly; naturally, one would assume that the random group would be perceived as most varied. We choose a visual metaphor to explain this phenomenon: paintings offer a structured presentation of coloured pixels, as opposed to random noise.
Article
Purpose The purpose of this study is to develop a framework for the perceived intelligence of VAs and explore the mechanisms through which different dimensions of VAs' perceived intelligence affect users' exploration intention (UEI), as well as how these antecedents can collectively result in the highest level of UEI. Design/methodology/approach An online survey on Amazon Mechanical Turk is employed. The model is tested utilizing structural equation modeling (SEM) and a fuzzy-set qualitative comparative analysis (fsQCA) approach on the collected data of VA users (N = 244). Findings According to the SEM outcomes, perceptual, cognitive, emotional and social intelligence have different mechanisms on UEI. Findings from the fsQCA reinforce the SEM results and provide the configurations that enhance UEI. Originality/value This study extends the conceptual framework of perceived intelligence and enriches the literature on anthropomorphism and users' exploration. These findings also provide insightful suggestions for practitioners regarding the design of VA products.
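As background on the fsQCA step used here and in the study below, a common way to turn raw scores into fuzzy-set memberships is Ragin's direct calibration method with three anchors. The sketch is a generic illustration with assumed anchor values, not the paper's actual calibration.

```python
# Sketch of fsQCA's "direct method" of calibration (after Ragin): raw scores
# are mapped to fuzzy-set memberships via three anchors - full non-membership,
# crossover, and full membership. Anchor values here are illustrative.
import numpy as np

def calibrate(x, non, cross, full):
    """Log-odds interpolation: membership ~0.05 at `non`, 0.5 at `cross`,
    ~0.95 at `full` (the log odds of 0.95 is about +3)."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= cross,
        3.0 * (x - cross) / (full - cross),
        -3.0 * (cross - x) / (cross - non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# e.g., calibrating 7-point Likert means for a condition such as "perceived
# emotional intelligence" (anchors 2 / 4 / 6 are assumptions):
print(calibrate([1.5, 3.0, 4.0, 5.5, 6.8], non=2, cross=4, full=6).round(3))
```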
Article
Full-text available
Chatbots offer customers access to personalised services and reduce costs for organisations. While some customers initially resisted interacting with chatbots, the COVID-19 outbreak caused them to reconsider. Motivated by this observation, we explore how disruptive situations, such as the COVID-19 outbreak, stimulate customers' willingness to interact (WTI) with chatbots. Drawing on the theory of consumption values, we employed interviews to identify emotional, epistemic, functional, and social values that potentially shape willingness to interact with chatbots. Findings point to six values and suggest that disruptive situations stimulate how the values influence WTI with chatbots. Following theoretical insights that values collectively contribute to behaviour, we set up a scenario-based study and employed a fuzzy-set qualitative comparative analysis. We show that customers who experience all values are willing to interact with chatbots, and those who experience none are not, irrespective of disruptive situations. We show that disruptive situations stimulate the willingness to interact with chatbots among customers with configurations of values that would otherwise not have been sufficient. We complement the picture of relevant values for technology interaction by highlighting the epistemic value of curiosity as an important driver of willingness to interact with chatbots. In doing so, we offer a configurational perspective that explains how disruptive situations stimulate technology interaction.
Article
Current research has examined the use of user-generated data from online media to identify and diagnose depression, a serious mental health issue that can significantly impact an individual's daily life. To this end, many studies have examined words in personal statements to identify depression. In addition to aiding in the diagnosis and treatment of depression, this study utilizes a Graph Attention Network (GAT) model for the classification of depression from online media. The model is based on masked self-attention layers that assign different weights to each node in a neighborhood without costly matrix operations. In addition, an emotion lexicon was extended using hypernyms to improve model performance. Furthermore, the model's embedding was used to illustrate the contribution of the activated words to each symptom and to obtain qualitative agreement from psychiatrists. This technique uses previously learned embeddings to illustrate the contribution of activated words to depressive symptoms in online forums. A significant improvement was observed in the model's performance through the lexicon extension method, resulting in an increase in ROC performance. Performance was also enhanced by an increase in vocabulary and the adoption of a graph-based curriculum. The lexicon expansion method generates additional words with similar semantic attributes, utilizing similarity metrics to reinforce lexical features. Graph-based curriculum learning was also utilized to handle more challenging training samples, allowing the model to develop increasing expertise in learning complex correlations between input data and output labels.
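The hypernym-based lexicon extension these GAT abstracts mention can be sketched with WordNet. The seed words below are hypothetical, and the snippet assumes NLTK with the WordNet corpus installed; it is an illustration of the general technique, not the papers' pipeline.

```python
# Sketch of expanding an emotion lexicon with WordNet hypernyms via NLTK.
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def expand_with_hypernyms(words, depth=1):
    expanded = set(words)
    for word in words:
        for synset in wn.synsets(word):            # all senses of the word
            frontier = synset.hypernyms()          # more general concepts
            for _ in range(depth):
                for hyper in frontier:
                    expanded.update(
                        lemma.replace("_", " ") for lemma in hyper.lemma_names()
                    )
                frontier = [g for h in frontier for g in h.hypernyms()]
    return sorted(expanded)

seed = ["sadness", "grief", "despair"]             # toy emotion lexicon
print(expand_with_hypernyms(seed))
```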
Article
Depression is a serious illness that significantly affects the lives of those affected. Recent studies have looked at the possibility of detecting and diagnosing this mental disorder using user-generated data from various forms of online media. We therefore addressed the issue of detecting sadness on social media by focusing on terms in personal remarks. To overcome the limitations in classifying depression texts, this study aims to develop attention networks that use masked self-attention. Since nodes/words can express the properties/emotions of their neighbors, this approach naturally assigns each node in a neighborhood its weight without performing costly matrix operations such as similarity computations and without requiring knowledge of the network architecture. The paper extends the emotion lexicon by using hypernyms. For this reason, our method outperforms the other designs. According to the results of our experiments, the emotion lexicon combined with an attention network achieves a ROC of 0.87 while maintaining its interpretability and transparency. Subsequently, the learned embedding is used to display the contribution of each symptom to the activated word, and a psychiatrist is polled to obtain qualitative agreement with this representation. By using unlabeled forum language, the method increases the rate at which depression symptoms can be identified from information in Internet forums.
Article
In a digitally empowered business world, a growing number of family businesses are leveraging the use of chatbots in an attempt to improve customer experience. This research investigates the antecedents of chatbots’ successful use in small family businesses. Subsequently, we determine the effect of two distinctive sets of human–machine communication factors—functional and humanoid—on customer experience. We assess the latter with respect to its effect on customer satisfaction. While a form of intimate attachment can occur between customers and small businesses, affective commitment is prevalent in customers’ attitudes and could be conflicting with the distant and impersonal nature of chatbot services. Therefore, we also test the moderating role of customers’ affective commitment in the relationship between customer experience and customer satisfaction. Data come from 408 respondents, and the results offer an explicit course of action for family businesses to effectively embed chatbot services in their customer communication. The study provides practical and theoretical insights that stipulate the dimensions of chatbots’ effective use in the context of small family businesses.
Article
Full-text available
There has been a recent surge of interest in social chatbots, and human–chatbot relationships (HCRs) are becoming more prevalent, but little knowledge exists on how HCRs develop and may impact the broader social context of the users. Guided by Social Penetration Theory, we interviewed 18 participants, all of whom had developed a friendship with a social chatbot named Replika, to understand the HCR development process. We find that at the outset, HCRs typically have a superficial character motivated by the users' curiosity. The evolving HCRs are characterised by substantial affective exploration and engagement as the users' trust and engagement in self-disclosure increase. As the relationship evolves to a stable state, the frequency of interactions may decrease, but the relationship can still be seen as having substantial affective and social value. The relationship with the social chatbot was found to be rewarding to its users, positively impacting the participants' perceived wellbeing. Key chatbot characteristics facilitating relationship development included the chatbot being seen as accepting, understanding and non-judgmental. The perceived impact on the users' broader social context was mixed, and a sense of stigma associated with HCRs was reported. We propose an initial model representing the HCR development identified in this study and suggest avenues for future research.
Article
Full-text available
Fueled by the pervasion of tools like Slack or Microsoft Teams, the usage of text-based communication in distributed teams has grown massively in organizations. This brings distributed teams many advantages; however, a critical shortcoming in these setups is the decreased ability to perceive, understand, and regulate emotions. This is problematic because better emotion-management abilities among team members positively impact team-level outcomes like team cohesion and team performance, while poor abilities diminish communication flow and well-being. Leveraging chatbot technology in distributed teams has been recognized as a promising approach to reintroduce and improve upon these abilities. In this article we present three chatbot designs for emotion management in distributed teams. To develop these designs, we conducted three participatory design workshops, which resulted in 153 sketches. Subsequently, we evaluated the designs in an exploratory evaluation with 27 participants. Results show general stimulating effects on emotion awareness and communication efficiency. Participants further reported emotion regulation and increased compromise facilitation through social and interactive design features, but also perceived threats like loss of control. With some design features adversely impacting emotion management, we highlight design implications and discuss chatbot design recommendations for enhancing emotion management in teams.
Article
Full-text available
From past research it is well known that social exclusion has detrimental consequences for mental health. To deal with these adverse effects, socially excluded individuals frequently turn to other humans for emotional support. While chatbots can elicit social and emotional responses on the part of the human interlocutor, their effectiveness in the context of social exclusion has not been investigated. In the present study, we examined whether an empathic chatbot can serve as a buffer against the adverse effects of social ostracism. After experiencing exclusion on social media, participants were randomly assigned to either talk with an empathetic chatbot about it (e.g., “I’m sorry that this happened to you”) or a control condition where their responses were merely acknowledged (e.g., “Thank you for your feedback”). Replicating previous research, results revealed that experiences of social exclusion dampened the mood of participants. Interacting with an empathetic chatbot, however, appeared to have a mitigating impact. In particular, participants in the chatbot intervention condition reported higher mood than those in the control condition. Theoretical, methodological, and practical implications, as well as directions for future research are discussed.
Article
Full-text available
This article describes the development of Microsoft XiaoIce, the most popular social chatbot in the world. XiaoIce is uniquely designed as an artificial intelligence companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligent quotient and emotional quotient in system design, cast human–machine social chat as decision-making over Markov Decision Processes, and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components, including dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intent, and responds to user needs throughout long conversations. Since the release in 2014, XiaoIce has communicated with over 660 million active users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.
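The CPS metric used to evaluate XiaoIce is the expected number of conversation turns per session; a minimal computation with made-up session logs:

```python
# CPS (expected Conversation-turns Per Session) as the abstract defines it:
# the average number of conversation turns across sessions. Toy numbers only.
session_turns = [23, 12, 41, 30, 9]        # turns recorded per session
cps = sum(session_turns) / len(session_turns)
print(f"average CPS = {cps:.1f}")
```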
Article
Full-text available
AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) × 2 (standard vs. AI-mediated messaging app) between-subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to existing literature regarding perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
Conference Paper
Full-text available
Maintaining a positive group emotion is important for team collaboration. It is, however, a challenging task for self-managing teams, especially when they conduct intra-group collaboration via text-based communication tools. Recent advances in AI technologies open up the opportunity of using chatbots for emotion regulation in group chat. However, little is known about how to design such a chatbot and how group members react to its presence. As an initial exploration, we design GremoBot based on text analysis technology and the emotion regulation literature. We then conduct a study with nine three-person teams performing different types of collective tasks. In general, participants find GremoBot useful for reinforcing positive feelings and steering them away from negative words. We further discuss the lessons learned and considerations derived for designing a chatbot for group emotion management.
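To make the idea of a group-emotion-regulating chatbot concrete, here is a minimal Python sketch that scores recent chat messages against a toy sentiment lexicon and posts a regulating prompt when the group mood turns negative. GremoBot's actual text-analysis technology is not described here in reproducible detail, so everything below (lexicon, threshold, prompt wording) is an illustrative assumption.

```python
# Toy lexicon; a production bot like GremoBot would use a proper
# sentiment model (assumption: the paper's exact method differs).
POSITIVE = {"great", "good", "nice", "thanks", "agree", "love"}
NEGATIVE = {"bad", "hate", "wrong", "stupid", "annoying", "awful"}

def group_mood(messages):
    """Score recent chat messages in [-1, 1]; negative values mean
    the group is drifting toward negative wording."""
    score = 0
    for msg in messages:
        words = set(msg.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return max(-1.0, min(1.0, score / max(len(messages), 1)))

def maybe_intervene(messages, threshold=-0.2):
    """Post a regulating prompt when group mood drops below threshold."""
    if group_mood(messages) < threshold:
        return "Let's take a breath - what's going well so far?"
    return None

print(maybe_intervene(["this is wrong", "so annoying", "bad idea"]))
```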
Article
Full-text available
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Article
Full-text available
Conversational agents (CAs) are software-based systems designed to interact with humans using natural language and have attracted considerable research interest in recent years. Following the Computers Are Social Actors paradigm, many studies have shown that humans react socially to CAs when they display social cues such as small talk, gender, age, gestures, or facial expressions. However, research on social cues for CAs is scattered across different fields, often using their specific terminology, which makes it challenging to identify, classify, and accumulate existing knowledge. To address this problem, we conducted a systematic literature review to identify an initial set of social cues of CAs from existing research. Building on classifications from interpersonal communication theory, we developed a taxonomy that classifies the identified social cues into four major categories (i.e., verbal, visual, auditory, invisible) and ten subcategories. Subsequently, we evaluated the mapping between the identified social cues and the categories using a card sorting approach in order to verify that the taxonomy is natural, simple, and parsimonious. Finally, we demonstrate the usefulness of the taxonomy by classifying a broader and more generic set of social cues of CAs from existing research and practice. Our main contribution is a comprehensive taxonomy of social cues for CAs. For researchers, the taxonomy helps to systematically classify research about social cues into one of the taxonomy's categories and corresponding subcategories. Therefore, it builds a bridge between different research fields and provides a starting point for interdisciplinary research and knowledge accumulation. For practitioners, the taxonomy provides a systematic overview of relevant categories of social cues in order to identify, implement, and test their effects in the design of a CA.
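A minimal sketch of how such a taxonomy might be represented in code is shown below. The four major categories come from the paper; the concrete cues are partly those the abstract mentions (small talk, gender, age, gestures, facial expressions) and partly our own hypothetical examples, and their placement within categories is our best guess rather than the authors' mapping (the ten subcategories are not reproduced here).

```python
# Top-level categories from the paper; cue placement and the cues
# not named in the abstract are illustrative assumptions.
SOCIAL_CUE_TAXONOMY = {
    "verbal":    ["small talk", "self-disclosure"],
    "visual":    ["facial expressions", "gestures", "gender", "age"],
    "auditory":  ["voice pitch", "laughter"],
    "invisible": ["response delay", "turn-taking behavior"],
}

def classify_cue(cue):
    """Return the major category of a social cue, or None if unmapped."""
    for category, cues in SOCIAL_CUE_TAXONOMY.items():
        if cue in cues:
            return category
    return None

print(classify_cue("small talk"))  # verbal
```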
Conference Paper
Full-text available
In this study, we develop two new perspectives for technostress mitigation from the viewpoint of coping. First, we examine users' emotional coping responses to stressful IT, focusing specifically on distress venting and distancing from IT. As these mechanisms may not always be effective for individuals' well-being, we extend our approach to self-regulation in coping, which concerns general stress-resistance. Thus, we specifically study how IT control moderates the effect of emotional coping responses to stressful situations involving IT use. We test the proposed model in a cross-sectional study of IT users from multiple organizations (N=1,091). The study contributes to the information systems literature by uncovering mechanisms individuals can use to mitigate the negative effects of technostress and by delineating the less-understood perspective of interrelated coping mechanisms: how emotional coping responses are moderated by IT control towards more favorable outcomes. Implications of the research are discussed.
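The moderation structure described above (IT control moderating the effect of emotional coping on strain) corresponds to a regression with an interaction term. The following Python sketch, with simulated data and hypothetical variable names, shows the general form; the paper's actual measurement and structural model is more elaborate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey data (hypothetical variable names; the paper's
# actual measurement items and model differ).
rng = np.random.default_rng(0)
n = 1091
df = pd.DataFrame({
    "venting": rng.normal(size=n),
    "it_control": rng.normal(size=n),
})
# Built-in effect: venting reduces strain more when IT control is high.
df["strain"] = (-0.2 * df["venting"] - 0.3 * df["it_control"]
                - 0.15 * df["venting"] * df["it_control"]
                + rng.normal(scale=0.5, size=n))

# "venting * it_control" expands to both main effects plus their
# interaction, which is the moderation term of interest.
model = smf.ols("strain ~ venting * it_control", data=df).fit()
print(model.params)
```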
Conference Paper
Full-text available
Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Article
Full-text available
Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA impact the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers to mask their deception. To better understand this interaction, this research investigates the effect of a CA's conversational skill, that is, its ability to mimic human conversation, on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads to users engaging in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counter-productive.
Article
Full-text available
We present artificial intelligence (AI) agents that act as interviewers to engage with a user in a text-based conversation and automatically infer the user's personality traits. We investigate how the personality of an AI interviewer and the inferred personality of a user influence the user's trust in the AI interviewer from two perspectives: the user's willingness to confide in and listen to an AI interviewer. We have developed two AI interviewers with distinct personalities and deployed them in a series of real-world events. We present findings from four such deployments involving 1,280 users, including 606 actual job applicants. Notably, users are more willing to confide in and listen to an AI interviewer with a serious, assertive personality in a high-stakes job interview. Moreover, users’ personality traits, inferred from their chat text, along with interview context, influence their perception of and their willingness to confide in and listen to an AI interviewer. Finally, we discuss the design implications of our work on building hyper-personalized, intelligent agents.
Article
Full-text available
Disclosing the current location of a person can seriously affect their privacy, but many apps request location information to provide location-based services. Simultaneously, these apps provide only crude controls for location privacy settings (sharing all or nothing). There is an ongoing discussion about the rights of users regarding their location privacy (e.g., in the context of the General Data Protection Regulation, GDPR). The GDPR requires data collectors to notify users about data collection and to provide them with opt-out options. To address these requirements, we propose a set of user interface (UI) controls for fine-grained management of location privacy settings based on privacy theory (Westin), privacy by design principles, and general UI design principles. The UI notifies users about the state of location data sharing and provides controls for adjusting location sharing preferences. It addresses three key issues: whom to share location with, when to share it, and where to share it. Results of a user study (N=23) indicate that (1) the proposed interface led to a greater sense of control, (2) it was usable and well received, and (3) participants were keen on using it in real life. Our findings can inform the development of interfaces to manage location privacy.
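The three key issues the interface addresses (whom, when, where) map naturally onto a small rule structure. Below is a hedged Python sketch of such a rule; the field names and semantics are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from datetime import time

@dataclass
class LocationSharingRule:
    """One fine-grained rule answering whom / when / where.

    Field names and defaults are illustrative, not the paper's UI.
    """
    recipients: set = field(default_factory=set)     # whom
    start: time = time(9, 0)                         # when: from
    end: time = time(17, 0)                          # when: until
    allowed_zones: set = field(default_factory=set)  # where

    def permits(self, recipient, now, zone):
        return (recipient in self.recipients
                and self.start <= now <= self.end
                and zone in self.allowed_zones)

rule = LocationSharingRule({"family"}, time(8, 0), time(20, 0), {"campus"})
print(rule.permits("family", time(12, 30), "campus"))      # True
print(rule.permits("advertiser", time(12, 30), "campus"))  # False
```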
Conference Paper
Full-text available
A future in which conversations with machines involve mutual emotions between the parties may not be far off. Inspired by the Black Mirror episode “Be Right Back” and by Replika, a futuristic app that promises to be “your best friend”, in this work we consider the positive and negative aspects of admitting an automated learning conversational agent into the personal world of feelings and emotions. These systems can impact both individuals and society at large, worsening an already critical situation. Our conclusion is that regulation of artificial emotional content should be considered before such systems actually move beyond certain one-way-only limits.
Conference Paper
Full-text available
We present and discuss a fully-automated collaboration system, CoCo, that allows multiple participants to video chat and receive feedback through custom video conferencing software. After a conferencing session, a virtual feedback assistant provides insights on the conversation to participants. CoCo automatically pulls audio and visual data during conversations and analyzes the extracted streams for affective features, including smiles, engagement, attention, as well as speech overlap and turn-taking. We validated CoCo with 39 participants split into 10 groups. Participants played two back-to-back team-building games, Lost at Sea and Survival on the Moon, with the system providing feedback between the two. With feedback, we found a statistically significant change in balanced participation (that is, everyone speaking for an equal amount of time). There was also a statistically significant improvement in participants' self-evaluations of conversational skills awareness, including how often they let others speak, as well as of teammates' conversational skills. The entire framework is available at https://github.com/ROC-HCI/CollaborationCoach_PostFeedback.
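One way a CoCo-style system could quantify balanced participation is with a normalized entropy of speaking-time shares; the Python sketch below is our own illustration of that idea (the paper does not specify this exact index).

```python
import math

def participation_balance(speaking_seconds):
    """Normalized entropy of speaking-time shares: 1.0 means everyone
    spoke equally, values near 0 mean one person dominated."""
    total = sum(speaking_seconds.values())
    shares = [s / total for s in speaking_seconds.values() if s > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(speaking_seconds))

before = {"ann": 300, "ben": 40, "cal": 20}
after = {"ann": 130, "ben": 120, "cal": 110}
print(round(participation_balance(before), 2))  # ~0.51, dominated
print(round(participation_balance(after), 2))   # ~1.0, nearly balanced
```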
Conference Paper
Full-text available
There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people’s motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is “productivity”; chatbots help users to obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.
Article
Full-text available
Presents an integrative theoretical framework to explain and to predict psychological changes achieved by different modes of treatment. This theory states that psychological procedures, whatever their form, alter the level and strength of self-efficacy. It is hypothesized that expectations of personal efficacy determine whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences. Persistence in activities that are subjectively threatening but in fact relatively safe produces, through experiences of mastery, further enhancement of self-efficacy and corresponding reductions in defensive behavior. In the proposed model, expectations of personal efficacy are derived from 4 principal sources of information: performance accomplishments, vicarious experience, verbal persuasion, and physiological states. Factors influencing the cognitive processing of efficacy information arise from enactive, vicarious, exhortative, and emotive sources. The differential power of diverse therapeutic procedures is analyzed in terms of the postulated cognitive mechanism of operation. Findings are reported from microanalyses of enactive, vicarious, and emotive modes of treatment that support the hypothesized relationship between perceived self-efficacy and behavioral changes.
Conference Paper
Full-text available
Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests are not addressed in a timely manner, or not at all. To overcome this problem, we create a new conversational system to automatically generate responses to users' requests on social media. Our system is integrated with state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and the system is about as good as human agents in showing empathy to help users cope with emotional situations. Results also show our system outperforms an information retrieval system based on both human judgments and an automatic evaluation metric.
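The deep-learning response generator itself is beyond a short example, but the kind of information-retrieval baseline such a system is compared against can be sketched in a few lines of TF-IDF retrieval. The tiny corpus and the cosine-similarity choice below are illustrative assumptions, not the paper's baseline configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Tiny corpus of past (request, agent reply) pairs; the real system
# is trained on ~1M conversations.
pairs = [
    ("my order arrived broken", "Sorry to hear that! We'll send a replacement."),
    ("how do I reset my password", "You can reset it from the login page."),
    ("your app keeps crashing", "That sounds frustrating - which device are you on?"),
]
requests = [r for r, _ in pairs]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(requests)

def retrieve_reply(new_request):
    """Return the stored reply whose request is most similar (cosine)."""
    query = vectorizer.transform([new_request])
    scores = linear_kernel(query, matrix).ravel()
    return pairs[scores.argmax()][1]

print(retrieve_reply("the app crashes on startup"))
```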
Article
Full-text available
In 2016, Microsoft launched Tay, an experimental artificial intelligence chat bot. Learning from interactions with Twitter users, Tay was shut down after one day because of its obscene and inflammatory tweets. This article uses the case of Tay to re-examine theories of agency. How did users view the personality and actions of an artificial intelligence chat bot when interacting with Tay on Twitter? Using phenomenological research methods and pragmatic approaches to agency, we look at what people said about Tay to study how they imagine and interact with emerging technologies and to show the limitations of our current theories of agency for describing communication in these settings. We show how different qualities of agency, different expectations for technologies, and different capacities for affordance emerge in the interactions between people and artificial intelligence. We argue that a perspective of “symbiotic agency”—informed by the imagined affordances of emerging technology—is required to really understand the collapse of Tay.
Article
Full-text available
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what's new, and what's different this time? And is there an opportunity here to improve how our industry does technology transfer?
Article
Full-text available
We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism, the degree to which an agent exhibits human characteristics, is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.
Article
Conversational Artificial Intelligence (AI)-backed Alexa, Siri, and Google Assistant are examples of voice-based digital assistants (VBDAs) that are ubiquitously occupying our living spaces. While they gather an enormous amount of personal information to provide a bespoke user experience, they also evoke serious privacy concerns regarding the collection, use, and storage of consumers' personal data. The objective of this research is to examine consumers' perception of these privacy concerns and, in turn, their influence on the adoption of VBDAs. We extend the celebrated UTAUT2 model with perceived privacy concerns, perceived privacy risk, and perceived trust. With the assistance of survey data collected from tech-savvy respondents, we show that trust in the technology and the service provider plays an important role in the adoption of VBDAs. In addition, we notice that consumers weigh a trade-off between the privacy risks and the benefits associated with VBDAs while adopting such technologies, reiterating their privacy calculus behaviour. Contrary to the extant literature, our results indicate that consumers' perceived privacy risk does not influence adoption intention directly; it is mediated through perceived privacy concerns and consumers’ trust. We conclude the paper with theoretical and managerial implications.
Article
In the current era, interacting with Artificial Intelligence (AI) has become an everyday activity. Understanding the interaction between humans and AI is of potential value because, in future, such interactions are expected to become more pervasive. Two studies—one survey and one experiment—were conducted to demonstrate positive effects of anthropomorphism on interactions with smart-speaker-based AI assistants and to examine the mediating role of psychological distance in this relationship. The results of Study 1, an online survey, showed that participants with a higher tendency to anthropomorphize their AI assistant/s evaluated it/them more positively, and this effect was mediated by psychological distance. In Study 2, the hypotheses were tested in a more sophisticated experiment. Again, the results indicated that, in the high-anthropomorphism (vs. low-anthropomorphism) condition, participants had more positive attitudes toward the AI assistant, and the effect was mediated by psychological distance. Though several studies have demonstrated the effect of anthropomorphism, few have probed the underlying mechanism of anthropomorphism thoroughly. The current research not only contributes to the anthropomorphism literature, but also provides direction to research on facilitating human–AI interaction.
Article
Many of the world's leading brands, and increasingly government agencies, are using intelligent agent technologies, also known as chatbots, to interact with consumers. However, consumer satisfaction with chatbots is mixed. Consumers report frustration with chatbots arising from misunderstood questions, irrelevant responses, and poor integration with human service agents. This study examines whether human-computer interactions can be made more personalized by matching consumer personality with congruent machine personality using language. Although it is well known that personality is manifested through language, and that people are more likely to be responsive to others with the same personality, there is a dearth of research that examines whether this holds for human-computer interactions. Based on a sample of over 57,000 chatbot interactions, this study demonstrates that consumer personality can be predicted during contextual interactions, and that chatbots can be manipulated to ‘assume a personality’ using response language. Matching consumer personality with congruent chatbot personality had a positive impact on consumer engagement with chatbots and on purchasing outcomes for interactions involving social gain.
Article
It is now common for people to encounter artificial intelligence (AI) across many areas of their personal and professional lives. Interactions with AI agents may range from the routine use of information technology tools to encounters where people perceive an artificial agent as exhibiting mind. Combining two studies (usable N = 266), we explore people's qualitative descriptions of a personal encounter with an AI in which it exhibits characteristics of mind. Across a range of situations reported, a clear pattern emerged in the responses: the majority of people report their own emotions, including surprise, amazement, happiness, disappointment, amusement, unease, and confusion, in their encounter with a minded AI. We argue that emotional reactions occur as part of mind perception as people negotiate between the disparate concepts of programmed electronic devices and actions indicative of human-like minds. Specifically, emotions are often tied to AIs that produce extraordinary outcomes, inhabit crucial social roles, and engage in human-like actions. We conclude with future directions and the implications for ethics, the psychology of mind perception, the philosophy of mind, and the nature of social interactions in a world of increasingly sophisticated AIs.
Article
Multivariate analysis of variance (MANOVA) is a powerful and versatile method to infer and quantify main and interaction effects in metric multivariate multi-factor data. It is, however, neither robust against change in units nor meaningful for ordinal data. Thus, we propose a novel nonparametric MANOVA. Contrary to existing rank-based procedures, we infer hypotheses formulated in terms of meaningful Mann–Whitney-type effects in lieu of distribution functions. The tests are based on a quadratic form in multivariate rank effect estimators, and critical values are obtained by bootstrap techniques. The newly developed procedures provide asymptotically exact and consistent inference for general models such as the nonparametric Behrens–Fisher problem and multivariate one-, two-, and higher-way crossed layouts. Computer simulations in small samples confirm the reliability of the developed method for ordinal and metric data with covariance heterogeneity. Finally, an analysis of a real data example illustrates the applicability and correct interpretation of the results.
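To give a feel for the Mann–Whitney-type effects the method builds on, the Python sketch below estimates the relative effect p = P(X < Y) + 0.5 · P(X = Y) for a single response variable and bootstraps a confidence interval. The paper's actual procedure tests a quadratic form in the vector of multivariate rank effect estimators with bootstrap critical values; this univariate sketch is a deliberate simplification.

```python
import numpy as np

def relative_effect(x, y):
    """Mann-Whitney-type effect p = P(X < Y) + 0.5 * P(X = Y),
    estimated by comparing all pairs; 0.5 means no tendency."""
    x, y = np.asarray(x), np.asarray(y)
    less = (x[:, None] < y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return less + 0.5 * ties

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1, size=40)  # group 1, one response variable
y = rng.normal(0.5, 2, size=55)  # group 2, unequal variance is fine
est = relative_effect(x, y)

# Bootstrap confidence interval for the effect (the paper instead
# bootstraps a quadratic form over all response variables at once).
boot = [relative_effect(rng.choice(x, x.size), rng.choice(y, y.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"effect {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```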
Article
Today, people increasingly rely on computer agents in their lives, from searching for information, to chatting with a bot, to performing everyday tasks. These agent-based systems are our first forays into a world in which machines will assist, teach, counsel, care for, and entertain us. While one could imagine purely rational agents in these roles, this prospect is not attractive for several reasons, which we will outline in this article. The field of affective computing concerns the design and development of computer systems that sense, interpret, adapt, and potentially respond appropriately to human emotions. Here, we specifically focus on the design of affective agents and assistants. Emotions play a significant role in our decisions, memory, and well-being. Furthermore, they are critical for facilitating effective communication and social interactions. So, it makes sense that the emotional component surrounding the design of computer agents should be at the forefront of this design discussion.
Article
When we ask a chatbot for advice about a personal problem, should it simply provide informational support and refrain from offering emotional support? Or, should it show sympathy and empathize with our situation? Although expression of caring and understanding is valued in supportive human communications, do we want the same from a chatbot, or do we simply reject it due to its artificiality and uncanniness? To answer this question, we conducted two experiments with a chatbot providing online medical information advice about a sensitive personal issue. In Study 1, participants (N = 158) simply read a dialogue between a chatbot and a human user. In Study 2, participants (N = 88) interacted with a real chatbot. We tested the effect of three types of empathic expression (sympathy, cognitive empathy, and affective empathy) on individuals' perceptions of the service and the chatbot. Data reveal that expression of sympathy and empathy is favored over unemotional provision of advice, in support of the Computers are Social Actors (CASA) paradigm. This is particularly true for users who are initially skeptical about machines possessing social cognitive capabilities. Theoretical, methodological, and practical implications are discussed.
Article
This study investigates whether social- versus task-oriented interaction of virtual shopping assistants differentially benefits low versus high Internet competency older consumers with respect to social (perceived interactivity, trust), cognitive (perceived information load), functional (self-efficacy, perceived ease of use, perceived usefulness), and behavioral intent (website patronage intent) outcomes in an online shopping task. A total of 121 older adults (61–89 years) participated in a laboratory experiment with a 2 (digital assistant interaction style: social- vs. task-oriented) × 2 (user Internet competency: low vs. high) × 2 (user exchange modality: text vs. voice) between-subjects design. The results revealed that users' Internet competency and the digital assistant's conversational style had significant interaction effects on social, functional, and behavioral intent outcomes. Social-oriented digital assistants lead to superior social outcomes (enhanced perceptions of two-way interactivity and trust in the integrity of the site) for older users with high Internet competency, who need less task-related assistance. On the other hand, low-competency older users showed significantly superior cognitive (lower perceived information load) and functional outcomes (greater perceived ease and self-efficacy of using the site) when the digital assistant employed a task-oriented interaction style. Theoretical and agent design implications are discussed.
Article
Although current social machine technology cannot fully exhibit the hallmarks of human morality or agency, popular culture representations and emerging technology make it increasingly important to examine human interlocutors’ perception of social machines (e.g., digital assistants, chatbots, robots) as moral agents. To facilitate such scholarship, the notion of perceived moral agency (PMA) is proposed and defined, and a metric developed and validated through two studies: (1) a large-scale online survey featuring potential scale items and concurrent validation metrics for both machine and human targets, and (2) a scale validation study with robots presented as variably agentic and moral. The PMA metric is shown to be reliable, valid, and exhibiting predictive utility.
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Article
Disembodied conversational agents in the form of chatbots are increasingly becoming a reality on social media and messaging applications, and are a particularly pressing topic for service encounters with companies. Adopting an experimental design with actual chatbots powered with current technology, this study explores the extent to which human-like cues such as language style and name, and the framing used to introduce the chatbot to the consumer can influence perceptions about social presence as well as mindful and mindless anthropomorphism. Moreover, this study investigates the relevance of anthropomorphism and social presence to important company-related outcomes, such as attitudes, satisfaction and the emotional connection that consumers feel with the company after interacting with the chatbot.
Article
Future applications are envisioned in which a single human operator manages multiple heterogeneous unmanned vehicles (UVs) by working together with an autonomy teammate that consists of several intelligent decision-aiding agents/services. This article describes recent advancements in developing a new interface paradigm that will support human-autonomy teaming for air, ground, and surface (sea craft) UVs in defence of a military base. Several concise and integrated candidate control station interfaces are described by which the operator determines the role of autonomy in UV management using an adaptable automation control scheme. An extended play-calling-based control approach is used to support human-autonomy communication and teaming in managing how UV assets respond to potential threats (e.g. asset allocation, routing, and execution details). The design process for the interfaces is also described, including analysis of a base defence scenario used to guide this effort, consideration of ecological interface design constructs, and generation of UV and task-related pictorial symbology.
Article
Privacy notice and choice are essential aspects of privacy and data protection regulation worldwide. Yet, today's privacy notices and controls are surprisingly ineffective at informing users or allowing them to express choice. Here, the authors analyze why existing privacy notices fail to inform users and tend to leave them helpless, and discuss principles for designing more effective privacy notices and controls.
Article
This article describes conversation-based assessments with computer agents that interact with humans through chat, talking heads, or embodied animated avatars. Some of these agents perform actions, interact with multimedia, hold conversations with humans in natural language, and adaptively respond to a person’s actions, verbal contributions, and emotions. Data are logged throughout the interactions in order to assess the individual’s mastery of subject matters, skills, and proficiencies on both cognitive and noncognitive characteristics. There are different agent-based designs that focus on learning and assessment. Dialogues occur between one agent and one human, as in the case of intelligent tutoring systems. Three-party conversations, called trialogues, involve two agents interacting with a human. The two agents can take on different roles (such as tutors and peers), model actions and social interactions, stage arguments, solicit help from the human, and collaboratively solve problems. Examples of assessment with these agent-based environments are presented in the context of intelligent tutoring, educational games, and interventions to help struggling adult readers. Most of these involve assessment at varying grain sizes to guide the intelligent interaction, but conversation-based assessment with agents is also currently being used in high stakes assessments.
Article
This study investigates how user satisfaction with, and intention to use, an interactive movie recommendation system are determined by communication variables and by the relationship between the conversational agent and the user. By adopting the Computers-Are-Social-Actors (CASA) paradigm and uncertainty reduction theory, this study examines the influence of self-disclosure and reciprocity as key communication variables on user satisfaction. A two-way ANOVA test was conducted to analyze the effects of self-disclosure and reciprocity on user satisfaction with a conversational agent. The interaction effect of self-disclosure and reciprocity on user satisfaction was not significant, but both main effects proved to be significant. PLS analysis results showed that perceived trust and interactional enjoyment are significant mediators in the relationship between communication variables and user satisfaction. In addition, reciprocity is a stronger variable than self-disclosure in predicting relationship building between an agent and a user. Finally, user satisfaction is an influential factor in intention to use. These findings have implications from both practical and theoretical perspectives.
Article
The statistical tests used in the analysis of structural equation models with unobservable variables and measurement error are examined. A drawback of the commonly applied chi-square test, in addition to the known problems related to sample size and power, is that it may indicate an increasing correspondence between the hypothesized model and the observed data as both the measurement properties and the relationship between constructs decline. Further, and contrary to common assertion, the risk of making a Type II error can be substantial even when the sample size is large. Moreover, the present testing methods are unable to assess a model's explanatory power. To overcome these problems, the authors develop and apply a testing system based on measures of shared variance within the structural model, measurement model, and overall model.
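The shared-variance measures this testing system builds on, average variance extracted (AVE) and composite reliability computed from standardized loadings, are simple enough to show directly. The loadings below are made up for illustration.

```python
def ave(loadings):
    """Average variance extracted for one construct, from
    standardized indicator loadings (Fornell & Larcker)."""
    return sum(l**2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability, assuming uncorrelated errors with
    variance 1 - loading**2 for standardized indicators."""
    s = sum(loadings)
    e = sum(1 - l**2 for l in loadings)
    return s**2 / (s**2 + e)

trust_loadings = [0.82, 0.79, 0.88]  # made-up standardized loadings
print(round(ave(trust_loadings), 2))                    # 0.69
print(round(composite_reliability(trust_loadings), 2))  # 0.87

# Common rules of thumb: AVE > 0.5 for convergent validity, and a
# construct's AVE exceeding its squared correlations with other
# constructs for discriminant validity.
```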
Article
The use of bots as virtual confederates in online field experiments holds extreme promise as a new methodological tool in computational social science. However, this potential tool comes with inherent ethical challenges. Informed consent can be difficult to obtain in many cases, and the use of confederates necessarily implies the use of deception. In this work we outline a design space for bots as virtual confederates, and we propose a set of guidelines for meeting the status quo for ethical experimentation. We draw upon examples from prior work in the CSCW community and the broader social science literature for illustration. While a handful of prior researchers have used bots in online experimentation, our work is meant to inspire future work in this area and raise awareness of the associated ethical issues.
Article
The privacy calculus established that online self-disclosures are based on a cost-benefit tradeoff. For the context of SNSs, however, the privacy calculus still needs further support as most studies consist of small student samples and analyze self-disclosure only, excluding self-withdrawal (e.g., the deletion of posts), which is essential in SNS contexts. Thus, this study used a U.S. representative sample to test the privacy calculus' generalizability and extend its theoretical framework by including both self-withdrawal behaviors and privacy self-efficacy. Results confirmed the extended privacy calculus model. Moreover, both privacy concerns and privacy self-efficacy positively predicted use of self-withdrawal. With regard to predicting self-disclosure in SNSs, benefits outweighed privacy concerns; regarding self-withdrawal, privacy concerns outweighed both privacy self-efficacy and benefits.
Article
Eye behavior metrics, time of run, and subjective survey results were assessed during human-computer interaction with high, low, and intermediate system autonomy levels. The results of this study are provided as a contribution to knowledge on the relationship between cognitive workload physiology and automation. Research suggests that changes in eye behavior metrics are related to changes in cognitive workload. Few studies have investigated the relationship between eye behavior physiology measures and levels of automation. A within-subjects experiment involving 18 participants who played an open-source real-time strategy game was conducted. Three different versions of the game were developed, each with a unique static autonomy level designed from Sheridan and Verplank's 10 levels of autonomy (levels 2, 4, and 9). NASA-TLX subjective survey ratings, time to complete a run, and visual fixation rate were found to be significantly different among automation levels. These findings suggest that assessing visual physiology may be a promising indicator for evaluating cognitive workload when interacting with static autonomy levels. This effort takes us one step closer to using visual physiology as a useful method for evaluating operator workload in near real time. Relevance to industry: Potential applications of this research include development of software that integrates adaptive automation to improve human-computer task performance for high cognitive workload tasks (air traffic control, aircraft piloting, process control, information analysis, etc.).
Article
Twitter's design allows the implementation of automated programs that can submit tweets, interact with others, and generate content based on algorithms. Scholars and end-users alike refer to these programs as “Twitterbots.” This two-part study explores the differences in perceptions of communication quality between a human agent and a Twitterbot in the areas of cognitive elaboration, information seeking, and learning outcomes. In accordance with the Computers Are Social Actors (CASA) framework (Reeves & Nass, 1996), results suggest that participants learned the same from either a Twitterbot or a human agent. Results are discussed in light of CASA, as well as implications and directions for future studies.
Article
This study empirically explored consumers’ response to the personalization–privacy paradox arising from the use of location-based mobile commerce (LBMC) and investigated the factors affecting consumers’ psychological and behavioral reactions to the paradox. A self-administered online consumer survey was conducted using a South Korean sample comprising those with experience using LBMC, and data from 517 respondents were analyzed. Using cluster analysis, consumers were categorized into four groups according to their responses regarding perceived personalization benefits and privacy risks: indifferent (n = 87), personalization oriented (n = 113), privacy oriented (n = 152), and ambivalent (n = 165). The results revealed significant differences across consumer groups in the antecedents and outcomes of the personalization–privacy paradox. Multiple regression analysis showed which factors influence the two outcome variables of the personalization–privacy paradox: internal conflict (psychological outcome) and continued use intention of LBMC (behavioral outcome). In conclusion, this study showed that consumer involvement, self-efficacy, and technology optimism significantly affected both outcome variables, whereas technology insecurity influenced internal conflict, and consumer trust influenced continued use intention. This study contributes to the current literature and provides practical implications for marketers and retailers aiming to succeed in the mobile commerce environment.
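A cluster analysis of the kind described, grouping respondents on two perception scores into four segments, can be sketched as follows; the simulated scores, scales, and centroid-labeling heuristic are our assumptions, not the study's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n = 517
# Simulated survey scores on hypothetical 1-7 scales (the study
# measured perceived personalization benefits and privacy risks).
benefits = rng.uniform(1, 7, size=n)
risks = rng.uniform(1, 7, size=n)
X = np.column_stack([benefits, risks])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Label clusters by their centroid pattern, mirroring the paper's
# four groups; the cutoff heuristic below is ours, not the paper's.
for center in km.cluster_centers_:
    b, r = center
    kind = ("ambivalent" if b > 4 and r > 4 else
            "personalization oriented" if b > 4 else
            "privacy oriented" if r > 4 else "indifferent")
    print(f"benefits={b:.1f}, risks={r:.1f} -> {kind}")
```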
Article
Dunn's test is the appropriate nonparametric pairwise multiple-comparison procedure when the null hypothesis of a Kruskal-Wallis test is rejected, and it is now implemented for Stata in the dunntest command. dunntest produces multiple comparisons following a Kruskal-Wallis k-way test by using Stata's built-in kwallis command. It includes options to control the familywise error rate by using Dunn's proposed Bonferroni adjustment, the Sidak adjustment, the Holm stepwise adjustment, or the Holm-Sidak stepwise adjustment. There is also an option to control the false discovery rate using the Benjamini-Hochberg stepwise adjustment.
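For readers outside Stata, the core of Dunn's procedure is straightforward to sketch: after a Kruskal-Wallis omnibus test, pairwise z-statistics are formed from mean ranks and then Bonferroni-adjusted. The minimal Python sketch below omits the tie correction that dunntest applies.

```python
import itertools
import numpy as np
from scipy import stats

def dunn_test(*groups):
    """Pairwise Dunn's z-tests after Kruskal-Wallis, with Bonferroni
    adjustment (no tie correction in this minimal sketch)."""
    data = np.concatenate(groups)
    ranks = stats.rankdata(data)
    n_total = data.size
    # Mean rank and size per group (groups were concatenated in order)
    mean_ranks, sizes, idx = [], [], 0
    for g in groups:
        mean_ranks.append(ranks[idx:idx + len(g)].mean())
        sizes.append(len(g))
        idx += len(g)
    m = len(groups) * (len(groups) - 1) // 2  # number of comparisons
    for i, j in itertools.combinations(range(len(groups)), 2):
        se = np.sqrt(n_total * (n_total + 1) / 12
                     * (1 / sizes[i] + 1 / sizes[j]))
        z = (mean_ranks[i] - mean_ranks[j]) / se
        p = min(1.0, 2 * stats.norm.sf(abs(z)) * m)  # Bonferroni
        print(f"group {i} vs {j}: z={z:.2f}, adj. p={p:.3f}")

a = [12, 15, 14, 16, 13]
b = [22, 25, 24, 21, 23]
c = [12, 14, 13, 15, 16]
print(stats.kruskal(a, b, c))  # run the omnibus test first
dunn_test(a, b, c)
```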