Article

Abstract

Emotion-aware chatbots that can sense human emotions are becoming increasingly prevalent. However, the exposure of emotions by emotion-aware chatbots undermines human autonomy and users' trust. One way to ensure autonomy is through the provision of control. Offering too much control, in turn, may increase users' cognitive effort. To investigate the impact of control over emotion-aware chatbots on autonomy, trust, and cognitive effort, as well as user behavior, we carried out an experimental study with 176 participants. The participants interacted with a chatbot that provided emotional feedback and were additionally able to control different chatbot dimensions (e.g., timing, appearance, and behavior). Our findings show, first, that higher control levels increase autonomy and trust in emotion-aware chatbots. Second, higher control levels do not significantly increase cognitive effort. Third, in our post hoc behavioral analysis, we identify four behavioral control strategies based on control feature usage timing, quantity, and cognitive effort. These findings shed light on individual preferences for user control over emotion-aware chatbots. Overall, our study contributes to the literature by showing the positive effect of control over emotion-aware chatbots and by identifying four behavioral control strategies. With our findings, we also provide practical implications for the future design of emotion-aware chatbots.

... This flexibility is known to improve user perceptions and behaviors (Burton, Stein, & Jensen, 2020; Dietvorst, Simmons, & Massey, 2018). Cognitive load theory (cf. Sweller, 1994) suggests that task outcomes improve in response to increased control, as long as the cognitive load does not increase significantly (Benke, Gnewuch, & Maedche, 2022; Van Merrienboer, Schuurman, De Croock, & Paas, 2002). Providing explanations of the system's reasoning is another technique widely considered a viable means for improving user perceptions and behaviors (Bansal et al., 2021; Gregor & Benbasat, 1999; Miller, 2019). ...
... In contrast, when tasks are perceived as too difficult, people are less motivated to invest mental effort to complete the task (Vandewaetere & Clarebout, 2013). Research has revealed that increased decision control improves user outcomes: for example, Benke et al. (2022) show that, when interacting with chatbots, increased decision control improves perceptions of user trust and task performance, while cognitive load increases only slightly and not significantly. Further, Dietvorst et al. (2018) show that giving users the option to adjust forecasting outcomes even slightly improves task performance. ...
... Study 1 shows that high decision control improves user perceptions of trust and understanding, as well as intended and actual compliance. This finding is in line with Dietvorst et al. (2018) and Benke et al. (2022), who demonstrated that giving people control over the task outcome improves perceptions as well as task performance. In Study 2, we will test whether explanation presence, another technique commonly used in human-AI collaboration, also improves user perceptions of trust and understanding, as well as intended and actual compliance. ...
Article
Human-AI collaboration has become common, integrating highly complex AI systems into the workplace. Still, it is often ineffective; impaired perceptions—such as low trust or limited understanding—reduce compliance with recommendations provided by the AI system. Drawing from cognitive load theory, we examine two techniques of human-AI collaboration as potential remedies. In three experimental studies, we grant users decision control by empowering them to adjust the system's recommendations, and we offer explanations for the system's reasoning. We find decision control positively affects user perceptions of trust and understanding, and improves user compliance with system recommendations. Next, we isolate different effects of providing explanations that may help explain inconsistent findings in recent literature: while explanations help reenact the system's reasoning, they also increase task complexity. Further, the effectiveness of providing an explanation depends on the specific user's cognitive ability to handle complex tasks. In summary, our study shows that users benefit from enhanced decision control, while explanations—unless appropriately designed for the specific user—may even harm user perceptions and compliance. This work bears both theoretical and practical implications for the management of human-AI collaboration.
... A social chatbot acts as an artificial companion, satisfying the human need for communication, affection, and social belonging (Zhou et al., 2020). The primary purpose of a social chatbot is therapeutic, which is especially critical for psychologically sensitive individuals who require emotional and social support (Benke et al., 2022). ...
... Evidence indicates that individuals lacking social-interaction ability with real people are likely to connect with a social chatbot (Benke et al., 2022). Their frequent interactions with a social chatbot indicate a degree of compulsiveness in which individuals experience a loss of control and repeatedly perform the behavior (Liu et al., 2022). ...
Article
This study investigates the impact of social interaction anxiety on compulsive chat with a social chatbot named Xiaoice. To provide insights into the limited literature, the authors explore the role of fear of negative evaluation (FONE) and fear of rejection (FOR) as mediators in this relationship. By applying a variance-based structural equation modeling on a non-clinical sample of 366 Chinese university students who have interacted with Xiaoice, the authors find that social interaction anxiety increases compulsive chat with a social chatbot both directly and indirectly through fear of negative evaluation and rejection, with a more substantial effect of the former. The mediating effect of fear of negative evaluation transfers through fear of rejection, which establishes a serial link between social interaction anxiety and compulsive chat with a social chatbot. Further, frustration about unavailability (FAU) strengthens the relationship between FOR and compulsive chat with a social chatbot (CCSC). These findings offer theoretical and practical insights into our understanding of the process by which social interaction anxiety influences chat behavior with a social chatbot.
... Finding acceptable solutions will ensure the feasibility of e-health applications and lead to new tools and technologies for future healthcare applications. Emotion-aware AI identifies human emotions based on facial expressions [3]. Healthcare technology still has a long way to go before it can capture human emotions. ...
... The PHQ-9 technique distinguishes nine behaviors from those included in the diagnostic disorders (DSM-V). According to the article, PHQ-9 symptoms are then categorized into different disorders, including sleep, concentration, and eating disorders. ...
Article
Depression is a severe medical condition that substantially impacts people’s daily lives. Recently, researchers have examined user-generated data from social media platforms to detect and diagnose this mental illness. In this paper, we therefore focus on phrases used in personal remarks to recognize grief on social media. This research aims to develop generalized attention networks (GATs) that employ masked self-attention layers to address the depression text-categorization problem. The networks assign a weight to each node in a neighborhood based on the neighbors' properties/emotions, without using expensive matrix operations such as similarity computations or requiring architectural knowledge. This study expands the emotional vocabulary through the use of hypernyms. As a result, our architecture outperforms the competition. Our experimental results show that the emotion lexicon combined with an attention network achieves a receiver operating characteristic (ROC) score of 0.87 while staying interpretable and transparent. After obtaining qualitative agreement from the psychiatrist, the learned embedding is used to show the contribution of each symptom to the activated word. By utilizing unlabeled forum text, the approach increases the rate of detecting depression symptoms from online data.
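The masked self-attention described in this abstract resembles the standard graph-attention formulation; as a hedged illustration (the paper's exact variant may differ), the neighborhood weighting can be written as

e_{ij} = \mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}\mathbf{h}_i \,\|\, \mathbf{W}\mathbf{h}_j]\big), \qquad \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})}, \qquad \mathbf{h}_i' = \sigma\Big(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\,\mathbf{W}\mathbf{h}_j\Big),

where the softmax over the neighborhood \mathcal{N}(i) implements the masking: each node's updated representation depends only on its neighbors' features, with no dense similarity matrix over all node pairs.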
... In this respect, we need more research on how exactly to implement adaptation and adaptability (Diederich et al., 2022). Novel AI developments could expand the possibilities for user adaptation, for example, when CAs are designed to be sensitive to the user's personality (Ahmad et al., 2021) or emotions (Benke et al., 2022). With this progress in the field of AI, the technical possibilities for implementing the individual design principles will evolve over time. ...
Article
Full-text available
Due to significant technological progress in the field of artificial intelligence, conversational agents have the potential to become smarter, deepen the interaction with their users, and overcome a function of merely assisting. Since humans often treat computers as social actors, theories on interpersonal relationships can be applied to human-machine interaction. Taking these theories into account in designing conversational agents provides the basis for a collaborative and benevolent long-term relationship, which can result in virtual companionship. However, we lack prescriptive design knowledge for virtual companionship. We addressed this with a systematic and iterative design science research approach, deriving meta-requirements and five theoretically grounded design principles. We evaluated our prescriptive design knowledge by taking a two-way approach, first instantiating and evaluating the virtual classmate Sarah, and second analyzing Replika, an existing virtual companion. Our results show that with virtual companionship, conversational agents can incorporate the construct of companionship known from human-human relationships by addressing the need to belong, to build interpersonal trust, social exchange, and a reciprocal and benevolent interaction. The findings are summarized in a nascent design theory for virtual companionship, providing guidance on how our design prescriptions can be instantiated and adapted to different domains and applications of conversational agents.
Article
In a digitally empowered business world, a growing number of family businesses are leveraging the use of chatbots in an attempt to improve customer experience. This research investigates the antecedents of chatbots’ successful use in small family businesses. Subsequently, we determine the effect of two distinctive sets of human–machine communication factors—functional and humanoid—on customer experience. We assess the latter with respect to its effect on customer satisfaction. While a form of intimate attachment can occur between customers and small businesses, affective commitment is prevalent in customers’ attitudes and could be conflicting with the distant and impersonal nature of chatbot services. Therefore, we also test the moderating role of customers’ affective commitment in the relationship between customer experience and customer satisfaction. Data come from 408 respondents, and the results offer an explicit course of action for family businesses to effectively embed chatbot services in their customer communication. The study provides practical and theoretical insights that stipulate the dimensions of chatbots’ effective use in the context of small family businesses.
Article
Full-text available
There has been a recent surge of interest in social chatbots, and human–chatbot relationships (HCRs) are becoming more prevalent, but little knowledge exists on how HCRs develop and may impact the broader social context of the users. Guided by Social Penetration Theory, we interviewed 18 participants, all of whom had developed a friendship with a social chatbot named Replika, to understand the HCR development process. We find that at the outset, HCRs typically have a superficial character motivated by the users' curiosity. The evolving HCRs are characterised by substantial affective exploration and engagement as the users' trust and engagement in self-disclosure increase. As the relationship evolves to a stable state, the frequency of interactions may decrease, but the relationship can still be seen as having substantial affective and social value. The relationship with the social chatbot was found to be rewarding to its users, positively impacting the participants' perceived wellbeing. Key chatbot characteristics facilitating relationship development included the chatbot being seen as accepting, understanding and non-judgmental. The perceived impact on the users' broader social context was mixed, and a sense of stigma associated with HCRs was reported. We propose an initial model representing the HCR development identified in this study and suggest avenues for future research.
Article
Full-text available
Fueled by the spread of tools like Slack or Microsoft Teams, the usage of text-based communication in distributed teams has grown massively in organizations. This brings distributed teams many advantages; however, a critical shortcoming in these setups is the decreased ability to perceive, understand, and regulate emotions. This is problematic because team members' better emotion-management abilities positively impact team-level outcomes like team cohesion and team performance, while poor abilities diminish communication flow and well-being. Leveraging chatbot technology in distributed teams has been recognized as a promising approach to reintroduce and improve upon these abilities. In this article we present three chatbot designs for emotion management in distributed teams. To develop these designs, we conducted three participatory design workshops which resulted in 153 sketches. Subsequently, we evaluated the designs in an exploratory evaluation with 27 participants. Results show general stimulating effects on emotion awareness and communication efficiency. Further, participants report emotion regulation and increased compromise facilitation through social and interactive design features, but also perceived threats like loss of control. With some design features adversely impacting emotion management, we highlight design implications and discuss chatbot design recommendations for enhancing emotion management in teams.
Article
Full-text available
From past research it is well known that social exclusion has detrimental consequences for mental health. To deal with these adverse effects, socially excluded individuals frequently turn to other humans for emotional support. While chatbots can elicit social and emotional responses on the part of the human interlocutor, their effectiveness in the context of social exclusion has not been investigated. In the present study, we examined whether an empathic chatbot can serve as a buffer against the adverse effects of social ostracism. After experiencing exclusion on social media, participants were randomly assigned to either talk with an empathetic chatbot about it (e.g., “I’m sorry that this happened to you”) or a control condition where their responses were merely acknowledged (e.g., “Thank you for your feedback”). Replicating previous research, results revealed that experiences of social exclusion dampened the mood of participants. Interacting with an empathetic chatbot, however, appeared to have a mitigating impact. In particular, participants in the chatbot intervention condition reported higher mood than those in the control condition. Theoretical, methodological, and practical implications, as well as directions for future research are discussed.
Article
Full-text available
This article describes the development of Microsoft XiaoIce, the most popular social chatbot in the world. XiaoIce is uniquely designed as an artificial intelligence companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligence quotient and emotional quotient in system design, cast human–machine social chat as decision-making over Markov Decision Processes, and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components, including dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intent, and responds to user needs throughout long conversations. Since the release in 2014, XiaoIce has communicated with over 660 million active users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.
Article
Full-text available
AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) x 2 (standard vs. AI-mediated messaging app) between subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to existing literature regarding perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
Conference Paper
Full-text available
Maintaining a positive group emotion is important for team collaboration. It is, however, a challenging task for self-managing teams especially when they conduct intra-group collaboration via text-based communication tools. Recent advances in AI technologies open the opportunity of using chatbots for emotion regulation in group chat. However, little is known about how to design such a chatbot and how group members react to its presence. As an initial exploration, we design GremoBot based on text analysis technology and emotion regulation literature. We then conduct a study with nine three-person teams performing different types of collective tasks. In general, participants find GremoBot useful for reinforcing positive feelings and steering them away from negative words. We further discuss the lessons learned and considerations derived for designing a chatbot for group emotion management.
Article
Full-text available
Conversational agents (CAs) are software-based systems designed to interact with humans using natural language and have attracted considerable research interest in recent years. Following the Computers Are Social Actors paradigm, many studies have shown that humans react socially to CAs when they display social cues such as small talk, gender, age, gestures, or facial expressions. However, research on social cues for CAs is scattered across different fields, often using their specific terminology, which makes it challenging to identify, classify, and accumulate existing knowledge. To address this problem, we conducted a systematic literature review to identify an initial set of social cues of CAs from existing research. Building on classifications from interpersonal communication theory, we developed a taxonomy that classifies the identified social cues into four major categories (i.e., verbal, visual, auditory, invisible) and ten subcategories. Subsequently, we evaluated the mapping between the identified social cues and the categories using a card sorting approach in order to verify that the taxonomy is natural, simple, and parsimonious. Finally, we demonstrate the usefulness of the taxonomy by classifying a broader and more generic set of social cues of CAs from existing research and practice. Our main contribution is a comprehensive taxonomy of social cues for CAs. For researchers, the taxonomy helps to systematically classify research about social cues into one of the taxonomy's categories and corresponding subcategories. Therefore, it builds a bridge between different research fields and provides a starting point for interdisciplinary research and knowledge accumulation. For practitioners, the taxonomy provides a systematic overview of relevant categories of social cues in order to identify, implement, and test their effects in the design of a CA.
Conference Paper
Full-text available
In this study, we develop two new perspectives for technostress mitigation from the viewpoint of coping. First, we examine users' emotional coping responses to stressful IT, focusing specifically on distress venting and distancing from IT. As these mechanisms may not always be effective for individuals' well-being, we extend our approach to self-regulation in coping, which concerns general stress-resistance. Thus, we specifically study how IT control moderates the effect of emotional coping responses to stressful situations involving IT use. We test the proposed model in a cross-sectional study of IT users from multiple organizations (N=1,091). The study contributes to information systems literature by uncovering mechanisms individuals can use to mitigate the negative effects of technostress and by delineating the less-understood perspective of interrelated coping mechanisms: how emotional coping responses are moderated by IT control towards more favorable outcomes. Implications of the research are discussed.
Conference Paper
Full-text available
Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Article
Full-text available
Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA impact the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers to mask their deception. To better understand this interaction, this research investigates the effect of a CA's conversational skill—that is, its ability to mimic human conversation—on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads to users engaging in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counter-productive.
Article
Full-text available
We present artificial intelligence (AI) agents that act as interviewers to engage with a user in a text-based conversation and automatically infer the user's personality traits. We investigate how the personality of an AI interviewer and the inferred personality of a user influence the user's trust in the AI interviewer from two perspectives: the user's willingness to confide in and listen to an AI interviewer. We have developed two AI interviewers with distinct personalities and deployed them in a series of real-world events. We present findings from four such deployments involving 1,280 users, including 606 actual job applicants. Notably, users are more willing to confide in and listen to an AI interviewer with a serious, assertive personality in a high-stakes job interview. Moreover, users’ personality traits, inferred from their chat text, along with interview context, influence their perception of and their willingness to confide in and listen to an AI interviewer. Finally, we discuss the design implications of our work on building hyper-personalized, intelligent agents.
Article
Full-text available
Disclosing the current location of a person can seriously affect their privacy, but many apps request location information to provide location-based services. Simultaneously, these apps provide only crude controls for location privacy settings (sharing all or nothing). There is an ongoing discussion about rights of users regarding their location privacy (e.g. in the context of the General Data Protection Regulation – GDPR). GDPR requires data collectors to notify users about data collection and to provide them with opt-out options. To address these requirements, we propose a set of user interface (UI) controls for fine-grained management of location privacy settings based on privacy theory (Westin), privacy by design principles and general UI design principles. The UI notifies users about the state of location data sharing and provides controls for adjusting location sharing preferences. It addresses three key issues: whom to share location with, when to share it, and where to share it. Results of a user study (N=23) indicate that (1) the proposed interface led to a greater sense of control, that (2) it was usable and well received, and that (3) participants were keen on using it in real life. Our findings can inform the development of interfaces to manage location privacy.
Conference Paper
Full-text available
A future in which conversations with machines involve mutual emotions between the parties may not be so far away. Inspired by the Black Mirror episode "Be Right Back" and Replika, a futuristic app that promises to be "your best friend", in this work we consider the positive and negative aspects of admitting an automated learning conversational agent into the personal world of feelings and emotions. These systems can impact both individuals and society, worsening an already critical situation. We conclude that regulation of artificial emotional content should be considered before going beyond certain one-way-only limits.
Conference Paper
Full-text available
We present and discuss a fully-automated collaboration system, CoCo, that allows multiple participants to video chat and receive feedback through custom video conferencing software. After a conferencing session, a virtual feedback assistant provides insights on the conversation to participants. CoCo automatically pulls audial and visual data during conversations and analyzes the extracted streams for affective features, including smiles, engagement, attention, as well as speech overlap and turn-taking. We validated CoCo with 39 participants split into 10 groups. Participants played two back-to-back team-building games, Lost at Sea and Survival on the Moon, with the system providing feedback between the two. With feedback, we found a statistically significant change in balanced participation; that is, everyone spoke for an equal amount of time. There was also statistically significant improvement in participants' self-evaluations of conversational skills awareness, including how often they let others speak, as well as of teammates' conversational skills. The entire framework is available at https://github.com/ROC-HCI/CollaborationCoach_PostFeedback.
Conference Paper
Full-text available
There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people’s motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is “productivity”; chatbots help users to obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.
Conference Paper
Full-text available
Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests were not addressed in a timely manner, or were not addressed at all. To overcome this problem, we create a new conversational system that automatically generates responses to users' requests on social media. Our system is integrated with state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and the system is about as good as human agents in showing empathy to help users cope with emotional situations. Results also show our system outperforms an information retrieval system based on both human judgments and an automatic evaluation metric.
Article
Full-text available
In 2016, Microsoft launched Tay, an experimental artificial intelligence chat bot. Learning from interactions with Twitter users, Tay was shut down after one day because of its obscene and inflammatory tweets. This article uses the case of Tay to re-examine theories of agency. How did users view the personality and actions of an artificial intelligence chat bot when interacting with Tay on Twitter? Using phenomenological research methods and pragmatic approaches to agency, we look at what people said about Tay to study how they imagine and interact with emerging technologies and to show the limitations of our current theories of agency for describing communication in these settings. We show how different qualities of agency, different expectations for technologies, and different capacities for affordance emerge in the interactions between people and artificial intelligence. We argue that a perspective of “symbiotic agency”—informed by the imagined affordances of emerging technology—is required to really understand the collapse of Tay.
Article
Full-text available
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what's new, and what's different this time? And is there an opportunity here to improve how our industry does technology transfer?
Article
Full-text available
We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism (the degree to which an agent exhibits human characteristics) is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.
Article
Conversational Artificial Intelligence (AI)-backed Alexa, Siri, and Google Assistant are examples of voice-based digital assistants (VBDA) that are ubiquitously occupying our living spaces. While they gather an enormous amount of personal information to provide a bespoke user experience, they also evoke serious privacy concerns regarding the collection, use, and storage of consumers' personal data. The objective of this research is to examine consumers' perception of privacy concerns and, in turn, its influence on the adoption of VBDA. We extend the celebrated UTAUT2 model with perceived privacy concerns, perceived privacy risk and perceived trust. With the assistance of survey data collected from tech-savvy respondents, we show that trust in technology and in the service provider plays an important role in the adoption of VBDA. In addition, we observe that consumers make a trade-off between the privacy risks and the benefits associated with VBDA while adopting such technologies, reiterating their calculus behaviour. Contrary to the extant literature, our results indicate that consumers' perceived privacy risk does not influence adoption intention directly; it is mediated through perceived privacy concerns and consumers' trust. We conclude the paper with theoretical and managerial implications.
Article
In the current era, interacting with Artificial Intelligence (AI) has become an everyday activity. Understanding the interaction between humans and AI is of potential value because, in future, such interactions are expected to become more pervasive. Two studies—one survey and one experiment—were conducted to demonstrate positive effects of anthropomorphism on interactions with smart-speaker-based AI assistants and to examine the mediating role of psychological distance in this relationship. The results of Study 1, an online survey, showed that participants with a higher tendency to anthropomorphize their AI assistant/s evaluated it/them more positively, and this effect was mediated by psychological distance. In Study 2, the hypotheses were tested in a more sophisticated experiment. Again, the results indicated that, in the high-anthropomorphism (vs. low-anthropomorphism) condition, participants had more positive attitudes toward the AI assistant, and the effect was mediated by psychological distance. Though several studies have demonstrated the effect of anthropomorphism, few have probed the underlying mechanism of anthropomorphism thoroughly. The current research not only contributes to the anthropomorphism literature, but also provides direction to research on facilitating human–AI interaction.
Article
Many of the world's leading brands and increasingly government agencies are using intelligent agent technologies, also known as chatbots to interact with consumers. However, consumer satisfaction with chatbots is mixed. Consumers report frustration with chatbots arising from misunderstood questions, irrelevant responses, and poor integration with human service agents. This study examines whether human-computer interactions can be more personalized by matching consumer personality with congruent machine personality using language. Although the idea that personality is manifested through language, and that people are more likely to be responsive to others with the same personality is well known, there is a dearth of research that examines whether this is consistent for human-computer interactions. Based on a sample of over 57,000 chatbot interactions, this study demonstrates that consumer personality can be predicted during contextual interactions, and that chatbots can be manipulated to ‘assume a personality’ using response language. Matching consumer personality with congruent chatbot personality had a positive impact on consumer engagement with chatbots and purchasing outcomes for interactions involving social gain.
Article
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Article
It is now common for people to encounter artificial intelligence (AI) across many areas of their personal and professional lives. Interactions with AI agents may range from the routine use of information technology tools to encounters where people perceive an artificial agent as exhibiting mind. Combining two studies (useable N = 266), we explore people's qualitative descriptions of a personal encounter with an AI in which it exhibits characteristics of mind. Across a range of situations reported, a clear pattern emerged in the responses: the majority of people report their own emotions including surprise, amazement, happiness, disappointment, amusement, unease, and confusion in their encounter with a minded AI. We argue that emotional reactions occur as part of mind perception as people negotiate between the disparate concepts of programmed electronic devices and actions indicative of human-like minds. Specifically, emotions are often tied to AIs that produce extraordinary outcomes, inhabit crucial social roles, and engage in human-like actions. We conclude with future directions and the implications for ethics, the psychology of mind perception, the philosophy of mind, and the nature of social interactions in a world of increasingly sophisticated AIs.
Article
Multivariate analysis of variance (MANOVA) is a powerful and versatile method to infer and quantify main and interaction effects in metric multivariate multi-factor data. It is, however, neither robust against change in units nor meaningful for ordinal data. Thus, we propose a novel nonparametric MANOVA. Contrary to existing rank-based procedures, we infer hypotheses formulated in terms of meaningful Mann–Whitney-type effects in lieu of distribution functions. The tests are based on a quadratic form in multivariate rank effect estimators, and critical values are obtained by bootstrap techniques. The newly developed procedures provide asymptotically exact and consistent inference for general models such as the nonparametric Behrens–Fisher problem and multivariate one-, two-, and higher-way crossed layouts. Computer simulations in small samples confirm the reliability of the developed method for ordinal and metric data with covariance heterogeneity. Finally, an analysis of a real data example illustrates the applicability and correct interpretation of the results.
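In the notation commonly used for this rank-based approach (a hedged sketch; the article's exact symbols may differ), the Mann–Whitney-type effect for two independent samples X \sim F_1 and Y \sim F_2 is

p = P(X < Y) + \tfrac{1}{2}\,P(X = Y) = \int F_1^{*}\, dF_2,

where F^{*} = \tfrac{1}{2}(F^{-} + F^{+}) is the normalized (mid-rank) distribution function and p > 1/2 indicates that observations from the second group tend to be larger. Null hypotheses are then stated as H_0 : \mathbf{T}\mathbf{p} = \mathbf{0} for a contrast matrix \mathbf{T} over the vector of effects \mathbf{p}, and tested with a quadratic form in the rank-based estimator \hat{\mathbf{p}}, with critical values obtained by bootstrap.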
Article
Today, people increasingly rely on computer agents in their lives, from searching for information, to chatting with a bot, to performing everyday tasks. These agent-based systems are our first forays into a world in which machines will assist, teach, counsel, care for, and entertain us. While one could imagine purely rational agents in these roles, this prospect is not attractive for several reasons, which we will outline in this article. The field of affective computing concerns the design and development of computer systems that sense, interpret, adapt, and potentially respond appropriately to human emotions. Here, we specifically focus on the design of affective agents and assistants. Emotions play a significant role in our decisions, memory, and well-being. Furthermore, they are critical for facilitating effective communication and social interactions. So, it makes sense that the emotional component surrounding the design of computer agents should be at the forefront of this design discussion.
Article
When we ask a chatbot for advice about a personal problem, should it simply provide informational support and refrain from offering emotional support? Or, should it show sympathy and empathize with our situation? Although expression of caring and understanding is valued in supportive human communications, do we want the same from a chatbot, or do we simply reject it due to its artificiality and uncanniness? To answer this question, we conducted two experiments with a chatbot providing online medical advice about a sensitive personal issue. In Study 1, participants (N = 158) simply read a dialogue between a chatbot and a human user. In Study 2, participants (N = 88) interacted with a real chatbot. We tested the effect of three types of empathic expression (sympathy, cognitive empathy, and affective empathy) on individuals' perceptions of the service and the chatbot. Data reveal that expression of sympathy and empathy is favored over unemotional provision of advice, in support of the Computers Are Social Actors (CASA) paradigm. This is particularly true for users who are initially skeptical about machines possessing social cognitive capabilities. Theoretical, methodological, and practical implications are discussed.
Article
This study investigates whether social- versus task-oriented interaction of virtual shopping assistants differentially benefits low versus high Internet competency older consumers with respect to social (perceived interactivity, trust), cognitive (perceived information load), functional (self-efficacy, perceived ease of use, perceived usefulness), and behavioral intent (website patronage intent) outcomes in an online shopping task. A total of 121 older adults (61–89 years) participated in a laboratory experiment with a 2 (digital assistant interaction style: social- vs. task-oriented) × 2 (user Internet competency: low vs. high) × 2 (user exchange modality: text vs. voice) between-subjects design. The results revealed that users' Internet competency and the digital assistant's conversational style had significant interaction effects on social, functional, and behavioral intent outcomes. Social-oriented digital assistants lead to superior social outcomes (enhanced perceptions of two-way interactivity and trust in the integrity of the site) for older users with high Internet competency, who need less task-related assistance. On the other hand, low-competency older users showed significantly superior cognitive (lower perceived information load) and functional outcomes (greater perceived ease and self-efficacy of using the site) when the digital assistant employed a task-oriented interaction style. Theoretical and agent design implications are discussed.
Article
Although current social machine technology cannot fully exhibit the hallmarks of human morality or agency, popular culture representations and emerging technology make it increasingly important to examine human interlocutors’ perception of social machines (e.g., digital assistants, chatbots, robots) as moral agents. To facilitate such scholarship, the notion of perceived moral agency (PMA) is proposed and defined, and a metric developed and validated through two studies: (1) a large-scale online survey featuring potential scale items and concurrent validation metrics for both machine and human targets, and (2) a scale validation study with robots presented as variably agentic and moral. The PMA metric is shown to be reliable, valid, and exhibiting predictive utility.
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Article
Disembodied conversational agents in the form of chatbots are increasingly becoming a reality on social media and messaging applications, and are a particularly pressing topic for service encounters with companies. Adopting an experimental design with actual chatbots powered with current technology, this study explores the extent to which human-like cues such as language style and name, and the framing used to introduce the chatbot to the consumer can influence perceptions about social presence as well as mindful and mindless anthropomorphism. Moreover, this study investigates the relevance of anthropomorphism and social presence to important company-related outcomes, such as attitudes, satisfaction and the emotional connection that consumers feel with the company after interacting with the chatbot.
Article
Future applications are envisioned in which a single human operator manages multiple heterogeneous unmanned vehicles (UVs) by working together with an autonomy teammate that consists of several intelligent decision-aiding agents/services. This article describes recent advancements in developing a new interface paradigm that will support human-autonomy teaming for air, ground, and surface (sea craft) UVs in defence of a military base. Several concise and integrated candidate control station interfaces are described by which the operator determines the role of autonomy in UV management using an adaptable automation control scheme. An extended play calling based control approach is used to support human-autonomy communication and teaming in managing how UV assets respond to potential threats (e.g. asset allocation, routing, and execution details). The design process for the interfaces is also described including: analysis of a base defence scenario used to guide this effort, consideration of ecological interface design constructs, and generation of UV and task-related pictorial symbology.
Article
Presents an integrative theoretical framework to explain and to predict psychological changes achieved by different modes of treatment. This theory states that psychological procedures, whatever their form, alter the level and strength of self-efficacy. It is hypothesized that expectations of personal efficacy determine whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences. Persistence in activities that are subjectively threatening but in fact relatively safe produces, through experiences of mastery, further enhancement of self-efficacy and corresponding reductions in defensive behavior. In the proposed model, expectations of personal efficacy are derived from 4 principal sources of information: performance accomplishments, vicarious experience, verbal persuasion, and physiological states. Factors influencing the cognitive processing of efficacy information arise from enactive, vicarious, exhortative, and emotive sources. The differential power of diverse therapeutic procedures is analyzed in terms of the postulated cognitive mechanism of operation. Findings are reported from microanalyses of enactive, vicarious, and emotive modes of treatment that support the hypothesized relationship between perceived self-efficacy and behavioral changes.
Article
Privacy notice and choice are essential aspects of privacy and data protection regulation worldwide. Yet, today's privacy notices and controls are surprisingly ineffective at informing users or allowing them to express choice. Here, the authors analyze why existing privacy notices fail to inform users and tend to leave them helpless, and discuss principles for designing more effective privacy notices and controls.
Article
This article describes conversation-based assessments with computer agents that interact with humans through chat, talking heads, or embodied animated avatars. Some of these agents perform actions, interact with multimedia, hold conversations with humans in natural language, and adaptively respond to a person’s actions, verbal contributions, and emotions. Data are logged throughout the interactions in order to assess the individual’s mastery of subject matters, skills, and proficiencies on both cognitive and noncognitive characteristics. There are different agent-based designs that focus on learning and assessment. Dialogues occur between one agent and one human, as in the case of intelligent tutoring systems. Three-party conversations, called trialogues, involve two agents interacting with a human. The two agents can take on different roles (such as tutors and peers), model actions and social interactions, stage arguments, solicit help from the human, and collaboratively solve problems. Examples of assessment with these agent-based environments are presented in the context of intelligent tutoring, educational games, and interventions to help struggling adult readers. Most of these involve assessment at varying grain sizes to guide the intelligent interaction, but conversation-based assessment with agents is also currently being used in high stakes assessments.
Article
This study investigates how user satisfaction with, and intention to use, an interactive movie recommendation system are determined by communication variables and the relationship between the conversational agent and the user. By adopting the Computers-Are-Social-Actors (CASA) paradigm and uncertainty reduction theory, this study examines the influence of self-disclosure and reciprocity as key communication variables on user satisfaction. A two-way ANOVA test was conducted to analyze the effects of self-disclosure and reciprocity on user satisfaction with a conversational agent. The interaction effect of self-disclosure and reciprocity on user satisfaction was not significant, but both main effects proved to be significant. PLS analysis results showed that perceived trust and interactional enjoyment are significant mediators in the relationship between communication variables and user satisfaction. In addition, reciprocity is a stronger variable than self-disclosure in predicting relationship building between an agent and a user. Finally, user satisfaction is an influential factor in intention to use. These findings have implications from both practical and theoretical perspectives.
Article
The statistical tests used in the analysis of structural equation models with unobservable variables and measurement error are examined. A drawback of the commonly applied chi square test, in addition to the known problems related to sample size and power, is that it may indicate an increasing correspondence between the hypothesized model and the observed data as both the measurement properties and the relationship between constructs decline. Further, and contrary to common assertion, the risk of making a Type II error can be substantial even when the sample size is large. Moreover, the present testing methods are unable to assess a model's explanatory power. To overcome these problems, the authors develop and apply a testing system based on measures of shared variance within the structural model, measurement model, and overall model.
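As a hedged illustration of the shared-variance measures this line of work advocates (the function names, loadings, and thresholds below are illustrative assumptions, not taken from the article), average variance extracted and composite reliability can be computed from standardized indicator loadings in Python:

# Minimal sketch: shared-variance measures for a reflective construct,
# computed from standardized indicator loadings (hypothetical values).

def average_variance_extracted(loadings):
    # AVE: mean squared standardized loading of the construct's indicators.
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # with each error variance approximated as 1 - loading^2.
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

loadings = [0.82, 0.76, 0.88]  # hypothetical standardized loadings
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # commonly compared against 0.50
print(f"CR  = {composite_reliability(loadings):.3f}")       # commonly compared against 0.70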
Article
The use of bots as virtual confederates in online field experiments holds extreme promise as a new methodological tool in computational social science. However, this potential tool comes with inherent ethical challenges. Informed consent can be difficult to obtain in many cases, and the use of confederates necessarily implies the use of deception. In this work we outline a design space for bots as virtual confederates, and we propose a set of guidelines for meeting the status quo for ethical experimentation. We draw upon examples from prior work in the CSCW community and the broader social science literature for illustration. While a handful of prior researchers have used bots in online experimentation, our work is meant to inspire future work in this area and raise awareness of the associated ethical issues.
Article
The privacy calculus established that online self-disclosures are based on a cost-benefit tradeoff. For the context of SNSs, however, the privacy calculus still needs further support as most studies consist of small student samples and analyze self-disclosure only, excluding self-withdrawal (e.g., the deletion of posts), which is essential in SNS contexts. Thus, this study used a U.S. representative sample to test the privacy calculus' generalizability and extend its theoretical framework by including both self-withdrawal behaviors and privacy self-efficacy. Results confirmed the extended privacy calculus model. Moreover, both privacy concerns and privacy self-efficacy positively predicted use of self-withdrawal. With regard to predicting self-disclosure in SNSs, benefits outweighed privacy concerns; regarding self-withdrawal, privacy concerns outweighed both privacy self-efficacy and benefits.
Article
Eye behavior metrics, time of run, and subjective survey results were assessed during human-computer interaction with high, low, and intermediate system autonomy levels. The results of this study are provided as a contribution to knowledge on the relationship between cognitive workload physiology and automation. Research suggests that changes in eye behavior metrics are related to changes in cognitive workload. Few studies have investigated the relationship between eye behavior physiology measures and levels of automation. A within-subjects experiment involving 18 participants who played an open-source real-time strategy game was conducted. Three different versions of the game were developed, each with a unique static autonomy level designed from Sheridan and Verplank's 10 levels of autonomy (levels 2, 4, and 9). NASA-TLX subjective survey ratings, time to complete a run, and visual fixation rate were found to be significantly different among automation levels. These findings suggest that assessing visual physiology may be a promising indicator for evaluating cognitive workload when interacting with static autonomy levels. This effort takes us one step closer to using visual physiology as a useful method for evaluating operator workload in almost real-time. Relevance to industry: Potential applications of this research include development of software that integrates adaptive automation to improve human-computer task performance for high cognitive workload tasks (air traffic control, aircraft piloting, process control, information analysis, etc.).
Article
Twitter's design allows the implementation of automated programs that can submit tweets, interact with others, and generate content based on algorithms. Scholars and end-users alike refer to these programs as "Twitterbots." This two-part study explores the differences in perceptions of communication quality between a human agent and a Twitterbot in the areas of cognitive elaboration, information seeking, and learning outcomes. In accordance with the Computers Are Social Actors (CASA) framework (Reeves & Nass, 1996), results suggest that participants learned the same from either a Twitterbot or a human agent. Results are discussed in light of CASA, as well as implications and directions for future studies.
Article
This study empirically explored consumers’ response to the personalization–privacy paradox arising from the use of location-based mobile commerce (LBMC) and investigated the factors affecting consumers’ psychological and behavioral reactions to the paradox. A self-administered online consumer survey was conducted using a South Korean sample comprising those with experience using LBMC, and data from 517 respondents were analyzed. Using cluster analysis, consumers were categorized into four groups according to their responses regarding perceived personalization benefits and privacy risks: indifferent (n = 87), personalization oriented (n = 113), privacy oriented (n = 152), and ambivalent (n = 165). The results revealed significant differences across consumer groups in the antecedents and outcomes of the personalization–privacy paradox. Multiple regression analysis showed that factors influence the two outcome variables of the personalization–privacy paradox: internal conflict (psychological outcome) and continued use intention of LBMC (behavioral outcome). In conclusion, this study showed that consumer involvement, self-efficacy, and technology optimism significantly affected both outcome variables, whereas technology insecurity influenced internal conflict, and consumer trust influenced continued use intention. This study contributes to the current literature and provides practical implications for marketers and retailers aiming to succeed in the mobile commerce environment.
Article
Dunn's test is the appropriate nonparametric pairwise multiple-comparison procedure when a Kruskal-Wallis test is rejected, and it is now implemented for Stata in the dunntest command. dunntest produces multiple comparisons following a Kruskal-Wallis k-way test by using Stata's built-in kwallis command. It includes options to control the familywise error rate by using Dunn's proposed Bonferroni adjustment, the Sidak adjustment, the Holm stepwise adjustment, or the Holm-Sidak stepwise adjustment. There is also an option to control the false discovery rate using the Benjamini-Hochberg stepwise adjustment.
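Since dunntest is a Stata command, a rough Python equivalent is sketched below (an assumption for illustration, not part of the article; it relies on scipy and the third-party scikit-posthocs package, and the data are hypothetical):

# Hedged sketch: Kruskal-Wallis omnibus test followed by Dunn's pairwise
# comparisons with a Holm adjustment, mirroring what dunntest does in Stata.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical data: one value column, one group column.
df = pd.DataFrame({
    "value": [3.1, 2.9, 4.2, 5.0, 4.8, 5.3, 2.2, 2.5, 2.8],
    "group": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

groups = [g["value"].values for _, g in df.groupby("group")]
h_stat, p_omnibus = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_omnibus:.4f}")

# Run Dunn's pairwise comparisons only if the omnibus test is rejected.
if p_omnibus < 0.05:
    pairwise_p = sp.posthoc_dunn(df, val_col="value", group_col="group", p_adjust="holm")
    print(pairwise_p)  # matrix of Holm-adjusted pairwise p-values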