The COVID-19 pandemic challenged the existing healthcare system by requiring potential patients to self-diagnose and self-test for a possible infection. In this process, some individuals need help and guidance. However, the previous modus operandi of visiting a physician is no longer viable because of limited capacity and the danger of spreading the virus. Hence, digital means had to be developed to help and inform individuals at home, such as conversational agents (CAs). The human-like design and perceived social presence of such CAs are central to attaining users' compliance. Against this background, we surveyed 174 users of a commercial COVID-19 chatbot to investigate the role of perceived social presence. Our results indicate that the perceived social presence of chatbots leads to higher levels of trust, which in turn drives compliance. In contrast, perceived persuasiveness seems to have no significant effect.
Conversational Agents (CAs) are becoming part of our everyday lives, whether in the form of voice assistants (such as Siri or Alexa) or as chatbots (for instance, on Facebook). When interacting with CAs, users frequently display aggressive behavior towards them, such as insulting them. However, why some users tend to harass CAs and how this behavior can be explained remains unclear. To address this, we conducted a two-condition online experiment with 201 participants on the interrelation of human-like design, dissatisfaction, frustration, and aggression. The results reveal that frustration drives aggression. Specifically, users with high impulsivity tend to severely insult a CA when it makes an error. To prevent such behavior, our results indicate that the human-like design of CAs reduces dissatisfaction, which is a driver of frustration. However, we also discovered that human-like design directly increases frustration, making it a double-edged sword that has to be investigated further.
The increasing application of Conversational Agents (CAs) is rapidly changing the way customers access services. In practice, however, CAs are frequently flawed; for example, they often misinterpret users' requests. Even with future advancements in technology, it remains unlikely that a CA will be flawless and error-free. Thus, understanding how to design CAs so that users are more willing to overlook errors constitutes an important area of research. In this regard, our study positions the human-like design of CAs as a potential way to ensure service satisfaction when errors occur. We examined the effect of human-like design in flawed CAs by conducting a two-condition experiment with 421 participants, analyzing its effect on users' emotional state and service satisfaction. Our results show that the human-like design of flawed CAs leads to more positive emotions, which improve users' service satisfaction. Therefore, designers should adopt a human-like design to prevent dissatisfaction with flawed CAs.
So-called “lootboxes” have recently developed into a quasi-standard for online gaming. Lootboxes are digital single-use containers with random items, which can be used to change the appearance of a player’s online persona or to progress faster through the game. Lootboxes are awarded for specific achievements (e.g., playing a certain number of hours), encouraging players to play many hours and take on increasingly difficult tasks. Through the lens of gamification, lootboxes offer a new approach to shaping user motivation and behavior. In this study, an online experiment with 414 participants was conducted to investigate the potential of lootboxes as a gamification element. Two lootbox designs were tested against awarding badges and a control treatment (no gamification) in a non-game context. Our results indicate that lootboxes containing changes to the nature of a task (e.g., making it easier) show great potential to motivate users and increase performance.
Conversational agents (CAs), described as software with which humans interact through natural language, have increasingly attracted interest in both academia and practice, due to improved capabilities driven by advances in artificial intelligence and, specifically, natural language processing. CAs are used in contexts like people's private life, education, and healthcare, as well as in organizations, to innovate and automate tasks, for example in marketing and sales or customer service. In addition to these application contexts, such agents take on different forms concerning their embodiment, the communication mode, and their (often human-like) design. Despite their popularity, many CAs fail to fulfill expectations, and fostering a positive user experience remains a challenging endeavor. To better understand how CAs can be designed to fulfill their intended purpose, and how humans interact with them, a multitude of studies focusing on human-computer interaction have been carried out. These have contributed to our understanding of this technology. However, a structured overview of this research is currently missing, which impedes the systematic identification of research gaps and of knowledge on which to build in future studies. To address this issue, we have conducted an organizing and assessing review of 262 studies, applying a socio-technical lens to analyze CA research regarding the user interaction, context, agent design, as well as perception and outcome. We contribute an overview of the status quo of CA research, identify four research streams through a cluster analysis, and propose a research agenda comprising six avenues and sixteen directions to move the field forward.
In human interaction with CAs, research has shown that elements of persuasive system design, such as praise, are perceived differently compared to traditional graphical interfaces. In this experimental study, we extend our knowledge regarding the relation of persuasiveness (namely dialog support), anthropomorphically designed CAs, and task performance. Within a three-condition between-subjects design, two instances of the CA are applied in an online experiment with 120 participants. Our results show that anthropomorphically designed CAs increase perceived dialog support and performance, but adding persuasive design elements can be counterproductive. Thus, the results are embedded in the discourse of CA design for task support.
Conversational Agents (CAs) in the form of digital assistants on smartphones, chatbots on social media, or physically embodied systems are an increasingly common form of user interface for digital systems. The human-like design of CAs (e.g., having names, greeting users, and using self-references) leads users to subconsciously react to them as if they were interacting with a human. Recent research has shown that this social component of interacting with a CA leads to various benefits, such as increased service satisfaction, enjoyment, and trust. However, numerous CAs have been discontinued because of inadequate responses to user requests or errors caused by a CA's limited functionality and knowledge, which can lead to frustration. Therefore, investigating the causes of frustration and other related emotions and reactions is highly relevant. Against this background, this study investigates, via an online experiment with 169 participants, how different communication patterns influence users' perception of, frustration with, and harassment behavior towards an error-producing CA.
Conversational agents (CAs), defined as software with which users interact through natural language, have gained increasing interest in education due to their potential to support individual learning on a large scale. With improved capabilities driven by advances in machine learning and natural language processing, these agents pave the way for a new generation of tutoring systems that offer an intuitive learning experience and can automatically adapt to individual learning styles and needs. A particular characteristic of CAs is their potential for anthropomorphic design, which can facilitate the feeling of human contact in technology-enabled individual learning and contribute to a learner’s motivation. While recent studies provide valuable prescriptive knowledge on how to design pedagogical CAs, we still lack an understanding of the impact of a human-like design on individual motivational regulations and perceived inclusiveness. In this study, we contribute to closing this research gap by investigating individual learners’ perception of a CA by means of a 3x1 experiment with 149 participants. Drawing on Social Response Theory and Self-Determination Theory, we find empirical evidence that anthropomorphic design and associated perceptions of humanness contribute to individual intrinsic motivation of learners and discover a positive effect on inclusiveness.
Conversational agents (CAs) have attracted the interest of organizations due to their potential for automated service provision combined with the feeling of a human-like interaction. Emerging studies on CAs indicate a positive impact of humanness on customer perception and explore approaches for their anthropomorphic design, comprising both the appearance and behavior of the agent. While these studies provide valuable knowledge on how to design human-like CAs, we still lack an understanding of the limited conversational capabilities of this technology and their impact on user perception. Oftentimes, these limitations lead to frustrated users and discontinued CAs in practice. We address this gap by investigating the impact of response failure, understood as the inability of a CA to provide a meaningful reply, in a service context, drawing on Social Response Theory and the Theory of Uncanny Valley. By means of an experiment with 169 participants, we find that (1) response failure is detrimental to the perception of humanness and increases feelings of uncanniness, (2) humanness (uncanniness) positively (negatively) influences familiarity and service satisfaction, and (3) the negative impact of response failure on user perception is significant, yet does not lead to a sharp drop as posited by the Theory of Uncanny Valley.
Sustainable mobility behavior, for example through the flexible sharing of cars or the use of environmentally friendly alternatives such as e-scooters or e-bikes, is becoming increasingly important and is receiving more and more attention in our society (Brendel and Mandrella 2016). Particularly in urban areas, people have access to a wide range of environmentally friendly mobility options (Willing et al. 2017). Against this background, information systems can contribute to promoting the individual use of sustainable mobility options (Flüchter and Wortmann 2014).
The increasing capabilities of conversational agents (CAs) offer manifold opportunities to assist users in a variety of tasks. In an organizational context, particularly their potential to simulate a human-like interaction via natural language currently attracts attention, both at the customer interface as well as for internal purposes, often in the form of chatbots. Emerging experimental studies on CAs examine the impact of anthropomorphic design elements, so-called social cues, on user perception. However, while these studies provide valuable prescriptive knowledge on selected social cues, they neglect the potential detrimental influence of the limited responsiveness of present-day conversational agents. In practice, many CAs fail to continuously provide meaningful responses in a conversation due to the open nature of natural language interaction, which negatively influences user perception and has often led to CAs being discontinued in the past. Thus, designing a CA that provides a human-like interaction experience while minimizing the risks associated with limited conversational capabilities represents a substantial design problem. This study addresses the aforementioned problem by proposing and evaluating a design for a CA that offers a human-like interaction experience while mitigating negative effects due to limited responsiveness. Through the presentation of the artifact and the synthesis of prescriptive knowledge in the form of a nascent design theory for anthropomorphic enterprise CAs, this research adds to the growing knowledge base on designing human-like assistants and supports practitioners seeking to introduce them in their organizations.
Sustainable mobility behavior is increasingly relevant due to the vast environmental impact of current transportation systems. With the growing variety of transportation modes, individual decisions for or against specific mobility options become more and more important, and salient beliefs regarding the environmental impact of different modes influence this decision process. While information systems have been recognized for their potential to shape individual beliefs and behavior, design-oriented studies that explore their impact, in particular on environmental beliefs, remain scarce. In this study, we contribute to closing this research gap by designing and evaluating a new type of artifact, a persuasive and human-like conversational agent, in a 2x2 experiment with 225 participants. Drawing on the Theory of Planned Behavior and Social Response Theory, we find empirical support for the influence of persuasive design elements on individual environmental beliefs and discover that anthropomorphic design can contribute to increasing the persuasiveness of artifacts.
Conversational agents currently attract strong interest for technology-based service provision due to increased capabilities driven by advances in machine learning and natural language processing. The interaction via natural language in combination with a human-like design promises service that is always available, fast, and of consistent quality, and at the same time resembles a human service encounter. However, current conversational agents exhibit the same inherent limitation as every interactive technology: a lack of social skills. In this study, we make a first step towards overcoming this limitation by presenting a design approach that combines automatic sentiment analysis with adaptive responses to emulate empathy in a service encounter. By means of an experiment with 112 participants, we evaluate the approach and find empirical support that a CA with sentiment-adaptive responses is perceived as more empathetic, human-like, and socially present and, in particular, yields a higher service encounter satisfaction.
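The general idea of combining sentiment analysis with adaptive responses can be illustrated with a minimal sketch. Note that the word lists, thresholds, and reply templates below are illustrative assumptions for demonstration purposes only, not the classifier or response design used in the study.

```python
# Minimal sketch of sentiment-adaptive response selection.
# Lexicon, thresholds, and reply templates are illustrative assumptions.

NEGATIVE = {"angry", "annoyed", "terrible", "broken", "frustrated"}
POSITIVE = {"great", "thanks", "happy", "perfect", "good"}

def sentiment_score(message: str) -> int:
    """Naive lexicon-based sentiment: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def adaptive_reply(message: str, answer: str) -> str:
    """Prepend an empathetic cue to the service answer based on user sentiment."""
    score = sentiment_score(message)
    if score < 0:
        return "I'm sorry to hear that. " + answer
    if score > 0:
        return "Glad to hear it! " + answer
    return answer

print(adaptive_reply("My order is broken and I am frustrated",
                     "Let me check your order status."))
```

In a production system, the lexicon lookup would typically be replaced by a trained sentiment classifier, but the control flow (classify the user's message, then adapt the phrasing of the reply) stays the same.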
Currently, conversational agents (CAs) attract strong interest in research and practice alike (Luger and Sellen 2016). While the idea of natural language interaction with computers dates back to the 1960s (Weizenbaum 1966), significantly enhanced capabilities through developments in machine learning and natural language processing have led to a renewed interest in recent years (Knijnenburg and Willemsen 2016). However, many CAs could not fulfill the high user expectations and were often discontinued (Ben Mimoun et al. 2012). This gap between user expectations and system capabilities can be better understood by drawing on Social Response Theory (Nass and Moon 2000). The human-like characteristics of CAs, such as communicating via natural language, being named, or sharing artificial thoughts and emotions, trigger social responses by the users. While these social responses offer interesting design opportunities, such as the design of persuasive CAs (Adler et al. 2016) or empathetic CAs (Hu et al. 2018), they can lead users to hold similar expectations of the systems as they have towards humans (Seeger et al. 2017), which are often not in line with the system’s actual capabilities (Luger and Sellen 2016). In sum, the successful design of CAs remains a practical challenge and an interesting phenomenon to study. [...]
Software that interacts with its users through natural language, so-called conversational agents (CAs), is permeating our lives with improving capabilities driven by advances in machine learning and natural language processing. For organizations, CAs have the potential to innovate and automate a variety of tasks and processes, for example in customer service or marketing and sales, yet successful design remains a major challenge. Over the last few years, a variety of platforms that offer different approaches and functionality for designing CAs have emerged. In this paper, we analyze 51 CA platforms to develop a taxonomy and empirically identify archetypes of platforms by means of a cluster analysis. Based on our analysis, we propose an extended taxonomy with eleven dimensions and three archetypes that contribute to existing work on CA design and can guide practitioners in the design of CAs for their organizations.
Conversational agents (CAs), i.e., software that interacts with its users through natural language, are becoming increasingly prevalent in everyday life as technological advances continue to significantly drive their capabilities. CAs exhibit the potential to support and collaborate with humans in a multitude of tasks and can be used for innovation and automation across a variety of business functions, such as customer service or marketing and sales. Parallel to this increasing popularity in practice, IS researchers have engaged in studying a variety of aspects related to CAs in the last few years, applying different research methods and producing different types of theories. In this paper, we review 36 studies to assess the status quo of CA research in IS, identify gaps regarding both the studied aspects as well as the applied methods and theoretical approaches, and propose directions for future work in this research area.
Conversational agents continue to permeate our lives in different forms, such as virtual assistants on mobile devices or chatbots on websites and social media. The interaction with users through natural language offers various aspects for researchers to study as well as application domains for practitioners to explore. In particular, their design represents an interesting phenomenon to investigate, as humans show social responses to these agents and successful design remains a challenge in practice. Compared to digital human-to-human communication, text-based conversational agents can provide complementary, preset answer options with which users can conveniently and quickly respond in the interaction. However, their use might also decrease the perceived human likeness and social presence of the agent, as the user does not respond naturally by thinking of and formulating a reply. In this study, we conducted an experiment with N=80 participants in a customer service context to explore the impact of such elements on agent anthropomorphism and user satisfaction. The results show that their use reduces perceived humanness and social presence yet does not significantly increase service satisfaction. On the contrary, our findings indicate that preset answer options might even be detrimental to service satisfaction, as they diminish the natural feel of human-CA interaction.