
Abstract

Conversational agents (CAs) are software-based systems designed to interact with humans using natural language and have attracted considerable research interest in recent years. Following the Computers Are Social Actors paradigm, many studies have shown that humans react socially to CAs when they display social cues such as small talk, gender, age, gestures, or facial expressions. However, research on social cues for CAs is scattered across different fields, often using their specific terminology, which makes it challenging to identify, classify, and accumulate existing knowledge. To address this problem, we conducted a systematic literature review to identify an initial set of social cues of CAs from existing research. Building on classifications from interpersonal communication theory, we developed a taxonomy that classifies the identified social cues into four major categories (i.e., verbal, visual, auditory, invisible) and ten subcategories. Subsequently, we evaluated the mapping between the identified social cues and the categories using a card sorting approach in order to verify that the taxonomy is natural, simple, and parsimonious. Finally, we demonstrate the usefulness of the taxonomy by classifying a broader and more generic set of social cues of CAs from existing research and practice. Our main contribution is a comprehensive taxonomy of social cues for CAs. For researchers, the taxonomy helps to systematically classify research about social cues into one of the taxonomy's categories and corresponding subcategories. Therefore, it builds a bridge between different research fields and provides a starting point for interdisciplinary research and knowledge accumulation. For practitioners, the taxonomy provides a systematic overview of relevant categories of social cues in order to identify, implement, and test their effects in the design of a CA.
This is the author’s version of a work that was published in the following source
Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A Taxonomy of Social Cues for
Conversational Agents. International Journal of Human-Computer Studies, 132, 138-161. DOI
Please note: Copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Fritz-Erler-Strasse 23
76133 Karlsruhe - Germany
Karlsruhe Service Research Institute (KSRI)
Kaiserstraße 89
76133 Karlsruhe – Germany
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license
... Following Krämer et al.'s (2011) prior work on VCS, however, we do not see designing virtual companions to replicate humans as an ideal, since machines that are too human-like can have negative perceptual effects (Mori, 2012; Seymour et al., 2021). Rather, we focus on integrating individual social elements at a well-balanced level to enrich the interaction with CAs (Feine et al., 2019; Seeger et al., 2018). ...
... Further findings have established that if a machine has features normally associated with humans, such as interactivity, the use of natural language, or a human-like appearance, users respond with social behavior and social attributions (Moon, 2000). Since such human-like design can amplify humans' social responses, interactions between individuals and computers can become social (Feine et al., 2019; Nass & Moon, 2000; Nass et al., 1994). Humans respond this way because they are accustomed to social behavior coming only from other humans. ...
... This emphasizes the importance of the CASA theory as one of the most significant theories in the context of CA research. Social reactions to a CA's behavior can be enhanced if the CA exhibits human-like behavior through social cues (Feine et al., 2019; Seeger et al., 2018). Many users perceive the integration of such human-like elements as pleasant (Feine et al., 2019). ...
Due to significant technological progress in the field of artificial intelligence, conversational agents have the potential to become smarter, deepen the interaction with their users, and move beyond merely assisting. Since humans often treat computers as social actors, theories on interpersonal relationships can be applied to human-machine interaction. Taking these theories into account when designing conversational agents provides the basis for a collaborative and benevolent long-term relationship, which can result in virtual companionship. However, we lack prescriptive design knowledge for virtual companionship. We addressed this with a systematic and iterative design science research approach, deriving meta-requirements and five theoretically grounded design principles. We evaluated our prescriptive design knowledge in a two-way approach, first instantiating and evaluating the virtual classmate Sarah, and second analyzing Replika, an existing virtual companion. Our results show that, with virtual companionship, conversational agents can incorporate the construct of companionship known from human-human relationships by addressing the need to belong, building interpersonal trust, and supporting social exchange and a reciprocal, benevolent interaction. The findings are summarized in a nascent design theory for virtual companionship, providing guidance on how our design prescriptions can be instantiated and adapted to different domains and applications of conversational agents.
... Recent technological advancement has given rise to a plethora of chatbots, digital assistants, and social robots. They are conversational agents (CAs) - software-based systems that use natural language and are developed to interact with humans (Adam et al., 2021; Feine et al., 2019). CAs function at different levels of sophistication, from a simple, pre-determined input-response dialog flow to adaptive, AI-based pattern recognition and prediction. ...
... Social cues can positively influence digital humans' and other CAs' social presence: the richer and more human-like the cues, the more social the agent is perceived to be (Feine et al., 2019). Naturalness also depends on social cues (Feine et al., 2019). Facial expression is an instance of a rich social cue, and it increases a digital human's perceived trustworthiness, as well as its perceived sociability and usefulness (Loveys et al., 2020; Philip et al., 2020). ...
Digital agents with human-like characteristics have become ubiquitous in our society and are increasingly relevant in commercial applications. While some of them closely resemble humans in appearance (e.g., digital humans), they still lack many subtle social cues that are important for interacting with humans. Among them are the so-called microexpressions: facial expressions that are short, subtle, and involuntary. We investigate to what extent microexpressions in digital humans influence people's perceptions and decision-making, in order to inform digital human design practices. Our two experiments applied four types of microexpressions based on emotion type (happiness and anger) and intensity (normal and extreme). This paper is among the first to design and evaluate microexpressions with different intensity levels in digital humans. In particular, we leverage the possibilities of digitally (re)designing humans and probing human perception, possibilities that are feasible only in a digital environment, where microexpressions can be explored beyond real human beings' physical capabilities.
... From the business automation perspective, anthropomorphism is described as a basic psychological process that can facilitate social interactions between human and nonhuman entities; it is considered an essential construct for understanding people's perception of robots, as it sustains humans' natural needs for social connection, understanding, and control of their environment (Blut et al., 2021). The anthropomorphic characteristics of AI devices cover a wide variety of elements, ranging from the physical appearance and form of AI devices (Lu, Cai and Gursoy, 2019; Song, 2020; Song and Luximon, 2020; Chong et al., 2021) and the use of human voice and conversational skills (Feine et al., 2019; Ashfaq et al., 2020; Adam, Wessel and Benlian, 2021) to the transfer of human-like behavior, psychological traits and characteristics (Lu, Cai and Gursoy, 2019; Mohanty, 2020; Pelau, Dabija and Ene, 2021a) and even a social role conferred on them (Damiano and Dumouchel, 2018). We have identified some key challenges for each category of anthropomorphic AI devices. ...
... An important category of AI devices are chatbots, which can interact with consumers by voice and through conversational skills. From the point of view of chatbot-human interaction, consumers' positive or negative reactions hinge on certain social characteristics, such as the verbal approach, visual emphasis of word meaning, auditory presence components like voice and vocalizations, or invisible behavioral traits like a certain response time (Feine et al., 2019; Ashfaq et al., 2020). Other researchers studied the way in which verbal anthropomorphic design cues (like identity, small talk, and empathy) of an AI-based chatbot affect user request compliance. ...
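The four cue categories named in this snippet (verbal, visual, auditory, invisible) mirror the taxonomy's top level. A minimal sketch of how they could be represented follows; the example cues are drawn only from the abstract and snippets above, and the data structure and helper function are illustrative, not the paper's:

```python
# Illustrative sketch of the taxonomy's four top-level categories.
# Example cues are taken from the surrounding text; the ten subcategories
# are not reproduced here, and this structure is an assumption for illustration.
SOCIAL_CUE_TAXONOMY = {
    "verbal": ["small talk", "identity", "empathy"],
    "visual": ["gestures", "facial expressions", "avatar appearance"],
    "auditory": ["voice", "vocalizations"],
    "invisible": ["response time"],
}

def categorize(cue: str):
    """Return the top-level category of a given cue, or None if unknown."""
    for category, cues in SOCIAL_CUE_TAXONOMY.items():
        if cue in cues:
            return category
    return None

print(categorize("small talk"))     # verbal
print(categorize("response time"))  # invisible
```

Such a lookup is one way a practitioner could systematically tag a CA's design elements before testing their effects.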
Conference Paper
Anthropomorphic characteristics of AI devices and robots are an important topic for their development and future acceptance in the business environment and society. Human-like characteristics can increase an AI device's friendliness and social acceptance, but at the same time the interaction with a human-like AI device can feel unnatural. In this paper we focus on an empirical comparative analysis of how men and women perceive physical anthropomorphic characteristics of AI devices. Based on an online survey with two conditions (anthropomorphic vs. non-anthropomorphic), we measured the perception of men and women towards human-like features of AI devices. In contrast to previous research, the analysis was conducted within the gender groups, so we analyzed the two conditions for women and men separately. The results show that men are more sensitive to physical anthropomorphic characteristics of AI devices: while no significant differences between the two conditions were observed for women, significant differences were found for men. Men perceive a higher emotional involvement with the anthropomorphic AI device, but they rather trust, and are more willing to buy, the robot with less anthropomorphic features.
... Based on the Scopus database, a search for the keywords "conversational agents" and chatbots returns 53 journal articles from various disciplines published in the last three years. Of these 53 articles, 14 relate to consumer and/or marketing topics (Araujo, 2018; Ashfaq, Yun, Yu, & Loureiro, 2020; Cacanindin, 2020; Feine, Gnewuch, Morana, & Maedche, 2019; Hauser-Ulrich, Künzli, Meier-Peterhans, & Kowatsch, 2020; Ho, Hancock, & Miner, 2018; Huang, Sing, Kumar, & Pradana, 2018; Liu, Huang, Wu, Zhu, & Ba, 2020; Nazir, Yaseen, Ahmed, Imran, & Wasi, 2019; Pantano & Pizzi, 2020; S. Park et al., 2019; Powell, 2019; Rhee ...
Research Proposal
... One stream of research argues that consumers anthropomorphize machines and adhere to social rules when interacting with them (Feine et al., 2019). Consumers apply the social heuristics of human interaction to their interactions with machines via the attribution of social traits (Edwards et al., 2019). ...
Influencer marketing has become a dominant and targeted means for brands to connect with consumers, but it also brings risks associated with influencer transgression and reputation damage. In recent years, virtual influencers have gained popularity and given rise to a new form of falsity: artificially created and manipulated influencers that are revolutionizing the field of influencer marketing. A virtual influencer is an entity (humanlike or not) that is autonomously controlled by artificial intelligence and visually presented as an interactive, real-time rendered being in a digital environment. As brands increasingly seek to engage virtual influencers to connect with and sell to audiences, we take a step back and discuss the opportunities and challenges they present for firms and managers. To help marketers understand this emerging field, we first document the rise of virtual influencers. Next, we discuss consumer reactions to virtual influencers before unpacking their associated opportunities and challenges for brands and marketers. Finally, we conclude with an overview of implications and future considerations.
... These social cues may include indications of personality, emotion, and attitude through voice communication (Nass & Lee, 2001; Wichemann, 2000; Vaissière, 2008). While the terms social and anthropomorphic cues are often used interchangeably, this paper defines social cues as 'cues commonly also used in human interaction that invite a social response', similar to Feine et al. (2019). Furthermore, Mayer et al. (2003) propose in their 'social agency theory' that such social cues in multimedia can allow users to experience a conversation with a computer more akin to a human conversation. ...
With advancements in voice technology, there are benefits to be reaped by using more human-like voices in chatbots. Intonation patterns can be applied to such voices to give them a human-like attitude, which may lead users to perceive the agent as anthropomorphic. On the one hand, anthropomorphism may in turn foster trust, since human-like voices may be deemed more trustworthy, and these concepts often relate to each other in human-computer interaction. On the other hand, humans tend to trust someone less when confronted with a negative attitudinal intonation pattern, such as doubt. The present study investigated this contradiction with two hypothesized mediation models: (1) doubtful intonation leads to higher anthropomorphism, mediated by social presence, and (2) doubtful intonation leads to higher trust, mediated by anthropomorphism. In an interactive chatbot-guided museum tour, participants heard a chatbot with a doubtful intonation applied or with the standard text-to-speech output. Two mediation analyses revealed that using a doubtful intonation pattern leads to a higher perception of anthropomorphism, mediated by social presence. However, they also showed that the usage of doubtful intonation directly lowers trust, and found no evidence of anthropomorphism as a mediator. Therefore, the results indicate the importance of assessing the effects of separate intonation patterns in voice-based chatbots, and recommendations and potential directions for future research on such patterns are discussed.
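The mediation logic described above (condition, mediator, outcome) can be sketched with two least-squares regressions, where the indirect effect is the product of the condition-to-mediator and mediator-to-outcome paths. The data, coefficient values, and variable names below are synthetic illustrations, not the study's measurements:

```python
import numpy as np

# Illustrative simple mediation sketch (X -> M -> Y); all data are synthetic.
rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n).astype(float)      # condition: doubtful intonation (1) vs. standard TTS (0)
m = 0.5 * x + rng.normal(0, 1, n)            # mediator, e.g. perceived social presence
y = 0.6 * m + 0.1 * x + rng.normal(0, 1, n)  # outcome, e.g. perceived anthropomorphism

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(x, m)[1]                              # path a: condition -> mediator
b = ols(np.column_stack([m, x]), y)[1]        # path b: mediator -> outcome, controlling for condition
c_prime = ols(np.column_stack([m, x]), y)[2]  # direct effect of condition on outcome
indirect = a * b                              # indirect (mediated) effect
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

In practice, mediation studies such as the one above additionally bootstrap a confidence interval for the indirect effect rather than reporting the point estimate alone.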
... This effect can be explained by the uncanny valley theory, as discussed by Mori et al. (2012). Based on the CASA paradigm, anthropomorphic software design elements (also called social cues), such as giving the VC a name or a particular visual design of the avatar, can trigger social reactions in humans (e.g., trust or liking) (Feine et al. 2019). A persuasive system design is of pivotal importance for the effectiveness of the VC, with the human-coach relationship being a central factor (Bickmore and Picard 2005; Ding et al. 2010; Kamphorst 2017). ...
... The versatile features met with approval, although individual design recommendations (e.g., the anthropomorphic design) were discussed controversially, similar to the literature (e.g., Feine et al., 2019; Lester et al., 1997; Moyle et al., 2019), which is why the prototype also offers non-human alternatives (screen C). The design features (DFs) for overcoming the language barrier were rated as particularly important, with students emphasizing that incentives should be provided to continually improve language skills, for example through a reward system for unlocking interesting additional features. ...
Conference Paper
International students often have difficulties in connecting with other students (from their host country) or in fully understanding lectures, due to barriers such as interacting in a foreign language or adjusting to a new campus. eLearning Companions (eLCs) act as virtual friends, accompany students with dialog-based support for learning, and provide individual guidance. We address the lack of prescriptive design knowledge for this specific use case by deriving 16 design principles for eLCs and transferring them into an expository instantiation along the Design Science Research paradigm. We build upon 14 identified literature requirements and 15 condensed user requirements resulting from an empirical study with 76 Chinese-speaking exchange students at a German university. Our objective is to extend the knowledge base and support scientists and practitioners in eLC design for non-native students, in order to initiate further research and discussion.
This study examined the possibility of cooperation between human and communicative artificial intelligence (AI) by conducting a prisoner’s dilemma experiment. A 2 (AI vs human partner) × 2 (cooperative vs non-cooperative partner) between-subjects six-trial prisoner’s dilemma experiment was employed. Participants played the strategy game with a cooperative AI, non-cooperative AI, cooperative human, and non-cooperative human partner. Results showed that when partners (both communicative AI and human partners) proposed cooperation on the first trial, 80% to 90% of the participants also cooperated. More than 75% kept the promise and decided to cooperate. About 60% to 80% proposed, committed, and decided to cooperate when their partner proposed and kept the commitment to cooperate across trials, no matter whether the partner was a cooperative human or communicative AI. Overall, participants were more likely to commit and cooperate with cooperative AI partners than with non-cooperative AI and human partners.
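The cooperation dynamics described above rest on the standard prisoner's dilemma payoff structure, in which defecting against a cooperator pays best for the defector while mutual cooperation pays best jointly. A minimal sketch follows; the point values are conventional textbook assumptions, since the abstract does not report the payoffs used in the experiment:

```python
# Illustrative prisoner's dilemma payoffs (participant, partner);
# "C" = cooperate, "D" = defect. These values are assumed, not the study's.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator is exploited
    ("D", "C"): (5, 0),  # defector exploits a cooperator
    ("D", "D"): (1, 1),  # mutual defection
}

def play_trial(choice_participant: str, choice_partner: str):
    """Return (participant_points, partner_points) for one trial."""
    return PAYOFFS[(choice_participant, choice_partner)]

# Six trials against an unconditionally cooperative partner (AI or human),
# with the participant also cooperating throughout, as most participants did:
total = sum(play_trial("C", "C")[0] for _ in range(6))
print(total)  # 18 points from sustained mutual cooperation
```

Under this structure, defecting on any single trial would yield more points in that trial, which is what makes the observed sustained cooperation with communicative AI partners notable.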
Technological advances have enabled firms to automate customer service by employing artificial intelligence (AI) chatbots. Despite their many potential benefits, interactions with chatbots may still feel machine‐like and cold. The current study proposes the use of humor by chatbots as a gateway to humanizing them and thereby enhancing the customer experience. Across three experimental studies, the results reveal that (i) the use of humor enhances service satisfaction when it is used by a chatbot but not when it is used by a human agent, (ii) this chatbot humor effect is serially mediated by enhanced perceptions of anthropomorphism and interestingness of the interactions with the chatbot, and (iii) while both positively and negatively valenced chatbot humor may enhance the interestingness of the interactions, socially appropriate (i.e., affiliative) humor as opposed to inappropriate (i.e., aggressive) humor leads to enhanced service satisfaction. This study extends the understanding of the humanization processes of chatbots and provides guidelines for how firms should use chatbot humor to positively influence consumers’ service satisfaction.