Article
PDF Available

Abstract and Figures

Conversational agents (CAs) are software-based systems designed to interact with humans using natural language and have attracted considerable research interest in recent years. Following the Computers Are Social Actors paradigm, many studies have shown that humans react socially to CAs when they display social cues such as small talk, gender, age, gestures, or facial expressions. However, research on social cues for CAs is scattered across different fields, often using their specific terminology, which makes it challenging to identify, classify, and accumulate existing knowledge. To address this problem, we conducted a systematic literature review to identify an initial set of social cues of CAs from existing research. Building on classifications from interpersonal communication theory, we developed a taxonomy that classifies the identified social cues into four major categories (i.e., verbal, visual, auditory, invisible) and ten subcategories. Subsequently, we evaluated the mapping between the identified social cues and the categories using a card sorting approach in order to verify that the taxonomy is natural, simple, and parsimonious. Finally, we demonstrate the usefulness of the taxonomy by classifying a broader and more generic set of social cues of CAs from existing research and practice. Our main contribution is a comprehensive taxonomy of social cues for CAs. For researchers, the taxonomy helps to systematically classify research about social cues into one of the taxonomy's categories and corresponding subcategories. Therefore, it builds a bridge between different research fields and provides a starting point for interdisciplinary research and knowledge accumulation. For practitioners, the taxonomy provides a systematic overview of relevant categories of social cues in order to identify, implement, and test their effects in the design of a CA.
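As a minimal illustration (not taken from the paper itself), the four major categories named in the abstract can be expressed as a simple lookup structure. The Python sketch below uses an assumed, non-exhaustive cue-to-category mapping for demonstration; it does not reproduce the paper's full set of cues or its ten subcategories.

```python
# Minimal sketch of the taxonomy's four major cue categories.
# The cue-to-category mapping is illustrative only and is NOT the
# paper's complete classification.

MAJOR_CATEGORIES = ("verbal", "visual", "auditory", "invisible")

# Hypothetical mapping of a few example cues to major categories.
EXAMPLE_CUES = {
    "small talk": "verbal",
    "gesture": "visual",
    "facial expression": "visual",
    "voice pitch": "auditory",      # assumed example, not quoted from the paper
    "response delay": "invisible",  # assumed example, not quoted from the paper
}

def classify_cue(cue: str) -> str:
    """Return the major category for a known cue, or 'unclassified'."""
    return EXAMPLE_CUES.get(cue.lower(), "unclassified")

if __name__ == "__main__":
    for cue in ("Small talk", "Gesture", "Voice pitch", "Eye color"):
        print(f"{cue!r} -> {classify_cue(cue)}")
```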
This is the author’s version of a work that was published in the following source
Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A Taxonomy of Social Cues for
Conversational Agents. International Journal of Human-Computer Studies, 132, 138-161. DOI
https://doi.org/10.1016/j.ijhcs.2019.07.009
Please note: Copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Fritz-Erler-Strasse 23
76133 Karlsruhe - Germany
http://iism.kit.edu
Karlsruhe Service Research Institute (KSRI)
Kaiserstraße 89
76133 Karlsruhe – Germany
http://ksri.kit.edu
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
... As teaching and learning increasingly shift into the digital space and become technologically supported, the socially inclusive design of information systems for educational purposes must be taken into greater consideration, especially because humans tend to apply social rules, norms, expectations, and attitudes from reality to situations in which they communicate with machines (Lee & Nass, 2010). In this context, the inclusive design of Pedagogical Conversational Agents (PCAs) appears to be particularly relevant, as they embody social cues such as a human identity (e.g., avatar, gender) and verbal cues (Feine et al., 2019; Seeger et al., 2018) like natural and potentially gender-sensitive language, while also gaining attention in research and practice. Despite several studies that target minorities as the key user group for interacting with PCAs, such as disadvantaged learners (Gupta & Chen, 2022) or internationals with language barriers, research that explores the impact of their inclusive design is scarce. ...
... Further studies are needed to address this challenge and promote inclusion and diversity in PCAs and other information systems. Designers and developers should mindfully consider social cues (Feine et al., 2019), i.e., gendering and inclusiveness, to mitigate bias without harming variables relevant to learning with PCAs. ...
Conference Paper
Full-text available
This study examines the impact of different avatar pictures (gender & disability representation) and gendering on students' perceptions of chatbots in an interaction on learning strategies with 180 students from a German university. In the first experiment, we manipulated the chatbot’s humanoid profile picture based on gender and the representation of a visible handicap (wheelchair). In the second experiment, we varied its language style. Statistical analysis revealed that displaying a physical disability significantly enhanced trust, credibility, and empathy but reduced perceived competence and dominance. Gender-sensitive language improved perceptions of competence, trust, credibility, and empathy, whereas we did not find significant interaction effects between both factors. Our results imply the necessity of a more inclusive design of information systems and highlight designers' responsibility in raising awareness and mitigating unconscious bias, as digital learning (technologies) continue to advance.
... The experts emphasized that the PCA should have human-like "social cues" to exude social presence [Fe19] (DG1). For example, the PCA can have a human-like name and a personality, greet learners, use emojis, and tell jokes to appear natural [Fe19, St22]. The experts also stated that the PCA should act as a co-equal companion to ensure learners' trust [St22] (DG2). ...
Conference Paper
Full-text available
Pedagogical conversational agents (PCAs) are intelligent dialog systems that can support students as chatbots or voice assistants. However, many users find interactions with PCAs less engaging. One solution to increase learners' engagement is to embed the PCA in a virtual world, e.g., as a humanoid avatar that facilitates collaborative learning. Such a learning setting could be beneficial because virtual worlds positively affect fun and immersion. In this paper, we derive prescriptive design knowledge for PCAs in virtual worlds based on the results of nine expert interviews synthesized with findings from the literature. This design knowledge aims to enable the meaningful design of PCAs in virtual worlds. We contribute to research and practice by demonstrating how PCAs in virtual worlds can be designed to increase students' motivation to learn.
... The agent should thus display and employ such cues whenever possible. Feine et al. [10] provide a taxonomy of such social cues in conversational agents, identifying 48 of them, and Amatulli et al. [11] show that these cues influence older consumers' tendency to choose contemporary over traditional products. We also hypothesize that using an advanced, ChatGPT-like language model to improve part of the citizen's interaction with the EHR portal (if not the whole interaction) would be interesting to implement and study. ...
Chapter
Full-text available
Conversational agents provide new modalities to access and interact with services and applications. Recently, they have seen a resurgence in popularity due to the latest advancements in language models. Such agents have been adopted in various fields such as healthcare and education, yet they have received little attention in public administration. As a practical use case, we describe a service of the portal that provides citizens of the Italian region of Friuli-Venezia Giulia with services related to their own Electronic Health Records. The service allows them to search for the available doctors and pediatricians in the region's municipalities. Building on this use case, we propose a model for a conversational agent-based access modality. The proposed model lays the foundation for more advanced chatbot-like implementations that will also use alternative input modalities, such as voice-based communication.
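As a rough sketch of such an access modality (purely illustrative; all function names, fields, and data below are hypothetical and not taken from the chapter), a chatbot intent for searching doctors by municipality could be wired to a simple lookup function:

```python
# Hypothetical sketch of a chatbot intent that searches for available
# doctors or pediatricians by municipality. Names and data are toy
# examples; the chapter does not specify an implementation.

from dataclasses import dataclass

@dataclass
class Doctor:
    name: str
    specialty: str        # e.g. "general practitioner" or "pediatrician"
    municipality: str
    accepting_patients: bool

# Toy registry standing in for the regional portal's doctor directory.
REGISTRY = [
    Doctor("Dr. Rossi", "general practitioner", "Trieste", True),
    Doctor("Dr. Bianchi", "pediatrician", "Udine", False),
    Doctor("Dr. Verdi", "pediatrician", "Trieste", True),
]

def handle_search_intent(municipality: str, specialty: str) -> str:
    """Build a chatbot reply listing available doctors for the request."""
    matches = [
        d.name for d in REGISTRY
        if d.municipality.lower() == municipality.lower()
        and d.specialty == specialty
        and d.accepting_patients
    ]
    if not matches:
        return f"No available {specialty}s found in {municipality}."
    return f"Available {specialty}s in {municipality}: " + ", ".join(matches)

if __name__ == "__main__":
    print(handle_search_intent("Trieste", "pediatrician"))
```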
... The signals of trust building in the form of transparent (and traceable) interaction, as well as the use of helpful informational cues (e.g., trust seals), are factors that companies should address, with the goal of evoking a sense of personal connection and familiarity with the customer (Einwiller et al., 2000). Moreover, Feine et al. (2019) state that computer systems such as AI-based chatbots can elicit responses from human interaction partners (customers) by using design elements as social signals (stimuli). The use of AI-based chatbots in practice, such as in customer service, is now widespread in companies, yet many customers are skeptical about interacting with these systems due to factors (Sonntag et al., 2022) related to security and traceability, social presence, and trust. ...
Article
Full-text available
In the present study, different trust factors are identified that shape customers' intention to interact with an artificial intelligence (AI)-based chatbot in customer service, with or without trust-supporting design elements as signals (stimuli). Based on 199 publications, a research model is derived for identifying and evaluating the variables that influence this intention. The research model includes the influencing variables perceived security and traceability, perceived social presence, and trust. A survey with 158 participants is used to empirically evaluate the developed model. One of the main findings of this study is that perceived security and comprehensibility have a significant influence on the intention to use an AI-based chatbot with trust-supporting design elements as signals (stimuli) in customer service.
... Users can chitchat with the PCA about everyday questions, and Charles can tell jokes and fun facts. In addition, Charles uses emojis to be perceived as friendly (Feine, Gnewuch, Morana, & Maedche, 2019). However, when players are in concentration phases, the PCA interacts without emojis to avoid distractions. ...
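A minimal sketch of this behavior, assuming a hypothetical phase flag that the excerpt implies but does not specify, could look like this:

```python
# Hypothetical sketch: append a friendly emoji to the PCA's reply unless
# the player is in a concentration phase. The phase flag and emoji choice
# are assumptions for illustration; the cited system's code is unknown.

def render_reply(text: str, in_concentration_phase: bool) -> str:
    """Append a friendly emoji outside of concentration phases."""
    if in_concentration_phase:
        return text              # no emoji, to avoid distraction
    return f"{text} 🙂"

print(render_reply("Nice move!", in_concentration_phase=False))
print(render_reply("Focus on the next task.", in_concentration_phase=True))
```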
... For example, when assessing the personality of humans, researchers in [22] identified some problematic areas where personality was found to vary based on the language spoken. Likewise, in the CA context, [23] proposed a taxonomy of social cues from a multidisciplinary perspective and concluded that the verbal aspect of CAs is an important determinant of social cues. Similarly, researchers in [24] provided clear evidence that synthesized language variety (German and Austrian varieties) influences human perception of a conversational agent's extroversion. ...
Article
Full-text available
Recently, there has been tremendous growth in the popularity of artificial intelligence (AI)-based conversational agents (CAs). Their support for anthropomorphism and human-likeness makes them popular. However, being anthropomorphic raises a question: do these agents have a personality? Moreover, what effect may personality have on the different tasks these agents perform? Through this research, we aim to answer these two questions by focusing on Thai as the language of communication between the users and the CAs. We use a multi-model approach involving human, brand, and website personality frameworks to propose our CA personality model. We follow a systematic series of steps, from creating the initial pool of personality traits to deriving the final set of traits. Our proposed personality model has seven dimensions, each spanning a bipolar continuum (calm-neuroticism, maturity-juvenility, intelligence-ineptness, openness-reserved, sociability-seclusion, self-control-instability, and aesthetics-unaesthetics). To examine the effect of personality type on the nature of tasks, we identified two primary task categories (social and functional) and used a multi-criteria decision-making approach to examine the corresponding impacts. Social tasks are impacted most by the maturity-juvenility dimension, whereas functional tasks are impacted most by the intelligence-ineptness dimension. Based on the results, we provide suitable recommendations for future research.
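For illustration only, the seven bipolar dimensions listed in the abstract can be captured in a simple data structure; the sketch below is not the authors' measurement instrument, and the example scores are assumptions.

```python
# Illustrative representation of the seven bipolar personality dimensions
# named in the abstract. The 0-1 scale and the example scores are
# assumptions for this sketch, not the paper's actual instrument.

DIMENSIONS = [
    ("calm", "neuroticism"),
    ("maturity", "juvenility"),
    ("intelligence", "ineptness"),
    ("openness", "reserved"),
    ("sociability", "seclusion"),
    ("self-control", "instability"),
    ("aesthetics", "unaesthetics"),
]

def profile(scores: dict) -> None:
    """Print where a CA falls on each continuum (0 = left pole, 1 = right pole)."""
    for poles in DIMENSIONS:
        value = scores.get(poles, 0.5)   # 0.5 = midpoint if unrated
        print(f"{poles[0]:<13} {value:.2f}  {poles[1]}")

# Hypothetical ratings for a calm, competent CA aimed at functional tasks.
profile({("calm", "neuroticism"): 0.2, ("intelligence", "ineptness"): 0.1})
```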
Article
With the development of digital virtual assistants (DVAs), academics and practitioners have paid increasing attention to the DVA user experience. However, measurement scales for the DVA user experience are still under-researched, which may hinder further empirical study of human-DVA interaction. This study rigorously developed dimensions and associated scales of the DVA user experience. We employed a mixed-method approach that integrated qualitative and quantitative methods. The study first developed multilevel dimensions of the DVA user experience based on consumers' online reviews (n = 21,314) and then adopted a ten-step method to develop the associated measurement scale, establishing its reliability and validity across three data sets (pretest: n = 368; refinement and validation: n = 585; cross-validation: n = 567). This study fills the gap in research on the classification and measurement of the DVA user experience and provides a reference for practitioners developing DVAs and continuously improving the DVA user experience.
Article
Full-text available
This article aims to provide a theoretical justification for the use of graphic novels in educational contexts. It builds on the Cognitive Affective Theory of Learning with Media (CATLM) by Roxana Moreno, which considers not only cognitive-psychological processes but also the influence of motivational and emotional factors on the learning process. The latter in particular are examined more closely in order to clarify which motivational-psychological mechanisms come into play when graphic novels are used in educational contexts. Learners' personal interest in the learning content is highlighted as a central factor for learning motivation. It is shown that the use of pedagogical agents, as they also appear in graphic novels, can positively influence learners' personal interest. This is attributed to an adequate implementation of the social-cue-based design principles of personalization and embodiment, which are derived from the CATLM. Based on these insights, scales are identified that will be used in the accompanying study to evaluate two self-developed graphic novels.
Conference Paper
Large language models (LLMs) like ChatGPT have recently gained interest across all walks of life due to the human-like quality of their textual responses. Despite their success in research, healthcare, and education, LLMs frequently include incorrect information, called hallucinations, in their responses. These hallucinations could influence users to trust fake news or change their general beliefs. Therefore, we investigate mitigation strategies desired by users to enable the identification of LLM hallucinations. To achieve this goal, we conduct a participatory design study in which everyday users design interface features that are then assessed for their feasibility by machine learning (ML) experts. We find that many of the desired features are well received by ML experts but are also considered difficult to implement. Finally, we provide a list of desired features that should serve as a basis for mitigating the effect of LLM hallucinations on users.
Article
Full-text available
This article summarizes the panel discussion at the International Conference on Wirtschaftsinformatik in March 2019 in Siegen (WI 2019) and presents different perspectives on AI-based digital assistants. It sheds light on (1) application areas, opportunities, and threats as well as (2) the BISE community’s roles in the field of AI-based digital assistants. The different authors’ contributions emphasize that BISE, as a socio-technical discipline, must address the designs and the behaviors of AI-based digital assistants as well as their interconnections. They have identified multiple research opportunities to deliver descriptive and prescriptive knowledge, thereby actively shaping future interactions between users and AI-based digital assistants. We trust that these inputs will lead BISE researchers to take active roles and to contribute an IS perspective to the academic and the political discourse about AI-based digital assistants.