Article

How social is social responses to computers? The function of the degree of anthropomorphism in computer representations

Author: Li Gong

Abstract

Testing the assumption that more anthropomorphic (human-like) computer representations elicit more social responses from people, a between-participants experiment (N = 168) manipulated 12 computer agents to represent four levels of anthropomorphism: low, medium, high, and real human images. Social responses were assessed with users’ social judgment and homophily perception of the agents, conformity in a choice dilemma task, and competency and trustworthiness ratings of the agents. Linear polynomial trend analyses revealed significant linear trends for almost all the measures. As the agent became more anthropomorphic, up to being human, it received more social responses from users.
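The linear polynomial trend analysis reported in the abstract tests whether responses rise steadily across the four ordered conditions. A minimal sketch using the standard orthogonal linear-contrast weights for four equally spaced levels (the group means below are made up for illustration, not values from the paper):

```python
import numpy as np

# Standard orthogonal linear-contrast weights for four equally spaced,
# ordered levels (low, medium, high anthropomorphism, real human)
weights = np.array([-3, -1, 1, 3])

# Hypothetical mean social-response ratings per condition (illustrative only)
means = np.array([3.1, 3.5, 3.9, 4.4])

# The contrast value L = sum(w_i * m_i); a significantly positive L
# indicates an increasing linear trend across the four levels
L = float(weights @ means)
print(round(L, 2))  # 4.3
```

In practice L would be tested against its standard error (e.g., via a planned-contrast t-test on the per-participant scores), which these illustrative means alone cannot show.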


... The influence of loneliness on social perception has also been studied in the context of anthropomorphism, a construct defined as "attributing humanlike properties, characteristics, or mental states to real or imagined nonhuman agents and objects" (p. 865) [20], such as animals, plants, and robots [20,21,23-29]. This may include the attribution of mind [30,31] or social characteristics [32]. ...
... In general, technological devices could fulfill two different roles [79]: On the one hand, technologies can be understood as mediating elements or facilitators because they can be used (i) to socialize with other humans across distances [71,80,81], for example, via online multiplayer games [69], or (ii) to facilitate local social interactions, for example, via location-based multiplayer games such as Pokémon Go [67,82]. On the other hand, technologies can themselves be perceived and treated as some sort of "social actors" [24,25,73,83], for example, to fulfill a sociality need by anthropomorphization of a social robot [20,21,34]. Thus, a new branch of research has emerged in which loneliness-related effects are considered in the context of technology use. ...
... This means that humans tend to ascribe social attributes to a robot when (a) anthropocentric knowledge is accessible and applicable (elicited agent knowledge), (b) when they are motivated to explain and understand the behavior of some agent (effectance motivation), and (c) when the desire for social contact is high (sociality motivation), for example, in cases of chronic loneliness and social disconnection [20]. The research fields of human-robot interaction (HRI) and social robotics in particular have increasingly applied social-psychological methods [84] and theories [24,25,29] to understand social effects in the interaction between humans and robotic machines. HRI researchers tested the hypothesis that loneliness is associated with the ascription of human characteristics to robots based on Epley et al.'s [20] anthropomorphism theory. ...
Article
Full-text available
Loneliness is a human experience that affects our social perception, including how we perceive robots as social beings. According to the Three-Factor Theory of Anthropomorphism, sociality motivation causes lonely humans to ascribe more social characteristics to robots. We tested this in a sequence of three studies (total N = 558) within an art-science collaboration based on the installation Contact (2021), in which an industrial robot interacts with humans via organic movement patterns to make physical contact through a pane of glass. In the online study using a video and without real-world interaction, we found no significant effects. However, our two field studies with real-world human-robot interactions showed that humans who felt lonelier ascribed significantly more social characteristics to the robot and were more likely to also exhibit social behavior. The study shows how emotional states such as loneliness influence the perception of a robot. It enhances the literature by (i) providing high statistical power, (ii) incorporating real-world face-to-face human-robot interactions (in addition to online), (iii) utilizing behavioral variables (in addition to perceptual measures), and (iv) employing social movement cues (instead of humanoid appearance). We encourage researchers to engage in art-science collaborations to gain new perspectives that benefit multiple disciplines.
... Specifically, we project the insights on genAI's human-likeness into the uncanny valley theory, positing that genAI systems may have reached a level of human-likeness that is uncannily familiar to humans, leading to adverse effects on individuals' attitudes and behaviour (Mori, 1970; Mori et al., 2012). In particular, we aim to push the boundaries of genAI's human-likeness through anthropomorphism, 'the technological efforts of imbuing computers with human characteristics and capabilities' (Gong, 2008, p. 1495). Moreover, the emergent and sophisticated reasoning competencies of clinical genAI systems, manifested in human language (Singhal, Azizi, et al., 2023), have raised important questions regarding how variations in the elaboration and depth of verbal advice affect physicians' reliance outcomes. ...
... Psychology research defines anthropomorphism as the inductive inference about non-human entities that leads to the attribution of humanlike characteristics to such entities (Epley et al., 2007, 2008). In contrast, research on human-system interactions refers to anthropomorphism as the extent to which systems exhibit additional human-like characteristics (De Visser et al., 2016; Gong, 2008). In accordance with human-system research, we define anthropomorphism as 'the technological efforts of imbuing computers with human characteristics and capabilities' (Gong, 2008, p. 1495). ...
... While embodied systems can exhibit a wide range of multi-modal anthropomorphic cues, such as facial expressions, gaze, or human-like body movements, disembodied systems are limited to a more restricted set of anthropomorphic cues (Araujo, 2018). ...
Article
Full-text available
Generative AI (genAI) has revolutionized clinical AI systems by leveraging human language. Yet, challenges remain in its integration into clinical settings, particularly regarding the risk of physicians relying on hallucinated advice. We conducted an experimental study with 368 novice physicians who diagnosed patient cases while being augmented with clinical genAI systems. A theoretical model was empirically tested to examine how anthropomorphism and advice elaboration affect trust and cognitive load as mediators for appropriate reliance. Findings show that augmenting clinical decisions with genAI systems can improve physicians’ diagnostic accuracy but also frequently results in inappropriate reliance on hallucinated advice due to miscalibrated trust. Moreover, we emphasize the uncanny familiarity evoked by anthropomorphizing genAI systems, which diminishes trust while reducing cognitive load. Our findings highlight the benefits and ethical challenges of genAI in clinical decision support, underscoring the need to balance its advantages with safeguarding the integrity of physicians’ decision agency.
... First, CASA defines computers and virtual entities as social actors capable of engaging in social interactions and providing humanlike responses (Epley et al., 2007; Gong, 2008). For example, previous research finds that Twitterbots are seen as trustworthy, appealing, and proficient in communication and interactions (Edwards et al., 2014). ...
... In the context of VIs, they are recognized as social agents employing anthropomorphic features, such as facial expressions and a humanlike communication style, to elicit humanlike perceptions in their interactions (Epley et al., 2007; Gong, 2008). ...
... The degree of form realism significantly influences consumers' expectations of behavioral realism; for instance, if digital characters closely resemble humans, consumers anticipate humanlike behaviors (Miao et al., 2022). In the context of VIs, anthropomorphism reflects how closely digital entities mimic human appearance and behavior (Gong, 2008). ...
Article
Full-text available
While prior work suggests the significant role virtual influencers (VIs) can play in addressing social and welfare issues, it remains unclear how they can increase social media users' engagement in prosocial behaviors. The current study examines the role of influencer flattery in driving the prosocial behavior of social media users. Specifically, the research investigates how flattery and humanlike appearance influence the perceived authenticity of VIs and the subsequent prosocial behavior of social media users (Studies 1 and 2). Furthermore, the research substantiates these effects by exploring how flattery and anthropomorphism via humanlike traits impact perceived authenticity and prosocial behavior when social media users witness the influencer praising others (Study 3). Beyond intention to donate (Study 1), the study also investigated its impact on related prosocial behaviors, such as click‐through behavior (Study 2) and the willingness to pay for fundraising merchandise (Study 3). The findings of this research offer concrete implications for nonprofit organizations by demonstrating the persuasive impact of flattery, perceived authenticity, and anthropomorphism in mobilizing public support, donations, and merchandise purchases for social causes.
... Anthropomorphism (Cronbach's α = 0.87) was measured using a scale adapted from Gong (2008) [32], focusing on participants' perception of the AI's human-like characteristics. The scale included items such as "The AI felt human-like in its interaction", "The AI's behavior was similar to that of a human", "The AI expressed emotions in a human-like way", and "The AI seemed capable of understanding me like a person." ...
... Adapted from Gong (2008) [32]. ...
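The internal-consistency figure quoted above (Cronbach's α = 0.87) comes from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with made-up response data (the scores below are illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings on the four quoted items by five respondents,
# perfectly consistent across items -- illustrative data only
scores = [[1, 1, 1, 1],
          [2, 2, 2, 2],
          [3, 3, 3, 3],
          [4, 4, 4, 4],
          [5, 5, 5, 5]]
print(round(cronbach_alpha(scores), 3))  # 1.0: maximal internal consistency
```

Real response data would yield a value below 1; figures around 0.87, as reported here, are conventionally read as good reliability.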
Article
Full-text available
As AI becomes increasingly embedded in the sharing economy, understanding its impact on consumer behavior is crucial. This research aims to examine how different human–AI relationship types—equal vs. human-dominant—influence responsible consumption, framed within Social Identity Theory. Specifically, we investigate the mediating role of social identification and the moderating effect of anthropomorphism in shaping consumer responses to AI interactions. Across three experimental studies, we demonstrate that (1) equal human–AI relationships lead to higher responsible consumption than human-dominant relationships; (2) social identification mediates this relationship, as equal AI fosters greater consumer identification, which subsequently enhances responsible behavior; and (3) anthropomorphism moderates these effects, such that the positive influence of equal relationships on responsible consumption is significant only when anthropomorphism is low, whereas this effect diminishes when anthropomorphism is high. These findings contribute to the growing literature on AI–consumer interactions by offering insights into AI’s social dynamics and practical recommendations for designing AI systems that effectively promote responsible behavior in shared services.
... Another alternative explanation is that the visual anthropomorphism cue in this experiment may have been too weak to exert sufficient social influence compared with other studies. According to Gong (2008), the ability of an AI agent to elicit social responses from users is linearly related to the level of anthropomorphism. In other words, the higher the level of anthropomorphism, the stronger the social influence. ...
... As researchers study anthropomorphism in AI agents to enhance the effectiveness of emotional expressions, they should consider the potential compensatory effects of contingent interactivity cues, such as nonverbal expressions. Additionally, when distinguishing visual anthropomorphic cues, there is a need to classify them based on more detailed criteria (Gong 2008). ...
... The scale used in this research comprises three primary components: competence, trust, and acceptance. The evaluation scale of robot competence can be obtained from the relevant literature (Berlo et al., 1969; Gong, 2008; McCroskey et al., 1974, 1975; Ohanian, 1990), which contains six items (1. unintelligent-intelligent; 2. knowledgeable-unknowledgeable; 3. incompetent-competent; 4. uninformed-informed; 5. inexpert-expert; and 6. experienced-inexperienced). ...
... The evaluation scale of trust can be obtained from the literature (Gong, 2008; Jian et al., 2000; Wheeless & Grotz, ...
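Scoring bipolar-adjective (semantic differential) items like the six competence items above requires reverse-keying the pairs whose positive adjective is listed first. A minimal sketch assuming a 7-point scale; which items are reverse-keyed is inferred here from the adjective order in the excerpt, not stated in the source:

```python
POINTS = 7          # assumed 7-point semantic differential
REVERSED = {1, 5}   # 0-based indices of "knowledgeable-unknowledgeable"
                    # and "experienced-inexperienced" (positive pole first)

def score_competence(responses):
    """Mean competence score from six ratings in the range 1..POINTS."""
    keyed = [
        (POINTS + 1 - r) if i in REVERSED else r   # flip reverse-keyed items
        for i, r in enumerate(responses)
    ]
    return sum(keyed) / len(keyed)

# Illustrative ratings for the six items in the order listed above
print(round(score_competence([6, 2, 6, 6, 5, 3]), 2))  # 5.67
```

After reverse-keying, a higher mean uniformly indicates higher perceived competence, which is what makes the six items averageable into one scale score.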
Article
Full-text available
As social robots may be used by a single user or multiple users, different social scenarios are becoming more important for defining human-robot relationships. Therefore, this study explored human-robot relationships between robots and users in different interaction modes to improve the user interaction experience. Specifically, education and companionship were selected as the most common areas of use for social robots. The interaction modes used include single-user interaction and multi-user interaction. Three human-robot relationships were adopted. The robot competence scale, human-robot trust scale, and acceptance of robot scale were used to evaluate subjects’ views on robots. The results demonstrate that in the two scenarios, people were more inclined to maintain a more familiar and closer relationship with the social robot when the robot interacted with a single user. When multiple persons interact in an education scenario, setting the robot to an Acquaintance relationship is recommended to improve its competence and people’s trust in the robot. Similarly, in multi-person interaction, an Acquaintance relationship would be more accepted and trusted by people in a companion scenario. Based on these results, robot sensors can be added to further optimize human-robot interaction sensing systems. By identifying the number of users in the interaction environment, robots can automatically employ the best human-robot relationship for interaction. Optimizing human-robot interaction sensing systems can also improve the robot performance perceived in the interaction to meet different users’ needs and achieve more natural human-robot interaction experiences.
... For example, [48] found that a robot that had static eye color (more human-like) was more trustworthy than a robot whose eye color changed according to its status (e.g., green when speaking; blue when it recognized a person). Other work has examined how presence or absence of a human-like face shapes trust [6,22]. For example, [22] found users had greater trust for a computer when it was presented with a more human-like face. ...
... Later, animated instructors were represented as human-like characters including cartoon characters (Dunsworth & Atkinson, 2007) and 3D virtual characters (Makransky et al., 2018). Thus, a trend can be seen in which animated instructors have become increasingly real and lifelike, because it is assumed that the more real and lifelike the on-screen instructor is, the better the learning effect and learning experience may be (Gong, 2008). However, some researchers, drawing on the uncanny valley phenomenon, contend that as on-screen instructors' realism level increases, they may have a negative impact on learning (Rokeman et al., 2020). ...
... Which kind of on-screen instructor should be added to an e-learning environment to promote effective learning: an embodied cartoon-like animal, a cartoon-like human or a real human? Although these three kinds of instructors have all been used previously in learning environments, it may seem like common sense to use a real human whenever possible (Gong, 2008). However, in contrast, our findings lead us to disagree with such a recommendation. ...
Article
Background
Although adding embodied instructors on the screen is considered an effective way to improve online multimedia learning, its effectiveness is still controversial. The level of realism of embodied onscreen instructors may be an influencing factor, but it is unclear how it affects multimedia learning.
Aims
We explored whether and how embodied onscreen instructors rendered with different levels of realism in multimedia lessons affect the learning process and learning outcomes.
Samples
We recruited 125 college students as participants.
Methods
Students learned about neural transmission in an online multimedia lesson that included a real human, cartoon human, cartoon animal, or no instructor.
Results
Students learning with cartoon human or cartoon animal instructors tended to fixate more on the relevant portions of the screen and performed better on retention and transfer tests than the no-instructor group. The real human group fixated more on the instructor, fixated less on the relevant portion of the screen, and performed worse on a retention test in comparison to the cartoon human group. Fixation time on the instructor fully mediated the relationship between instructor realism and retention score.
Conclusions
The addition of embodied onscreen instructors can promote multimedia learning, but the promotion effect would be better if the embodied instructor is a cartoon animal or cartoon human rather than a real human. This suggests an important boundary condition in which less realism of onscreen embodied instructors produces better learning processes and outcomes.
... Anthropomorphism can be triggered by a series of situational, psychological, and cultural factors including interface cues and heuristics, similarity perceptions, and norms (Epley et al., 2007; Gong, 2008; Waytz et al., 2014). Humanlike user interface features and cues, although not mandatory, can induce perceived anthropomorphism among users. ...
... For example, Gong (2008) elicited perceived anthropomorphism of a computer agent by facial image representations with different fidelity levels, supporting the claim that more anthropomorphic (humanlike) features trigger more user social responses. Kim and Sundar (2012) elicited greater perceived anthropomorphism of a chatbot from its users by adding a composite of humanlike cues (name, avatar, occupation). ...
Article
Full-text available
Recommendation systems (RSs) leverage data and algorithms to generate a set of suggestions to reduce consumers’ efforts and assist their decisions. In this study, we examine how different framings of recommendations trigger people’s anthropomorphic perceptions of RSs and therefore affect users’ attitudes in an online experiment. Participants used and evaluated one of four versions of a web-based wine RS with different source framings (i.e. “recommendation by an algorithm,” “recommendation by an AI assistant,” “recommendation by knowledge generated from similar people,” no description). Results showed that different source framings generated different levels of perceived anthropomorphism. Participants indicated greater trust in the recommendations and greater confidence in making choices based on the recommendations when they perceived an RS as highly anthropomorphic; however, higher perceived anthropomorphism of an RS led to a lower willingness to disclose personal information to the RS.
... For example, using ostensibly realistic virtual assistants can enhance consumers' attitudes toward them (e.g. trustworthiness [David-Ignatie et al. (2023); Gong (2008); Mull et al. (2015); Westerman et al. (2015)]; aptitude [Westerman et al. (2015)]; homophily [Gong (2008)]; attitude toward websites and brand [David-Ignatie et al. (2023)]). In addition, the nonphysical reality of virtual assistants improves consumers' attitudes toward virtual assistants. ...
Article
Full-text available
Purpose: Recently, there has been enthusiasm for the relatively new technology, the metaverse. One technology related to and important for the metaverse is the Avatar, which is the graphical representation of users. Avatar has been studied for many years and is beginning to be used as a marketing tool. Although we think that there are three streams of avatar marketing (i.e. Virtual Social and Game World (VSGW) Avatar Marketing; Model Avatar and Virtual Fitting Room Marketing (MAVFR) Marketing; Virtual Assistant Marketing), little research has summarized them. This absence of categorization of three types of avatar marketing results in a disorder of research findings and makes it difficult for researchers and marketers to conduct avatar marketing research and best practices. This paper aims to explore the potential, specific examples, and challenges of each of the three types of avatar marketing. Approach and Contributions: In this narrative review paper, the authors first explore the definition and brief history of each type of avatar marketing. Then, the authors explore the potential, specific examples, and challenges of each type of avatar marketing by reviewing and referencing research papers, websites, newspaper articles, and books. Related to this, this paper discusses the importance of understanding each kind of avatar marketing in the developed future virtual worlds. Originality: To the best of our knowledge, this is the first review paper that covers three streams of avatar marketing that will inspire practitioners and researchers about the challenges and opportunities of the three types of avatar marketing with specific examples.
... • There is mixed evidence on whether informational treatments can influence personal automation concern and policy preferences (e.g., Zhang, 2022a;Ladreit, 2022;Golin and Rauh, 2022;Magistro et al., 2024). ...
... To create a human identity for our agent, we relied on the three dimensions of anthropomorphic design proposed by Seeger et al. (2018). We first gave our agent a name and gender (Cowell and Stanney 2005; Nunamaker et al. 2011), as well as an avatar representing a customer service employee (Gong 2008), which allowed for more natural responses. Next, we integrated verbal cues to enrich the experience, such as self-disclosures (Schuetzler et al. 2018), self-references (Sah and Peng 2015), and personalized greetings (Cafaro et al. 2016). ...
Article
Research was conducted to investigate the influence of flow and social presence on users' behavioral intention based on the perceived anthropomorphism of the conversational agent. Using an S-O-R model with a sample of 329 participants, the results obtained through the PLS-SEM method confirm that anthropomorphism, enhanced by social cues, significantly increases behavioral intention. Furthermore, the relationship between anthropomorphism and behavioral intention is mediated by flow and social presence through partial mediation. These findings have significant theoretical implications for understanding how anthropomorphism promotes a flow state in online experiences.
... Moreover, computers that express positive emotions are perceived as more likable and trustworthy [14]. These effects seem to be amplified when there are more cues present to convey humanity, such as more realistic avatar images or added information such as animation or facial expression [15,16]. ...
Article
Full-text available
Conversational agents (CAs) are effective tools for health behavior change, yet little research investigates the mechanisms through which they work. Following the Computer as Social Actors (CASA) paradigm, we suggest that agents are perceived as human-like actors and hence influence behavior much as human coaches might. As such, agents should be designed to resemble ideal interaction patterns, for example, by resembling their users. In this registered report, we evaluated this paradigm by testing the impact of customization on similarity and reciprocity, which in turn were hypothesized to improve perceptions of the agent and compliance with the agent’s recommendations to complete a cognitive training exercise. In an online study, 2437 participants were randomly assigned to one of two surface-level CA customization conditions (present/absent) and to one of two deep-level CA customization conditions (present/absent) in a between-subject experimental design. As part of a conversation flow with a CA, participants assigned to the present surface- and/or deep-level customization conditions were able to choose their preferred CA based on the four personality summaries and/or choose their CA’s gender (male/female/agender robotic), avatar (choice between seven avatars corresponding to the chosen gender), and name. While the ability to customize increased similarity to the user and the perceptions of customizability, our findings show that customization did not impact experience or compliance. However, the perceived customizability of the agent was linked to increases in the likeability and usefulness of the agent. We conclude that our work finds no negative effects of customization; yet, its impact on the relationship between the agent and its user is complex and can benefit from more research as merited by its applicability to public health. As aging and ill populations increase the burden on health systems worldwide, CAs have the potential to transform the landscape of accessible care.
... This approach is substantiated by observed human psychological phenomena such as anthropomorphism, the tendency to attribute human qualities to non-human entities (Złotowski et al., 2015), highlighting a bias towards human-centric interaction with the environment. Research has also shown that anthropomorphism has a positive effect on perceptions of competence and trustworthiness in autonomous technologies (Waytz et al., 2014; De Visser et al., 2016; Gong, 2008). ...
Article
Full-text available
Introduction
In human-agent interaction, trust is often measured using human-trust constructs such as competence, benevolence, and integrity; however, it is unclear whether technology-trust constructs such as functionality, helpfulness, and reliability are more suitable. There is also evidence that perception of “humanness” measured through anthropomorphism varies based on the characteristics of the agent, but dimensions of anthropomorphism are not highlighted in empirical studies.
Methods
In order to study how different embodiments and qualities of speech of agents influence the type of trust and the dimensions of anthropomorphism in perception of the agent, we conducted an experiment using two agent “bodies”, a speaker and a robot, employing four levels of “humanness of voice”, and measured perception of the agent using human-trust, technology-trust, and Godspeed series questionnaires.
Results
We found that the agents elicit both human and technology conceptions of trust with no significant difference, and that differences in body and voice of an agent have no significant impact on trust, even though body and voice are both independently significant in anthropomorphism perception.
Discussion
Interestingly, the results indicate that voice may be a stronger characteristic in influencing the perception of agents (not relating to trust) than physical appearance or body. We discuss the implications of our findings for research on human-agent interaction and highlight future research areas.
... These two forms of anthropomorphism are not entirely separate, and they generally work together in the anthropomorphic design of a particular product or service (Yanxia et al., 2024). The majority of studies suggest that, compared to non-anthropomorphic designs, users have a more positive attitude toward, or willingness to use, anthropomorphic designs (Chandler & Schwarz, 2010; Gong, 2008; Kim et al., 2019; Li & Sung, 2021). However, it has also been shown that users do not always enjoy interacting with anthropomorphic robots (Zhu & Chang, 2020). ...
Article
Full-text available
Although an increasing number of studies explore the factors influencing users' privacy concerns regarding social robots, the existing understanding of this issue remains largely fragmented. Previous studies have mainly focused on the "net effect" between variables, leaving the complexity of causal configurations unexamined, and the holistic impact of the design characteristics of social robots on user privacy concerns remains unclear. Based on the Stimulus-Organism-Response (S-O-R) framework and Communication Privacy Management Theory (CPMT), this study integrates social robot design characteristics such as Anthropomorphism, Warmth, Competence, and Transparency into causal configurations, and uses Perceived Privacy Risk and Perceived Privacy Control as mediating variables to propose a comprehensive conceptual model. Based on valid data from a sample of 198 Chinese social robot users, this study conducted empirical analyses of the conceptual model using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Fuzzy-set Qualitative Comparative Analysis (FsQCA). PLS-SEM results show that anthropomorphism, warmth, competence, and transparency are key factors influencing privacy concerns, and perceived privacy risk mediates the relationship between warmth, information transparency, and privacy concerns. The FsQCA results further validated the findings of PLS-SEM and identified five configurations of factor combinations that led to higher levels of user privacy concerns. Among them, the combination of high anthropomorphism design, high competence, and low warmth of social robots is the core configuration that leads to users' privacy concerns. Overall, this study broadens our understanding of social robot users' privacy concerns and reveals the causal complexity behind them. It provides theoretical and practical insights for subsequent scholars and designers.
... If the features of the system are not comprehensible, a mismatch between system and user arises, leading to decreased effectiveness of the information system (Barbosa & Hirko, 1980). UX and UI optimization may be a fundamental part of acceptance research for information technologies, since Gong (2008) shows that an anthropomorphized interface leads to stronger social responses from users. Scheuer (2020) identified that anthropomorphizing a system interface has a positive influence on the perception of the system both as a technology and as a person. ...
Article
Full-text available
This thesis explores the acceptance of decision-aiding technologies in management, which is a challenging component in their use. To address the lack of research on algorithmic decision support at the managerial level, the thesis conducted a vignette study with two scenarios, varying the degree of anthropomorphizing features in the system interface. Results from the study, which included 281 participants randomly assigned to one of the scenarios, showed that the presence of anthropomorphized features did not significantly affect acceptance. However, results showed that trust in the system was a crucial factor for acceptance and that trust was influenced by users' understanding of the system. Participants blindly trusted the system when it was anthropomorphized, but the study emphasized that system design should not focus on the benefits of blind trust. Instead, comprehensibility of the system results is more effective in creating acceptance. This thesis provided practical implications for managers on system design and proposed a structural model to fill a research gap on acceptance at the managerial level. Overall, the findings may assist companies in developing decision support systems that are more acceptable to users.
... As discussed earlier, the CASA paradigm posits that humans interact with computers-and by extension, computergenerated entities-using the same social rules they apply to human interactions (Reeves & Nass, 1996). This paradigm has been foundational in understanding user interactions with VIs, as indicated by studies demonstrating that users attribute personality traits to VIs and can form parasocial relationships with them (Gong, 2008). Despite these insights, the role of narrative persuasion in the context of VIs has been overlooked. ...
Article
This research addresses the rising prominence of virtual influencers (VIs) by asking a crucial question: “How can we effectively use virtual influencers to not only reach audiences but also deeply resonate with them, particularly in promoting socially responsible behaviors?” We propose employing narrative messaging to enhance virtual influencers’ effectiveness in delivering prosocial messages. In a 2 (VI appearance: human-like vs. anime-like) × 2 (message style: narrative vs. non-narrative) between-subjects design, 320 Gen-Z and younger Millennials were exposed to simulated Instagram posts by a VI discussing cyberbullying. Results indicated that human-like virtual influencers led to higher supporting intent and message credibility, especially in the non-narrative condition. However, in the narrative message condition, the advantage of human-like appearance diminished. These findings highlight the significant role of VI appearance in prosocial message reception and the conditional influence of message style. Actionable insights for practitioners leveraging VIs in social marketing strategies are discussed.
... HVIs are designed to resonate with viewers on a deep emotional level, aligning seamlessly with the audience's intrinsic social expectations, thereby amplifying trust (Gong 2008; Honeycutt and Bryan 2011; Gambino, Fox, and Ratan 2020). This resonance is in harmony with the postulates of the CASA paradigm (Reeves and Nass 1996). ...
Article
Full-text available
This research explores influencer trustworthiness and stimulus novelty as important paths for the effectiveness of virtual influencers. We compare human-like virtual influencers (HVIs) and anime-like virtual influencers (AVIs) operating either with sponsorship disclosure or non-disclosure. In two experiments, we found that HVIs (vs. AVIs) produced higher content engagement and purchase intent via greater influencer trustworthiness only when sponsorship was not disclosed. Upon disclosure, the advantage of HVIs over AVIs disappeared. We also found that the influencer’s status, whether mega or micro, significantly moderated the influence of sponsorship disclosure. Perceived novelty of AVIs is consistently higher than that of HVIs, and perceived novelty mediates the effect of AVIs (vs. HVIs) on content engagement, but only when sponsorship was disclosed. These findings offer important insights into how virtual influencers may influence consumer engagement and purchase decisions.
... Against this background and given the inclination of people to respond to technological systems in social ways (Nass & Moon, 2000) and the empirical importance of the social dimensions as antecedents of (voice) shopping decisions, it is rather surprising that only a few studies have looked at the impact of perceived relationship to the conversational AI on home shopping behavior. Much research has focused on relational proxies, assessing constructs such as perceived warmth, psychological distance, or anthropomorphism (e.g., Gong, 2008;Pitardi & Marriott, 2021) or role ascriptions (e.g., Sundar et al., 2017). Furthermore, and as mentioned above, inconsistent results raise further questions: Hu et al. (2022) have found that presenting conversational AI as servants enabled a power experience for users as masters and increased voice shopping intentions (given that they had a desire for power). ...
Article
Full-text available
In the emerging field of voice shopping with quasi-sales agents like Amazon's Alexa, we investigated the influence of perceived human-AI relationships (i.e., authority ranking, market pricing, peer bonding) on (voice-)shopping intentions. In our cross-sectional survey among experienced voice shoppers, we tested hypotheses specifically differentiating voice shopping for low- and high-involvement products. The results emphasized the importance of socio-emotional elements (i.e., peer bonding) for voice shopping for high-involvement products. While calculative decision-making (i.e., market pricing) was less relevant, the master-servant relationship perception (i.e., authority ranking) was important in low-involvement shopping. An exploratory analysis of users’ desired benefits of voice shopping reinforces our claims. The outcomes are relevant for conversation designers, business developers, and policymakers.
... In the past, the believability of virtual characters was often associated exclusively with the physical appearance of the character and its animation quality [48]. For example, anthropomorphic characters are seen as more competent and trustworthy, which consequently increases users' social reactions [22]. This does not mean that the character must have a photorealistic appearance to create behavioral realism. ...
Conference Paper
The presence of virtual characters in digital games influences players' experiences; however, the specific impact of various emotional states exhibited by these characters remains unclear. Theories like Emotional Contagion and Emotional Similarity seek to elucidate how the emotions of others affect our own: We might let the emotional state of others infect us, or our emotions may be intensified when others exhibit similar feelings. In the gaming context, we pondered: What impact does playing a horror game alongside a confident virtual character, compared to an anxious one, have on players' emotional states and experiences? We conducted a lab study with 69 participants and compared a VR horror game played with either a confident or anxious virtual character or alone. Horror games were chosen for their ability to evoke intense emotions, like fear. Our results show that participants could accurately recognize the emotional states portrayed by the virtual character. The horror game significantly increased fear and positive affect, in line with the nature of this genre. Contrary to expectations, there was no significant difference between groups regarding player experience and emotional state. We discuss these findings and explore implications for emotional virtual characters within and beyond gaming contexts.
... To encourage participants to anthropomorphize and attribute a human-like mind to our chatbot, we utilized several anthropomorphic and social cues suggested by previous studies (Gong, 2008;Araujo, 2018;Go and Sundar, 2019;Schuetzler et al., 2020;Adam et al., 2021;Schanke et al., 2021). Those cues facilitate anthropomorphism by increasing an agent's social presence (i.e., the degree to which an agent is salient in the interaction; Short et al., 1976) and signaling its identity. ...
Article
Full-text available
The social support provided by chatbots is typically designed to mimic the way humans support others. However, individuals have more conflicting attitudes toward chatbots providing emotional support (e.g., empathy and encouragement) compared to informational support (e.g., useful information and advice). This difference may be related to whether individuals associate a certain type of support with the realm of the human mind and whether they attribute human-like minds to chatbots. In the present study, we investigated whether perceiving human-like minds in chatbots affects users’ acceptance of various support provided by the chatbot. In the experiment, the chatbot posed questions about participants’ interpersonal stress events, prompting them to write down their stressful experiences. Depending on the experimental condition, the chatbot provided two kinds of social support: informational support or emotional support. Our results showed that when participants explicitly perceived a human-like mind in the chatbot, they considered the support to be more helpful in resolving stressful events. The relationship between implicit mind perception and perceived message effectiveness differed depending on the type of support. More specifically, if participants did not implicitly attribute a human-like mind to the chatbot, emotional support undermined the effectiveness of the message, whereas informational support did not. The present findings suggest that users’ mind perception is essential for understanding the user experience of chatbot social support. Our findings imply that informational support can be trusted when building social support chatbots. In contrast, the effectiveness of emotional support depends on the users implicitly giving the chatbot a human-like mind.
... To understand whether face communication influences the relationship between inserted advertisements and audience feedback, we turn to social presence theory, which refers to the degree to which people perceive communication as sociable, warm, and personal (Kim 2021). In a computer-mediated communication environment, more anthropomorphic (human-like) computer representations elicit more social responses from people (Gong 2008). The theory of self-disclosure indicates that communication through sharing intimate personal information enhances feelings of connection (Utz 2015) and perceived closeness (Lin and Utz 2017). ...
... In summary, our investigation into user preferences has encompassed four distinct presentation formats: Textual, Graphical, Tabular, and Voice. These choices are inherently connected to the fundamental concept of anthropomorphism within Social Response Theory (Gong, 2008). As the literature underscores, the relationship among presentation style, anthropomorphism, and user responses can be significant, with profound implications for AI design (Sproull, Subramani, Kiesler, Walker and Waters, 1996). ...
Article
• Users’ reactions differ when interacting with an AI-enabled tool that sends dynamically delayed feedback to improve the quality of their work; generally, a communication delay of approximately one to three seconds is found to be most effective in boosting performance.
• Explicitly informing participants that they are interacting with an AI increases the impact of the AI tool on performance improvement. This indicates a decreased aversion to AI-generated feedback in light of the growing presence of AI applications.
• The study contributes to the knowledge management literature by examining how communication delay affects actual knowledge contribution behaviors and the importance of AI features in pro-knowledge-contributing mechanisms.
• The research highlights the temporal factors in human-algorithm interactions and emphasizes the role of anthropomorphism in algorithm design, including managing customer expectations and the disclosure of AI involvement.
• The findings extend the anthropomorphism literature by specifically and practically considering communication delay as a signal of algorithmic anthropomorphism, demonstrating its influence on human perceptions and subsequent behaviors.
... One primary difference between virtual humans and non-embodied voices is the level of anthropomorphism. Anthropomorphism affects participants' social responses and perception of personality in human-computer interactions [11,26]. This distinction highlights the importance of further investigating the influence of vocalics and personality matching in the context of virtual humans designed for mental health interventions. ...
Conference Paper
Virtual humans are employed in various contexts, including mental health interventions, to encourage users to adopt healthy behaviors. Establishing rapport with users can enhance the effectiveness of these virtual agents. One way to build rapport is by matching the personality between the virtual human and the participant, which may elicit a similarity-attraction effect, shown to increase trust and likeability in interactions. Despite the potential of computer-generated voices to convey personality through the manipulation of vocalic properties, prior research has primarily focused on non-vocal aspects of personality. To address this gap, we conducted an online study that altered a virtual human's vocalic properties to represent high or low extroversion, with a focus on the role of rapport in promoting mental wellness. In this study, a virtual human provided information on stress-reducing mental-wellness practices to 165 participants. Our findings suggest that synthesizing vocalic properties to resemble a low extroversion level can enhance the persuasiveness of virtual humans and improve rapport, as indicated by participants' self-reported intention to engage in mental-wellness practices.
... Attraction and reliability are determinants of trust in HRI (Siddike & Kohda, 2018). Gong (2008) also suggests that avatars with a more humanlike appearance are perceived to be more capable of decision-making ability and more trustworthy. Therefore, we propose the hypothesis: H3:Task attraction mediates between the degree of service robot anthropomorphism and consumer trust. ...
... From the design of agents or stimuli, scholars explain anthropomorphism as "the degree to which a character has the properties of human appearance or behavior" (Kim et al., 2023; Murphy et al., 2021). In such a domain, a technology agent with more humanlike features, including voice, motion, and appearance, is described as more anthropomorphic (e.g., Gong, 2008; Kim, Lee, & Kang, 2023). We believe that the level of human likeness is a more straightforward way to explain the degree to which a character has attributes of a human, which is also frequently used in existing research (Gammoh, Jiménez, & Wergin, 2018; Tsai, Liu, & Chuan, 2021). ...
... Therefore, they share anthropomorphic design elements (Feine et al. 2019) as they reinforce their human interactors to exhibit social behaviors towards the LC (Nass et al. 1994). These social cues (Seeger et al. 2021) are represented in an embodied avatar (Gong 2008) in both prototypes. This gives the LC a personal appearance, which can positively affect the user's perceived social presence (Kim et al. 2013;Lester et al. 1997) of the LC and is motivational (Baylor 2011). ...
Conference Paper
Full-text available
Learning Companions (LCs) are bonding conversational agents designed to facilitate learning through natural communication. The heterogeneity of students requires adaptable LCs that can better match their individual needs. In this paper, we propose a value-oriented perspective on LC adaptation, focusing on enhancing the learner's value-in-interaction (ViI) with LCs to establish a strong companionship in the long term. We conceptualize a model that centers around an adaptation-enhanced ViI and translate it into design requirements with incorporated adaptable specifications. In an online experiment (within-subject design) with 48 students, we compared an adaptable and a non-adaptable LC instantiation teaching digital literacy. Results indicate that the adaptable LC significantly improves the perceived ViI across all introduced value layers (relationship, matching, service). Our findings highlight the importance of (currently underrepresented) value-oriented LC adaptation and its potential to enhance the learning experience.
Article
Full-text available
Since the release of OpenAI's ChatGPT in 2022, AI activity has reached a fever pitch. Calls for effective ethical responses to the pressurised AI environment have in turn abounded. Posthumanism, which seeks to build ethical futures by de-centring the ‘human’, is an obvious candidate to act as a lynchpin of theoretical intervention. In their responses, posthumanist scholars appear to have embraced AI’s potential to destabilise Humanist philosophical ideas. We critically interrogate this initial enthusiasm. Conceptually distinguishing ‘post-dualist self-development’ (PDSD) from ‘technical self-development’ (TSD), we show how AI prompts an urgent need to advance posthumanist engagement with how technical development unsupervised by humans is ontologically discrete from other forms of material agency. We argue that specific engagement with TSD as distinct from PDSD is key to avoiding the neglect or underestimation of Humanist and anthropocentric aspects of current AI innovation, and of the influence of anthropomorphism. Without a theoretical reckoning with these tensions, posthumanism in the AI era runs the risk of promoting technologies that reinvigorate Humanist and anthropocentric expansion. To conclude, we show how a posthumanist ethics of generative AI that pays requisite attention to both TSD and PDSD may enable more anticipatory and nuanced assessments of the risks and benefits of discrete AI technologies to inform public discourse, appropriate social, institutional, policy and governance responses, and direct AI research and development priorities.
Article
Purpose This study aims to investigate the impact of mental simulation triggered by avatar realism on product attitudes. Specifically, this study applies a mental simulation framework when consumers try fashion items on avatars in the metaverse. As metaverse consumers envision themselves as avatars, mental simulation can explain how avatar realism makes them perceive and evaluate fashion products. Design/methodology/approach Across two experimental studies, this study manipulates the level of avatar realism. Two versions of a short video clip depicting various avatars in the metaverse were used as stimuli. A total of 106 participants for Study 1 and 137 participants for Study 2 were recruited through an online research company. Data were analyzed with SPSS 26.0 using the PROCESS macro. Findings The avatar realism influenced consumers to perceive greater similarity and to easily simulate the fashion item on their own, resulting in a better product attitude. In addition, this study demonstrated a serial moderated mediation effect. According to construal level theory (CLT), where individuals’ construal levels (i.e. abstract vs concrete) differ according to the characteristics of the given decision, individuals with an abstract processing mode focus on commonalities. Thus, they perceived avatars to be visually similar to themselves regardless of the degree of avatar realism. Originality/value The findings of the study contribute to the literature on metaverse marketing, focusing on consumer–brand interaction through avatars. This further helps industry practitioners understand and employ avatar features to attract consumers to virtual fashion products.
Article
Implementing artificial intelligence also requires examinations of public attitudes and perceptions. One approach is by examining media framing of artificial intelligence, including news coverage, which is a reflection of societal perceptions and a key influence over people’s understanding. As such, this study examines the framing of communicative artificial intelligence in Singapore, looking at how the news media frame communicative artificial intelligence and characterize it as a social actor. Through a manual content analysis of 336 news articles from three major news websites in Singapore, this study found that the news media in Singapore tend to focus on the benefits and advances of communicative artificial intelligence and portray communicative artificial intelligence as a tool rather than social actor. However, when comparing news coverage of communicative artificial intelligence after the advent of ChatGPT, the news framed communicative artificial intelligence more in terms of risks, regulations, responsibilities, and conflict.
Article
Full-text available
Generative AI systems like chatbots are increasingly being introduced into learning, teaching and assessment scenarios at universities. While previous research suggests that users treat chatbots like humans, computer systems are still often perceived as less trustworthy, potentially impairing their usefulness in learning contexts. How are processes of social cognition applied to chatbots compared to humans? Our study focuses on the role of politeness in communication. We hypothesise that polite communication improves the perception of trustworthiness of chatbots. University students read a feedback dialogue between a student and a feedback provider. In a 2 × 2 between‐subjects experimental design, we manipulated the feedback's author (chatbot vs. human teacher) and the feedback formulation (polite vs. direct). Participants evaluated the feedback giver on measures of epistemic trustworthiness (expertise, benevolence and integrity) and on two basic dimensions of social cognition, namely agency and communion. Results showed that a polite feedback giver was rated higher on benevolence and communion, whereas a direct feedback giver was rated higher on agency. Unexpectedly, the chatbot was rated lower on benevolence than the human. This suggests that social cognition does apply to interactions with chatbots, with caveats. We discuss the findings regarding the design of feedback chatbots and their use in higher education.
Practitioner notes
What is already known about this topic
Technology users tend to treat computer systems like humans, but computers are usually trusted less.
Polite communication, that is, mitigation of face threats, is expected to enhance the evaluation of a chatbot as trustworthy.
The research is relevant for the use and acceptance of chatbots as feedback providers in educational contexts.
What this paper adds
We test the assumption that polite language reduces the gap in epistemic trustworthiness between chatbots and human teachers as feedback givers.
We describe an empirical study with 284 university student participants who report their perceptions of a feedback dialogue between a student and either a human teacher or a chatbot.
We analyse the impact of feedback source as well as politeness on trustworthiness perceptions and social cognition.
Implications for practice and/or policy
The study confirms that users are receptive to politeness in communication. They treat chatbots in a similar manner to human interaction partners.
The results highlight the significance of politeness of chatbots' language in learning contexts.
Feedback chatbots need to be equipped with suitable linguistic strategies, such as politeness, for communicating in a socially appropriate manner at critical points in the instructional dialogue.
Article
One prolific growth area for artificial intelligence (AI) is counselors for mental health. Earlier studies have reported that anthropomorphic features and haptic interaction can promote user engagement in conversations and foster the development of relationships between users and intelligent agents. This study examined the main and interaction effects of anthropomorphism and touch behavior on self-disclosure intention, attachment, and cerebral activity in the context of agents as AI mental health counselors (AIMHC). The results indicated that users tend to disclose information to the non-anthropomorphic AIMHC, regardless of touch behavior. Users reported the highest attachment towards the anthropomorphic AIMHC with touch behavior. Additionally, privacy concerns and perceived empathy were determined to be significant mediators. Moreover, anthropomorphism induced increased activity in the frontopolar area, correlating with self-disclosure intention. The anthropomorphic AIMHC’s touch behavior evoked the greatest increases in left DLPFC activity. This study explains the mechanism of these effects and analyzes their theoretical and practical implications.
Article
Smartphone overuse around family and friends has been shown to be increasing over the past years and often leads to limited one-to-one interaction between co-located individuals. Smartphone-based virtual agents have been shown to be effective for behavior intervention and mediation, such as promoting physical activity. Little is known about leveraging smartphone-based agents to play a role in communication and facilitate conversation between co-located individuals. In this paper, we explore strengthening conversations between co-located couples by introducing a smartphone-based agent that acts as a conversation facilitator between them. We contrast the results with a text-based alternative. Our findings suggest that virtual agents serve as a valuable social entity mediating support in couples' communication and relationship dynamics. Through this, we suggest design considerations for this context that leverage the unique qualities of virtual agents.
Article
With the exponential rise in the prevalence of automation, trust in such technology has become more critical than ever before. Trust is confidence in a particular entity, especially in regard to the consequences they can have for the trustor, and calibrated trust is the extent to which the judgments of trust are accurate. The focus of this paper is to reevaluate the general understanding of calibrating trust in automation, update this understanding, and apply it to workers’ trust in automation in the workplace. Seminal models of trust in automation were designed for automation that was already common in workforces, where the machine’s “intelligence” (i.e., capacity for decision making, cognition, and/or understanding) was limited. Now, burgeoning automation with more human-like intelligence is intended to be more interactive with workers, serving in roles such as decision aid, assistant, or collaborative coworker. Thus, we revise “calibrated trust in automation” to include more intelligent automated systems.
Article
Full-text available
Marketers use augmented reality (AR) to place virtual brand‐related information into a consumer's physical context. Grounded in the literature on AR, brand love, metaphor theory, and closeness as interpreted by the neural theory of language, the authors theorize that branded AR content can reduce the perceived physical, spatial distance between a consumer and a brand. This perceived closeness subsequently drives the closeness of the emotional relationship in the form of brand love. Two empirical studies validate this framework. Study 1 shows that using an AR app (vs. non‐AR) increases the perceived physical closeness of the brand, which in turn drives brand love (i.e., relationship closeness). Study 2 replicates this finding in a pre‐/post‐use design. Here, high levels of local presence (i.e., the extent to which consumers perceive a brand as actually being present in their physical environment) drive perceived physical closeness, which leads to brand love. We also find that AR's power to generate brand love increases when the consumer is already familiar with the brand. We discuss managerial implications for AR marketing today and in a metaverse future in which AR content might be prevalent in consumers' everyday perceptions of the real world.
Conference Paper
This study explored participants’ response to male versus female voices in drone campus tour guides. The sample comprised 60 undergraduate and graduate students, of which 50% were female, from a University in South Korea. A between-subjects experimental design was employed with each drone voice type (male and female) to examine the influence on perceived credibility, attitude, and the sense of presence. The results revealed that neither participants’ perception of credibility nor sense of presence was affected by voice type, but results demonstrated that female participants displayed a stronger positive attitude toward the male-voiced drone. In participants’ gender difference index, female participants felt a stronger perceived presence than did male participants. These findings suggest possible guidelines for designing social drone agents that use speech-enabled technology.
Article
Full-text available
This article asks whether, and when, participants benefit from seeing each other's faces in computer-mediated communication. Although new technologies make it relatively easy to exchange images over the Internet, our formal understanding of their impacts is not clear. Some theories suggest that the more one can see of one's partners, the better one will like them. Others suggest that long-term virtual team members may like each other better than would those who use face-to-face interaction. The dynamic underlying this latter effect may also pertain to the presentation of realistic images compared with idealized virtual perceptions. A field experiment evaluated the timing of physical image presentations for members of short-term and long-term virtual, international groups. Results indicate that in new, unacquainted teams, seeing one's partner promotes affection and social attraction, but in long-term online groups, the same type of photograph dampens affinity.
Article
Full-text available
The purpose of this study was to develop a scale for measuring celebrity endorsers' perceived expertise, trustworthiness, and attractiveness. Accepted psychometric scale-development procedures were followed which rigorously tested a large pool of items for their reliability and validity. Using two exploratory and two confirmatory samples, the current research developed a 15-item semantic differential scale to measure perceived expertise, trustworthiness, and attractiveness. The scale was validated using respondents' self-reported measures of intention to purchase and perception of quality for the products being tested. The resulting scale demonstrated high reliability and validity.
Article
Full-text available
The authors investigated basic properties of social exchange and interaction with technology in an experiment on cooperation with a human-like computer partner or a real human partner. Talking with a computer partner may trigger social identity feelings or commitment norms. Participants played a prisoner's dilemma game with a confederate or a computer partner. Discussion, inducements to make promises, and partner cooperation varied across trials. On Trial 1, after discussion, most participants proposed cooperation. They kept their promises as much with a text-only computer as with a person, but less with a more human-like computer. Cooperation dropped sharply when any partner avoided discussion. The strong impact of discussion fits a social contract explanation of cooperation following discussion. Participants broke their promises to a computer more than to a person, however, indicating that people make heterogeneous commitments.
Article
Full-text available
When individuals apply social rules and social expectations while working on a computer, are they directly interacting with the computer as an independent social actor or source (the CAS model), or are they orienting to an unseen programmer or imagined person in another room (the CAM model)? Two studies provide critical tests of these competing models. In Study 1, all participants were exposed to an identical interaction with computers. In one condition, participants were told that they were dealing with computers; in another, they were told that they were interacting with the software programmers. Consistent with the CAS model, there were significant differences between the two conditions. Study 2 performed a constructive replication of Study 1 by replacing the programmer with a hypothetical networker. Again, differences between the two conditions provide evidence that people respond to the computer as an independent source of information.
Article
Full-text available
People behave differently in the presence of other people than they do when they are alone. People also may behave differently when designers introduce more human-like qualities into computer interfaces. In an experimental study we demonstrate that people's responses to a talking-face interface differ from their responses to a text-display interface. They attribute some personality traits to it; they are more aroused by it; they present themselves in a more positive light. We use theories of person perception, social facilitation, and self-presentation to predict and interpret these results. We suggest that as computer interfaces become more "human-like," people who use those interfaces may change their own personas in response to them.
Article
Full-text available
While computer-mediated communication use and research are proliferating rapidly, findings offer contrasting images regarding the interpersonal character of this technology. Research trends over the history of these media are reviewed with observations across trends suggested so as to provide integrative principles with which to apply media to different circumstances. First, the notion that the media reduce personal influences—their impersonal effects—is reviewed. Newer theories and research are noted explaining normative “interpersonal” uses of the media. From this vantage point, recognizing that impersonal communication is sometimes advantageous, strategies for the intentional depersonalization of media use are inferred, with implications for Group Decision Support Systems effects. Additionally, recognizing that media sometimes facilitate communication that surpasses normal interpersonal levels, a new perspective on “hyperpersonal” communication is introduced. Subprocesses are discussed pertaining to receivers, senders, channels, and feedback elements in computer-mediated communication that may enhance impressions and interpersonal relations.
Article
Full-text available
This article investigates a new communication medium—public computer conferencing—by separately and jointly analyzing two basic aspects of human communication: (1) content, the extent to which such systems can support socioemotional communication, and (2) connectivity, communication patterns among system users. Results indicate that (1) computer-mediated communication systems can facilitate a moderate exchange of socioemotional content and (2) basic network roles did not generally differ in percentage of socioemotional content. Some fundamental issues in analyzing content and networks in computer-mediated systems, such as structural equivalence versus cohesion network approaches, are discussed in light of these results.
Article
Full-text available
This paper reports the development of a measure of perceived homophily. In both an initial investigation and in four subsequent studies employing samples from diverse populations, four dimensions of response were observed. These dimensions were labeled Attitude, Morality, Appearance, and Background. Additional results indicated that opinion leaders are perceived as more homophilous than non-opinion leaders on the dimensions of Attitude, Morality, and Background. The scales found to measure these dimensions are suggested for consideration by researchers concerned with homophily or interpersonal similarity in human communication.
Article
Full-text available
Since it is impractical to prerecord human speech for dynamic content such as email messages and news, many commercial speech applications use recorded human speech for fixed content (e.g. system prompts) and synthetic speech for dynamic content. However, mixing human speech and synthetic speech may not be optimal from a consistency perspective. A two-condition between-participants experiment (N = 24) was conducted to compare two versions of a telephony application for Personal Information Management (PIM). In the first condition, all the system output was delivered with synthetic speech. In the second condition, users heard a mix of human speech and synthetic speech. Users managed several email and calendar tasks. Users' task performance was rated by two independent judges. Their self-ratings of task performance and attitudinal responses were also measured by means of questionnaires. Users interacting with the interface that used only synthetic speech performed the task significantly better, while users interacting with the mixed-speech interface thought they did better and had more positive attitudinal responses. A consistency framework drawn from human psychological processing is offered to explain the difference in task performance. Cognitive processing and attitudinal response are differentiated. Design implications and directions for future research are suggested.
Article
Full-text available
This project examined how information provided in virtual worlds influences social judgment. It specifically tested the influence of anthropomorphism and agency on the level of uncertainty and social judgment, using a between-subjects experimental design. Anthropomorphism had three levels: a high anthropomorphic image, a low anthropomorphic image, and no image. Agency had two levels: whether the participants were told they were interacting with a human (avatar condition) or a computer (agent condition). The results showed that the virtual image influenced social judgment. The less anthropomorphic image was perceived to be more credible and likeable than no image, which was more credible and likeable than the anthropomorphic image. There were no discernible differences in social judgment between participants who were told they were interacting with a human as compared to those told they were interacting with a computer agent, consistent with findings from previous reports. Neither anthropomorphism nor agency influenced reported levels of uncertainty. Implications of these results for those designing and using virtual environments are discussed.
Article
Full-text available
We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human- and computer-controlled entities, and that the existence of a virtual image increases telepresence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that led to reduced presence when these expectations were not met.
Conference Paper
Full-text available
A robot's appearance and behavior provide cues to the robot's abilities and propensities. We hypothesize that an appropriate match between a robot's social cues and its task improve the people's acceptance of and cooperation with the robot. In an experiment, people systematically preferred robots for jobs when the robot's humanlikeness matched the sociability required in those jobs. In two other experiments, people complied more with a robot whose demeanor matched the seriousness of the task.
Article
Two experiments investigated if and how visual representation of interactants affects depersonalization and conformity to group norms in anonymous computer-mediated communication (CMC). In Experiment 1, a 2 (intergroup versus interpersonal) × 2 (same character versus different character) between-subjects design experiment (N = 60), each participant made a decision about social dilemmas after seeing two other (ostensible) participants' unanimous opinions and then exchanged supporting arguments. Consistent with the Social Identity model of Deindividuation Effects (SIDE), when the group level of self-identity was rendered salient in an intergroup encounter, uniform virtual appearance of CMC partners triggered depersonalization and subsequent conformity behavior. By contrast, when the personal dimension of the self was salient, standardized representation tended to reduce conformity. To elucidate the mediation process, Experiment 2 investigated the causal links between depersonalization, group identification, and conformity. The results show that depersonalization accentuated adherence to group norms, both directly and indirectly via group identification.
Article
Two experiments addressed the questions of if and how normative social influence operates in anonymous computer-mediated communication (CMC) and human-computer interaction (HCI). In Experiment 1, a 2 (public response vs. private response) × 2 (one interactant vs. four interactants) × 3 (textbox vs. stick figure vs. animated character) mixed-design experiment (N = 72), we investigated how conformity pressure operates in a simulated CMC setting. Each participant was asked to make a decision in hypothetical social dilemmas after being presented with a unanimous opinion by other (ostensible) participants. The experiment examined how the visual representation of interaction partners on the screen moderates this social influence process. Group conformity effects were shown to be more salient when the participant's responses were allegedly seen by others, compared to when the responses were given in private. In addition, participants attributed greater competence, social attractiveness, and trustworthiness to partners represented by anthropomorphic characters than those represented by textboxes or stick figures. Experiment 2 replicated Experiment 1, replacing interaction with a computer(s) rather than (ostensible) people, to create an interaction setting in which no normative pressure was expected to occur. The perception of interaction partner (human vs. computer) moderated the group conformity effect such that people expressed greater public agreement with human partners than with computers. No such difference was found for the private expression of opinion. As expected, the number of computer agents did not affect participants' opinions whether the responses were given in private or in public, while visual representation had a significant impact on both conformity measures and source perception variables.
Article
This study investigated the relationship between interaction behavior in a small group setting and the resulting perceptions group members have of one another. Trained raters coded the interaction behavior of subjects, who discussed a task-oriented topic in small groups. Results indicate that interaction behavior can account for a substantial percentage of the variance in group members’ perceptions of one another. Apparently, the same interaction behavior may simultaneously result in both more positive and more negative perceptions on the part of other group members, suggesting that different interaction strategies are appropriate for varying desired personal outcomes.
Article
Following Langer (1992), this article reviews a series of experimental studies that demonstrate that individuals mindlessly apply social rules and expectations to computers. The first set of studies illustrates how individuals overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. The second set demonstrates that people exhibit overlearned social behaviors such as politeness and reciprocity toward computers. In the third set of studies, premature cognitive commitments are demonstrated: A specialist television set is perceived as providing better content than a generalist television set. A final series of studies demonstrates the depth of social responses with respect to computer 'personality.' Alternative explanations for these findings, such as anthropomorphism and intentional social responses, cannot explain the results. We conclude with an agenda for future research.
Article
This study examined the relationship of trust to self-disclosure. A measure of individualized trust was developed and used in conjunction with a multidimensional measure of disclosure to reassess the relationship between the two. A modest, linear relationship between individualized trust and various dimensions of self-disclosure was discovered. Moreover, a higher level of trust (as opposed to lesser trust as well as distrust) was found to be associated with more consciously intended disclosure and a greater amount of disclosure.
Article
Computer-generated anthropomorphic characters are a growing type of communicator that is deployed in digital communication environments. An essential theoretical question is how people identify humanlike but clearly artificial, hence humanoid, entities in comparison to natural human ones. This identity categorization inquiry was approached under the framework of consistency and tested through examining inconsistency effects from mismatching categories. Study 1 (N = 80), incorporating a self-disclosure task, tested participants’ responses to a talking-face agent, which varied in four combinations of human versus humanoid faces and voices. In line with the literature on inconsistency, the pairing of a human face with a humanoid voice or a humanoid face with a human voice led to longer processing time in making judgment of the agent and less trust than the pairing of a face and a voice from either the human or the humanoid category. Female users particularly showed negative attitudes toward inconsistently paired talking faces. Study 2 (N = 80), using a task that stressed comprehension demand, replicated the inconsistency effects on judging time and females’ negative attitudes but not for comprehension-related outcomes. Voice clarity overshadowed the consistency concern for comprehension-related responses. The overall inconsistency effects suggest that people treat humanoid entities in a different category from natural human ones.
Chapter
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.
Article
Over the last years, the animation of interface agents has been the target of increasing interest. Largely, this increase in attention is fuelled by speculated effects on human motivation and cognition. However, empirical investigations on the effect of animated agents are still small in number and differ with regard to the measured effects. Our aim is two-fold. First, we provide a comprehensive and systematic overview of the empirical studies conducted so far in order to investigate effects of animated agents on the user's experience, behaviour and performance. Second, by discussing both implications and limitations of the existing studies, we identify some general requirements and suggestions for future studies.
Article
Advancements in computer technology have allowed the development of human-appearing and -behaving virtual agents. This study examined if increased richness and anthropomorphism in interface design lead to computers being more influential during a decision-making task with a human partner. In addition, user experiences of the communication format, communication process, and the task partner were evaluated for their association with various features of virtual agents. Study participants completed the Desert Survival Problem (DSP) and were then randomly assigned to one of five different computer partners or to a human partner (who was a study confederate). Participants discussed each of the items in the DSP with their partners and were then asked to complete the DSP again. Results showed that computers were more influential than human partners but that the latter were rated more positively on social dimensions of communication than the former. Exploratory analysis of user assessments revealed that some features of human–computer interaction (e.g. utility and feeling understood) were associated with increases in anthropomorphic features of the interface. Discussion focuses on the relation between user perceptions, design features, and task outcomes.
Article
Computers are becoming the vehicle for an increasing range of everyday activities. Acquisition of news and information, mail, and even social interactions and entertainment have become more and more computer-based. This article focuses on a novel approach to building interface agents. It presents results from several prototype agents that have been built using this approach, including agents that provide personalized assistance with meeting schedules, email handling, electronic news filtering, and selection of entertainment.
Article
The research reported here extends the work of Hovland and his colleagues on source credibility by investigating the criteria actually used by receivers in evaluating message sources. Three dimensions are isolated: Safety, Qualification, and Dynamism. The authors argue that source “image” should be defined in terms of the perceptions of the receiver, not in terms of objective characteristics of the source.