Article · PDF available

The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places

Authors: Byron Reeves, Clifford Nass

Abstract

Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are "polite" to computers; that they treat computers with female voices differently than "male" ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses. Using everyday language to engage readers interested in psychology, communication, and computer technology, Reeves and Nass detail how this knowledge can help in designing a wide range of media.
... These technologies are simultaneously becoming more adaptive and human-like, capable of responding to people using natural affordances such as spoken dialog and gesture. This means that people often expect them to act as humans would, ascribing to them human-like social capabilities [85]. Their use in home and communications contexts also means that AI systems increasingly come to mediate interactions between people, changing the ways that we relate to each other, thus becoming intertwined with the performance and experience of social practices. ...
... These scenarios can be understood through early work by Reeves and Nass on the paradigm of computers as social actors, which demonstrated how people treat interactive systems as if they were people even though they know that they are not [75,85]. The 'magic word' please-and-thank-you mode added to Amazon Alexa appears to be an extension of this, where children are asked to perform deferential respect towards a smart speaker. ...
Preprint
Critical examinations of AI systems often apply principles such as fairness, justice, accountability, and safety, which is reflected in AI regulations such as the EU AI Act. Are such principles sufficient to promote the design of systems that support human flourishing? Even if a system is in some sense fair, just, or 'safe', it can nonetheless be exploitative, coercive, inconvenient, or otherwise conflict with cultural, individual, or social values. This paper proposes a dimension of interactional ethics thus far overlooked: the ways AI systems should treat human beings. For this purpose, we explore the philosophical concept of respect: if respect is something everyone needs and deserves, shouldn't technology aim to be respectful? Despite its intuitive simplicity, respect in philosophy is a complex concept with many disparate senses. Like fairness or justice, respect can characterise how people deserve to be treated; but rather than relating primarily to the distribution of benefits or punishments, respect relates to how people regard one another, and how this translates to perception, treatment, and behaviour. We explore respect broadly across several literatures, synthesising perspectives on respect from Kantian, post-Kantian, dramaturgical, and agential realist design perspectives with a goal of drawing together a view of what respect could mean for AI. In so doing, we identify ways that respect may guide us towards more sociable artefacts that ethically and inclusively honour and recognise humans using the rich social language that we have evolved to interact with one another every day.
... Through this research study, we aim to answer the following research questions: The CASA paradigm, which stands for "Computers As Social Actors" [12], along with the Media Equation theory [13], describes how humans treat computers or software interfaces as fellow humans, displaying characteristic behaviors such as politeness and gender assignment [13], [14]. This behavior is similar to humans' anthropomorphic interpretations of social robots [15]. ...
Preprint
Full-text available
This paper presents the design, implementation, and evaluation of a novel collaborative educational game titled "Land of Hands", involving children and a customized social robot that we designed (HakshE). Through this gaming platform, we aim to teach proper hand hygiene practices to children and explore the extent of interactions that take place between a pro-social robot and children in such a setting. We blended gamification with the Computers as Social Actors (CASA) paradigm to model the robot as a social actor, or fellow player, in the game. The game was developed using Godot's 2D engine and Alice 3. In this study, 32 participants played the game online through the video teleconferencing platform Zoom. To understand the influence a pro-social robot's nudges have on children's interactions, we split our study into two conditions: With-Nudges and Without-Nudges. Detailed analysis of rubrics and video analyses of children's interactions show that our platform helped children learn good hand hygiene practices. We also found that using a pro-social robot creates enjoyable interactions and greater social engagement between the children and the robot, although learning itself wasn't influenced by the pro-sociality of the robot.
... A unique characteristic of AT is its perceived agency: imbuing technology with human traits increases perceptions that the technology possesses a mind and is capable of acting with intentions (Epley & Waytz, 2009). This distinctive feature has turned technology into a social actor (Nass & Moon, 2000; Reeves & Nass, 1996), such that interactions with anthropomorphized machines engender a sense of "automated social presence": a feeling of being in the presence of another social being (Van Doorn et al., 2017). User interactions with AT therefore often resemble human−human interactions (Mourey et al., 2017). ...
Article
Full-text available
Extant work suggests that unsuccessful human−technology interactions can elicit negative affective reactions, prompting users to engage in compensatory behavior including seeking affiliation with others. The current work presents one mechanism to explain these findings. Specifically, we propose that users may construe incidents of technology failure akin to incidents of social rejection: Across three studies, we demonstrate that when an anthropomorphized (vs. nonanthropomorphized) technology fails to function as expected, users experience feelings of rejection, and subsequently express a greater desire to connect with others. In doing so, we contribute to extant research on human−technology interactions by uniquely demonstrating that feelings of social rejection may arise from technology failure. Our work also deepens our understanding of the unintended negative consequences of product anthropomorphism and, as such, provides insight into technology design.
... As early as 1996, Reeves and Nass, in their research on the media equation, presented results from numerous psychological studies concluding that people treat computers like real people and places: being polite to them, treating female voices differently, and reacting to full-size faces and on-screen motion [23]. The computers as social actors (CASA) paradigm, which originates from the media equation, has been validated to this day, maintaining the idea that people apply social rules and expectations to computers as if they were human, thus demonstrating their social potential. ...
Article
Full-text available
The purpose of this study is to understand how future employees in the hospitality and tourism industry envision the use of artificial intelligence in the organizations where they wish to work in the future. Through open-ended questions applied to undergraduate and master’s students in the area of tourism and hospitality, we capture their opinions when thinking about the partial or total use of robots in hospitality. Despite the increasing implementation of artificial intelligence in hospitality and tourism, existing research mainly focuses on current hoteliers and/or customers. However, anticipating how digital generations expect their future roles in a close engagement with robots allows researchers to predict and focus their attention on future problems. Their statements were subjected to a qualitative content analysis methodology, based on themes and sentiment. Participants expressed a negative view of the presence of robots in hospitality, mostly associated with a fear of job loss. Many also reported that interacting with robots is negative for both staff and customers due to robots’ lack of emotions. However, there is some division concerning the impact of robots on service quality: some believe that the service will be more efficient and with fewer failures; others believe that the limitations of robots will lead to worse service. The findings suggest that the acceptability and desirability of robotization may vary depending on the level of robotization in hotels, on the type of customer, and on the level of service provided.
... Anthropomorphism refers to the attribution of humanlike characteristics (e.g., human forms, voices, behaviors) to inanimate, artificial agents such as robots and agents (Bartneck et al., 2009; Waytz et al., 2014). While interacting with computers and systems, people mindlessly apply interpersonal social rules as if they were interacting with human beings (Nass et al., 1995; Reeves and Nass, 1996). Anthropomorphism reinforces the tendency to interact with computers and systems in social ways, which in turn enables more natural interactions in various contexts. ...
Article
Full-text available
As technological development is driven by artificial intelligence, many automotive manufacturers have integrated intelligent agents into in-vehicle information systems (IVIS) to create more meaningful interactions. One of the most important decisions in developing agents is how to embody them, because the different ways of embodying agents will significantly affect user perception and performance. This study addressed the issue by investigating the influences of agent embodiments on users in driving contexts. Through a factorial experiment (N = 116), the effects of anthropomorphism level (low vs. high) and physicality (virtual vs. physical presence) on users' trust, perceived control, and driving performance were examined. Results revealed an interaction effect between anthropomorphism level and physicality on both users' perceived control and cognitive trust. Specifically, when encountering high-level anthropomorphized agents, consumers reported lower ratings of trust toward the physically present agent than toward the virtually present one, and this interaction effect was mediated by perceived control. Although no main effects of anthropomorphism level or physicality were found, additional analyses showed that anthropomorphism level significantly improved users' cognitive trust for those unfamiliar with IVIS. No significant differences were found in terms of driving performances. These results indicate the influences of in-vehicle agents' embodiments on drivers' experience.
Chapter
Full-text available
In an attempt to curtail and prevent the spread of Covid-19 infection, social distancing has been adopted globally as a precautionary measure. Statistics show that 75% of appointments, especially in the health sector, have been handled by telephone since the outbreak of the Covid-19 pandemic. Currently, most patients access health care services in real time from any part of the world through the use of mobile devices. With the exponential growth of mobile applications and cloud computing, mobile cloud computing is becoming a future platform for delivering diverse services to smartphones. The challenges of limited battery life, storage space, mobility, scalability, bandwidth, protection, and privacy on mobile devices have been addressed by combining mobile devices and cloud computing, which rely on wireless networks, into a new concept and infrastructure called Mobile Cloud Computing (MCC). MCC has been identified as a promising approach to enhance healthcare services; with the advent of cloud computing, computing as a utility has become a reality, so a patient pays only for what he or she uses. This paper presents a systematic review of cloud computing in the mobile environment, covering mobile payments and mobile healthcare solutions in various healthcare applications. It describes the principles, challenges, and opportunities this concept offers to the health sector and discusses how they can be harnessed.
Chapter
An integrative perspective is developed on memory performance in the context of voice-based conversational agents (VCAs). Memory, as an outcome of human-technology interaction, matters where technology is designed to enhance knowledge and to function as a cognitive support tool and aide. To date, this has meant focusing on the evaluation of specialist applications such as assistive and educational technologies. The increased use of VCAs in everyday life, in the form of smart speakers or as assistants for mobile phones, however, makes effects on memory more relevant to a large user base. Adding VCAs to multi-functional devices such as smartphones increases their potential to be taken as digital companions and to make interactions with them more meaningful. This has implications for different types of memory, semantic and episodic/autobiographical. Since memory is not stable, but malleable and context-dependent, its performance is likely to be influenced by the wide range of social cues that VCAs can convey. Further, VCAs can be expected to contribute to conversational engagement, the general state of involvement in a conversational setting, which in turn influences cognitive outcomes.
Chapter
Stereotypes and scripts guide human perception and expectations in everyday life. Research has found that a robot’s appearance influences the perceived fit in different application domains (e.g. industrial or social) and that the role a robot is presented in predicts its perceived personality. However, it is unclear how the surroundings as such can elicit a halo effect leading to stereotypical perceptions. This paper presents the results of an experimental study in which 206 participants saw 8 cartoon pictures of the robot Pepper in different application domains in a within-subjects online study. Results indicate that the environment a robot is placed in has an effect on the users’ evaluation of the robot’s warmth, competence, status in society, competition, anthropomorphism, and morality. As the first impression has an effect on users’ expectations and evaluation of the robot and the interaction with it, the effect of the application scenarios has to be considered carefully.
Chapter
Intelligent voice assistants (IVAs) are on the rise: they are implemented in new smart mobile and stationary devices such as smartphones, smart watches, tablets, cars, and smart speakers. Being surrounded by always-on microphones, however, can be perceived as a threat to one's privacy, since audio recordings are saved in the cloud and processed for, e.g., marketing purposes. Yet only a minority of users adapt their behavior, such as self-disclosure, according to their privacy concerns. Research has attempted to explain this paradoxical outcome through concepts such as the privacy calculus or privacy cynicism. Moreover, the literature has revealed that a large group of users lacks the privacy awareness and privacy literacy needed to engage in privacy-preserving behavior. Since previous studies on IVAs have focused primarily on interviews or cross-sectional studies testing models that predict user behavior, desiderata for advancing future research are presented. These point toward a more user-centric approach incorporating, e.g., a motivational-affective perspective and the investigation of causalities.