Article
PDF Available

The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places

Authors: Byron Reeves, Clifford Nass

Abstract

Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are "polite" to computers; that they treat computers with female voices differently than "male" ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses. Using everyday language to engage readers interested in psychology, communication, and computer technology, Reeves and Nass detail how this knowledge can help in designing a wide range of media.
... Against this philosophical argument, a large body of evidence shows that people actually treat new technologies as real people and may therefore perceive technology as a direct object of trust (Reeves & Nass, 1996; see also Gillath et al., 2021; Song et al., 2022). It has also been argued that because technology is not flawless, deciding to rely on it (without being able to control the reliability of every decision it makes) resembles accepting vulnerability based on positive expectations, that is, trusting (Ferrario et al., 2021; Kaplan et al., 2021). ...
... There are many parallels between trust in AI and interpersonal trust. Notably, work shows that people project similar sentiments on machines and on humans (Birnbaum et al., 2016; Song et al., 2022) and that personal traits and dispositions similarly affect both types of trust (Gillath et al., 2021; Kaplan et al., 2021), supporting the assumption that people treat technology as real people (Reeves & Nass, 1996). ...
... Increased benevolence, on the other hand, may reflect the impression that an AI that remembers the user by name and addresses him or her directly harbours more benevolent intentions towards them. While this would be a purely subjective impression that does not align with any "real" intentions on the AI's part, it is in line with the general idea that people anthropomorphise technology and attribute intentions to it (Reeves & Nass, 1996). It also speaks to other work that found anthropomorphism to increase the perceived benevolence of the technology (Bach et al., 2022; Calhoun et al., 2019). ...
Article
Full-text available
The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to those of interpersonal trust (i.e., trust in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization on trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust, as are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation.
... This task can be formally defined as π_φ(p₁, p₂ | D). Scene Reconstruction: This task requires reasoning about and summarizing the scene elements S from the given dialogue D. The context specifies the dialogue topic, interaction type, and the relationship and familiarity between the participants, all crucial elements for making the dialogue more dynamic and nuanced (Reeves and Nass, 1996; Pickering and Garrod, 2004). Specifically, this task requires the model π_φ to reconstruct the pre-existing relationship, interaction type, and dialogue topic before the conversation starts, as well as to deduce the information flow throughout the dialogue and summarize the conversation for each participant. ...
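The excerpt above describes the scene-reconstruction task abstractly: given a dialogue D between two participants, recover the scene elements S (relationship, interaction type, topic, familiarity). A minimal Python sketch of the interface may make the shape of the task concrete; all names and fields here are illustrative assumptions, not taken from the cited benchmark, and a real system would of course use a learned model π_φ rather than the placeholder logic shown.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    # The scene elements S to be reconstructed (illustrative fields)
    relationship: str      # pre-existing relationship between participants
    interaction_type: str  # e.g. casual chat, negotiation, interview
    topic: str             # dialogue topic
    familiarity: str       # how well the participants know each other

@dataclass
class Dialogue:
    # The dialogue D between two participants p1 and p2
    participants: tuple
    turns: list = field(default_factory=list)

def reconstruct_scene(dialogue: Dialogue) -> Scene:
    """Toy stand-in for the mapping D -> S. A real implementation would
    prompt or fine-tune an LLM; here we return fixed placeholder values
    purely to show the input/output contract of the task."""
    return Scene(
        relationship="friends",
        interaction_type="casual chat",
        topic=dialogue.turns[0] if dialogue.turns else "unknown",
        familiarity="high",
    )

d = Dialogue(participants=("p1", "p2"), turns=["weekend plans", "sounds fun"])
print(reconstruct_scene(d).topic)  # -> weekend plans
```

The point of the sketch is only the contract: the task consumes the full dialogue and must emit every scene element, which is what makes it a summarization-plus-inference problem rather than plain summarization.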
Preprint
Large language models (LLMs) have made dialogue one of the central modes of human-machine interaction, leading to the accumulation of vast amounts of conversation logs and increasing demand for dialogue generation. A conversational life-cycle spans from the Prelude through the Interlocution to the Epilogue, encompassing various elements. Despite the existence of numerous dialogue-related studies, there is a lack of benchmarks that encompass comprehensive dialogue elements, hindering precise modeling and systematic evaluation. To bridge this gap, we introduce an innovative research task, Dialogue Element MOdeling, comprising Element Awareness and Dialogue Agent Interaction, and propose a novel benchmark, DEMO, designed for comprehensive dialogue modeling and assessment. Inspired by imitation learning, we further build an agent adept at modeling dialogue elements based on the DEMO benchmark. Extensive experiments indicate that existing LLMs still exhibit considerable potential for enhancement, and our DEMO agent achieves superior performance in both in-domain and out-of-domain tasks.
... The SIfT '12 Unit grid' and 'Coursework framework' aid the teachers' conceptualisation within the VLE and provide a platform for the development of clear 'bite-sized' learning, incorporating the characteristics of the SIfT 'virtual tutor'. Reeves and Nass (1996) identify that when media conform to social and natural rules, no instruction for use is necessary; thus, the SIfT 'virtual tutor' introduces the personality of the tutor and delivers Gagné's nine events of instruction, identified for any desired learning (Gagné, 1985). The final component of the SIfT model is the 'navigational features', which Britain and Liber (2000) reflect are important and intrinsically part of using a VLE. ...
Article
Full-text available
The New Opportunities Fund ‘ICT for Teachers’ initiative aimed to deliver professional development training for all ‘serving’ teachers in the UK. The scale of the task suggested that an alternative delivery format to traditional face-to-face training should be trialled. SIfT, as an approved training provider to the Government scheme, has created a highly structured model for designing and delivering course materials to remote learners via a Virtual Learning Environment. The poster session includes a demonstration of SIfT content and illustrates the SIfT model, consisting of the ‘12 Unit grid’, ‘Coursework framework’, ‘virtual tutor’ and ‘navigational features’, developed within the virtual space.
... In other words, the sense of "being there" in an environment mediated by a communication medium is known as telepresence [53,54]. To varying degrees, all forms of media contribute to the perceptual illusion of being there [55]. For instance, users browsing a tourism video may experience a sense of being inside the scene shown in the video. ...
Article
Full-text available
Tourism advertising and tourism promotion have over the years been the core functions of tourism departments and major tourist sites. Amid the ongoing development of new media, mobile short-form videos, which are short, focused, and engaging, appear to be a useful means of advertising tourist destinations. In the digital era, short videos have become a new communication tool between destinations and consumers. This study, based on the S-O-R model and flow experience, investigated the psychological processes through which TikTok attributes and technology evoke flow and lead to tourists’ behavioral intention. Moreover, the TAM factors, i.e., PU and PEOU, as two technology factors, as well as three content attributes (entertainment, informativeness, and interactivity), were examined. The study utilized a quantitative approach and collected data from 412 respondents in China. The authors adopted the PLS-SEM method to confirm the directions hypothesized in this model. There are significant effects of PU, PEOU, and entertainment on flow experience (telepresence, time distortion, and focused attention). Interactivity impacts telepresence and time distortion, while informativeness impacts focused attention. Moreover, time distortion and focused attention impact tourists’ behavioral intention. The study also acknowledges several limitations and offers implications for future research.
... However, research shows that humans have an innate tendency to treat artifacts like social actors: whether it is the toddler who talks to, cuddles, and feeds their dolls and stuffed animals, or the office worker who insults the pausing printer or cheers on the rebooting desktop computer. Theoretical conceptualizations of anthropomorphization (i.e., perceiving and treating artifacts similarly to humans) [67] and the CASA (Computers Are Social Actors) approach [68] explain why and how many humans manage to develop parasocial relationships with artifacts, and which characteristics of artifacts facilitate them (e.g., human-like appearance, human-like voice, natural language interaction, perceived intelligence). More research is needed to better understand the characteristics and effects of the parasocial (i.e., one-sided) relationships that some users develop with their counseling and therapy bots. ...
Article
Full-text available
Purpose of Review Millions of people now use generative artificial intelligence (GenAI) tools in their daily lives for a variety of purposes, including sexual ones. This narrative literature review provides the first scoping overview of current research on generative AI use in the context of sexual health and behaviors. Recent Findings The review includes 88 peer-reviewed English language publications from 2020 to 2024 that report on 106 studies and address four main areas of AI use in sexual health and behaviors among the general population: (1) People use AI tools such as ChatGPT to obtain sexual information and education. We identified k = 14 publications that evaluated the quality of AI-generated sexual health information. They found high accuracy and completeness. (2) People use AI tools such as ChatGPT and dedicated counseling/therapy chatbots to solve their sexual and relationship problems. We identified k = 16 publications providing empirical results on therapists’ and clients’ perspectives and AI tools’ therapeutic capabilities with mixed but overall promising results. (3) People use AI tools such as companion and adult chatbots (e.g., Replika) to experience sexual and romantic intimacy. We identified k = 22 publications in this area that confirm sexual and romantic gratifications of AI conversational agents, but also point to risks such as emotional dependence. (4) People use image- and video-generating AI tools to produce pornography with different sexual and non-sexual motivations. We found k = 36 studies on AI pornography that primarily address the production, uses, and consequences of – as well as the countermeasures against – non-consensual deepfake pornography. This sort of content predominantly victimizes women and girls whose faces are swapped into pornographic material and circulated without their consent. Research on ethical AI pornography is largely missing. 
Summary Generative AI tools present new risks and opportunities for human sexuality and sexual health. More research is needed to better understand the intersection of GenAI and sexuality in order to a) help people navigate their sexual GenAI experiences, b) guide sex educators, counselors, and therapists on how to address and incorporate AI tools into their professional work, c) advise AI developers on how to design tools that avoid harm, d) enlighten policymakers on how to regulate AI for the sake of sexual health, and e) inform journalists and knowledge workers on how to report about AI and sexuality in an evidence-based manner.
Chapter
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
Article
This study leverages recent theoretical and methodological advancements to design and evaluate a Metacognitive Artificial Intelligence (MAI) system. It employs the Wizard-of-Oz (WOz) approach from the field of human-computer interaction to explore the design of technological systems that support human-AI collaboration. The findings support the view that multidisciplinary research on human-AI collaboration will yield empirical evidence and design work to articulate human-AI collaboration.