Article

... The concept of humanization arises not only in HMI studies, but is also present in situations involving empathy and/or emotional connection between people, in consumer relations (as in brand personality; Ouwersloot & Tudorica, 2001), in relationships with animals (Marler, 1999; Antonacopoulos & Pychyl, 2010), with objects and products and, as noted, with machines, including robots, a term whose understanding may or may not involve a physical form (Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj & Eimler, 2014). ...
... Different studies assess, and confirm, that anthropomorphic figures play an important role in empathy between humans and machines (Duffy, 2002; Osawa, Muraki & Imai, 2007; Bartneck et al., 2010; Zlotowski et al., 2015; Batista & Peixoto, 2017), among other ways by monitoring participants' brain reactions through functional Magnetic Resonance Imaging (fMRI) while they were exposed to videos of humans, robots and inanimate objects being treated affectionately or aggressively (Rosenthal-von der Pütten et al., 2014). ...
... As a research project, the diagnosis is framed in light of the theoretical framework and rationale. Building on earlier studies of the empathic relationship between humans and S.A.S. (Anzalone, Boucenna, Ivaldi & Chetouani, 2015; Hegel, Krach, Kircher, Wrede & Sagerer, 2008; Rosenthal-von der Pütten et al., 2014), it seeks to investigate how such a relationship would affect decision-making in situations in which a being (here, the human) could act out of complicity with guidance formulated by its artificial interlocutor. ...
Research Proposal
Full-text available
As machines, regardless of their form, come to display social elements, an affective relationship is created between these artificial beings (S.A.) and their human interlocutors. As a result, individuals may attribute human characteristics to such beings, regard them as equals and, consequently, exhibit interpersonal and interaction behavior similar to what would occur with another human being, resulting in empathy, bonds of trust and complicity. Through an experimental approach, this project seeks to evaluate how empathy, moderated by anthropomorphization, would contribute to a higher perceived level of humanity in artificial beings and, therefore, increase the level of trust in behavioral suggestions given by an S.A. (empathic vs. non-empathic; anthropomorphized vs. non-anthropomorphized) and complicity with the S.A. in situations that may result in gain, loss or indifference for the human participant.
... In contrast, other studies suggested that the victim's character influences people's reactions. When participants watched videos showing abusive behavior toward a robot and a human, they experienced more emotional distress and showed negative empathy for the human (Rosenthal-von der Pütten et al., 2014). This finding in their self-reported emotional states was mirrored by neurological differences. ...
... We offer a different angle for probing human-AV social interactions in mixed traffic, that is, to understand the observers' appraisals of aggression toward AVs, which might facilitate our understanding of the human motivation for abusing AVs. Human-robot interaction studies (Rosenthal-von der Pütten et al., 2014;Sanoubari et al., 2021) suggest a different appraisal of aggressive behaviors toward humans and robots and less negative judgment of these behaviors toward robots. We investigate four appraisals: acceptability, risk perception, negative affect, and moral judgment. ...
... Certain social cues might influence the appraisals of aggression, including the aggressive behavior per se, the aggressor's character, and the victim's character. Our above hypotheses and current evidence (Rosenthal-von der Pütten et al., 2014;Sanoubari et al., 2021) hold that the appraisals of aggressive behaviors depend on the victim's character. Furthermore, we believe the salience of victim character influences people's appraisals of aggressive behaviors. ...
Article
To integrate automated vehicles (AVs) into our transportation network, we should consider how human road users will interact with them. Human aggression toward AVs could be a new risk in mixed traffic and reduce AV adoption. Is it OK to drive aggressively toward AVs? We examined how identical aggressive behavior toward an AV or human driver is appraised differently by observers. In our 2 (scenario type: human driver vs. AV) × 2 (victim identity salience: low vs. high) between-subjects survey, we randomly allocated participants (N = 956) to one of four conditions in which they viewed a video clip, from an AV or a human driver, showing a car suddenly braking continuously ahead of the AV or human driver's car. The salience of victim identity influenced the observers' appraisals of aggressive behavior. When asked to judge the front car's behavior toward this AV or human driver (victim identity salient), they reported more acceptability and less risk perception, negative affect, and immoral judgment when judging this behavior toward the AV. When asked simply to judge the front car's behavior (victim identity not highlighted), their appraisals did not differ. This finding implies that AVs might need to hide their identity to blend in visually and behaviorally as regular cars.
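As a concrete illustration of the allocation step in such a 2 × 2 between-subjects design, the sketch below (hypothetical Python, not the authors' materials; condition labels follow the abstract, everything else is assumed) randomly assigns participants to the four scenario-by-salience cells:

```python
import random
from collections import Counter

# Hypothetical illustration of a 2 x 2 between-subjects allocation;
# factor levels follow the abstract, the rest is assumed.
SCENARIO = ["human_driver", "av"]   # scenario type factor
SALIENCE = ["low", "high"]          # victim identity salience factor

def allocate(participant_ids):
    """Randomly assign each participant to one of the four cells."""
    conditions = [(s, v) for s in SCENARIO for v in SALIENCE]
    return {pid: random.choice(conditions) for pid in participant_ids}

if __name__ == "__main__":
    cells = allocate(range(956))  # N = 956 as reported in the abstract
    # Count cell sizes to check the allocation is roughly balanced.
    print(Counter(cells.values()))
```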
... To support the development of more socially capable robots and to advance designers' understanding of the role of gender in mediating the social impacts of abuse in human-robot interactions, and building upon the seminal research by Rosenthal-von der Pütten and colleagues [11], [12], we designed a 2 × 2 fully factorial experiment wherein we quasi-manipulated the gendering and humanness of a victimized agent across four repetitions of a physically abusive interaction. We then showed participants videos of these depictions and assessed associations between participants' reactions to the videos and their gender socialization, past adverse experiences, and social attitudes, as well as the gendering and humanness of the victimized agent. ...
... Based on work by Astrid Rosenthal-von der Pütten and colleagues [11], [12], we showed participants a video depicting an abusive interaction between a male-presenting perpetrator and a victimized agent (a man, a woman, or a NAO robot gendered as "male" or "female") and evaluated the emotionality induced by observing the interaction, participants' humanization/dehumanization of the victim, and several attitudinal, experiential, and social traits of participants themselves. ...
... Effects of the manipulations: Using the Positive and Negative Affect Schedule [36] and the Mind Perception Scale [37], we assessed participants' negative affect and humanization of the victimized agent (inferred from their attributions of agency and experiential capacity), and, via factor analysis of 35 indices curated by Rosenthal-von der Pütten and colleagues ([11], [12]), we derived five further constructs defined by agreement/disagreement as follows: • distress induced by the video (7 items): the video was depressing, disturbing, emotionally heavy, repugnant, shocking, and unpleasant; on the other hand, the participant didn't mind (item reverse-scored) and was unaffected by the video; • empathy for the victimized agent (3 items): the victim seemed to be in pain, frightened, and suffering; • sympathy extended to the victim (6 items): the perpetrator's actions were incomprehensible; the participant felt for, pitied, and sympathized with the victim; and the participant wished the perpetrator would've stopped and not hurt the victim; • antipathy towards the victim (5 items): the video was amusing, entertaining, funny, and hilarious, and the participant found [the perpetrator's abuse of the victim] funny; and • unlikability of the victimized agent (4 items): the agent seemed cold, unlikable, unfriendly, and stupid. ...
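For readers who want to see how such agreement-based constructs are typically scored, here is a minimal Python sketch (hypothetical; the item labels and the 7-point response scale are assumptions, not taken from the paper) that averages item responses after reverse-keying the negatively keyed items:

```python
# Hypothetical scoring sketch for an agreement-based construct with
# reverse-scored items; assumes a 7-point Likert scale (1-7).
SCALE_MAX = 7

def reverse(score, scale_max=SCALE_MAX):
    """Reverse-key a Likert response: 1 -> 7, 7 -> 1, etc."""
    return scale_max + 1 - score

def construct_score(responses, reverse_keyed=()):
    """Mean of item responses, with listed items reverse-scored first."""
    adjusted = [
        reverse(score) if item in reverse_keyed else score
        for item, score in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example: a 'distress' construct where "didn't mind" and "unaffected"
# are reverse-scored (item labels are illustrative).
responses = {
    "depressing": 6, "disturbing": 7, "heavy": 5, "repugnant": 6,
    "shocking": 6, "unpleasant": 7, "didnt_mind": 2, "unaffected": 1,
}
print(construct_score(responses, reverse_keyed={"didnt_mind", "unaffected"}))
```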
Conference Paper
Full-text available
Researchers are seeing more and more cases of abusive disinhibition towards robots in public realms. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based violence. It is essential that robots deployed in social settings be able to recognize and respond to abuse in a way that minimises ethical risk. Enabling this capability requires designers to first understand the risk posed by abuse of robots, and hence how humans perceive robot-directed abuse. To that end, we experimentally investigated reactions to a physically abusive interaction between a human perpetrator and a victimized agent. Given extensions of gendered biases to robotic agents, as well as associations between an agent's human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim's humanness (via use of a human actor vs. NAO robot) and gendering (via inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction. Analysis of data from 417 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with their gender identification, previous experience with victimization, hostile sexism, and support for social stratification, as well as the victim's gendering.
... This has been established by a number of studies in human-robot interaction (HRI) (e.g. Rosenthal-von der Pütten et al. 2014). And it is generally believed that people empathize with social robots because they attribute human properties to them (Crowell et al. 2019); a process popularly known as mental anthropomorphism (Epley et al. 2007; Airenti 2015; Damiano and Dumouchel 2018). ...
... In some case studies, test subjects have self-reported emotional responses from interacting with robots, while others supplement this by measuring neural activity (e.g. Rosenthal-von der Pütten et al. 2014; Wang and Quadflieg 2015) or electroencephalography (Suzuki et al. 2015). ...
... Empathic responses make sense as a consequence of perceiving that entity as another mind. If it is true that we engage in mental model-building on the neurological level when interacting with humanoid robots, the findings of one particularly interesting study by Rosenthal-von der Pütten et al. (2014) make sense. In this study, researchers had test subjects watch videos of affectionate and violent treatment of a robot and a human, and then compared participants' emotional reactions toward robots and humans in turn. ...
Article
Full-text available
Empirical research on human–robot interaction (HRI) has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients since this rests on anthropomorphism and ‘over-identification’ (Bryson and Kime, Proc Twenty-Second Int Jt Conf Artif Intell Barc Catalonia Spain 16–22:1641–1646, 2011)—or correct since spontaneous moral intuition and behavior toward nonhumans is indicative for moral patienthood, such that social robots become our ‘Others’ (Gunkel, Robot rights, MIT Press, London, 2018; Coeckelbergh, Kairos J Philos Sci 20:141–158, 2018)?. In this research paper, I weave extant HRI studies that demonstrate empathic responses toward robots with the recent debate on moral status for robots, on which the ethical evaluation of moral behavior toward them is dependent. Patienthood for robots has standardly been thought to obtain on some intrinsic ground, such as being sentient, conscious, or having interest. But since these attempts neglect moral experience and are curbed by epistemic difficulties, I take inspiration from Coeckelbergh and Gunkel’s ‘relational approach’ to explore an alternative way of accounting for robot patienthood based on extrinsic premises. Based on the ethics of Danish theologian K. E. Løgstrup (1905–1981) I argue that empathic responses can be interpreted as sovereign expressions of life and that these expressions benefit human subjects—even if they emerge from social interaction afforded by robots we have anthropomorphized. I ultimately develop an argument in defense of treating robots as moral patients.
... Furthermore, investigating implicit responses can uncover the psychological mechanisms that underlie human-AI interactions. While research suggested interpersonal trust as the mechanism that distinguishes interactions with humans from those with avatars (Riedl et al., 2014), empathy has been considered another feature that distinguishes interactions with humans from those with robots (Rosenthal-Von Der Pütten et al., 2014). In the present study, using the neuroimaging method would be invaluable for differentiating the true perceptions between the human and the AI agent. ...
... For example, consumers implicitly believed that more expensive wines tasted better than less expensive wines when the wines were identical (Plassmann et al., 2008). Moreover, fMRI studies have clearly delineated distinguishable processes during interactions between humans and robots or avatars (Riedl et al., 2014;Rosenthal-Von Der Pütten et al., 2014). Interacting with humans involves the neural processes of mentalizing, trust, and empathy, which interacting with robots/avatars does not. ...
... Brain areas correlated with the mentalizing network are implicated in social trait attributions and trust situations, including the medial prefrontal cortex (Krueger et al., 2009;Riedl et al., 2014). However, brain areas broadly correlated with the empathy network comprise multiple regions including the medial frontal parts, cingulate cortex, anterior insula, and temporal gyri (Coll et al., 2017;Rameson et al., 2012;Rosenthal-Von Der Pütten et al., 2014). The empathy-related network is particularly salient when observing someone else's pain (Jackson et al., 2005;Singer et al., 2004) such as pictures of hands in painful situations (Gu & Han, 2007). ...
Article
Full-text available
Will consumers accept artificial intelligence (AI) as a medical care provider? On the basis of evolution theory, we investigate the implicit psychological mechanisms that underlie consumers' interactions with medical AI and a human doctor. In a behavioral investigation (Study 1), consumers expressed a positive intention to use medical AI's healthcare services when it used personalized rather than mechanical conversation. However, a neural investigation (Study 2) using functional magnetic resonance imaging revealed that some consumers' implicit attitudes toward medical AI differed from their expressed behavioral intentions. The brain areas linked with implicitly apathetic emotions were activated even when medical AI used a personalized conversation, whereas consumers' brains were activated in areas associated with prosociality when they interacted with a human doctor who used a personalized conversation. On the basis of our neural evidence, consumers perceive an identical personalized conversation differently when it is offered by a medical AI versus a human doctor. These findings have implications for the area of human-AI interactions and medical decision-making and suggest that replacing human doctors with medical AI is still an unrealistic proposition. Keywords: apathy, artificial intelligence, consumer neuroscience, fMRI, medical decision-making, personalization, prosociality
... Robots are becoming more and more common in our professional lives, especially entertainment and rehabilitation robots, as well as affective robots, which can be used for emotional companionship (Suzuki et al., 2015; Rosenthal-von der Pütten et al., 2014; Leite et al., 2013). It is therefore becoming necessary to investigate the effects of emotions elicited by robots in human-robot interaction (HRI), and whether these effects differ from those elicited by humans. ...
... For example, empathy is an emotional response to the situation of another person even though one has not experienced that situation oneself (Decety and Jackson, 2004; Rosenthal-von der Pütten et al., 2013). Recent studies of empathy and affective robotic interaction have found that humans have the ability to empathize with robots as they do with humans (Suzuki et al., 2015; Rosenthal-von der Pütten et al., 2014; Leite et al., 2013). One widely used model of empathy involves bottom-up emotion sharing and top-down executive control (Decety and Jackson, 2004; Damiano et al., 2015; Decety and Lamm, 2006). ...
... The activity of these brain areas also correlates with subjects' estimation of intensity for observed pain (Jackson et al., 2005;Jackson et al., 2006). When subjects observed painful actions towards robots, similar patterns of neural activation in the AI and ACC were observed (Rosenthal-Von der Pütten et al., 2014;Jackson et al., 2005;Jackson et al., 2006). ...
Article
Humans can show emotional reactions toward humanoid robots, such as empathy. Previous neuroimaging studies have indicated that neural responses of empathy for others' pain are modulated by an early automatic emotional-sharing process and a late controlled cognitive-evaluation process. Recent studies of pain empathy for robots found that humans show a similar empathy process toward humanoid robots under painful stimuli as toward humans. However, the whole-brain functional connectivity and the spatial dynamics of the neural activities underlying empathic processes are still unknown. In the present study, functional connectivity was investigated for ERPs recorded from 18 healthy adults who were presented with pictures of a human hand and a robot hand in painful and non-painful situations. Functional brain networks for both early and late empathy responses were constructed, and a new parameter, the empathy index (EI), was proposed to represent the empathy ability of humans quantitatively. We found that the mutual dependences between early ERP components were significantly decreased, but for the late components there were no significant changes. The mutual dependences for human hand stimuli were larger than for robot hand stimuli for early components, but not for late components. The connectivity weights for early components were larger than for late components. The EI value showed a significant difference between painful and non-painful stimuli, indicating that it is a good indicator of human empathy. This study enriches our understanding of the neurological mechanisms implicated in human empathy and provides evidence of functional connectivity for both early and late responses of pain empathy towards humans and robots.
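The abstract does not spell out how the mutual dependences or the EI are computed; the toy Python sketch below illustrates one plausible reading, using |Pearson r| between component amplitudes across trials as a stand-in dependence measure and defining EI as a painful-minus-nonpainful connectivity contrast (both are assumptions, not the paper's definitions):

```python
import numpy as np

# Toy sketch of an ERP functional-connectivity analysis. The paper's exact
# dependence measure and EI formula are not given here; as stand-ins we use
# |Pearson r| between component amplitudes across trials, and define EI as
# the painful-minus-nonpainful difference in mean connectivity.
def connectivity(amplitudes):
    """amplitudes: trials x components matrix -> mean |r| over pairs."""
    r = np.corrcoef(amplitudes, rowvar=False)        # components x components
    upper = np.abs(r[np.triu_indices_from(r, k=1)])  # off-diagonal pairs
    return upper.mean()

def empathy_index(painful, nonpainful):
    """Assumed EI: connectivity contrast between conditions."""
    return connectivity(painful) - connectivity(nonpainful)

rng = np.random.default_rng(0)
painful = rng.normal(size=(60, 4))     # e.g., 60 trials, 4 ERP components
nonpainful = rng.normal(size=(60, 4))
print(empathy_index(painful, nonpainful))
```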
... Because empathy underlies adequate social communication [42][43][44], it may help if a robot can raise empathy in its human users. Previous research investigated whether humans feel empathy for social robots in similar ways as they do for human beings in pain [45,46]. These studies, however, did not include facial expressions of humanoid robots. ...
... These studies, however, did not include facial expressions of humanoid robots. Rosenthal-von der Pütten's study [45] compared a human dressed in black and filmed from the back with Pleo, a baby toy-animal robot. Suzuki's study [46] compared cutting a human finger versus a robot finger with a knife. ...
... Attribution of feelings was measured with a set of six items [cf. 44, 45]: "X was in pain"; "X felt pleasure" (R); "X felt relaxed" (R); "X felt afraid"; "X suffered"; "X felt sad." A higher score indicated ...
(Figure caption: stimulus materials, neutral images for the human actress, robot Alice, and robot Nao/Zora, respectively.)
Article
Full-text available
Life-like humanoid robots are on the rise, aiming at communicative purposes that resemble humanlike conversation. In human social interaction, the facial expression serves important communicative functions. We examined whether a robot’s face is similarly important in human-robot communication. Based on emotion research and neuropsychological insights on the parallel processing of emotions, we argue that greater plasticity in the robot’s face elicits higher affective responsivity, more closely resembling human-to-human responsiveness than a more static face. We conducted a between-subjects experiment of 3 (facial plasticity: human vs. facially flexible robot vs. facially static robot) × 2 (treatment: affectionate vs. maltreated). Participants (N = 265; Mage = 31.5) were measured for their emotional responsiveness, empathy, and attribution of feelings to the robot. Results showed less intense empathic and emotional responsivity toward the robots than toward the human, but with similar patterns. Significantly different intensities of feelings and attributions (e.g., pain upon maltreatment) followed facial articulacy. Theoretical implications for underlying processes in human-robot communication are discussed. We theorize that the precedence of emotion and affect over cognitive reflection, which are processed in parallel, triggers the experience of ‘because I feel, I believe it’s real,’ despite being aware of communicating with a robot. By evoking emotional responsiveness, the cognitive awareness of ‘it is just a robot’ fades into the background and appears no longer relevant.
... Furthermore, Benninghoff and colleagues [32] showed that people are able to imagine humanoid robots as having a ToM and that this ToM attribution can, for example, influence people's perception of a robot's social attractiveness. Additionally, analyses of neural activation patterns showed partly similar results for human and robotic stimuli with regard to empathy [38], indicating that people's mental representations of humanoid robots might be comparable to those of humans to some extent. This attribution of a ToM could also be an important influencing factor in source credibility assessments. ...
... From this point of view, it seems that more realistically human-like robotic devices are needed (e.g., a more similar surface look, including skin) in order to observe more human-like credibility attributions. However, given that more realism in surface features does not necessarily correlate with better evaluations of robots [38], and that people are often willing to surrender to a willing suspension of disbelief, eliminating the need for a perfectly realistic design [48], a more detailed anthropomorphic embodiment is not necessarily a promising approach. On the contrary, a second explanation for our results involves the assumption of a human superiority effect. ...
Article
Full-text available
Source credibility is known as an important prerequisite to ensure effective communication (Pornpitakpan, 2004). Nowadays, not only humans but also technological devices such as humanoid robots can communicate with people and can likewise be rated credible or not, as reported by Fogg and Tseng (1999). While research related to the machine heuristic suggests that machines are rated more credible than humans (Sundar, 2008), an opposite effect in favor of humans’ information is supposed to occur when algorithmically produced information is wrong (Dietvorst, Simmons, and Massey, 2015). However, humanoid robots may be attributed more in line with humans because of their anthropomorphically embodied exterior compared to non-human-like technological devices. To examine these differences in credibility attributions, a 3 (source type) × 2 (information’s correctness) online experiment was conducted in which 338 participants were asked to rate the credibility of either a human, a humanoid robot, or a non-human-like device based on either correct or false communicated information. This between-subjects approach revealed that humans were rated more credible than social robots and smart speakers in terms of trustworthiness and goodwill. Additionally, results show that people’s attributions of theory of mind abilities were lower for robots and smart speakers and higher for humans, and these partly influence the attribution of credibility, alongside people’s reliance on technology, attributed anthropomorphism, and morality. Furthermore, no main or moderation effect of the information’s correctness was found. In sum, these insights offer hints of a human superiority effect and present relevant insights into the process of attributing credibility to humanoid robots.
... Top-down processes can furthermore alter the initial evaluation after it has been formed. Rosenthal-Von Der Pütten et al. [49] compared participants' brain activation patterns in response to viewing the abuse of a cardboard box, a robot and a human. They found that in the questionnaires participants attributed equal levels of emotion to the human and the robot, and reported feeling the same amount of empathy towards the robot and the human when they were mistreated. ...
... In contrast, fMRI scans showed greater activation in participants' right putamen when watching the human being mistreated than when watching the robot being mistreated. This area has been associated with empathy and emotional distress [49]. ...
Article
Full-text available
Mind perception is a fundamental part of anthropomorphism and has recently been suggested to be a dual process. The current research studied the influence of implicit and explicit mind perception on a robot’s right to be protected from abuse, both in terms of participants condemning abuse that befell the robot and in terms of participants’ tendency to humiliate the robot themselves. Results indicated that the acceptability of robot abuse can be manipulated through explicit mind perception, yet they are inconclusive about the influence of implicit mind perception. Interestingly, explicit attribution of mind to the robot did not make people less likely to mistreat the robot. This suggests that the relationship between a robot’s perceived mind and right to protection is far from straightforward, and has implications for researchers and engineers who want to tackle the issue of robot abuse.
... being kicked - showed that they reacted similarly when they watched people being hurt. In both cases, brain activity associated with affective empathising was observed [53, 121, 122], and it also included the activation of mirror neurons. Although their functions are still the subject of heated discussions, numerous studies indicate that the activity of mirror neurons is an important element of the empathising process [80, 104, 106]. ...
... At the same time, such activity was not observed when the same people watched the needles being stuck into tomatoes. Considering this example, together with the studies on empathising with robots mentioned in the paragraph above [53,121,122], it could be an important argument indicating that mirror systems respond to robots in a similar manner to which they respond to people, in contrast to "ordinary" objects [87,88]. This is probably due to the fact that, by incorporating robots into social relations, we begin to automatically treat them as members of our own group, as individuals [155, p. 5]. ...
Article
Full-text available
This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people's reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of the most important among them is a sense of group membership with robots, as it modulates the empathic responses to representatives of our own and other groups. The sense of group membership with robots may be co-shaped by socio-cognitive factors such as one's experience, familiarity with the robot and its history, motivation, accepted ontology, stereotypes or language. Finally, I argue in favour of the formulation of a pragmatic and normative framework for manipulations in the level of empathy in human-robot interactions.
... Articles on HRI in which the term 'empathy' appears have been rapidly accumulating in recent years (Malinowska, 2020, 2021). In most cases, researchers borrow the term from psychology (especially social psychology) (Riek, Rabinowitch, Chakrabarti, & Robinson, 2009a, 2009b; Niculescu, van Dijk, Nijholt, Li, & See, 2013; Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, & Eimler, 2013; Rosenthal-Von Der Pütten et al., 2014). HRI scholars also often use a highly reduced interpretation of empathy (Kozima, Nakagawa, & Yano, 2004; Leite et al., 2013). ...
... The term 'empathy' is often used to describe or study human-robot interactions on three levels: (1) on the level of declared beliefs, (2) on the behavioural level, and (3) on the level of neuronal activity (Darling, 2015, 2016; Darling et al., 2015; Malinowska, 2020, 2021; Scheutz & Arnold, 2016). As for the first point, there are many reports in both social media and empirical research that use the term 'empathy' to describe people's reactions to robots with which they interact (Rosenthal-von der Pütten et al., 2013, 2014; Darling, 2015). Regarding the second point, people's behaviour towards robots (for example, when they refuse to turn robots off, lock them in a closed space, damage them; or when they speak to and about robots in terms that lend them subjectivity and sentience) is often described and analysed with the use of the category of 'empathy' (Riek et al., 2009a, 2009b; Niculescu et al., 2013; Rosenthal-von der Pütten et al., 2013). ...
Article
Full-text available
Given that empathy allows people to form and maintain satisfying social relationships with other subjects, it is no surprise that this is one of the most studied phenomena in the area of human–robot interaction (HRI). But the fact that the term ‘empathy’ has strong social connotations raises a question: can it be applied to robots? Can we actually use social terms and explanations in relation to these inanimate machines? In this article, I analyse the range of uses of the term empathy in the field of HRI studies and social robotics, and consider the substantial, functional and relational positions on this issue. I focus on the relational (cooperational) perspective presented by Luisa Damiano and Paul Dumouchel, who interpret emotions (together with empathy) as being the result of affective coordination. I also reflect on the criteria that should be used to determine when, in such relations, we are dealing with actual empathy.
... As is well known, people naturally have social emotions towards computers [1] and show social behaviors towards them, such as praise and criticism, reciprocity, stereotypes, flattery or politeness [2]. With regard to negative behaviors, people's abusive and destructive actions towards computers decreased with the machines' intelligence [3], which led researchers to focus on why people show empathy towards computers [4] and how it is generated [5]. As technology improved and the experimental subjects changed from computers to more intelligent robots, the methods for measuring empathy became more diverse, including fMRI and electroencephalography (EEG). ...
Article
Full-text available
This study examined young children’s empathy towards an agent they had interacted with and agents they had not, and whether the interaction or the appearance of the agent is more relevant to children’s empathy. The entities were an interactive robot dog with a metal surface, a non-interactive stuffed toy dog (resembling a real dog) and a stone, all non-living. Preschoolers (5-6 years of age, N=69) watched videos of the three agents in different cuts: the agent being introduced, the agent struck by human hands, and the agent placed in a box that was struck by human hands. The preschoolers were then asked a list of questions to obtain data indicating their empathy towards each agent. The results revealed that the young children ascribed more anthropomorphism to the robot dog than to the stuffed toy dog, while their empathy towards the two did not differ significantly.
... Recent studies in MRI (Mensch-Roboter-Interaktion, i.e., human-robot interaction) research draw in this context on the basic assumptions of media equation theory (e.g., Brandstetter, 2017; Horstmann et al., 2018), for which a broad literature base and empirical foundation have since developed (Horstmann et al., 2018; Rosenthal-von der Pütten et al., 2014). Broadbent (2017, p. 640f.) ...
Book
Full-text available
This work focuses on employees' trust (and distrust) in collaborative robots (so-called cobots) in industrial workplaces. The empirical, interdisciplinary and application-oriented research approach draws on theories from various disciplines and on quantitative as well as qualitative research methods. The samples comprise employees at the operational and managerial levels in manufacturing companies as well as students. Among other things, the results illustrate the significant influence of linguistic framing on production employees' trust. The framing refers, on the one hand, to the humanization of a cobot and, on the other hand, to the perceived human-cobot relation in the tension between cooperation and competition. A trust-promoting effect of humanization occurs when employees see themselves in a cooperative relation to the cobot. Furthermore, person-specific and contextual factors that require closer investigation influence the effectiveness of the framing. Trust and distrust ultimately appear as conceptually distinct, multidimensional constructs that develop dynamically over time. This yields entrepreneurial and societal implications, also with regard to similar technologies and application contexts, as well as needs for subsequent application-oriented and theory-building research.
... And, on the neurological level, embodied robots activate our mirror neurons (Gazzola et al. 2007) and the brain states associated with mental model building (Krach et al. 2008). Robots famously solicit our empathic responses (Rosenthal-von der Pütten et al. 2014;Suzuki et al. 2015), and people tend to regulate their behavior around them as if they have moral status (Balle 2021). ...
Article
As robotic artifacts begin to appear in religious practice, they become compelling objects for digital religion studies. The authors explore current research on robotics in religion to develop conceptual and theoretical space for robots within digital religion. A brief history of automata in diverse religions provides important grounding for then reviewing foundational philosophical and culturally variable aspects of robots. The authors then introduce the most prominent religious robots employed in Buddhist and Christian contexts today. Within and beyond these religious contexts, robots pose unique opportunities and challenges: the authors catalogue and develop these within the frame of three central digital religion themes: identity, community, and authority. These endeavors set the stage for further analyses of robots in religion, starting with authenticity and ritual as additional themes in digital religion studies.
... On the one hand, the technical complexity and capabilities of a connected smart device and a robot are quite similar, so one may expect robots to be treated and perceived similarly to machines. On the other hand, some studies point to humans feeling some level of empathy towards robots (Menne & Schwab, 2018;Rosenthal-von der Pütten et al., 2014), and some soldiers have been burying their bomb disposal robots (Carpenter, 2013). At the same time, many researchers and companies seek to deploy mobile robots into human environments to accomplish useful tasks, but they often struggle to explain people's reactions to their robots' behavior. ...
... The brain is the place in which message reception, motivation, and memory take place (Huang, Kuo, Luu, Tucker, & Hsieh, 2015; Kranzler et al., 2018; Rosenthal-von der Pütten et al., 2014). Neuroimaging techniques, such as functional Magnetic Resonance Imaging (fMRI) or Electroencephalography (EEG), allow for monitoring of the neurocognitive processes engaged by different message contents occurring in our brain regions. ...
Article
Full-text available
Traditional psychological theories of message persuasion typically conclude that messages that are able to facilitate an optimal allocation of cognitive resources in the audience will increase memory encoding, will be better retrieved and recalled, and will likely be more persuasive. The growing competition in online advertising has led to a need to evaluate which types of banners are able to allocate cognitive resources more efficiently, as this has a positive impact on the ability to remember the banner and potentially increases the purchase frequency of the advertised product. By means of functional Magnetic Resonance Imaging (fMRI), this study provides the first evidence of neural differences during the exposure to and reimagination of two widely used banner appeals; namely, hedonic (i.e., banners that vividly emphasize the social, personal, and experiential benefits of buying the product) and utilitarian (i.e., banners focused on informative, convenient, and functional arguments). Our findings reveal that, when compared to utilitarian banners, hedonic static advertisements engage stronger neurocognitive processes, which translate into higher brain activations related to memory encoding and retrieval, ultimately correlating with higher recall. These findings support the design of static, hedonic banners to improve ad recall.
... Sympathetic and empathetic robots have become an increasingly popular topic of research within HRI. While a number of experiments have suggested that humans can feel sympathy or empathy for social robots (Riek et al., 2009;Leite et al., 2013;Rosenthal-von der Pütten et al., 2014;Leite, 2015;Ceh and Vanman, 2018;Menne and Schwab, 2018), the theoretical foundations of both the empathy and sympathy concepts, as well as their connections to ascriptions of moral standing, have been underexamined within the field of HRI. This paper will draw on philosophical, sociological, and psychological research to argue that not only are the concepts (and associated phenomena) of sympathy and empathy distinct, but that the tendency to employ one or both of these concepts without sufficiently clarifying in what sense they are intended has acted as a limiting factor on the progress of HRI research investigating these phenomena. ...
Article
Full-text available
This paper discusses the ethical nature of empathetic and sympathetic engagement with social robots, ultimately arguing that an entity which is engaged with through empathy or sympathy is engaged with as an “experiencing Other” and is as such due at least “minimal” moral consideration. Additionally, it is argued that extant HRI research often fails to recognize the complexity of empathy and sympathy, such that the two concepts are frequently treated as synonymous. The arguments for these claims occur in two steps. First, it is argued that there are at least three understandings of empathy, such that particular care is needed when researching “empathy” in human-robot interactions. The phenomenological approach to empathy—perhaps the least utilized of the three discussed understandings—is the approach with the most direct implications for moral standing. Furthermore, because “empathy” and “sympathy” are often conflated, a novel account of sympathy which makes clear the difference between the two concepts is presented, and the importance for these distinctions is argued for. In the second step, the phenomenological insights presented before regarding the nature of empathy are applied to the problem of robot moral standing to argue that empathetic and sympathetic engagement with an entity constitute an ethical engagement with it. The paper concludes by offering several potential research questions that result from the phenomenological analysis of empathy in human-robot interactions.
... In the debate on granting rights to robots, Turner has termed this an "argument from compassion" (96, 155). Psychological research has demonstrated that humans can empathize with robot "pain" [81], and other research has confirmed this finding [82,83]. Many people believe that it is morally questionable to act violently against robots. ...
Article
Full-text available
This paper addresses the following question: “Should violence against robots be banned?” Such a question is usually associated with a query concerning the moral status of robots. If an entity has moral status, then concomitant responsibilities toward it arise. Despite the possibility of a positive answer to the title question on the grounds of the moral status of robots, legal changes are unlikely to occur in the short term. However, if the matter regards public violence rather than mere violence, the issue of the moral status of robots may be avoided, and legal changes could be made in the short term. Prohibition of public violence against robots focuses on public morality rather than on the moral status of robots. The wrongness of such acts is not connected with the intrinsic characteristics of robots but with their performance in public. This form of prohibition would be coherent with the existing legal system, which eliminates certain behaviors in public places through prohibitions against acts such as swearing, going naked, and drinking alcohol.
... Moreover, simulated conditions in which participants abuse an agent directly and without any restrictions could not pass an ethics board (Bartneck & Keijsers, 2020). Therefore, as an agent abuse (or punishment) research method, the vignette-based approach, where participants read about abusive behaviors and then express their emotions toward the action described, or the video-based method, where participants are shown a video clip of agent abuse, is commonly used (Bartneck & Keijsers, 2020; Rosenthal-Von Der Pütten et al., 2014; Slater et al., 2006). ...
Article
Conversational agents (CAs) offer new functionality and convenience. While their sales have been soaring, they have also rapidly become victims of verbal abuse by their users. Without proper handling of abusive usage, abusers’ actions can be reinforced and transferred to real life. This study investigates whether alternative response styles, in terms of the empathy orientation and emotional expressivity of voice-activated virtual assistants, influence users’ moral emotions, which have been found to reduce verbal aggression, as well as whether they affect user perceptions of the agent’s capability. Ninety-eight participants were assigned to one of three emotional expressivity conditions (no facial expression, fixed facial expression, varied facial expression) and interacted with two alternative empathy orientation conditions (other-oriented, self-oriented) of agents. The experimental results show that, regardless of the emotional expressivity type, the agent’s empathy orientation has a significant effect on moral emotions and perceptions of agent capability. Overall, an agent that employed an other-oriented empathy style elicited the most positive responses from users. However, the preference was not across the board, as about one-third of the participants showed a preference for the self-oriented CA. Users valued agents’ verbal contents and vocal characteristics above their facial expressions. Based on the study findings, we draw several design guidelines and suggest avenues for future research.
... Our brains are wired to recognize social signals even when they are associated with machines, and humanoid robots are a perfect target for that. For instance, research showed that humans and robots elicit similar neural activation patterns in limbic structures when observers watch videos of affectionate interactions [70]. Indeed, we are designing social robots based on human social patterns. ...
Article
Full-text available
Attentional control does not have fix functioning and can be strongly impacted by the presence of other human beings or humanoid robots. In two studies, this phenomenon was investigated while focusing exclusively on robot gaze as a potential determinant of attentional control along with the role of participants’ anthropomorphic inferences toward the robot. In study 1, we expected and found higher interference in trials including a direct robot gaze compared to an averted gaze on a task measuring attentional control (Eriksen flanker task). Participants’ anthropomorphic inferences about the social robot mediated this interference. In study 2, we found that averted gazes congruent with the correct answer (same task as study 1) facilitated performance. Again, this effect was mediated by anthropomorphic inferences. These two studies show the importance of anthropomorphic robotic gaze on human cognitive processing, especially attentional control, and also open new avenues of research in social robotics.
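The mediation claim in this abstract (anthropomorphic inferences mediating the gaze effect on attentional control) is commonly tested by bootstrapping the indirect effect. The following sketch is a minimal illustration with simulated data; the variable names and the ordinary-least-squares estimation are assumptions, not the authors' pipeline:

```python
import numpy as np

# Illustrative bootstrap test of a simple mediation model
# (gaze condition -> anthropomorphic inferences -> interference);
# data and variable names are hypothetical.
def indirect_effect(x, m, y):
    """Product of paths a (x -> m) and b (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """95% percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = [
        indirect_effect(x[idx], m[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    return np.percentile(effects, [2.5, 97.5])

rng = np.random.default_rng(1)
gaze = rng.integers(0, 2, 120).astype(float)   # 0 = averted, 1 = direct
anthro = 0.5 * gaze + rng.normal(size=120)     # mediator
interference = 0.6 * anthro + rng.normal(size=120)
print(bootstrap_ci(gaze, anthro, interference))  # CI excluding 0 -> mediation
```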
... On the other hand, there are also differences between people's interactions with machines and humans (Krämer et al., 2012), which several studies show (Bartneck et al., 2005;Blascovich et al., 2002;Gallagher et al., 2002;Gazzola et al., 2007;Lucas et al., 2014;Rosenthal-von der Pütten et al., 2014). According to Blascovich (2002), people only respond socially to a person or an avatar controlled by a person. ...
Article
Previous research focused on differences between interacting with a person-controlled avatar and a computer-controlled virtual agent. This study however examines an aspiring form of technology called agent representative which constitutes a mix of the former two interaction partner types since it is a computer agent which was previously instructed by a person to take over a task on the person's behalf. In an experimental lab study with a 2 × 3 between-subjects-design (N = 195), people believed to study either together with an agent representative, avatar, or virtual agent. The interaction partner was described to either possess high or low expertise, while always giving negative feedback regarding the participant's performance. Results show small but interesting differences regarding the type of agency. People attributed the most agency and blame to the person(s) behind the software and reported the most negative affect when interacting with an avatar, which was less the case for a person's agent representative and the least for a virtual agent. Level of expertise had no significant effect and other evaluation measures were not affected.
... Additional studies have investigated neural activation patterns related to emotional reactions when people observe a robot or a human in a violent situation. The fMRI data showed no differences in activation patterns in areas of emotional resonance whether a violent action was experienced by a human or a robot (Rosenthal-von der Pütten et al., 2014). Suzuki et al. (2015) found a similar brain response measured through EEG when observing images showing either a finger of a robotic hand or a human hand being cut with scissors. ...
Article
Full-text available
The effectiveness of social robots has been widely recognized in different contexts of humans’ daily life, but still little is known about the brain areas activated by observing or interacting with a robot. Research combining neuroscience, cognitive science and robotics can provide new insights into both the functioning of our brain and the implementation of robots. Behavioural studies on social robots have shown that the social perception of robots is influenced by at least two factors: physical appearance and behavior (Marchetti et al., 2018). How can neuroscience explain such findings? To date, studies have been conducted through the use of both EEG and fMRI techniques to investigate the brain areas involved in human-robot interaction. These studies have mainly addressed brain activations in response to paradigms involving either action performance or an emotional component.
... One perspective is that users assign humanness and social characteristics to intelligent agents and perceive it positively [3,4]. Conversely, such systems also trigger perceptions of threat [5,6]. According to Rzepka and Benedikt [2] this phenomenon can be explained by the uncanny valley hypothesis [7,8]. ...
... Based on interaction norms between humans, people treat robots as social partners in many respects. Thus, robots are expected to behave in a socially acceptable manner and to comply with social rules to some extent (e.g., the Computers Are Social Actors paradigm; Nass and Moon, 2000; Rosenthal-von der Pütten et al., 2014). Thereby, user characteristics, among other factors (e.g., personality; Walters et al., 2005), were found to influence individual reactions to robots. ...
Article
Full-text available
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is a significant psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with robotic systems. In a laboratory experiment, the influence of propensity to trust in automation and negative attitudes towards robots on state anxiety, trust, and comfort distance towards a robot were explored. Therefore, participants were approached by a humanoid domestic robot two times and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. Equally, the mediation from the propensity to trust to initial learned trust by state anxiety provides an insight into the psychological processing mechanisms through which personality traits operate and determine interindividual outcomes in human-robot interaction. The findings underline the meaningfulness of user characteristics as predictors for the initial approach to robots and the importance of considering users’ individual learning history regarding technology and robots in specific.
... In an empirical experiment, for instance, Rosenthal-von der Pütten et al. (2014) showed participants 10-s video clips presenting three different situations. The first was an interaction between a human and another human (HHI), the second an interaction between a human and a robot (HRI), and the last an interaction between a human and a box (HBI). ...
Article
Full-text available
How do we enter into empathic relations with inanimate objects (IO)? Do we indirectly infer that they possess mental states, or directly perceive them as mental things? In recent years these questions have been addressed by a number of authors. Some argue in favor of an indirect approach that involves mediatory procedures; others defend a direct approach that postulates no intermediate. In this paper I argue on the side of the latter. I show that Simulation Theory (ST), one of the most elaborated versions of the indirect approach, does not have the capacity to account for our empathy with IO. Investigating ST paves the way for criticizing a special kind of indirect theory, namely Imaginative Perception, which is tailored specifically to fit the problem. Both of these indirect theories face more or less similar problems. Motor Imagining is another indirect approach that must be considered, but in spite of its capacity to overcome some of the aforementioned problems, it suffers from over-inclusiveness. In contrast with these indirect approaches, I develop a phenomenologically inspired framework for empathy, according to which the scope of objects with which we can enter into empathic relations is broadened to include IO. I argue that this direct framework is a more promising way of addressing the problem of empathic engagement with IO.
Article
Full-text available
The challenge of long-term interaction between humans and robots is still a bottleneck in service robot research. To gain an understanding of sustained relatedness with robots, this study proposes a conceptual framework for bond formation. More specifically, it addresses the dynamics of children bonding with robotic pets as the basis for certain services in healthcare and education. The framework presented herein offers an integrative approach and draws from theoretical models and empirical research in Human Robot Interaction and also from related disciplines that investigate lasting relationships, such as human-animal affiliation and attachment to everyday objects. The research question is how children’s relatedness to personified technologies occurs and evolves and what underpinning processes are involved. The subfield of research is child-robot interaction, within the boundaries of social psychology, where the robot is viewed as a social agent, and human-system interaction, where the robot is regarded as an artificial entity. The proposed framework envisions bonding with pet-robots as a socio-affective process towards lasting connectedness and emotional involvement that evolves through three stages: first encounter, short-term interaction and lasting relationship. The stages are characterized by children’s behaviors, cognitions and feelings that can be identified, measured and, maybe more importantly, managed. This model aims to integrate fragmentary and heterogeneous knowledge into a new perspective on the impact of robots in close and enduring proximity to children.
Chapter
Social robotics is undergoing significant transformations and a new class of tools is emerging. Though most technologies related to the physical aspects are becoming well understood, human-robot interaction is still far from human-level skills. However, there is growing pressure from society to absorb these sophisticated technologies. As robots move from factories to homes, the study and optimization of Human-Robot Interaction (HRI) becomes an increasingly important factor. The key issue for success is ensuring that users are interested in and satisfied by the devices they own. The issue of technological acceptance has been thoroughly studied in the context of human-computer interaction. Robots, on the other hand, can use any natural communication channel employed by their users, resulting in much higher potential for user-adapted behavior. Thus, it becomes interesting to study the phenomenon of user-adaptivity in the context of HRI. User-adaptive systems are based on information from the users, usually (but not necessarily) contained in user models. This model encodes the attributes of the user that are relevant to the operation of the system, such as the user’s expertise level and preferences. This information is used by the system to generate behavior that conforms to the idiosyncrasies of the user, resulting in higher levels of user satisfaction and acceptance. User-adaptive systems have been shown to be easier to accept by end-users than non-adaptive ones. This article gives an overview of the use of user-adaptive techniques in robotic systems, based on an analysis of a number of recent scientific and technological works.
Article
In a digitally empowered business world, a growing number of family businesses are leveraging the use of chatbots in an attempt to improve customer experience. This research investigates the antecedents of chatbots’ successful use in small family businesses. Subsequently, we determine the effect of two distinctive sets of human–machine communication factors—functional and humanoid—on customer experience. We assess the latter with respect to its effect on customer satisfaction. While a form of intimate attachment can occur between customers and small businesses, affective commitment is prevalent in customers’ attitudes and could be conflicting with the distant and impersonal nature of chatbot services. Therefore, we also test the moderating role of customers’ affective commitment in the relationship between customer experience and customer satisfaction. Data come from 408 respondents, and the results offer an explicit course of action for family businesses to effectively embed chatbot services in their customer communication. The study provides practical and theoretical insights that stipulate the dimensions of chatbots’ effective use in the context of small family businesses.
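The moderation test mentioned here (affective commitment moderating the link between customer experience and satisfaction) boils down to estimating an interaction term. A minimal sketch with simulated data follows; the variable names and the OLS estimation are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

# Illustrative moderation test: does affective commitment moderate the
# customer experience -> satisfaction link? Hypothetical data; the
# interaction term carries the moderation effect.
rng = np.random.default_rng(2)
n = 408                                  # sample size from the abstract
experience = rng.normal(size=n)
commitment = rng.normal(size=n)
satisfaction = (0.5 * experience + 0.2 * commitment
                - 0.3 * experience * commitment + rng.normal(size=n))

# OLS with main effects and the experience x commitment interaction.
X = np.column_stack([np.ones(n), experience, commitment,
                     experience * commitment])
coefs, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print("interaction coefficient:", coefs[3])  # nonzero -> moderation
```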
Article
Full-text available
A crucial philosophical problem of social robots is to what extent they perform a kind of sociality in interacting with humans. Scholarship diverges between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach called “Δ phenomenology” of HSRI (Human–Social Robot Interaction). In the first part of the paper, we will analyse the semantics of an HSRI, that is, what leads a human being (x) to assign or receive a meaning of sociality (z) by interacting with a social robot (y). Hence, we will question the ontological structure underlying HSRIs, suggesting that HSRIs may lead to a peculiar kind of user alienation. By combining all these variables, we will formulate some final recommendations for an ethics of social robots.
Article
Realistic-looking humanoid love and sex dolls have been available on a somewhat secretive basis for at least three decades. But today the industry has gone mainstream, with North American, European, and Asian producers using mass customization and competing on the bases of features, realism, price, and depth of product lines. As a result, realistic life-size artificial companions are becoming more affordable to purchase and more feasible to patronize on a service basis. Sexual relations may be without equal when it comes to emotional intimacy. Yet the increasingly vocal and interactive robotic versions of these dolls feel nothing. They may nevertheless induce emotions in users that potentially surpass the pleasure of human-human sexual experiences. The most technologically advanced love and sex robots are forecast to sense human emotions and gear their performances of empathy, conversation, and sexual activity accordingly. I offer a model of how this might be done to provide a better service experience. I compare the nature of the resulting “artificial emotions” by robots to natural emotions by humans. I explore the ethical issues entailed in offering love and sex robot services with artificial emotions and offer a conclusion and recommendations for service management and for further research.
Article
Full-text available
Due to its immense popularity amongst marketing practitioners, online personalized advertising is increasingly becoming the subject of academic research. Although advertisers need to collect a large amount of customer information to develop customized online adverts, the effect of how this information is collected on advert effectiveness has been surprisingly understudied. Equally overlooked is the interplay between consumer’s emotions and the process of consumer data collection. Two studies were conducted with the aim of closing these important gaps in the literature. Our findings revealed that overt user data collection techniques produced more favourable cognitive, attitudinal and behavioral responses than covert techniques. Moreover, consistent with the self-validation hypothesis, our data revealed that the effects of these data collection techniques can be enhanced (e.g., via happiness and pride), attenuated (e.g., via sadness), or even eliminated (e.g., via guilt), depending on the emotion experienced by the consumer while viewing an advert.
Article
Twenty years ago, we reflected on the potential of psychological research in the area of embodied conversational agents and systematized the variables that need to be considered in empirical studies. We gave an outlook on potential and necessary research by taking into account the independent variables behavior and appearance of the embodied agent, by referring to the dependent variables acceptance, efficiency and effects on behavior, and by summarizing moderating variables such as task and individual differences. Twenty years later, we now give an account of what has been found and how the field has developed, suggesting avenues for future research.
Article
Full-text available
In this paper, we theoretically address the relevance of unintentional and inconsistent interactional elements in human-robot interactions. We argue that elements failing, or poorly succeeding, to reproduce a humanlike interaction create significant consequences in human-robot relational patterns and may affect human-human relations. When considering social interactions as systems, the absence of a precise interactional element produces a general reshaping of the interactional pattern, eventually generating new types of interactional settings. As an instance of this dynamic, we study the absence of metacommunicative abilities in social artifacts. Then, we analyze the pragmatic consequences of the aforementioned absence through the lens of Paul Watzlawick’s interactionist theory. We suggest that a fixed complementary interactional setting may be produced because of the asymmetric understanding, between robots and humans, of metacommunication. We highlight the psychological implications of this interactional asymmetry within Jessica Benjamin’s concept of “mutual recognition”. Finally, we point out the possible shift of dysfunctional interactional patterns from human-robot interactions to human-human ones.
Article
Extant grief studies examine the way humans mourn the loss of a nonhuman, be it an animal, object, or abstract concept. Yet little is known about grief when it comes to robots. As humans are increasingly brought into contact with more human-like machines, it is relevant to consider the nature of our relationship to these technologies. Centered on a qualitative analysis of 35 films, this study seeks to determine whether humans experience grief when a robot is destroyed, and if so, under what conditions. Our observations of the relevant film scenes suggest that eight variables play a role in determining whether and to what extent a human experiences grief in response to a robot's destruction. As a result, we have devised a psychological mechanism by which different types of grief can be classified as a function of these eight variables.
Article
Full-text available
An interesting aspect of love and sex (and other types of interactions) with robots is that human beings often treat robots as animate and express emotions towards them. In this paper, we discuss two interpretations of why people experience emotions towards robots and tend to treat them as animate: naturalistic and antinaturalistic. We first provide a set of examples that illustrate human beings considering robots animate and experiencing emotions towards them. We then identify, reconstruct and compare naturalist and antinaturalist accounts of these attitudes and point out the functions and limitations of these accounts. Finally, we argue that in the case of emotional and ‘animating’ human–robot interactions, naturalist and antinaturalist accounts should be – as they most often are – considered complementary rather than competitive or contradictory.
Article
Full-text available
Conversational agents and smart speakers have grown in popularity, offering a variety of functions that are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper asks whether smart speakers can elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device, compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience, indicating a rather universal effect, which confirms the basic idea of the media equation. Implications for users, developers and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.
Article
Full-text available
The aim of this paper is to present a study in which we compare the degree of empathy that a convenience sample of university students expressed with humans, animals, robots and objects. The present study broadens the spectrum of empathy-eliciting elements explored previously while at the same time comparing different facets of empathy. Here we used video clips of mistreated humans, animals, robots, and objects to elicit empathic reactions and to measure attributed emotions. The use of such a broad spectrum of elements allowed us to infer the role of different features of the selected elements, specifically experience (how much the element is able to understand the events of the environment) and degree of anthropo-/zoomorphization. The results show that participants expressed empathy differently with the various social actors being mistreated. A comparison between the present results and previous results on vicarious feelings shows that congruence between self and other experience did not always hold and was modulated by familiarity with robotic artefacts of daily usage.
Chapter
The demographic change and the decrease in care personnel have led to discussions about implementing robots to support older adults. To ensure sustainable use, the solutions must be accepted. Technology acceptance is dealt with in different models, but little attention has been paid to the emotions that older adults have toward service robots that support everyday or care activities. The simulated robot study examined the positive and negative emotions and the attitudes of 142 older adults toward robots in different situations and with robots of different appearances. The situation influenced both emotions and attitudes. The older adults expressed more negative emotions and a more negative attitude in a care situation. In terms of appearance, less uncanniness and higher usage intention were reported for the human-like and android robots. The results contribute to a deeper understanding of robot acceptance and should be considered in the development of service robots for older adults in the future. Furthermore, the results should be validated in vivo with existing robots.
Chapter
Demographic change is leading to a higher proportion of older adults. The health and care sector must adapt because diseases and functional limitations increase with age. Strategies are required to promote and improve the functional capacity of older adults. A key factor in protecting health is regular physical activity. However, older adults do not move enough. It is therefore important to develop strategies to encourage older adults to be physically active. Regular motivation and guidance are helpful, but often not feasible due to staff shortages and the high costs of personal trainers. One solution is to use technology. Several studies have shown the positive effects of robots as instructors and motivators for physical activity. However, it has not been examined whether this is suitable in the private household. Therefore, this explorative user study investigated whether a socially assistive robot could be a practical solution to motivate older adults living independently to exercise regularly. Seven older adults participated. They trained for one week with the robot as an instructor. The participants enjoyed the robot, but some technical difficulties occurred, such as slowness and problems with communication, face recognition, stability, and acoustics. The participants experienced the robot as motivating, but they expected habituation effects. Even if the robot used was not suitable for autonomous training at home, this research can help find new ways to motivate older adults to engage in regular physical activity and improve technical solutions with the involvement of older adults.
Article
Full-text available
The novel pathogen Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the cause of coronavirus disease (COVID-19), has created a global crisis. Currently, the limits of public health systems and medical know-how have been exposed. COVID-19 has challenged our best minds, forcing them to return to the drawing board. Fear of infection leading to possible lifelong morbidity or death has embedded itself in the collective imagination, leading to both altruistic and maladaptive behaviours. Although COVID-19 has been a global concern, its advent is a defining moment for artificial intelligence. Medicorobots have been increasingly used in hospitals during the last twenty years. Their various applications have included logistic support, feeding, nursing support and surface disinfection. In this article we examine how COVID-19 is reframing technology/human interactions via medicorobots, and the future implications of this relationship. In the last section we predict possible developments in artificial intelligence and how they may benefit future humanity in medicine.
Chapter
Full-text available
This paper discusses some challenges for the design of sympathetic robots, including a lack of conceptual clarity and the difficulty of quantifying an instance of sympathy. Candace Clark’s sociological sympathy account is offered as a solid theoretical basis for advancing the design and acceptance of sympathetic robots. The aspects of her theory relevant to HRI and some potential challenges in its implementation are discussed, as well as the alarming potential for sympathetic robots to act ideologically imperialistic, or to ‘nudge’ users across cultural boundaries.
Chapter
Full-text available
In 2017, the Protestant Church in Germany presented the robot priest “BlessU2” to the participants of the Deutscher Kirchentag in Wittenberg. This generated a number of important questions on key themes of religion(s) in digital societies: Are robots legitimized and authorized to pronounce blessings on humans—and why? To answer such questions, one must first define the interrelationship of technology, religion and the human being. Paul Tillich (1886–1965) referred to the polarization of autonomy and heteronomy by raising the issue of theonomy: the first step on the way to critical research on representing the divine in robotic technology.
Article
Service robots are on the rise. Technological advances in engineering, artificial intelligence, and machine learning enable robots to take over tasks traditionally carried out by humans. Despite the rapid increase in the employment of robots, there are still frequent failures in the performance of service robots. This research examines how and to what extent people attribute responsibility to service robots in cases of service failure, compared to when humans commit the same failures. Participants were randomly assigned to read vignettes describing a service failure either by a service robot or a human and were then asked who they thought was responsible for the service failure. The results of three experiments show that people attributed less responsibility to a robot than to a human for the service failure because people perceive robots to have less controllability over the task. However, people attributed more responsibility to a service firm when a robot delivered a failed service than when a human delivered the same failed service. This research advances theory regarding the perceived responsibility of humans versus robots in the service sector, as well as the perceived responsibility of the firms involved. There are also important practical considerations raised by this research, such as how utilizing service robots may influence customer attitudes toward service firms.
Chapter
In an increasingly globalized world, remote communication becomes ever more crucial. Hence, telepresence robots gain importance as they simplify attending important but distant events. However, little research has examined whether human affinity towards a person differs when interacting with them directly or through a telepresence robot. This work therefore aims to investigate whether there is a difference and, if so, what may cause it. A concurrent nested mixed-method study was performed: a tour guided by a student was conducted with 102 participants through a part of a university building. Forty-one subjects experienced the tour through the telepresence robot Double 2, whereas another 41 subjects did the tour in person. The Multidimensional Mood State Questionnaire was used before and after the tour to detect whether the tour had an impact on subjects’ mood. Human affinity was measured, inter alia, through a hypothetical injury scenario involving the guide in a questionnaire. In addition, information about the robot and the tour was collected. We found indications that there is no difference in perceived human affinity whether people interact through a telepresence robot or in person. Moreover, new hypotheses refining the original one were established. Furthermore, the human guide will be replaced by a robot guide in a future study. Thereby, future work aims to lay the foundation of “human-robot-robot” interaction with respect to human affinity, which has not yet been investigated.
Article
Full-text available
Although robots are starting to enter into our professional and private lives, little is known about the emotional effects which robots elicit. However, insights into this topic are an important prerequisite when discussing, for example, ethical issues regarding the question of what role we (want to) allow robots to play in our lives. In line with the Media Equation, humans may react towards robots as they do towards humans, making it all the more important to carefully investigate the preconditions and consequences of contact with robots. Based on assumptions on the socialness of reactions towards robots and anecdotal evidence of emotional attachments to robots (e.g. Klamer and BenAllouch in Trappl R. (ed.), Proceedings of EMCSR 2010, Vienna, 2010; Klamer and BenAllouch in Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI-2010), Atlanta, GA. ACM, New York, 2010; Kramer et al. in Appl. Artif. Intell. 25(6): 474-502, 2011), we conducted a study that provides further insights into the question of whether humans show emotional reactions towards Ugobe's Pleo, which is shown in different situations. We used a 2 × 2 design with one between-subjects factor "prior interaction with the robot" (never seen the robot before vs. 10-minute interaction with the robot) and a within-subject factor "type of video" (friendly interaction video vs. torture video). Following a multi-method approach, we assessed participants' physiological arousal and self-reported emotions as well as their general evaluation of the videos and the robot. In line with our hypotheses, participants showed increased physiological arousal during the reception of the torture video as compared to the normal video. They also reported fewer positive and more negative feelings after the torture video and expressed empathic concern for the robot. It appears that acquaintance with the robot does not play a role, as "prior interaction with the robot" showed no effect.
Article
Full-text available
Against the background of the uncanny valley hypothesis [1] and its conceptual shortcomings this study aims at identifying design characteristics which determine the evaluation of robots. We conducted a web-based survey with standardized pictures of 40 robots which were evaluated by 151 participants. A cluster analysis revealed six clusters of robots. The results are discussed with regard to implications for the uncanny valley hypothesis.
Article
Full-text available
Much social behavior is predicated upon assumptions an actor makes about the knowledge, beliefs and motives of others. To note just a few examples, coordinated behavior of the kind found in bargaining and similar structured interactions (Dawes, McTavish, & Shaklee, 1977; Schelling, 1960) requires that participants plan their own moves in anticipation of what their partners' moves are likely to be; predicting another's moves requires extensive assumptions about what the other knows, wants, and believes. Similarly, social comparison theory (Festinger, 1950; Festinger, 1954; Woods, 1988) postulates that people evaluate their own abilities and beliefs by comparing them with the abilities and beliefs of others -- typically with abilities and beliefs that are normative for relevant categories of others. In order to make such comparisons, the individual must know (or think he or she knows) how these abilities and beliefs are distributed in those populations. Reference group theory (Merton & Kitt, 1950) incorporates a similar set of assumptions. In communication, the fundamental role of knowing what others know is …
Conference Paper
Full-text available
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.
Article
Full-text available
An individual has a theory of mind if he imputes mental states to himself and others. A system of inferences of this kind is properly viewed as a theory because such states are not directly observable, and the system can be used to make predictions about the behavior of others. As to the mental states the chimpanzee may infer, consider those inferred by our own species, for example, purpose or intention, as well as knowledge, belief, thinking, doubt, guessing, pretending, liking, and so forth. To determine whether or not the chimpanzee infers states of this kind, we showed an adult chimpanzee a series of videotaped scenes of a human actor struggling with a variety of problems. Some problems were simple, involving inaccessible food – bananas vertically or horizontally out of reach, behind a box, and so forth – as in the original Kohler problems; others were more complex, involving an actor unable to extricate himself from a locked cage, shivering because of a malfunctioning heater, or unable to play a phonograph because it was unplugged. With each videotape the chimpanzee was given several photographs, one a solution to the problem, such as a stick for the inaccessible bananas, a key for the locked up actor, a lit wick for the malfunctioning heater. The chimpanzee's consistent choice of the correct photographs can be understood by assuming that the animal recognized the videotape as representing a problem, understood the actor's purpose, and chose alternatives compatible with that purpose.
Article
Full-text available
Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. This idea was studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking "Robovie" robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition.
Article
Full-text available
In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants, during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: First, the situational context, which is not only determined by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which is partly depending on a perceiver's gender, personality, and cultural as well as educational background.
Conference Paper
Full-text available
Valerie the roboceptionist is the most recent addition to Carnegie Mellon's social robots project. A permanent installation in the entranceway to Newell-Simon hall, the robot combines useful functionality - giving directions, looking up weather forecasts, etc. - with an interesting and compelling character. We are using Valerie to investigate human-robot social interaction, especially long-term human-robot "relationships". Over a nine-month period, we have found that many visitors continue to interact with the robot on a daily basis, but that few of the individual interactions last for more than 30 seconds. Our analysis of the data has indicated several design decisions that should facilitate more natural human-robot interactions.
Article
Full-text available
Robots have been introduced into our society, but their social role is still unclear. A critical issue is whether the robot’s exhibition of intelligent behaviour leads to the users’ perception of the robot as being a social actor, similar to the way in which people treat computers and media as social actors. The first experiment mimicked Stanley Milgram’s obedience experiment, but on a robot. The participants were asked to administer electric shocks to a robot, and the results show that people have fewer concerns about abusing robots than about abusing other people. We refined the methodology for the second experiment by intensifying the social dilemma of the users. The participants were asked to kill the robot. In this experiment, the intelligence of the robot and the gender of the participants were the independent variables, and the users’ destructive behaviour towards the robot the dependent variable. Several practical and methodological problems compromised the acquired data, but we can conclude that the robot’s intelligence had a significant influence on the users’ destructive behaviour. We discuss the encountered problems and suggest improvements. We also speculate on whether the users’ perception of the robot as being “sort of alive” may have influenced the participants’ abusive behaviour.
Article
Full-text available
Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
Conference Paper
Full-text available
In this paper the WASABI Affect Simulation Architecture is introduced, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of their connotative meaning in PAD space, we not only assure their mood-congruent elicitation, but also combine them with facial expressions that are concurrently driven by the primary emotions. An empirical study showed that human players in the Skip-Bo scenario judge our virtual human MAX significantly older when secondary emotions are simulated in addition to primary ones.
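To make the categorization step tangible (a continuous state in pleasure-arousal-dominance space subsequently mapped to a discrete emotion), here is a minimal sketch assuming a nearest-anchor rule; the anchor coordinates are invented for illustration and are not WASABI's actual values.

```python
import math

# Illustrative anchor points for discrete primary emotions in PAD space
# (pleasure, arousal, dominance), each in [-1, 1]. These coordinates are
# made up for the example, not taken from the WASABI architecture.
EMOTION_ANCHORS = {
    "happy":   ( 0.8,  0.6,  0.4),
    "angry":   (-0.6,  0.8,  0.6),
    "sad":     (-0.7, -0.4, -0.5),
    "relaxed": ( 0.6, -0.5,  0.3),
    "fearful": (-0.7,  0.7, -0.6),
}

def categorize(pad):
    """Map a continuous PAD state to the nearest discrete emotion label."""
    return min(EMOTION_ANCHORS,
               key=lambda emotion: math.dist(pad, EMOTION_ANCHORS[emotion]))

print(categorize((0.7, 0.5, 0.2)))  # -> "happy"
```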
Conference Paper
Full-text available
In human-computer interaction, social behaviors towards computers such as flattery, reciprocity, and politeness have been observed [1]. In order to determine whether these results can be replicated when interacting with embodied conversational agents (ECAs), we conducted an experimental study. 63 participants evaluated the ECA Max after a 10-minute conversation. The interview situation was manipulated in three conditions: being questioned by Max himself, being questioned by paper-and-pencil questionnaire in the same room facing Max, and being questioned by means of a paper-and-pencil questionnaire in another room. Results show that participants were more polite to the ECA, in terms of a better evaluation, when they were questioned by Max himself compared to when they were questioned more indirectly by paper-and-pencil questionnaire in the same room. In contrast to previous studies [2], it was ruled out that some participants thought of the programmer when they were asked to evaluate the ECA. Additionally, user variables (e.g. gender, computer literacy) showed an impact on the evaluation of the ECA.
Conference Paper
Full-text available
This paper presents a new experimental paradigm for the study of human-computer interaction. Five experiments provide evidence that individuals' interactions with computers are fundamentally social. The studies show that social responses to computers are not the result of conscious beliefs that computers are human or human-like. Moreover, such behaviors do not result from users' ignorance or from psychological or social dysfunctions, nor from a belief that subjects are interacting with programmers. Rather, social responses to computers are commonplace and easy to generate. The results reported here present numerous and unprecedented hypotheses, unexpected implications for design, new approaches to usability testing, and direct methods for verification.
Conference Paper
Full-text available
Robotic systems represent new capabilities that justifiably excite technologists and problem holders in many areas. But what affordances do the new capabilities represent and how will problem holders and practitioners exploit these capabilities as they ...
Conference Paper
Full-text available
In this paper, we study the role of emotions and expressive behaviour in socially interactive characters employed in educational games; more specifically, how such emotional behaviour can be used to help users better understand the game state. An emotion model for these characters, which is mainly influenced by the current state of the game and is based on the emotivector anticipatory mechanism, was developed. We implemented the model in a social robot named iCat, using chess as the game scenario. The results of a preliminary evaluation suggested that the emotional behaviour embedded in the character indeed helped the users to have a better perception of the game.
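The emotivector mechanism is anticipatory: the character maintains an expectation about a salient value (here, how favorably the game is going), compares it against the observed outcome, and expresses the mismatch. The sketch below is a heavily simplified caricature of such a loop, with assumed thresholds and an assumed smoothing update, not the published model.

```python
def emotivector_step(expected: float, observed: float, alpha: float = 0.5):
    """One anticipatory step: compare prediction with outcome, pick a display,
    and update the expectation (simple exponential smoothing)."""
    error = observed - expected
    if abs(error) < 0.1:
        display = "neutral"            # outcome roughly as anticipated
    elif error > 0:
        display = "pleasant surprise"  # better than anticipated
    else:
        display = "distress"           # worse than anticipated
    new_expected = expected + alpha * error  # move expectation toward outcome
    return display, new_expected

# Example: the character expected a balanced game (0.0), but the position
# swings in its favor (+0.6).
display, expectation = emotivector_step(expected=0.0, observed=0.6)
print(display, expectation)  # -> pleasant surprise 0.3
```

In the chess scenario, the observed value would plausibly come from a board evaluation after each move, so the robot's expressive display tracks the unfolding game.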
Conference Paper
Full-text available
The EU project SERA (Social Engagement with Robots and Agents) provided the unique opportunity to collect real field data of people interacting with a robot companion in their homes. In the course of three iterations, altogether six elderly participants took part. Following a multi-methodological approach, the continuous quantitative and qualitative description of user behavior on a very fine-grained level gave us insights into when and how people interacted with the robot companion. Post-trial semi-structured interviews explored how the users perceived the companion and revealed their attitudes. Based on this large data set, conclusions can be drawn on whether people show signs of bonding and how their relation to the robot develops over time. Results indicate large inter-individual differences with regard to interaction behavior and attitudes. Implications for research on companions are discussed.
Article
Full-text available
It is important to identify how much the appearance of a humanoid robot affects human behaviors toward it. We compared participants' impressions of and behaviors toward two real humanoid robots in simple human-robot interaction. These two robots have different appearances but are controlled to perform the same recorded utterances and motions, which are adjusted by using a motion capturing system. We conducted an experiment where 48 human participants participated. In the experiment, participants interacted with the two robots one by one and also with a human as a reference. As a result, we found that the different appearances did not affect the participants' verbal behaviors but did affect their non-verbal behaviors such as distance and delay of response. These differences are explained by two factors, impressions and attributions.
Article
Full-text available
As it becomes more and more feasible that artificial entities like robots or agents will soon be part of our daily lives, an essential challenge is to advance the sociability of artifacts. Against this background, a pivotal goal of the SERA project was to develop a theoretical framework for sociable companions as well as for human-artifact interaction. In discussing several levels of sociability from a theoretical point of view, we critically reflect on whether human-companion interaction has to build on basic principles of human-human interaction. Alternative approaches are presented. It is discussed whether a “theory of companions” is necessary and useful and what it should be able to explain and contribute.
Article
To facilitate a multidimensional approach to empathy, the Interpersonal Reactivity Index (IRI) includes 4 subscales: Perspective-Taking (PT), Fantasy (FS), Empathic Concern (EC), and Personal Distress (PD). The aim of the present study was to establish the convergent and discriminant validity of these 4 subscales. Hypothesized relationships among the IRI subscales, between the subscales and measures of other psychological constructs (social functioning, self-esteem, emotionality, and sensitivity to others), and between the subscales and extant empathy measures were examined. Study subjects included 677 male and 667 female students enrolled in undergraduate psychology classes at the University of Texas. The IRI scales not only exhibited the predicted relationships among themselves but also were related in the expected manner to other measures. Higher PT scores were consistently associated with better social functioning and higher self-esteem; in contrast, Fantasy scores were unrelated to these 2 characteristics. High EC scores were positively associated with shyness and anxiety but negatively linked to egotism. The most substantial relationships in the study involved the PD scale. PD scores were strongly linked with low self-esteem and poor interpersonal functioning as well as a constellation of vulnerability, uncertainty, and fearfulness. These findings support a multidimensional approach to empathy by providing evidence that the 4 qualities tapped by the IRI are indeed separate constructs, each related in specific ways to other psychological measures.
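Since the IRI is explicitly multidimensional, scoring it means producing four separate subscale totals rather than one global empathy score. The sketch below illustrates that structure only: the item-to-subscale assignment shown is invented for the example (the real instrument assigns specific items to each subscale, some of them reverse-scored), and responses are assumed to lie on a 0-4 rating scale.

```python
# Hypothetical illustration of multidimensional scoring: each subscale is
# scored separately instead of being summed into one "empathy" total.
# Item groupings below are invented placeholders, not the real IRI key.
SUBSCALE_ITEMS = {
    "PT": [3, 8, 11, 15, 21, 25, 28],   # Perspective-Taking
    "FS": [1, 5, 7, 12, 16, 23, 26],    # Fantasy
    "EC": [2, 4, 9, 14, 18, 20, 22],    # Empathic Concern
    "PD": [6, 10, 13, 17, 19, 24, 27],  # Personal Distress
}

def score_iri(responses):
    """Sum each subscale's items; responses maps item number -> rating (0-4)."""
    return {name: sum(responses[item] for item in items)
            for name, items in SUBSCALE_ITEMS.items()}

# Example: a respondent who answered every item with 2 scores 14 per subscale.
print(score_iri({item: 2 for item in range(1, 29)}))
```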
Article
An automated coordinate-based system to retrieve brain labels from the 1988 Talairach Atlas, called the Talairach Daemon (TD), was previously introduced [Lancaster et al., 1997]. In the present study, the TD system and its 3-D database of labels for the 1988 Talairach atlas were tested for labeling of functional activation foci. TD system labels were compared with author-designated labels of activation coordinates from over 250 published functional brain-mapping studies and with manual atlas-derived labels from an expert group using a subset of these activation coordinates. Automated labeling by the TD system compared well with authors' labels, with a 70% or greater label match averaged over all locations. Author-label matching improved to greater than 90% within a search range of ±5 mm for most sites. An adaptive grey matter (GM) range-search utility was evaluated using individual activations from the M1 mouth region (30 subjects, 52 sites). It provided an 87% label match to Brodmann area labels (BA 4 & BA 6) within a search range of ±5 mm. Using the adaptive GM range search, the TD system's overall match with authors' labels (90%) was better than that of the expert group (80%). When used in concert with authors' deeper knowledge of an experiment, the TD system provides consistent and comprehensive labels for brain activation foci. Additional suggested applications of the TD system include interactive labeling, anatomical grouping of activation foci, lesion-deficit analysis, and neuroanatomy education.
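The core of such coordinate-based labeling is a spatial range search. The toy sketch below expands a search cube around a query coordinate, out to ±5 mm, until a labeled voxel is found; the label entries and the plain cube strategy are invented stand-ins for the TD system's full 3-D label database and its adaptive grey-matter search.

```python
import itertools

# Toy label volume: integer (x, y, z) coordinates in mm -> label.
# The real TD uses a full 3-D label database; these entries are invented.
LABELS = {
    (52, -8, 32): "BA 4",
    (54, -4, 30): "BA 6",
}

def nearest_label(coord, max_range=5):
    """Expanding cube search around coord, out to ±max_range mm."""
    x, y, z = coord
    for r in range(max_range + 1):
        hits = [LABELS[(x + dx, y + dy, z + dz)]
                for dx, dy, dz in itertools.product(range(-r, r + 1), repeat=3)
                if (x + dx, y + dy, z + dz) in LABELS]
        if hits:
            return hits[0]  # first label found at this search radius
    return None  # nothing labeled within the search range

print(nearest_label((50, -7, 31)))  # finds "BA 4" within a few mm
```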
Article
The chapter summarizes research on emotions in the context of robot and virtual agent development. First, several motives and reasons for the implementation of emotions in artificial entities are presented. Then, a choice of robot and agent systems that model emotion based on various theories and assumptions are described. However, the common practice to implement emotions is also critically reflected against the background of theories on the relation of emotions and facial expressions. Based on this, alternative approaches are presented which implement a theory of mind module instead of emotions. In a second part, humans' emotional reactions to systems with implemented emotions and corresponding facial expressions are described.
Article
In recent studies of the structure of affect, positive and negative affect have consistently emerged as two dominant and relatively independent dimensions. A number of mood scales have been created to measure these factors; however, many existing measures are inadequate, showing low reliability or poor convergent or discriminant validity. To fill the need for reliable and valid Positive Affect and Negative Affect scales that are also brief and easy to administer, we developed two 10-item mood scales that comprise the Positive and Negative Affect Schedule (PANAS). The scales are shown to be highly internally consistent, largely uncorrelated, and stable at appropriate levels over a 2-month time period. Normative data and factorial and external evidence of convergent and discriminant validity for the scales are also presented.
Book
Reviews the book Natural theories of mind: Evolution, development and simulation of everyday mindreading, edited by Andrew Whiten. In recent years there has been a phenomenal growth in interest and research directed at what has become known as a Theory of Mind ("ToM") and its development. Among the many edited books recently made available on the topic, Whiten's Natural theories of mind is unique in the eclectic, multidisciplinary approach it brings to this vital, yet fledgling area. This interdisciplinary approach, which also includes a chapter by Carrithers placing the development of a theory of mind within the broader context of sociology and anthropology, is at the same time both the strength of this volume and its limitation. It may be that few will read this book cover-to-cover (not a remarkable criticism for an edited book). Those who do will be given an unusually broad overview of this hot research area and the interdisciplinary context within which the area can best be understood and from which it will most profitably develop. Whiten's collection is therefore recommended both to those who are looking for an entrance into the theory of mind literature and to those already embroiled in the field who are looking for new perspectives.
Article
Students of empathy can seem a cantankerous lot. Although they typically agree that empathy is important, they often disagree about why it is important, about what effects it has, about where it comes from, and even about what it is. The term empathy is currently applied to more than a half-dozen phenomena. These phenomena are related to one another, but they are not elements, aspects, facets, or components of a single thing that is empathy, as one might say that an attitude has cognitive, affective, and behavioral components. Rather, each is a conceptually distinct, stand-alone psychological state. Further, each of these states has been called by names other than empathy. Opportunities for disagreement abound. In an attempt to sort out this disagreement, I wish first to identify two distinct questions that empathy is thought to answer. Then I wish to identify eight distinct phenomena that have been called empathy. Finally, I wish to relate these eight phenomena to the two questions.
Article
The present studies were designed to test whether people are "polite" to computers. Among people, an interviewer who directly asks about him- or herself will receive more positive and less varied responses than if the same question is posed by a third party. Two studies were designed to determine if the same phenomenon occurs in human-computer interaction. In the first study, 30 subjects performed a task with a text-based computer and were then interviewed about the performance of that computer in 1 of 3 loci: (1) the same computer, (2) a pencil-and-paper questionnaire, or (3) a different (but identical) text-based computer. Consistent with the politeness prediction, same-computer participants evaluated the computer more positively and more homogeneously than did either pencil-and-paper or different-computer participants. Study 2, with 30 participants, replicated the results with voice-based computers.
Article
Following Langer (1992), this article reviews a series of experimental studies that demonstrate that individuals mindlessly apply social rules and expectations to computers. The first set of studies illustrates how individuals overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. The second set demonstrates that people exhibit overlearned social behaviors such as politeness and reciprocity toward computers. In the third set of studies, premature cognitive commitments are demonstrated: a specialist television set is perceived as providing better content than a generalist television set. A final series of studies demonstrates the depth of social responses with respect to computer ‘personality.’ Alternative explanations for these findings, such as anthropomorphism and intentional social responses, cannot explain the results. We conclude with an agenda for future research.
Chapter
Sociable machines are a blend of art, science, and engineering. We highlight how insights from these disciplines have helped us to address a few key design issues for building expressive humanoid robots that interact with people in a social manner.
Chapter
This chapter reports the motivations and choices underlying the design of Feelix, a simple humanoid LEGO robot that displays different emotions through facial expression in response to physical contact. It concludes by discussing what this simple technology can tell us about emotional expression and interaction.
Conference Paper
To determine the effect of robotic embodiment on human-robot interaction, we used functional magnetic resonance imaging (fMRI) to measure brain activity during the observation of emotionally positive or neutral actions performed by bipedal or wheel-drive humanoid robots. fMRI data from 30 participants were analyzed in the study. The results revealed that bipedal humanoid robot performing emotionally positive actions induced the activation of the left orbitofrontal cortex, which is associated with emotional empathy, whereas wheel-drive humanoid robot performing the same actions elicited a lesser response. These results demonstrate that humans more readily empathize with a bipedal humanoid robot based on the ability to simulate human-like body movements.
Conference Paper
As robots enter everyday life and start to interact with people, the question of their appearance becomes increasingly important. Our perception of a robot can be strongly influenced by its facial appearance. Synthesizing relevant ideas from narrative art design, the psychology of face recognition, and recent HRI studies into robot faces, we discuss effects of the uncanny valley and the use of iconicity and its relationship to the self-other perceptive divide, as well as abstractness and realism, classifying existing designs along these dimensions. A new expressive HRI research robot called KASPAR is introduced and the results of a preliminary study on human perceptions of robot expressions are discussed.
Article
Empirical studies have repeatedly shown that autonomous artificial entities, so-called embodied conversational agents, elicit social behavior on the part of the human interlocutor. Various theoretical approaches have tried to explain this phenomenon: According to the Threshold Model of Social Influence (Blascovich et al., 2002), the social influence of real persons who are represented by avatars will always be high, whereas the influence of an artificial entity depends on the realism of its behavior. Conversely, the Ethopoeia concept (Nass & Moon, 2000) predicts that automatic social reactions are triggered by situations as soon as they include social cues. The presented study evaluates whether participants' belief in interacting with either an avatar (a virtual representation of a human) or an agent (an autonomous virtual person) leads to different social effects. We used a 2 × 2 design with two levels of agency (agent or avatar) and two levels of behavioral realism (showing feedback behavior versus showing no behavior). We found that the belief of interacting with either an avatar or an agent barely resulted in differences with regard to the evaluation of the virtual character or behavioral reactions, whereas higher behavioral realism affected both. It is discussed to what extent the results thus support the Ethopoeia concept.
Article
This paper focuses on the role of emotion and expressive behavior in regulating social interaction between humans and expressive anthropomorphic robots, either in communicative or teaching scenarios. We present the scientific basis underlying our humanoid robot's emotion models and expressive behavior, and then show how these scientific viewpoints have been adapted to the current implementation. Our robot is also able to recognize affective intent through tone of voice, the implementation of which is inspired by the scientific findings of the developmental psycholinguistics community. We first evaluate the robot's expressive displays in isolation. Next, we evaluate the robot's overall emotive behavior (i.e. the coordination of the affective recognition system, the emotion and motivation systems, and the expression system) as it socially engages naïve human subjects face-to-face.
Conference Paper
As robots enter everyday life and start to interact with ordinary people (5), the question of their appearance becomes increasingly important. A user's perception of a robot can be strongly influenced by its facial appearance (6). The dimensions and issues of face design are illustrated in the design rationale, details of construction and intended uses of a new minimal expressive robot called KASPAR.
Conference Paper
Empathy has great potential in human-robot interaction. However, the challenging nature of assessing the user's emotional state points to the importance of also understanding the effects of empathic behaviours incongruent with users' affective experience. A 3×2 between-subject video-based survey experiment (N=133) was conducted with empathic robot behaviour (empathically accurate, neutral, inaccurate) and valence of the situation (positive, negative) as dimensions. Trust decreased when empathic responses were incongruent with the affective state of the user. However, in the negative valence condition, reported perceived empathic abilities were greater when the robot responded as if the situation were positive.