Conference Paper

Abstract

The body language of robot arms has rarely been explored as a medium for conveying robot intentions. An exploratory study was conducted around two questions: first, whether robot arm postures can convey robot intentions, and second, whether participants encountering the robot arm for the first time can associate meaning with the postures without watching the robot in action or working with it. Thirty-five participants took part in the study. Results show a significant effect of posture design on the interpretation of meaning. Postures designed as tempered forms of human- and animal-like body language successfully conveyed the desired intention to a significant number of participants in the following categories: 'Robot giving an object in a friendly manner', 'Robot is saying Hi!', and 'Robot has been told not to disturb'. Developers may predefine such postures in a robot to enhance its acceptability.
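The closing suggestion, that developers predefine intention-conveying postures, lends itself to a simple lookup structure. The sketch below is a minimal illustration under assumptions of mine, not code from the paper: a hypothetical 6-DOF arm whose driver exposes a move_to_joint_positions() call, with invented joint angles for the three intention categories above.

```python
# Hypothetical posture library for a generic 6-DOF robot arm, keyed by the
# intention each posture is meant to convey. All joint angles (radians) are
# invented for illustration; they are not the postures studied in the paper.
POSTURE_LIBRARY = {
    # Arm extended low with the end-effector turned upward, as if offering an object.
    "giving_object_friendly": [0.0, 0.6, -0.4, 0.0, 1.2, 0.0],
    # Arm raised upright; a greeting gesture could add a small wrist oscillation.
    "saying_hi": [0.0, -0.3, 0.8, 0.0, 0.5, 0.0],
    # Arm folded back toward its base, turned away from the user.
    "do_not_disturb": [1.5, 1.0, -1.8, 0.0, 0.0, 0.0],
}

def go_to_posture(arm, name):
    """Move the arm to a named posture. `arm` is assumed to expose a
    move_to_joint_positions(angles) method; adapt to the actual driver."""
    arm.move_to_joint_positions(POSTURE_LIBRARY[name])
```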


Article
Full-text available
This paper re-evaluates what constitutes a social robot by analysing how a range of different forms of robot are interpreted as socially aware and communicative. Its argument juxtaposes a critical assessment of the development of humanlike and animal-like robotic companions with a consideration of human relations with machinelike robots in working teams. The paper employs a range of communication theories alongside ideas relating to anthropomorphism and zoomorphism in discussing human-robot interactions. Some traditions of communication theory offer perspectives that support the development of humanlike and animal-like social robots. However, these perspectives have been critiqued within communications scholarship as unethically closed to the possibilities of otherness and difference. This paper therefore reconfigures and extends the use of communication theory to explore how machinelike robots are interpreted by humans as social and communicative others. This involves an analysis of human relations with Explosive Ordnance Disposal (EOD) robots and with the robotic desk lamp, AUR. The paper positions social robotics research as important in understanding working teams containing humans and robots. In particular, this paper introduces the value of tempered anthropomorphism and zoomorphism as processes that support communication between humans and machinelike robots, while also ensuring that a sense of the otherness of the machine and respect for its non-human abilities is retained.
Conference Paper
Full-text available
Humans use very sophisticated forms of bodily emotion expression, combining facial expressions, sound, gestures, and full-body posture. Like others, we want to apply these aspects of human communication to ease the interaction between robots and users. In doing so, we believe there is a need to consider what abstraction of human social communicative behaviors is appropriate for robots. The pilot study reported in this paper offers not simulated emotion but an abstracted, robot-specific version of emotion expressions, together with an evaluation of the extent to which users interpret these expressions as the intended emotional states. To this end, we present the mobile, mildly humanized robot Daryl, for which we created six motion sequences that combine human-like, animal-like, and robot-specific social cues. The results of a user study (N=29) show that, despite the absence of facial expressions and articulated extremities, subjects' interpretations of Daryl's emotional states were congruent with the abstracted emotion displays. These results demonstrate that abstract displays of emotion combining human-like, animal-like, and robot-specific modalities can be an alternative to complex facial expressions, and they feed into ongoing work identifying robot-specific social cues.
Article
Full-text available
Previous work has shown that non-verbal behaviors affect anthropomorphic inferences about artificial communicators such as virtual agents or social robots. In an experiment with a humanoid robot we investigated the effects of the robot's hand and arm gestures on the perception of humanlikeness, likability of the robot, shared reality, and future contact intentions after interacting with the robot. For this purpose, the speech-accompanying non-verbal behaviors of the humanoid robot were manipulated in three experimental conditions: (1) no gesture, (2) congruent co-verbal gesture, and (3) incongruent co-verbal gesture. We hypothesized higher ratings on all dependent measures in the two multimodal (i.e., speech and gesture) conditions compared to the unimodal (i.e., speech only) condition. The results confirm our predictions: when the robot used co-verbal gestures during interaction, it was anthropomorphized more, participants perceived it as more likable, reported greater shared reality with it, and expressed stronger future contact intentions than when the robot gave instructions without gestures. Surprisingly, this effect was particularly pronounced when the robot's gestures were partly incongruent with speech, although this behavior negatively affected the participants' task-related performance. These findings show that communicative non-verbal behaviors displayed by robotic systems affect anthropomorphic perceptions and the mental models humans form of a humanoid robot during interaction.
Article
Full-text available
In the future, interactive robots will perform many helpful tasks. In 5 studies, we developed techniques for measuring the richness and content of people's mental models of a robot. Using these techniques, we examined how a robot's appearance and dialogue affected people's responses. Participants had a comparatively rich mechanistic perception of the robot, and perceived it to have some human traits, but not complex human attachment, foibles, or creativity. In study 5, participants who interacted with an extraverted, playful robot versus a more serious, caring robot, developed a richer, more positive mental model of the playful robot but cooperated less with it. Our findings imply different designs for robotic assistants that meet social and practical goals.
Article
Full-text available
Complex and natural social interaction between artificial agents (computer-generated or robotic) and humans necessitates the display of rich emotions in order to be believable, socially relevant, and accepted, and to generate the natural emotional responses that humans show in the context of social interaction, such as engagement or empathy. Whereas some robots use faces to display (simplified) emotional expressions, for other robots such as Nao, body language is the best medium available given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve naturalness. This research investigates the creation of an affect space for the generation of emotional body language to be displayed by humanoid robots. To do so, three experiments investigating how emotional body language displayed by agents is interpreted were conducted. The first experiment compared the interpretation of emotional body language displayed by humans and agents. The results showed that emotional body language displayed by an agent or a human is interpreted in a similar way in terms of recognition. Following these results, emotional key poses were extracted from an actor's performances and implemented in a Nao robot. The interpretation of these key poses was validated in a second study where it was found that participants were better than chance at interpreting the key poses displayed. Finally, an affect space was generated by blending key poses and validated in a third study. Overall, these experiments confirmed that body language is an appropriate medium for robots to display emotions and suggest that an affect space for body expressions can be used to improve the expressiveness of humanoid robots.
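The final step described above, generating an affect space by blending key poses, amounts to interpolation in joint space. The following is a minimal sketch under my own assumptions (plain joint-angle lists, invented example values), not the authors' implementation:

```python
# Minimal sketch: blend emotional key poses in joint space.
# Each key pose is assumed to be a list of joint angles (radians) of equal
# length; the weights pick a point in the resulting "affect space".
def blend_poses(key_poses, weights):
    """Return the weighted average of the key poses; weights are normalized."""
    total = sum(weights)
    n_joints = len(key_poses[0])
    return [
        sum(w * pose[j] for pose, w in zip(key_poses, weights)) / total
        for j in range(n_joints)
    ]

# Example: a pose 70% of the way from an invented "sad" key pose to a "happy" one.
sad = [0.1, -1.2, 0.3, 0.0]
happy = [0.9, 0.4, -0.2, 0.5]
blended = blend_poses([sad, happy], [0.3, 0.7])
```

Blending only in joint space keeps every generated pose a convex combination of validated key poses, which bounds how far the robot can stray from expressions observers have already recognized.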
Article
Full-text available
In this paper, we present the development of postural expressions of emotions for the humanoid robot Nao in an assistive context. The approach is based on the adaptation of human body postures to the case of the Nao robot. In our paper, the association between the joints of the human body and the joints of the Nao robot is described. The postural expressions are studied for three emotions: anger, sadness, and happiness. In our experimental design, we generated 32 postural expressions for each emotion and, based on questionnaire-based interviews with ten external observers, selected the best five postural expressions for each emotion. The results of this work will be integrated in a study designed for children with autism.
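The human-to-Nao joint association described above can be pictured as a simple lookup from human joint descriptors to Nao's named joints. The Nao joint names below follow the NAOqi convention; the human-side labels and the specific correspondences are my illustrative assumptions, not the paper's mapping:

```python
# Illustrative human-to-Nao joint association for retargeting postures.
# Nao joint names are real NAOqi identifiers; the human-side labels and
# correspondences are assumed for illustration, not taken from the paper.
HUMAN_TO_NAO = {
    "neck_flexion": "HeadPitch",
    "neck_rotation": "HeadYaw",
    "left_shoulder_flexion": "LShoulderPitch",
    "left_shoulder_abduction": "LShoulderRoll",
    "left_elbow_flexion": "LElbowRoll",
    "right_shoulder_flexion": "RShoulderPitch",
    "right_shoulder_abduction": "RShoulderRoll",
    "right_elbow_flexion": "RElbowRoll",
}

def retarget(human_angles):
    """Map a dict of human joint angles (radians) onto Nao joint names.
    A real retargeting step would also clamp to Nao's joint limits."""
    return {HUMAN_TO_NAO[j]: a for j, a in human_angles.items() if j in HUMAN_TO_NAO}
```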
Article
Full-text available
Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are "polite" to computers; that they treat computers with female voices differently than "male" ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses. Using everyday language to engage readers interested in psychology, communication, and computer technology, Reeves and Nass detail how this knowledge can help in designing a wide range of media.
Conference Paper
Full-text available
Nonverbal communication plays an important role in coordinating teammates' actions for collaborative activities. In this paper, we explore the impact of non-verbal social cues and behavior on task performance by a human-robot team. We report our results from an experiment where naive human subjects guide a robot to perform a physical task using speech and gesture. Both self-report via questionnaire and behavioral analysis of video offer evidence to support our hypothesis that implicit non-verbal communication positively impacts human-robot task performance with respect to understandability of the robot, efficiency of task performance, and robustness to errors that arise from miscommunication.
Article
Encounters with humanoid robots are new to the everyday experience of children and adults. Yet, increasingly, they are finding their place. This has occurred largely through the introduction of a class of interactive toys (including Furbies, AIBOs, and My Real Babies) that I call "relational artifacts." Here, I report on several years of fieldwork with commercial relational artifacts (as well as with the MIT AI Laboratory's Kismet and Cog). It suggests that even these relatively primitive robots have been accepted as companionate objects and are changing the terms by which people judge the "appropriateness" of machine relationships. In these relationships, robots serve as powerful objects of psychological projection and philosophical evocation in ways that are forging a nascent robotics culture.
Article
This thesis is concerned with the notion of fluency in human-robot interaction (HRI), exploring cognitive mechanisms for robotic agents that would enable them to overcome the stop-and-go rigidity present in much of HRI to date. We define fluency as the ethereal yet manifest quality existent when two agents perform together at a high level of coordination and adaptation, in particular when they are well-accustomed to the task and to each other. Based on mounting psychological and neurological evidence, we argue that one of the keys to this goal is the adoption of an embodied approach to robot cognition. We show how central ideas from this psychological school are applicable to robot cognition and present a cognitive architecture making use of perceptual symbols, simulation, and perception-action networks. In addition, we demonstrate that anticipation of perceptual input, and in particular of the actions of others, is an important ingredient of fluent joint action. To that end, we show results from an experiment studying the effects of anticipatory action on fluency and teamwork, and use these results to suggest benchmark metrics for fluency. We also show the relationship between anticipatory action and a simulator approach to perception, through a comparative human-subject study of an implemented cognitive architecture on the robot AUR, a robotic desk lamp designed for this thesis. A result of this work is modeling the effect of practice on human-robot joint action, arguing that mechanisms governing the passage of cognitive capabilities from a deliberate yet slower system to a faster, sub-intentional, and more rigid one are crucial to fluent joint action in well-rehearsed ensembles. Theatrical acting theory serves as an inspiration for this work, as we argue that lessons from acting method can be applied to human-robot interaction.
Article
Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robots and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.
Article
This paper explores the topic of human-robot interaction (HRI) from the perspective of designing sociable autonomous robots: robots designed to interact with people in a human-like way. There are a growing number of applications for robots that people can engage as capable creatures or as partners rather than tools, yet little is understood about how to best design robots that interact with people in this way. The related field of human-computer interaction (HCI) offers important insights, however autonomous robots are a very different technology from desktop computers. In this paper, we look at the field of HRI from an HCI perspective, pointing out important similarities yet significant differences that may ultimately make HRI a distinct area of inquiry. One outcome of this discussion is that it is important to view the design and evaluation problem from the robot's perspective as well as that of the human. Taken as a whole, this paper provides a framework with which to design and evaluate sociable robots from an HRI perspective.
E. Sandry, "Re-evaluating the Form and Communication of Social Robots," Int. J. Soc. Robot., vol. 7, no. 3, pp. 335–346, 2015.

S. Embgen, M. Luber, C. Becker-Asano, M. Ragni, V. Evers, and K. O. Arras, "Robot-specific social cues in emotional body language," Proc. IEEE Int. Workshop Robot Hum. Interact. Commun., pp. 1019–1025, 2012.

G. Hoffman, "Ensemble: Fluency and Embodiment for Robots Acting with Humans," Ph.D. dissertation, Massachusetts Institute of Technology, 2007.

S. Kiesler and J. Goetz, "Mental Models of Robotic Assistants," CHI '02 Ext. Abstr. Hum. Factors Comput. Syst., pp. 576–577, 2002.
C. Breazeal, "Social Interactions in HRI: The Robot View," IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 34, no. 2, pp. 181–186, May 2004.

P. Ekman and W. V. Friesen, "The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding," Semiotica, vol. 1, no. 1, pp. 49–98, 1969.
E. Lévinas, The Levinas Reader, S. Hand, Ed. Oxford: Blackwell, 1989.