Conference Paper

A study of a retro-projected robotic face and its effectiveness for gaze reading by humans

DOI: 10.1145/1734454.1734471
Conference: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction, HRI 2010, Osaka, Japan, March 2-5, 2010
Source: DBLP

ABSTRACT

Reading gaze direction is important in human-robot interaction as it supports, among other things, joint attention and non-linguistic interaction. While most previous work focuses on implementing gaze-direction reading on the robot, little is known about how well the human partner in a human-robot interaction can read gaze direction from a robot. The purpose of this paper is twofold: (1) to introduce a new technology for implementing a robotic face using retro-projected animated faces, and (2) to test how well this technology supports gaze reading by humans. We briefly describe the robot design and discuss parameters influencing the ability to read gaze direction. We present an experiment assessing the user's ability to read gaze direction for a selection of different robotic face designs, using an actual human face as a baseline. Results indicate that it is hard to match human-human interaction performance: performance is worst when the robot face is implemented as a semi-sphere, while robot faces with a human-like physiognomy and, perhaps surprisingly, video projected on a flat screen perform equally well, suggesting that these are good candidates for implementing joint attention in HRI.
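The retro-projection approach named in the abstract can be pictured with a short sketch: an animated face frame is pre-warped to compensate for the projector's wide-angle (fish-eye) lens before being thrown onto the inside of a translucent mask. This is a minimal illustration under my own assumptions; the one-coefficient radial model, the coefficient value, and all names are illustrative and not details taken from the paper.

```python
# Illustrative sketch only: pre-warp an animated face frame before
# retro-projection through a wide-angle lens onto a translucent mask.
# The distortion model and coefficient are assumptions, not from the paper.
import numpy as np

def prewarp(frame: np.ndarray, k1: float = -0.25) -> np.ndarray:
    """Resample a square frame with a simple one-coefficient radial model,
    so the lens's own distortion is (roughly) cancelled at projection time."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised coordinates in [-1, 1], centred on the optical axis
    nx = (xs - w / 2) / (w / 2)
    ny = (ys - h / 2) / (h / 2)
    r2 = nx**2 + ny**2
    scale = 1 + k1 * r2  # radial scaling grows/shrinks with distance from centre
    sx = np.clip((nx * scale + 1) * w / 2, 0, w - 1).astype(int)
    sy = np.clip((ny * scale + 1) * h / 2, 0, h - 1).astype(int)
    return frame[sy, sx]

if __name__ == "__main__":
    face = np.random.rand(480, 480, 3)  # stand-in for a rendered face frame
    print(prewarp(face).shape)          # (480, 480, 3)
```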

Cited by
  • Source
    • "To our knowledge, no work has been similarly reported on the audiovisual intelligibility of speech – i.e. the benefit of facial animation to the comprehension of the spoken message – uttered by robots. Remarkable experiments have nevertheless been conducted first by Delaunay et al.[14]with lamp avatar: they compared the estimations of the gaze direction of four different types of facial interface: a real human face, a human face displayed on a flat-screen monitor, an animated face projected on a semi-sphere and an animated face projected on a 3D mask. They show that robot faces having a human-like physiognomy and, surprisingly, video projected on a flat screen perform equally well. "

    Full-text · Conference Paper · May 2015
  • Source
    • "It is known that, the perception of 3D objects that are displayed on 2D surfaces are influenced by the Mona Lisa effect [6]. In other words, the orientation of the object in relation to the observer will be perceived as constant regardless of observer's position [7]. For instance, if the 2D projected face is gazing forward, mutual gaze is perceived with the animation, regardless of where the observer is standing/sitting in relation to the display. "
    ABSTRACT: This article proposes an emotive lifelike robotic face, called ExpressionBot, designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The proposed robotic head consists of two major components: 1) a hardware component comprising a small projector, a fish-eye lens, a custom-designed mask and a neck system with 3 degrees of freedom; 2) a facial animation system, projected onto the robotic mask, that is capable of presenting facial expressions, realistic eye movement, and accurate visual speech. We present three studies that compare Human-Robot Interaction with the physical head to Human-Computer Interaction with a screen-based model of the avatar. The studies indicate that the robotic face is well accepted by users, with some advantages in recognition of facial expressions and mutual eye-gaze contact.
    Full-text · Article · Feb 2015
  • Source
    • "Finally, since a gesture is grounded with respect to a specific referent in the environment, the robot must be able to correctly segment and localize visually salient objects in the environment at a similar granularity to the people with whom it is interacting. Studies of human reading of robot gaze [31], as well as biologically inspired methods to assess and map visually salient features and objects in an environment, exist [26], [27], as do models of human visual attention selection [28]; however, the role of visual saliency during deictic reference by a robot is largely uninvestigated. III. "
    ABSTRACT: In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing) and compare the results to the perception of human pointing.
    Full-text · Conference Paper · Sep 2011
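The Mona Lisa effect quoted in the second excerpt above invites a toy computation. In the sketch below, a face on a flat 2D display keeps a constant gaze offset relative to every observer (so a forward gaze yields perceived mutual gaze from any seat), while a physical 3D face fixes its gaze in the world frame, so the perceived offset depends on where the observer stands. All angles and function names are my illustrative assumptions, not values from the cited studies.

```python
# Toy illustration of the Mona Lisa effect (assumptions mine, not from the
# cited studies): on a 2D screen the gaze offset is constant relative to the
# observer; a physical 3D face anchors its gaze in the world frame.
import math  # kept for symmetry with angle-based extensions

def perceived_gaze_2d(observer_angle_deg: float, gaze_angle_deg: float = 0.0) -> float:
    """On a flat display the gaze 'follows' the observer, so the offset
    between gaze and line of sight is the same from every position.
    (observer_angle_deg is deliberately unused: that is the effect.)"""
    return gaze_angle_deg

def perceived_gaze_3d(observer_angle_deg: float, gaze_angle_deg: float = 0.0) -> float:
    """A physical face fixes its gaze in the world frame, so the offset
    seen by the observer shifts with the observer's position."""
    return gaze_angle_deg - observer_angle_deg

for obs in (-30, 0, 30):  # observer positions, degrees off the display normal
    print(f"observer at {obs:+d} deg: "
          f"2D -> {perceived_gaze_2d(obs):+.0f} deg, "
          f"3D -> {perceived_gaze_3d(obs):+.0f} deg off line of sight")
```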

Questions & Answers about this publication

  • Frédéric Delaunay added an answer in Assistive Robotics:
    Can anyone help with the estimation of robots' gaze by human viewers?
    We recently bought an iCub2 with enhanced communication abilities (see Nina below). We are currently working on visual attention and trying to characterize the perception of the robot's gaze direction by human observers. We were surprised: the morphology of robotic eyes, with no deformation of the eyelids and palpebral commissure, strongly biases the estimation of eye direction as soon as the gaze is averted.

    Are you aware of any study on robots similar to what Samer Al Moubayed and his KTH colleagues have done with Furhat?

    Thank you in advance for your help!
    Frédéric Delaunay
    Hi Gérard, we conducted a gaze-reading experiment in 2010 comparing 2D avatars, 3D rear-projected faces, and humans (see the attachment). To establish a metric, we used a transparent grid of 5×5 cm cells through which one participant could read the other's gaze.

    Let me know if you'd like to discuss this.

    Best,

    - Frédéric -
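A back-of-the-envelope conversion from such grid-cell readings to angular error may help anyone reproducing the setup. The 5×5 cm cell size comes from the answer above; the participant-to-grid distance is an assumed parameter, not a figure from the paper.

```python
# Rough conversion of grid-cell gaze-reading offsets into angular error.
# Cell size (5x5 cm) is from the answer above; the viewing distance below
# is an assumed example value, not from the paper.
import math

CELL_SIZE_M = 0.05  # 5 cm grid cells

def angular_error_deg(cells_off: int, distance_m: float) -> float:
    """Angle subtended by an offset of `cells_off` cells at `distance_m`."""
    return math.degrees(math.atan2(cells_off * CELL_SIZE_M, distance_m))

# e.g. misreading the gazed-at cell by two cells at an assumed 1 m distance
print(f"{angular_error_deg(2, 1.0):.1f} deg")  # ~5.7 deg
```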
    • Attachment: "A study of a retro-projected robotic face and its effectiveness for gaze reading by humans" · Conference Paper · Mar 2010 (abstract above)