Conference Paper

A study of a retro-projected robotic face and its effectiveness for gaze reading by humans

DOI: 10.1145/1734454.1734471 Conference: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction, HRI 2010, Osaka, Japan, March 2-5, 2010
Source: DBLP

ABSTRACT Reading gaze direction is important in human-robot interaction as it supports, among other things, joint attention and non-linguistic interaction. While most previous work focuses on implementing gaze-direction reading on the robot, little is known about how well the human partner in a human-robot interaction can read gaze direction from a robot. The purpose of this paper is twofold: (1) to introduce a new technology for implementing robotic faces using retro-projected animated faces, and (2) to test how well this technology supports gaze reading by humans. We briefly present the robot design and discuss parameters influencing the ability to read gaze direction. We then report an experiment assessing the user's ability to read gaze direction for a selection of different robotic face designs, using an actual human face as a baseline. Results indicate that it is hard to match human-human interaction performance: performance is worst when the robot face is implemented as a semi-sphere, while robot faces with a human-like physiognomy and, perhaps surprisingly, video projected on a flat screen perform equally well, suggesting that these are good candidates for implementing joint attention in HRI.

Available from: Tony Belpaeme, Jul 01, 2015
  • ABSTRACT: In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing) and compare the results to the perception of human pointing.
    RO-MAN, 2011 IEEE; 09/2011
  • ABSTRACT: We introduce an approach to animated faces for robotics in which a static physical object serves as the projection surface for an animation: the talking head is projected onto a 3D physical head model. In this chapter we discuss the benefits this approach offers over mechanical heads. We then investigate a phenomenon commonly referred to as the Mona Lisa gaze effect, which results from using 2D surfaces to display 3D images and causes the gaze of a portrait to seemingly follow the observer no matter where it is viewed from. An experiment investigates observers' perception of gaze direction; the analysis shows that the 3D model eliminates the effect and provides an accurate perception of gaze direction. Finally, we discuss the different requirements of gaze in interactive systems and explore the settings these findings make possible.
    Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues - COST 2102 International Conference, Budapest, Hungary, September 7-10, 2010, Revised Selected Papers; 01/2010
  • ABSTRACT: In human-human communication, eye gaze is a fundamental cue in, e.g., turn-taking and interaction control [Kendon 1967]. Accurate control of gaze direction is therefore crucial in many applications of animated avatars striving to simulate human interactional behavior. One inherent complication when conveying gaze direction through a 2D display, however, is what has been referred to as the Mona Lisa effect: if the avatar is gazing towards the camera, the eyes seem to "follow" the beholder from whatever vantage point he or she may assume [Boyarskaya and Hecht 2010]. This becomes especially problematic in applications where multiple people interact with the avatar and the system needs to use gaze to address a specific person. Introducing 3D structure in the facial display, e.g. projecting the avatar face onto a face mask, makes the perceived gaze direction change with the viewing angle, as is indeed the case with real faces. To this end, [Delaunay et al. 2010] evaluated two back-projected displays: a spherical "dome" and a face-shaped mask. However, many factors may influence the gaze direction perceived from a 3D facial display, so an accurate calibration procedure for gaze direction is called for.
    The ACM / SSPNET 2nd International Symposium on Facial Analysis and Animation; 10/2010

Questions & Answers about this publication

  • Frédéric Delaunay added an answer in Assistive Robotics:
    Can anyone help with the estimation of robots' gaze by human viewers?
We recently bought an iCub2 with enhanced communication abilities (see Nina below). We are currently working on visual attention and are trying to characterize the perception of the robot's gaze direction by human observers. We were surprised: the morphology of robotic eyes, with no deformation of the eyelids and palpebral commissure, strongly biases the estimation of eye direction as soon as the gaze is averted.

    Are you aware of any study on robots similar to what Samer Al Moubayed and colleagues at KTH have done with Furhat?

    Thank you in advance for your help!
    Frédéric Delaunay · University of Plymouth
    Hi Gérard, we conducted a gaze-reading experiment in 2010 comparing 2D avatars, 3D rear-projected faces, and humans (see the attachment). To establish a metric, we used a transparent grid of 5×5 cm cells through which two participants could read each other's gaze.
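    As a rough illustration of what such a grid metric gives you, here is a minimal sketch converting a gaze-reading error measured in 5×5 cm grid cells into an angular error. The 60 cm viewing distance in the example is an assumption for illustration only, not a figure from the study:

    ```python
    import math

    CELL_CM = 5.0  # grid cell size used in the experiment (5 x 5 cm)

    def cell_error_to_degrees(cells_off, distance_cm):
        """Convert a gaze-reading error of `cells_off` grid cells into an
        angular error, given the reader's distance to the grid in cm."""
        offset_cm = cells_off * CELL_CM
        return math.degrees(math.atan2(offset_cm, distance_cm))

    # Misreading the gaze target by one cell at an assumed 60 cm distance
    # corresponds to an angular error of just under 5 degrees.
    err = cell_error_to_degrees(1, 60.0)
    ```

    The point of the conversion is that the same one-cell error on the grid means a larger angular error the closer the reader stands, so cell counts are only comparable across conditions at a fixed distance.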

    Let me know if you'd like to discuss this.


    - Frédéric -