Conference Paper

GazeRoboard: Gaze-communicative Guide System in Daily Life on Stuffed-toy Robot with Interactive Display Board

ATR Intell. Robot. & Commun. Labs., Tokyo
DOI: 10.1109/IROS.2008.4650692
Conference: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008)
Source: IEEE Xplore


In this paper, we propose a guide system for daily life in semipublic spaces by adopting a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The system provides naturally anthropomorphic guidance through a) gaze-communicative behaviors of the stuffed-toy robot ("joint attention" and "eye-contact reactions") that virtually express its internal mind, b) voice guidance, and c) projection on the board corresponding to the user's gaze orientation. The user's gaze is estimated by our remote gaze-tracking method. The results from both subjective/objective evaluations and demonstration experiments in a semipublic space show i) the holistic operation of the system and ii) the inherent effectiveness of the gaze-communicative guide.
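As a rough illustration of the interaction flow described above, the Python sketch below maps an estimated gaze target to the three guidance channels (robot gaze behaviors, voice guidance, and board projection). All names here (GazeTarget, StuffedToyRobot, guide_step, and so on) are hypothetical stand-ins rather than the authors' implementation, and the remote gaze tracker is abstracted into a single GazeEstimate value.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class GazeTarget(Enum):
    ROBOT = auto()        # the user is looking at the stuffed-toy robot
    BOARD_ITEM = auto()   # the user is looking at an item on the display board
    ELSEWHERE = auto()    # the gaze is off both the robot and the board


@dataclass
class GazeEstimate:
    target: GazeTarget
    board_item_id: Optional[int] = None  # which board item, if known


class StuffedToyRobot:
    def eye_contact_reaction(self) -> None:
        print("robot: meets the user's gaze (eye-contact reaction)")

    def joint_attention(self, item_id: int) -> None:
        print(f"robot: turns its head toward board item {item_id} (joint attention)")


class DisplayBoard:
    def project_guidance(self, item_id: int) -> None:
        print(f"board: projects detailed guidance near item {item_id}")


def speak(text: str) -> None:
    print(f"voice: {text}")


def guide_step(gaze: GazeEstimate, robot: StuffedToyRobot, board: DisplayBoard) -> None:
    """One guidance cycle driven by the remotely tracked gaze."""
    if gaze.target is GazeTarget.ROBOT:
        robot.eye_contact_reaction()
        speak("Shall I tell you about the items on the board?")
    elif gaze.target is GazeTarget.BOARD_ITEM and gaze.board_item_id is not None:
        robot.joint_attention(gaze.board_item_id)   # share the user's attention
        board.project_guidance(gaze.board_item_id)  # highlight the gazed-at item
        speak(f"That is item {gaze.board_item_id}; here is some more detail.")
    # ELSEWHERE: stay idle so the guidance does not become intrusive


if __name__ == "__main__":
    robot, board = StuffedToyRobot(), DisplayBoard()
    guide_step(GazeEstimate(GazeTarget.ROBOT), robot, board)
    guide_step(GazeEstimate(GazeTarget.BOARD_ITEM, board_item_id=2), robot, board)

The idle branch for off-target gaze is a design choice in the same spirit as the abstract: guidance is triggered only by where the user actually looks, so the system stays unobtrusive.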

  • ABSTRACT: This paper proposes a videophone conversation support system that coordinates a companion robot's behaviors and the switching of camera images with the user's conversational attitude. To maintain a conversation and achieve comfortable communication, the system needs to understand the user's conversational states: whether the user is talking (taking the initiative) and whether the user is concentrating on the conversation. First, a) the system estimates the user's conversational state with a machine learning method. Then, b-1) the robot expresses active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when she/he is not really listening to the other user's speech, b-2) the robot shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic, and b-3) the system switches the camera images to create an illusion of eye contact corresponding to the current context of the user's attitude. Empirical studies, a detailed experiment, and a demonstration experiment show that i) both the robot's active listening behaviors and the camera-image switching compensate for the other person's attitude, ii) the topic provision function is effective during awkward silences, and iii) elderly people prefer long intervals between the robot's behaviors. (A minimal sketch of this control flow appears as the first code example after this list.)
  • ABSTRACT: This paper proposes a daily-partner robot that is aware of the user's situation and behavior through gaze and utterance detection. For appropriate and familiar anthropomorphic interaction, the robot should wait for a suitable moment to speak to the user, depending on whether she/he is doing a task or thinking. Accordingly, the proposed robot i) estimates the user's context by detecting her/his gaze and utterances, including the target of the user's speech, ii) signals its need to speak through silent (i.e., non-verbal) gaze turns toward the user and joint attention, taking advantage of their attention-drawing effect, and iii) delivers its message once the user talks to the robot. Experiments combining subjects' daily tasks with and without these steps show that the robot's crossmodal-aware behaviors are important for respectful communication: the silent behaviors express the robot's intention to speak and draw the user's attention without disturbing the ongoing task. (A minimal sketch of this interruption policy appears as the second code example after this list.)
  • ABSTRACT: This paper presents a method for a museum guide robot to select an answerer from its audience. First, we observed and videotaped scenes in which a human guide asks visitors questions during a gallery talk to engage them. Based on this interaction analysis, we found that the human guide selects an appropriate answerer by distributing his/her gaze across the visitors and observing their gaze responses during the pre-question phase. We then performed experiments in which a robot distributed its gaze toward visitors to select an answerer and analyzed the visitors' responses. From the experiments, we found that visitors who are asked questions by the robot feel embarrassed when they have no prior knowledge of the question, and that a visitor's gaze before and during the question plays an important role in avoiding being asked. Based on these findings, we developed a function that lets a guide robot select the answerer by observing visitors' gaze responses. (A minimal sketch of this selection rule appears as the third code example after this list.)
    Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI 2010), Extended Abstracts, Atlanta, Georgia, USA, April 10-15, 2010.
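For the videophone support system in the first related abstract above, the following sketch illustrates the described control flow: estimate the user's conversational state (talking, concentrating) and then choose compensating behaviors such as active listening, camera-image switching, and topic provision. The simple thresholds stand in for the paper's machine-learning estimator, and all feature names and functions are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ConversationalState:
    talking: bool        # is the user taking the initiative?
    concentrating: bool  # is the user attending to the conversation?


def estimate_state(speech_ratio: float, gaze_at_screen_ratio: float) -> ConversationalState:
    # Placeholder for the paper's machine-learning estimator:
    # simple thresholds over recent speech and gaze-at-screen ratios.
    return ConversationalState(
        talking=speech_ratio > 0.3,
        concentrating=gaze_at_screen_ratio > 0.5,
    )


def support_step(state: ConversationalState, silence_sec: float) -> list:
    """Choose compensating behaviors for the current conversational state."""
    actions = []
    if not state.talking and not state.concentrating:
        actions.append("robot: active listening (nodding, gaze turns)")
        actions.append("camera: switch image to create apparent eye contact")
    if silence_sec > 10.0:
        actions.append("robot: topic provision to fill the awkward silence")
    return actions


if __name__ == "__main__":
    state = estimate_state(speech_ratio=0.1, gaze_at_screen_ratio=0.2)
    print(support_step(state, silence_sec=12.0))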
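The second related abstract describes when the daily-partner robot may interrupt the user. A minimal sketch of such an interruption policy, with assumed context states and decision rules, could look like this:

from enum import Enum, auto


class UserContext(Enum):
    BUSY = auto()              # working or thinking, not addressing the robot
    LOOKING_AT_ROBOT = auto()  # the user's gaze is on the robot
    TALKING_TO_ROBOT = auto()  # an utterance is directed at the robot


def partner_action(context: UserContext, has_pending_message: bool) -> str:
    """Decide how (and whether) the robot tries to get its message across."""
    if not has_pending_message:
        return "idle"
    if context is UserContext.TALKING_TO_ROBOT:
        return "deliver the pending message"
    if context is UserContext.LOOKING_AT_ROBOT:
        return "silent joint attention toward the topic of the message"
    return "silent gaze turn toward the user (signal the intention to speak)"


if __name__ == "__main__":
    for ctx in UserContext:
        print(ctx.name, "->", partner_action(ctx, has_pending_message=True))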
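The third related abstract selects an answerer from visitors' gaze responses during the pre-question phase. The sketch below encodes that idea with an assumed scoring rule: prefer the visitor who returned the most of the robot's gaze, and skip the question if nobody did.

from typing import Dict, Optional


def select_answerer(gaze_responses: Dict[str, float]) -> Optional[str]:
    """gaze_responses maps a visitor id to the fraction of the robot's
    pre-question gaze that the visitor returned (0.0 to 1.0)."""
    engaged = {visitor: r for visitor, r in gaze_responses.items() if r >= 0.5}
    if not engaged:
        return None  # nobody signalled willingness; better not to ask
    return max(engaged, key=engaged.get)


if __name__ == "__main__":
    print(select_answerer({"visitor_a": 0.8, "visitor_b": 0.2, "visitor_c": 0.6}))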