GazeRoboard: Gaze-communicative guide system in daily life on stuffed-toy robot with interactive display board
ABSTRACT: In this paper, we propose a guide system for daily life in semipublic spaces by adopting a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The system provides naturally anthropomorphic guidance through a) gaze-communicative behaviors of the stuffed-toy robot ("joint attention" and "eye-contact reactions") that virtually express its internal mind, b) voice guidance, and c) projection on the board corresponding to the user's gaze orientation. The user's gaze is estimated by our remote gaze-tracking method. The results from both subjective/objective evaluations and demonstration experiments in a semipublic space show i) the holistic operation of the system and ii) the inherent effectiveness of the gaze-communicative guide.
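The gaze-driven guidance described above can be sketched as a simple dispatcher that maps an estimated gaze direction to a region of the display board, a robot behavior, and projected content. The region boundaries, behavior names, and function names below are illustrative assumptions, not the system's actual parameters.

```python
# Hypothetical sketch: map an estimated gaze yaw angle to a board region
# and a joint-attention response. All names and thresholds are assumed
# for illustration; the paper's remote gaze tracker and behavior set differ.

BOARD_REGIONS = {
    "left_panel":   (-60.0, -20.0),  # gaze yaw range in degrees
    "center_panel": (-20.0, 20.0),
    "right_panel":  (20.0, 60.0),
}

def region_for_gaze(yaw_deg):
    """Return the board region the user's gaze falls on, or None."""
    for region, (lo, hi) in BOARD_REGIONS.items():
        if lo <= yaw_deg < hi:
            return region
    return None

def guide_action(yaw_deg):
    """Joint attention: look where the user looks and project content for
    that region; otherwise seek eye contact to re-engage the user."""
    region = region_for_gaze(yaw_deg)
    if region is None:
        return {"robot": "make_eye_contact", "projection": None}
    return {"robot": f"look_at:{region}", "projection": region}
```

The design choice here is that the robot's gaze and the projection are driven by one shared estimate of the user's gaze, which is what lets the robot's behavior read as "joint attention" rather than two independent outputs.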
ABSTRACT: This paper presents a method for a museum guide robot to choose an appropriate answerer among multiple visitors. First, we observed and videotaped scenes of gallery talk in which human guides ask visitors questions. Based on an analysis of this video, we found that the guides select an answerer by distributing their gaze towards multiple visitors and observing the visitors' gaze responses during the question. Then, we performed experiments with a robot that distributes its gaze towards multiple visitors, and analyzed the visitors' responses. From these experiments, we found that visitors asked questions by the robot felt embarrassed when they had no prior knowledge of the questions, and that visitor gaze during the questions plays an important role in avoiding being asked questions. Based on these findings, we developed a function in a guide robot that observes visitors' gaze responses and selects an appropriate answerer based on them. Gaze responses are tracked and recognized using an omnidirectional camera and a laser range sensor. The effectiveness of our method was confirmed through experiments.

RO-MAN, 2010 IEEE; 10/2010
Conference Paper: People tracking using integrated sensors for human robot interaction
ABSTRACT: In human-human interaction, the position and orientation of participants' bodies and faces play an important role. Thus, robots need to be able to detect and track human bodies and faces, and obtain human positions and orientations, to achieve effective human-robot interaction. It is difficult, however, to robustly obtain such information from video cameras alone in complex environments. Hence, we propose to use integrated sensors composed of a laser range sensor and an omni-directional camera. A Rao-Blackwellized particle filter framework is employed to track the position and orientation of both bodies and heads of people based on the distance data and panorama images captured from the laser range sensor and the omni-directional camera. In addition to the tracking techniques, we present two applications of our integrated sensor system. One is a robotic wheelchair moving with a caregiver; the sensor system detects and tracks the caregiver, and the wheelchair moves with the caregiver based on the tracking results. The other is a museum guide robot that explains exhibits to multiple visitors; the position and orientation data of visitors' bodies and faces enable the robot to distribute its gaze to each of multiple visitors to keep their attention while talking.

Industrial Technology (ICIT), 2010 IEEE International Conference on; 04/2010
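The tracking step can be illustrated with a minimal bootstrap particle filter over a person's 2D position from noisy range-sensor detections. This is a deliberately simplified sketch under assumed noise parameters: the paper's method is a Rao-Blackwellized filter that additionally fuses panorama images and tracks body and head orientation, none of which is reproduced here.

```python
import math
import random

# Minimal bootstrap particle filter for tracking a person's 2D position
# from noisy point detections (e.g. from a laser range sensor). This is a
# simplified illustration, not the authors' Rao-Blackwellized formulation.

N_PARTICLES = 500

def initialize(x0, y0, spread=0.5):
    # Particles are (x, y, weight) tuples scattered around an initial guess.
    return [(random.gauss(x0, spread), random.gauss(y0, spread),
             1.0 / N_PARTICLES) for _ in range(N_PARTICLES)]

def predict(particles, motion_noise=0.1):
    # Random-walk motion model: diffuse particles between sensor frames.
    return [(x + random.gauss(0, motion_noise),
             y + random.gauss(0, motion_noise), w)
            for x, y, w in particles]

def update(particles, zx, zy, sensor_noise=0.3):
    # Reweight particles by the Gaussian likelihood of detection (zx, zy).
    weighted = []
    for x, y, _ in particles:
        d2 = (x - zx) ** 2 + (y - zy) ** 2
        weighted.append((x, y, math.exp(-d2 / (2 * sensor_noise ** 2))))
    total = sum(w for _, _, w in weighted) or 1e-12
    return [(x, y, w / total) for x, y, w in weighted]

def resample(particles):
    # Draw a fresh particle set proportionally to the weights.
    picked = random.choices(particles,
                            weights=[w for _, _, w in particles],
                            k=N_PARTICLES)
    return [(x, y, 1.0 / N_PARTICLES) for x, y, _ in picked]

def estimate(particles):
    # Weighted mean of the particle cloud as the tracked position.
    ex = sum(x * w for x, _, w in particles)
    ey = sum(y * w for _, y, w in particles)
    return ex, ey
```

In the Rao-Blackwellized variant used by the authors, part of the state (here it would be the head orientation, given a body track) is handled analytically instead of by sampling, which keeps the particle count manageable.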
ABSTRACT: This paper proposes a videophone conversation support system based on the behaviors of a companion robot and the switching of camera images in coordination with the user's conversational attitude. To maintain a conversation and achieve comfortable communication, it is necessary to understand the user's conversational state: whether the user is talking (taking the initiative) and whether the user is concentrating on the conversation. First, a) the system estimates the user's conversational state with a machine learning method. Next, b-1) the robot expresses active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when he/she is not really listening to the other user's speech, b-2) the robot shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic, and b-3) the system switches the camera images to create an illusion of eye contact corresponding to the current context of the user's attitude. Empirical studies, a detailed experiment, and a demonstration experiment showed that i) both the robot's active listening behaviors and the camera-image switching compensate for the other person's attitude, ii) the topic provision function is effective during awkward silences, and iii) elderly people prefer long intervals between the robot's behaviors.

09/2010
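The two-axis conversational state (talking vs. not, attending vs. not) and the state-dependent behavior selection can be sketched as a simple rule-based classifier over two hypothetical features. The actual system trains a machine-learning model; the feature names, thresholds, and behavior labels below are illustrative assumptions only.

```python
# Hypothetical sketch of estimating a user's conversational state from two
# assumed features, then mapping the state to a support behavior. The paper
# uses a trained machine-learning classifier, not these fixed thresholds.

def conversational_state(speech_ratio, gaze_on_screen_ratio):
    """Classify the user over a short time window.

    speech_ratio: fraction of the window the user was speaking (0..1)
    gaze_on_screen_ratio: fraction the user looked at the videophone (0..1)
    """
    talking = speech_ratio > 0.3            # user holds the initiative
    attentive = gaze_on_screen_ratio > 0.5  # user attends to the call
    if talking and attentive:
        return "speaking"
    if talking:
        return "speaking_distracted"
    if attentive:
        return "listening"
    return "not_listening"

def robot_response(state):
    # Map the estimated state to the support behaviors described above:
    # active listening compensates for an inattentive listener, and the
    # camera switch creates the illusion of eye contact for a speaker.
    return {
        "speaking": "switch_to_eye_contact_camera",
        "speaking_distracted": "switch_to_eye_contact_camera",
        "listening": "idle",
        "not_listening": "active_listening_nodding",
    }[state]
```

Separating state estimation from behavior selection, as above, is what would let the thresholds be replaced by a learned classifier without touching the behavior mapping.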