Conference Paper

Pointing to space: modeling of deictic interaction referring to regions.

DOI: 10.1145/1734454.1734559
Conference: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2010), Osaka, Japan, March 2-5, 2010
Source: DBLP

ABSTRACT In daily conversation, we sometimes observe deictic interactions that refer to a region in space, such as saying "please put it over there" while pointing. How can such an interaction be made possible with a robot? Is it enough to simulate people's behaviors, such as utterances and pointing? Instead, we highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in deictic interaction that refers to a region in space. The experiments indicate that a robot with simulated cognition of regions improves the efficiency of its deictic interaction. In the second part, we present a method for a robot to computationally simulate cognition of regions.
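As a rough, illustrative sketch of what "computationally simulating cognition of regions" can involve, one could score candidate floor locations by how well they agree with a pointing gesture and treat the high-scoring area as the referred region. The function names, tolerances, and grid below are assumptions made for illustration, not the method reported in the paper.

# Illustrative sketch only: score floor cells against a pointing ray and
# threshold the result to obtain a candidate "referred region".
import numpy as np

def region_likelihood(cells, origin, direction, sigma_angle=0.35, sigma_dist=4.5):
    """Score floor cells (N x 2) given a 2D pointing origin and unit direction."""
    vecs = cells - origin
    dists = np.linalg.norm(vecs, axis=1) + 1e-9
    cos = np.clip((vecs @ direction) / dists, -1.0, 1.0)
    angles = np.arccos(cos)                      # deviation from the pointing ray
    # Cells roughly along the ray and not too far away score highly.
    return np.exp(-(angles / sigma_angle) ** 2) * np.exp(-(dists / sigma_dist) ** 2)

# Example: a 5 m x 5 m floor at 0.25 m resolution, pointer at the origin pointing along +x
xs, ys = np.meshgrid(np.arange(0, 5, 0.25), np.arange(0, 5, 0.25))
cells = np.stack([xs.ravel(), ys.ravel()], axis=1)
scores = region_likelihood(cells, origin=np.array([0.0, 0.0]), direction=np.array([1.0, 0.0]))
referred = cells[scores > 0.5]                   # crude threshold stands in for "over there"
print(f"{len(referred)} cells interpreted as the referred region")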

  • ABSTRACT: In designing and developing a gesture recognition system, it is crucial to know the characteristics of the gestures selected to control, for example, the end effector of a robot arm. We conducted an experiment to collect a set of user-defined gestures and to investigate their characteristics for controlling primitive motions of an end effector in human–robot collaboration. We recorded 152 gestures from 19 volunteers by presenting virtual robot-arm movements to the participants and then asking them to think of and perform gestures that would cause those motions. It was found that the hands were the body parts used most often for gesture articulation, even when participants were holding tools and objects with both hands; that a number of participants used one- and two-handed gestures interchangeably; that gestures were performed consistently across all pairs of reversible gestures; and that participants expected better recognition performance for gestures that were easy to think of and perform. These findings are expected to be useful as guidelines for creating gesture sets for controlling robotic arms according to natural user behaviors.
    Advanced Robotics 01/2015; 29(4). DOI: 10.1080/01691864.2014.978371
  • ABSTRACT: Developing interactive behaviors for social robots presents a number of challenges. It is difficult to interpret the meaning of the details of people's behavior, particularly non-verbal behavior such as body positioning, yet a social robot needs to be contingent on such subtle behaviors. It also needs to generate utterances and non-verbal behavior with good timing and coordination. The rules for such behavior are often based on implicit knowledge and are thus difficult for a designer to describe or program explicitly. We propose to teach such behaviors to a robot with a learning-by-demonstration approach, using recorded human-human interaction data to identify both the behaviors the robot should perform and the social cues it should respond to. In this study, we present a fully unsupervised approach that uses abstraction and clustering to identify behavior elements and joint interaction states, which are used in a variable-length Markov model predictor to generate socially appropriate behavior commands for a robot. The proposed technique provides encouraging results despite high amounts of sensor noise, especially in speech recognition. We demonstrate our system with a robot in a shopping scenario.
    23rd International Symposium on Robot and Human Interactive Communication (RO-MAN 2014); 08/2014
  • ABSTRACT: We built a model of the environment for human–robot interaction by learning from human cognitive processes. Our method, which differs from previous map-building techniques in terms of perspective, is based on the route perspective: a mental tour of the environment. The main contribution of this work is the theory and computational implementation of the concept of a route with its respective visual memory. The concept of a route is modeled as a three-layered model composed of a memory layer, a survey layer, and a route layer. Imitating the human concept of a route, the route layer is modeled as a directional path segmented by action-taking points, each associated with visual memory captured while traveling the path. We developed a system that generates human-understandable route directions and evaluated it against two alternatives: directions copied from the explanation of a human expert and directions generated by a model without the route-perspective layer. Experimental results demonstrate the usefulness of the route-perspective layer, since the full model performed better than the model without the route layer and similarly to the human expert.
    International Journal of Social Robotics 11/2014; 7(2):1-17. DOI: 10.1007/s12369-014-0265-8
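The last entry above describes a route layer as a directional path segmented by actions and paired with visual memories, from which route directions are generated. Below is a toy sketch of what such a representation could look like; all class and field names are assumptions for illustration, not the authors' implementation.

# Toy sketch: a route as an ordered list of action-bounded segments, each tied
# to a visual memory (landmark), from which step-by-step directions are generated.
from dataclasses import dataclass
from typing import List

@dataclass
class RouteSegment:
    action: str       # action that starts the segment, e.g. "turn left"
    landmark: str     # visual memory associated with the segment
    length_m: float   # distance travelled before the next action

@dataclass
class Route:
    segments: List[RouteSegment]

    def directions(self) -> str:
        """Generate human-understandable route directions, segment by segment."""
        steps = [f"{i}. {s.action}, go about {s.length_m:.0f} m until you see {s.landmark}."
                 for i, s in enumerate(self.segments, start=1)]
        return "\n".join(steps)

route = Route([
    RouteSegment("go straight", "the elevator hall", 12),
    RouteSegment("turn left", "a vending machine", 8),
    RouteSegment("turn right", "the meeting-room door", 5),
])
print(route.directions())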
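Similarly, the learning-by-demonstration entry above mentions clustering interaction data into discrete joint interaction states and feeding them to a variable-length Markov model predictor. A minimal sketch of that kind of pipeline, using assumed interfaces and synthetic data rather than the authors' sensors and features, might look like this:

# Minimal sketch: discretize interaction features into states with k-means, then
# predict the next state from the longest previously observed context (backing off
# to shorter contexts), as a stand-in for a variable-length Markov model predictor.
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans

def discretize(features, n_states=8, seed=0):
    """Cluster raw feature vectors into discrete joint interaction states."""
    return KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit_predict(features)

class VariableLengthPredictor:
    def __init__(self, max_len=3):
        self.max_len = max_len
        self.counts = defaultdict(Counter)     # context tuple -> next-state counts

    def fit(self, states):
        for i in range(1, len(states)):
            for k in range(1, min(self.max_len, i) + 1):
                self.counts[tuple(states[i - k:i])][states[i]] += 1
        return self

    def predict(self, recent):
        # Back off from the longest matching context to shorter ones.
        for k in range(min(self.max_len, len(recent)), 0, -1):
            ctx = tuple(recent[-k:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None                            # nothing observed yet

features = np.random.rand(500, 6)              # synthetic stand-in for sensor features
states = list(discretize(features))
predictor = VariableLengthPredictor(max_len=3).fit(states)
print("predicted next state:", predictor.predict(states[-3:]))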