Conference Paper

Pointing to space: modeling of deictic interaction referring to regions.

DOI: 10.1145/1734454.1734559 Conference: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction, HRI 2010, Osaka, Japan, March 2-5, 2010
Source: DBLP

ABSTRACT: In daily conversation, we sometimes observe deictic interactions that refer to a region of space, such as saying "please put it over there" while pointing. How can such an interaction be made possible with a robot? Is it enough to simulate people's behaviors, such as utterances and pointing? Instead, we highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in a deictic interaction that refers to a region of space. The experiments indicate that a robot with simulated cognition of regions improves the efficiency of its deictic interaction. In the second part, we present a method for a robot to computationally simulate cognition of regions.
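The paper itself includes no code, but a rough sketch helps make "computationally simulating cognition of regions" concrete. The Python snippet below intersects a pointing ray with the floor and spreads a probability distribution over floor cells around the indicated point, standing in for the robot's model of which region "over there" denotes. Every name and parameter here (pointing_target, region_scores, the Gaussian spread) is a hypothetical illustration, not the authors' method.

```python
import numpy as np

def pointing_target(origin, direction):
    """Intersect a pointing ray with the floor plane z = 0.

    origin: 3-D position of the pointing hand; direction: unit ray vector.
    Returns the (x, y) floor point the ray indicates, or None if the ray
    never reaches the floor.
    """
    if direction[2] >= 0:              # ray points up or along the floor
        return None
    t = -origin[2] / direction[2]      # ray parameter where z becomes 0
    return (origin + t * direction)[:2]

def region_scores(cells, target, spread=0.5):
    """Score floor cells as members of the referred region.

    cells: (N, 2) array of cell centers; spread: hypothetical standard
    deviation (meters) of how far the intended region extends around
    the indicated point.
    """
    d2 = np.sum((cells - target) ** 2, axis=1)
    scores = np.exp(-d2 / (2.0 * spread ** 2))
    return scores / scores.sum()       # normalize into a distribution

# Toy usage: hand at 1.2 m height, pointing forward and down.
hand = np.array([0.0, 0.0, 1.2])
ray = np.array([0.6, 0.0, -0.8])
ray = ray / np.linalg.norm(ray)
grid = np.array([[0.5 * x, 0.5 * y] for x in range(8) for y in range(-4, 4)])
probs = region_scores(grid, pointing_target(hand, ray))
print(grid[np.argmax(probs)])          # cell most likely meant by "there"
```

In the paper's spirit, the interesting part is the region model itself: a real system would shape this distribution with walls, furniture, and the other cues people use when they cognitively segment space into regions.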

Cited by:
  • ABSTRACT: Pointing behaviors are used for referring to objects and people in everyday interactions, but behaviors that are acceptable for referring to objects are not necessarily polite or socially appropriate for referring to humans. In this study, we confirm that although people point precisely at an object to indicate where it is, they are hesitant to do so when pointing at another person. We propose a model for generating socially appropriate deictic behaviors in a robot, based on balancing two factors: understandability and social appropriateness. In an experiment with a robot in a shopping mall, the robot's deictic behavior was perceived as more polite, more natural, and better overall when using our model than when using a model that considers understandability alone. (A toy sketch of this balancing model follows the list.)
    8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013); 03/2013
  • ABSTRACT: We built a model of the environment for human–robot interaction by learning from human cognitive processes. Our method differs from previous map-building techniques in its perspective: it is based on a route perspective, i.e., a mental tour of the environment. The main contribution of this work is the theory and computational implementation of the concept of a route with its associated visual memory. The route concept is modeled as a three-layered structure composed of a memory layer, a survey layer, and a route layer. Imitating the human concept of a route, the route layer is modeled as a directional path segmented by the actions taken, with visual memories captured while traveling the path. We developed a system that generates human-understandable route directions and evaluated it against two baselines: directions copied from the explanation of a human expert, and directions generated by a model without the route-perspective layer. Experimental results demonstrate the usefulness of the route-perspective layer: it performed better than the model without the route layer and comparably to the human expert. (A minimal sketch of the layered route model follows the list.)
    International Journal of Social Robotics; 11/2014
  • ABSTRACT: Developing interactive behaviors for social robots presents a number of challenges. It is difficult to interpret the meaning of the details of people's behavior, particularly non-verbal behavior like body positioning, yet a social robot needs to be contingent on such subtle behaviors. It needs to generate utterances and non-verbal behavior with good timing and coordination. The rules for such behavior are often based on implicit knowledge and are thus difficult for a designer to describe or program explicitly. We propose to teach such behaviors to a robot with a learning-by-demonstration approach, using recorded human-human interaction data to identify both the behaviors the robot should perform and the social cues it should respond to. In this study, we present a fully unsupervised approach that uses abstraction and clustering to identify behavior elements and joint interaction states, which are used in a variable-length Markov model predictor to generate socially appropriate behavior commands for a robot. The proposed technique provides encouraging results despite high amounts of sensor noise, especially in speech recognition. We demonstrate our system with a robot in a shopping scenario. (A toy sketch of the clustering-plus-Markov-predictor pipeline follows the list.)
    23rd International Symposium on Robot and Human Interactive Communication (RO-MAN 2014); 08/2014
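For concreteness, here is a toy Python sketch of the understandability-versus-social-appropriateness balance described in the first item above (HRI 2013). The candidate behaviors, their scores, and the weighting scheme are invented placeholders; the paper's actual model is richer.

```python
# Toy re-creation of the "understandability vs. social appropriateness"
# trade-off. The candidate behaviors, their scores, and the weight alpha
# are all hypothetical placeholders, not values from the paper.
CANDIDATES = {
    # behavior: (understandability, appropriateness toward a person)
    "precise_point":   (0.95, 0.30),  # clearest, but rude toward people
    "open_palm_point": (0.80, 0.70),  # softer, more deferential gesture
    "head_gaze_only":  (0.50, 0.90),  # subtle, very polite
    "verbal_only":     (0.35, 0.95),  # no gesture at all
}

def choose_behavior(referent_is_person, alpha=0.5):
    """Pick the behavior maximizing a weighted sum of the two factors.

    For objects, social appropriateness barely matters, so its weight
    is dropped close to zero.
    """
    weight = alpha if referent_is_person else 0.05
    def score(item):
        understand, polite = item[1]
        return (1 - weight) * understand + weight * polite
    return max(CANDIDATES.items(), key=score)[0]

print(choose_behavior(referent_is_person=False))  # precise_point
print(choose_behavior(referent_is_person=True))   # a politer gesture wins
```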
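The three-layer route model in the second item above (IJSR 2014) can likewise be sketched as data structures: a route layer holding a directional path segmented by actions, each segment tied to a visual memory. The class and field names below are assumptions for illustration only, not the paper's implementation.

```python
# A minimal sketch of the route layer: a directional path segmented by
# actions, each segment associated with a visual memory captured while
# traveling the path. Names and the direction template are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RouteSegment:
    action: str          # action that segments the path, e.g. "turn left"
    landmark: str        # label of the visual memory captured there
    distance_m: float    # distance traveled within this segment

@dataclass
class Route:
    segments: list = field(default_factory=list)

    def directions(self):
        """Render the route as human-understandable step-by-step text."""
        steps = [f"go {s.distance_m:.0f} m, then {s.action} at the {s.landmark}"
                 for s in self.segments]
        return "; ".join(steps)

route = Route([
    RouteSegment("turn left", "red vending machine", 12),
    RouteSegment("turn right", "elevator hall", 8),
    RouteSegment("stop", "bookstore entrance", 20),
])
print(route.directions())
```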
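Finally, a toy version of the unsupervised pipeline in the third item above (RO-MAN 2014): cluster raw interaction features into discrete joint states, then predict the next action with a variable-length Markov model that backs off from longer to shorter contexts. The feature shapes, cluster count, and maximum order are all hypothetical, and the stand-in data is random.

```python
# Sketch: abstract raw observations into discrete behavior elements by
# clustering, then predict the next state with a variable-length Markov
# model. All parameters here are illustrative assumptions.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

def discretize(features, n_states=4, seed=0):
    """Cluster raw interaction features into discrete joint states."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed)
    return km.fit_predict(features)

class VLMM:
    """Variable-length Markov predictor over a symbol sequence."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, seq):
        # Count next-symbol frequencies for every context up to max_order.
        for i in range(len(seq)):
            for k in range(1, self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[tuple(seq[i - k:i])][seq[i]] += 1

    def predict(self, history):
        # Back off from the longest context observed in training.
        for k in range(self.max_order, 0, -1):
            ctx = tuple(history[-k:])
            if ctx in self.counts:
                nxt = self.counts[ctx]
                return max(nxt, key=nxt.get)
        return None

features = np.random.RandomState(0).rand(60, 2)   # stand-in sensor data
states = discretize(features).tolist()
model = VLMM(max_order=3)
model.fit(states)
print(model.predict(states[-3:]))                 # most likely next state
```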