Conference Paper

Pointing to space: modeling of deictic interaction referring to regions.

DOI: 10.1145/1734454.1734559 Conference: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction, HRI 2010, Osaka, Japan, March 2-5, 2010
Source: DBLP

ABSTRACT: In daily conversation, we sometimes observe deictic interactions that refer to a region in space, such as saying "please put it over there" while pointing. How can such an interaction be realized with a robot? Is it enough to simulate people's behaviors, such as utterances and pointing? Instead, we highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in deictic interaction by referring to a region in space. The experiments indicate that a robot with simulated cognition of regions improves the efficiency of its deictic interaction. In the second part, we present a method for a robot to computationally simulate cognition of regions.
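The abstract does not spell out the computational model, so the following is only an illustrative sketch of what resolving "over there" could involve: the region indicated by a pointing gesture is approximated as a Gaussian-weighted neighborhood of floor cells around the point where the pointing ray meets the floor. All names and parameters here (`ray_floor_intersection`, `pointing_region_weights`, `spread`) are hypothetical and not taken from the paper.

```python
import numpy as np

def ray_floor_intersection(origin, direction):
    """Intersect a pointing ray (e.g., shoulder-to-hand) with the floor plane z = 0.

    origin, direction: 3D numpy arrays; direction need not be normalized.
    Returns the (x, y) hit point, or None if the ray does not reach the floor.
    """
    if direction[2] >= 0:          # ray is parallel to or points away from the floor
        return None
    t = -origin[2] / direction[2]  # parameter at which z becomes 0
    hit = origin + t * direction
    return hit[:2]

def pointing_region_weights(grid_xy, origin, direction, spread=0.5):
    """Assign each floor grid cell a weight for belonging to the pointed-to region.

    grid_xy: (N, 2) array of cell-center coordinates in meters.
    spread:  standard deviation (m) of the assumed Gaussian around the hit point.
    Returns an (N,) array of weights in [0, 1].
    """
    hit = ray_floor_intersection(origin, direction)
    if hit is None:
        return np.zeros(len(grid_xy))
    d2 = np.sum((grid_xy - hit) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * spread ** 2))

# Example: a 4 m x 4 m floor discretized into 0.25 m cells
xs, ys = np.meshgrid(np.arange(0, 4, 0.25), np.arange(0, 4, 0.25))
grid = np.column_stack([xs.ravel(), ys.ravel()])
w = pointing_region_weights(grid, origin=np.array([1.0, 1.0, 1.4]),
                            direction=np.array([1.0, 0.5, -1.0]))
region_cells = grid[w > 0.5]   # cells a listener would plausibly treat as "over there"
```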

  • ABSTRACT: Pointing behaviors are used to refer to objects and people in everyday interactions, but the behaviors used for referring to objects are not necessarily polite or socially appropriate when referring to humans. In this study, we confirm that although people point precisely to an object to indicate where it is, they are hesitant to do so when pointing to another person. We propose a model for generating socially appropriate deictic behaviors in a robot, based on balancing two factors: understandability and social appropriateness. In an experiment with a robot in a shopping mall, the robot's deictic behavior was perceived as more polite, more natural, and better overall when using our model than when using a model based on understandability alone.
    Human-Robot Interaction (HRI), 2013 8th ACM/IEEE International Conference on; 03/2013
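As a rough sketch of the two-factor idea described in the entry above, the snippet below scores candidate deictic behaviors by a weighted sum of understandability and social appropriateness and picks the best one. The candidate behaviors, scores, and weighting scheme are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class DeicticBehavior:
    name: str
    understandability: float       # 0..1, how reliably the listener resolves the referent
    social_appropriateness: float  # 0..1, how polite the behavior is toward the referent

def select_behavior(candidates, referent_is_person, alpha=0.5):
    """Pick the behavior that best balances the two factors.

    alpha weights understandability against appropriateness; when the referent
    is an object, appropriateness barely matters, so alpha is raised.
    The weighting scheme is an assumption for illustration only.
    """
    a = 0.9 if not referent_is_person else alpha
    def score(b):
        return a * b.understandability + (1.0 - a) * b.social_appropriateness
    return max(candidates, key=score)

candidates = [
    DeicticBehavior("precise pointing",        0.95, 0.30),
    DeicticBehavior("open-hand gesture",       0.75, 0.80),
    DeicticBehavior("gaze + verbal reference", 0.55, 0.95),
]

print(select_behavior(candidates, referent_is_person=False).name)  # precise pointing
print(select_behavior(candidates, referent_is_person=True).name)   # open-hand gesture
```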
  • ABSTRACT: Developing interactive behaviors for social robots presents a number of challenges. It is difficult to interpret the meaning of the details of people's behavior, particularly non-verbal behavior such as body positioning, yet a social robot needs to respond contingently to such subtle cues. It must generate utterances and non-verbal behavior with good timing and coordination. The rules for such behavior are often based on implicit knowledge and are thus difficult for a designer to describe or program explicitly. We propose to teach such behaviors to a robot with a learning-by-demonstration approach, using recorded human-human interaction data to identify both the behaviors the robot should perform and the social cues it should respond to. In this study, we present a fully unsupervised approach that uses abstraction and clustering to identify behavior elements and joint interaction states, which are used in a variable-length Markov model predictor to generate socially appropriate behavior commands for a robot. The proposed technique provides encouraging results despite high amounts of sensor noise, especially in speech recognition. We demonstrate our system with a robot in a shopping scenario.
    23rd International Symposium on Robot and Human Interactive Communication (RO-MAN 2014); 08/2014
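A minimal sketch of the prediction step in the entry above, assuming the joint interaction states have already been discretized (the paper derives them by unsupervised clustering; the toy symbols below are placeholders): a variable-length Markov model counts next-state frequencies for contexts of increasing length and backs off to shorter contexts at prediction time.

```python
from collections import defaultdict

def train_vlmm(sequences, max_order=3):
    """Count next-symbol frequencies for every context up to max_order.

    sequences: lists of discrete joint-interaction-state symbols
               (hand-written placeholders here, not real sensor-derived states).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(1, len(seq)):
            for order in range(1, max_order + 1):
                if i - order < 0:
                    break
                context = tuple(seq[i - order:i])
                counts[context][seq[i]] += 1
    return counts

def predict_next(counts, history, max_order=3):
    """Back off from the longest matching context to shorter ones."""
    for order in range(min(max_order, len(history)), 0, -1):
        context = tuple(history[-order:])
        if context in counts:
            nxt = counts[context]
            return max(nxt, key=nxt.get)
    return None

# Toy demonstration with symbolic interaction states
demos = [
    ["customer_approach", "robot_greet", "customer_ask", "robot_point", "customer_leave"],
    ["customer_approach", "robot_greet", "customer_browse", "robot_wait"],
]
model = train_vlmm(demos)
print(predict_next(model, ["customer_approach", "robot_greet"]))  # e.g. "customer_ask"
```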
  • ABSTRACT: This study addresses a robot that waits for users while they shop. In order to wait, the robot needs to understand which locations are appropriate for waiting. We investigated how people choose waiting locations and found that they are concerned with “disturbing pedestrians” and “disturbing shop activities”. Using these criteria, we developed a classifier of waiting locations. “Disturbing pedestrians” is estimated from statistics of pedestrian trajectories, observed with a human-tracking system based on laser range finders. “Disturbing shop activities” is estimated from shop visibility. We evaluated this autonomous waiting behavior in a shopping-assist scenario. The experimental results revealed that users found that the autonomous robot chose more appropriate waiting locations than a robot choosing at random or one positioned manually by the users themselves.
    Human-Robot Interaction (HRI), 2013 8th ACM/IEEE International Conference on; 01/2013
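As an illustration of the two criteria described in the entry above, the sketch below classifies a candidate waiting cell by combining a pedestrian-disturbance estimate from tracked trajectories with a simple proximity stand-in for the paper's visibility-based estimate of disturbing shop activities. Thresholds, features, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def pedestrian_disturbance(cell_xy, trajectories, radius=0.6):
    """Fraction of recorded pedestrian positions passing within `radius` of the cell.

    trajectories: list of (T_i, 2) position arrays from a people-tracking system.
    """
    points = np.concatenate(trajectories, axis=0)
    near = np.linalg.norm(points - cell_xy, axis=1) < radius
    return near.mean()

def shop_disturbance(cell_xy, shop_entrances, clearance=1.5):
    """1.0 if the cell sits within the clearance zone of any shop entrance, else 0.0.

    A crude stand-in for the paper's shop-visibility estimate.
    """
    dists = np.linalg.norm(shop_entrances - cell_xy, axis=1)
    return float(np.any(dists < clearance))

def is_good_waiting_spot(cell_xy, trajectories, shop_entrances,
                         traffic_threshold=0.02):
    """Classify a candidate cell: low pedestrian traffic and away from shop fronts."""
    return (pedestrian_disturbance(cell_xy, trajectories) < traffic_threshold
            and shop_disturbance(cell_xy, shop_entrances) == 0.0)

# Toy example: one straight pedestrian flow and one shop entrance
flow = [np.column_stack([np.linspace(0, 10, 50), np.full(50, 2.0)])]
entrances = np.array([[8.0, 5.0]])
print(is_good_waiting_spot(np.array([5.0, 2.0]), flow, entrances))  # False: inside the flow
print(is_good_waiting_spot(np.array([2.0, 6.0]), flow, entrances))  # True: out of the way
```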