In daily conversation, we sometimes observe deictic interactions that refer to a region of space, such as saying "please put it over there" while pointing. How can such an interaction be achieved with a robot? Is it enough to mimic human behaviors, such as utterances and pointing? We instead highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in deictic interaction by referring to a region of space. The experiments indicate that a robot with simulated cognition of regions improves the efficiency of its deictic interactions. In the second part, we present a method by which a robot can computationally simulate cognition of regions.
"Prior work on robot deictics has shown that referring to a region of space, which is often more difficult to verbalize than an object, results in only a marginally worse accuracy rate than referring to an object. Robots can use visual differences between spaces in combination with deictics to help listeners identify the correct region. St. Clair et al. demonstrated that a robot using a combination of deictic gesture and gaze to refer to a space achieved higher accuracy than using either modality alone."
ABSTRACT: As robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to objects of joint interest and adapt their references to various physical, environmental, and task conditions. Humans use a broad range of deictic gestures (gestures that direct attention to collocated objects, persons, or spaces), including pointing, touching, and exhibiting, to help their listeners understand their references. These gestures offer varying levels of support under different conditions, making some gestures more or less suitable for different settings. While these gestures offer a rich space for designing communicative behaviors for robots, a better understanding of how different deictic gestures affect communication under different conditions is critical for achieving effective human-robot interaction. In this paper, we seek to build such an understanding by implementing six deictic gestures on a humanlike robot and evaluating their communicative effectiveness in six diverse settings that represent physical, environmental, and task conditions under which robots are expected to employ deictic communication. Our results show that gestures which come into physical contact with the object offer the highest overall communicative accuracy and that specific settings benefit from the use of particular types of gestures. Our results highlight the rich design space for deictic gestures and inform how robots might adapt their gestures to specific physical, environmental, and task conditions.
"Some studies in human-robot interaction have focused on generating human-like multimodal referring acts, using both speech and gesture, for objects and for regions of space. Brooks and Breazeal describe a framework for multimodally referring to objects using a combination of deictic gesture, speech, and spatial knowledge."
ABSTRACT: Pointing behaviors are used for referring to objects and people in everyday interactions, but the behaviors used for referring to objects are not necessarily polite or socially appropriate for referring to humans. In this study, we confirm that although people will point precisely to an object to indicate where it is, they are hesitant to do so when pointing to another person. We propose a model for generating socially appropriate deictic behaviors in a robot. The model is based on balancing two factors: understandability and social appropriateness. In an experiment with a robot in a shopping mall, we found that the robot's deictic behavior was perceived as more polite, more natural, and better overall when using our model, compared with a model considering understandability alone.
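The abstract above names two factors (understandability and social appropriateness) but does not give the model's actual formulation. A minimal sketch of the idea, assuming a simple weighted-sum trade-off and made-up scores for hypothetical candidate behaviors (neither the formula nor the numbers come from the paper):

```python
# Hypothetical sketch: selecting a deictic behavior by balancing the two
# factors named in the abstract. The weighted-sum form, the candidate
# behaviors, and all scores are illustrative assumptions, not the paper's
# actual model.

def select_behavior(behaviors, weight=0.5):
    """Pick the behavior maximizing a blend of understandability and
    social appropriateness (both assumed to be scores in [0, 1])."""
    def score(b):
        return weight * b["understandability"] + (1 - weight) * b["appropriateness"]
    return max(behaviors, key=score)

# Made-up candidates: a precise point is easy to understand but can be
# impolite when directed at a person; subtler behaviors trade the reverse.
candidates = [
    {"name": "precise point",    "understandability": 0.9, "appropriateness": 0.3},
    {"name": "open-palm gesture", "understandability": 0.7, "appropriateness": 0.8},
    {"name": "gaze only",        "understandability": 0.4, "appropriateness": 0.9},
]

best = select_behavior(candidates)
print(best["name"])  # at equal weighting, the open-palm gesture wins
```

Under this toy scoring, a model considering understandability alone (weight = 1.0) would pick the precise point, while the balanced model prefers the less direct gesture, mirroring the trade-off the abstract describes.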
Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013); 03/2013
"Intentional analysis and timing remain nontrivial, except in the context of performing a specific pre-determined task. Both recognition and production of deictic gestures have been studied in human-human, human-computer, and human-robot interaction (HRI) settings. Our work adds to this field a step toward an empirically grounded HRI model of deictic gestural accuracy between people and robots, with implications for the design of robot embodiments and control systems that perform situated distal pointing."
ABSTRACT: In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing), and compare the results to the perception of human pointing.