Article

Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching.

IEEE Transactions on Autonomous Mental Development 01/2011; 3:43-53.
Source: DBLP

ABSTRACT Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of a humanoid robot's eye and arm movements. Simulation results confirm that mixed neural populations, such as those found in the brain areas modeled in this work, are especially suitable to the problem at hand. In the final experimental setup, through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map that allows the peripersonal space to be represented contextually through different vision and motor parameters is never made explicit, but rather emerges through the interaction of the agent with the environment.
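The abstract gives no implementation details, but the kind of radial basis function mapping it describes can be sketched in a few lines. The following Python snippet is a minimal illustration under invented assumptions (the dimensions, class name, and toy data are hypothetical, not taken from the paper): Gaussian RBF units over oculomotor coordinates (pan, tilt, vergence) with a linear readout fitted by ridge regression, giving a forward transformation into arm joint space.

```python
import numpy as np

class RBFMap:
    """Gaussian radial basis function map between two sensorimotor spaces.

    A minimal sketch: fixed, randomly placed centers and a linear readout
    fitted by ridge regression. Dimensions are illustrative only
    (3 oculomotor inputs -> 4 arm-joint outputs).
    """

    def __init__(self, n_centers=50, in_dim=3, out_dim=4, width=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1.0, 1.0, size=(n_centers, in_dim))
        self.width = width
        self.weights = np.zeros((n_centers, out_dim))

    def _activations(self, x):
        # Gaussian activation of every hidden unit for every input sample.
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, x, y, ridge=1e-3):
        # Fit the linear readout on (input, target) pairs such as those
        # gathered during exploratory gazing-and-reaching episodes.
        a = self._activations(x)
        gram = a.T @ a + ridge * np.eye(a.shape[1])
        self.weights = np.linalg.solve(gram, a.T @ y)

    def predict(self, x):
        return self._activations(x) @ self.weights


# Usage with synthetic stand-in data: learn a hypothetical mapping from
# oculomotor coordinates (pan, tilt, vergence) to four arm joint angles.
rng = np.random.default_rng(1)
gaze = rng.uniform(-1.0, 1.0, size=(200, 3))
joints = np.tanh(gaze @ rng.normal(size=(3, 4)))  # stand-in for real pairs
rbf = RBFMap()
rbf.fit(gaze, joints)
print("mean abs error:", np.abs(rbf.predict(gaze) - joints).mean())
```

Trained with inputs and targets swapped, the same machinery would approximate the inverse transformation; the paper's actual network layout, training schedule, and use of mixed neural populations may differ from this sketch.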

Related publications:
  • ABSTRACT: Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimation appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated nine-degree-of-freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly (a minimal sketch of such cross-modality fusion appears after this list). In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.
    Biological Cybernetics 10/2012.
  • ABSTRACT: The aim of this paper is to improve the skills of robotic systems in their interaction with nearby objects. The basic idea is to enhance the visual estimation of objects in the world by merging different visual estimators of the same stimuli. A neuroscience-inspired model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions (a sketch of one standard merging scheme appears after this list). Experimental results suggest that the integration of multiple monocular and binocular cues can make robot sensory systems more reliable and versatile. The same results, compared with simulations and data from human studies, show that the model is able to reproduce some well-recognized neuropsychological effects.
    IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 10/2011; 42(2):530-8.
  • ABSTRACT: In human-human interactions, a consciously perceived high degree of self-other overlap is associated with a higher degree of integration of the other person's actions into one's own cognitive representations. Here, we report data suggesting that this pattern does not hold for human-robot interactions. Participants performed a social Simon task with a robot, and afterwards indicated the degree of self-other overlap using the Inclusion of the Other in the Self (IOS) scale. We found no overall correlation between the social Simon effect (as an indirect measure of self-other integration) …
    International Journal of Humanoid Robotics 04/2013; 10(1):1-13.
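The first entry above (the MMF model) maintains separate probabilistic body state estimates per modality frame and compares them to spot faulty sensors. The snippet below is a minimal sketch of that general idea, assuming independent Gaussian estimates fused by inverse-variance weighting; it is not the authors' implementation, and the threshold and all numbers are invented.

```python
import numpy as np

def fuse_modalities(means, variances, z_thresh=2.0):
    """Inverse-variance fusion of per-modality body state estimates.

    means, variances: arrays of shape (n_modalities, state_dim) holding
    each modality frame's Gaussian estimate of the same body state.
    Returns the fused mean and a boolean mask of suspected faulty
    modalities (those deviating strongly from the fused consensus).
    """
    w = 1.0 / variances
    fused = (w * means).sum(axis=0) / w.sum(axis=0)
    # Flag a modality if its estimate lies more than z_thresh of its own
    # standard deviations from the fused mean in any state dimension.
    # (The consensus still includes the outlier, so z_thresh is modest.)
    z = np.abs(means - fused) / np.sqrt(variances)
    faulty = (z > z_thresh).any(axis=1)
    return fused, faulty


# Usage with invented numbers: three modality frames estimate a 2-D hand
# position; the third one is biased, mimicking a faulty sensor.
means = np.array([[0.50, 0.20], [0.52, 0.19], [0.90, 0.60]])
variances = np.array([[0.01, 0.01], [0.02, 0.02], [0.01, 0.01]])
fused, faulty = fuse_modalities(means, variances)
print("fused estimate:", fused, "suspected faulty:", faulty)
```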
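The second entry merges stereoptic and perspective orientation estimators. One standard merging scheme, sketched below under the assumption of independent Gaussian noise on each cue, is reliability-weighted (maximum-likelihood) combination; the paper tests several merging criteria, which need not coincide with this one.

```python
import numpy as np

def merge_cues(theta_stereo, sigma_stereo, theta_persp, sigma_persp):
    """Maximum-likelihood merging of two orientation estimates (degrees).

    Each cue is weighted by its inverse variance, so the merged estimate
    is pulled towards the more reliable cue and is more precise than
    either cue alone. Angles are assumed far from wrap-around; circular
    statistics would be needed near +/-180 degrees.
    """
    w_s = 1.0 / sigma_stereo ** 2
    w_p = 1.0 / sigma_persp ** 2
    theta = (w_s * theta_stereo + w_p * theta_persp) / (w_s + w_p)
    sigma = np.sqrt(1.0 / (w_s + w_p))  # merged std is below both inputs
    return theta, sigma


# Usage with invented numbers: stereo is reliable at this near distance,
# the perspective cue less so, and the merged estimate reflects that.
theta, sigma = merge_cues(30.0, 2.0, 36.0, 4.0)
print(f"merged orientation: {theta:.1f} deg +/- {sigma:.1f} deg")
```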
