Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching.

IEEE Transactions on Autonomous Mental Development 01/2011; 3:43-53.
Source: DBLP

ABSTRACT Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained through the coordinated control of a humanoid robot's eye and arm movements. Simulated results confirm that mixed neural populations, such as those found in the particular brain areas modeled in this work, are especially suitable for the problem at hand. In the final experimental setup, through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which contextually represents the peripersonal space through different vision and motor parameters, is never made explicit, but rather emerges through the interaction of the agent with the environment.
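The radial basis function mapping described in the abstract can be sketched in toy form. The following is a minimal illustration, not the paper's implementation: the two-joint planar arm, the babbling ranges, and the network size are all assumptions standing in for the humanoid robot. Random "exploratory" movements provide training pairs, and a Gaussian RBF layer with a linear read-out learns the inverse transformation from a fixated target position to the joint configuration that reaches it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy kinematics: a 2-joint planar arm. The "visual" input is
# the Cartesian position the agent fixates; the output is the joint
# configuration that reaches it (the inverse transformation).
L1, L2 = 0.5, 0.4  # assumed link lengths (m)

def forward(q):
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Exploratory "motor babbling": random joint configurations and the
# resulting fixated positions provide (input, target) training pairs.
Q = rng.uniform([0.2, 0.2], [1.4, 2.0], size=(500, 2))
X = forward(Q)

# Radial basis function layer: Gaussian units centred on a sample of the
# explored workspace, with a linear read-out fitted by least squares.
centers = X[rng.choice(len(X), 60, replace=False)]
sigma = 0.15

def rbf(x):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

W, *_ = np.linalg.lstsq(rbf(X), Q, rcond=None)

# Inverse map: predict joints for a novel target, then verify by forward
# kinematics that the predicted posture actually reaches it.
target = np.array([[0.26, 0.75]])
q_pred = rbf(target) @ W
err = np.linalg.norm(forward(q_pred) - target)
print(f"reach error: {err:.4f} m")
```

In this restricted joint range the forward kinematics are injective (a single elbow branch), so the inverse map is a well-defined function and the RBF regression converges to it without ambiguity.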



Available from: Angel P. del Pobil, Jun 19, 2015
  • Source
    ABSTRACT: Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biologically inspired approach based on the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of the space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences, performed through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
    The Scientific World Journal 02/2014; 2014:179391. DOI:10.1155/2014/179391
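The phase-based disparity module lends itself to a compact illustration. The sketch below is a hypothetical 1-D reduction, not the paper's implementation: the filter frequency, the Gaussian window, and the test signal are all assumptions. A complex Gabor filter is applied to a "left" signal and to a shifted "right" copy; the phase difference between the two responses, divided by the filter's peak frequency, recovers the (subpixel) disparity.

```python
import numpy as np

# Toy 1-D phase-based disparity estimation (illustrative parameters).
true_disparity = 2.5                    # pixels, subpixel on purpose
f0 = 1 / 20.0                           # filter's peak spatial frequency
x = np.arange(256, dtype=float)
left = np.sin(2 * np.pi * f0 * x)       # signal at the filter's frequency
right = np.interp(x - true_disparity, x, left)  # right(x) = left(x - d)

# Complex Gabor filter: Gaussian-windowed complex exponential.
t = np.arange(-36, 37, dtype=float)
gabor = np.exp(-t**2 / (2 * 12.0**2)) * np.exp(2j * np.pi * f0 * t)

rl = np.convolve(left, gabor, mode="same")
rr = np.convolve(right, gabor, mode="same")

# Phase difference at the image centre: since the right response is a
# shifted copy of the left one, angle(rl * conj(rr)) = 2*pi*f0*d, so
# dividing by 2*pi*f0 yields the local disparity estimate.
i = len(x) // 2
dphi = np.angle(rl[i] * np.conj(rr[i]))
disparity_est = dphi / (2 * np.pi * f0)
print(f"estimated disparity: {disparity_est:.2f} px")
```

A real system, as the abstract notes, would use a bank of Gabor filters over orientations and frequencies and pool their estimates; the single-filter case shown here only works while the phase difference stays within (-pi, pi].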
  • Source
    ABSTRACT: The so-called self-other correspondence problem in imitation requires finding the transformation that maps the motor dynamics of one partner to our own. This calls for a general-purpose sensorimotor mechanism that transforms an external fixation-point (partner's shoulder) reference frame into one's own body-centred reference frame. We propose that the mechanism of gain modulation observed in parietal neurons may generally serve these types of transformations, by binding sensory signals across modalities with radial basis functions (tensor products) on the one hand, and by permitting the learning of contextual reference frames on the other. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuo-motor variables so that their amplitude depends on them all. When the body-centred reference frame is modified, the error detected in the visuo-motor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. Such situations occur, for instance, when we turn the head on its axis (visual transformation), when we use a tool (body modification), or when we interact with a partner (embodied simulation). Our results defend the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mapping in spatial tasks as well as in cooperative and social functions.
    Neural Networks 09/2014; DOI:10.1016/j.neunet.2014.08.009
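The gain-field binding via radial basis functions (tensor products) can be sketched on the classic case of head-centred coding of a visual target. This is a hypothetical illustration under assumed tuning parameters, not the paper's shoulder-elbow experiment: each unit's Gaussian retinal tuning is multiplicatively modulated by an eye-position gain, and a linear read-out trained on exploratory samples recovers the head-centred position h = r + e for a novel combination.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed tuning grids: Gaussian retinal tuning centres and eye-position
# gain centres (degrees). Each (retinal, eye) pair yields one gain-field
# unit whose response is the product of the two tuning curves.
r_centers = np.linspace(-30, 30, 9)
e_centers = np.linspace(-20, 20, 7)
sr, se = 8.0, 8.0

def gain_field(r, e):
    gr = np.exp(-(r[:, None] - r_centers) ** 2 / (2 * sr**2))  # retinal basis
    ge = np.exp(-(e[:, None] - e_centers) ** 2 / (2 * se**2))  # eye-position gain
    # tensor (outer) product per sample: multiplicative gain modulation
    return (gr[:, :, None] * ge[:, None, :]).reshape(len(r), -1)

# Exploratory samples: random retinal and eye positions; the read-out is
# trained on the head-centred target position h = r + e.
r = rng.uniform(-30, 30, 2000)
e = rng.uniform(-20, 20, 2000)
w, *_ = np.linalg.lstsq(gain_field(r, e), r + e, rcond=None)

# Novel combination never seen during training.
r_t, e_t = np.array([12.0]), np.array([-7.0])
h_est = gain_field(r_t, e_t) @ w
print(f"head-centred estimate: {h_est[0]:.2f} deg (true 5.00)")
```

The design choice mirrors the abstract's point: because every unit depends multiplicatively on both variables, a single linear read-out over the population suffices for the (here linear, in general nonlinear) coordinate transformation.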
  • Source
    ABSTRACT: The posterior parietal cortex of primates, and more exactly the areas of the dorso-medial visual stream, is able to encode the peripersonal space of a subject in a way suitable for gathering visual information while contextually performing purposeful gazing and arm reaching movements. Such sensorimotor knowledge of the environment is not explicit, but rather emerges through the interaction of the subject with nearby objects. In this work, single-cell data regarding the activation of primate dorso-medial stream neurons during gazing and reaching movements are studied, with the purpose of discovering meaningful patterns useful for modeling purposes. The outline of a model of the mechanisms that allow humans and other primates to build dynamical representations of their peripersonal space through active interaction with nearby objects is proposed, and a detailed description of how to employ the results of the data analysis in the model is offered. The application of the model to robotic systems will allow artificial agents to improve their skills in exploring the nearby space, and will at the same time constitute a way to validate the modeling assumptions.
    Neurocomputing 03/2011; 74:1203-1212. DOI:10.1016/j.neucom.2010.07.029