Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching.

IEEE Transactions on Autonomous Mental Development 01/2011; 3:43-53.
Source: DBLP

ABSTRACT Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of a humanoid robot's eye and arm movements. Simulated results confirm that mixed neural populations, such as those found in some of the brain areas modeled in this work, are especially suitable for the problem at hand. In the final experimental setup, through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the agent to contextually represent the peripersonal space through different vision and motor parameters, is never made explicit, but rather emerges through the interaction of the agent with the environment.
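As a rough illustration of the kind of radial basis function mapping the abstract describes, the sketch below fits a small Gaussian RBF network that maps a 2-D oculomotor signal to a 2-D arm posture from exploratory samples. The grid of centers, the widths, and the stand-in `plant` transformation are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D oculomotor input (eye pan/tilt) -> 2-D arm-joint output.
def make_rbf_centers(n_per_axis=6):
    axes = np.linspace(-1.0, 1.0, n_per_axis)
    gx, gy = np.meshgrid(axes, axes)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

def rbf_features(x, centers, sigma=0.4):
    # Gaussian activation of each basis unit for each input sample.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Stand-in for the unknown gaze-to-arm transformation learned by exploration.
def plant(gaze):
    return np.stack([np.sin(gaze[:, 0]) + 0.5 * gaze[:, 1],
                     np.cos(gaze[:, 1]) - 0.3 * gaze[:, 0]], axis=1)

centers = make_rbf_centers()
gaze_samples = rng.uniform(-1, 1, size=(500, 2))  # exploratory fixations
arm_targets = plant(gaze_samples)                 # observed arm postures

Phi = rbf_features(gaze_samples, centers)
# Linear readout fitted by ridge-regularized least squares.
W = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]),
                    Phi.T @ arm_targets)

test_gaze = rng.uniform(-1, 1, size=(100, 2))
pred = rbf_features(test_gaze, centers) @ W
err = np.abs(pred - plant(test_gaze)).mean()
print(err)
```

The same feature layer can support both direct and inverse transformations mentioned in the abstract, simply by fitting separate linear readouts on top of the shared basis.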

  • ABSTRACT: An improved quasi sliding mode rapid maneuver control algorithm for a highly agile small satellite, based on a linear extended state observer (LESO), is proposed. First, the small satellite dynamics and kinematics models are established with SGCMGs as actuators. Then, a quaternion-error feedback control scheme based on quasi sliding mode control is given, which effectively weakens the chattering problem of the sliding mode control system and realizes fast maneuvering of the small satellite. Considering the steady-state error of the quasi sliding mode control system under disturbance, and in order to further improve the steady-state performance of the control system, a quasi sliding mode attitude control algorithm based on the LESO is designed using ADRC theory. The experimental results show that the proposed algorithm works stably and reliably: the slew rate reaches an average of 4.5°/s, the attitude pointing accuracy is better than 0.0035° (3σ), and the attitude stability is better than 0.007°/s (3σ).
    2013 2nd International Conference on Measurement, Information and Control (ICMIC); 08/2013
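The chattering-reduction idea behind quasi sliding mode control can be sketched in a few lines: replacing the discontinuous sign switching term with a boundary-layer saturation keeps the state near the sliding surface without high-frequency switching. The double-integrator plant, gains, and disturbance below are illustrative only and unrelated to the paper's satellite model.

```python
import numpy as np

def sat(s, phi):
    # Boundary-layer saturation replacing sign(s) to suppress chattering.
    return np.clip(s / phi, -1.0, 1.0)

def simulate(dt=0.001, T=5.0, lam=2.0, k=5.0, phi=0.05):
    x, v, t = 1.0, 0.0, 0.0              # attitude error, rate, time
    for _ in range(int(T / dt)):
        s = v + lam * x                  # sliding surface s = v + lam*x
        u = -lam * v - k * sat(s, phi)   # quasi sliding mode control law
        d = 0.2 * np.sin(3.0 * t)        # bounded matched disturbance
        v += (u + d) * dt                # double-integrator plant: v' = u + d
        x += v * dt
        t += dt
    return x, v

x_f, v_f = simulate()
print(abs(x_f))
```

Because the switching gain k exceeds the disturbance bound, s is driven into the boundary layer |s| ≤ phi, after which the error decays to a small residual set; the paper's LESO addition estimates and cancels the disturbance to shrink that residual further.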
  • ABSTRACT: The so-called self-other correspondence problem in imitation requires finding the transformation that maps the motor dynamics of one partner to our own. This calls for a general-purpose sensorimotor mechanism that transforms an external fixation-point (partner's shoulder) reference frame into one's own body-centred reference frame. We propose that the mechanism of gain modulation observed in parietal neurons may generally serve these types of transformations, by binding the sensory signals across modalities with radial basis functions (tensor products) on the one hand, and by permitting the learning of contextual reference frames on the other. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuomotor variables so that their amplitude depends on them all. In situations where the body-centred reference frame is modified, the error detected in the visuomotor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. These situations occur, for instance, when we turn the head on its axis (visual transformation), when we use a tool (body modification), or when we interact with a partner (embodied simulation). Our results defend the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mapping in spatial tasks as well as in cooperative and social functions.
    Neural Networks 09/2014; DOI:10.1016/j.neunet.2014.08.009 · 2.08 Impact Factor
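The gain-field binding by tensor products mentioned above can be illustrated with a 1-D toy: retinal and eye-position basis responses are multiplied pairwise, and a linear readout on the product population recovers the body-centred coordinate. The tuning curves, grid sizes, and the target relation b = r + e are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Basis units tuned to retinal position r and eye position e.
r_centers = np.linspace(-1, 1, 9)
e_centers = np.linspace(-1, 1, 9)

def population(r, e, sigma=0.35):
    fr = np.exp(-(r[:, None] - r_centers[None, :]) ** 2 / (2 * sigma**2))
    fe = np.exp(-(e[:, None] - e_centers[None, :]) ** 2 / (2 * sigma**2))
    # Gain modulation: each retinal unit is multiplied by each eye unit
    # (a tensor product), yielding an 81-unit gain-field population.
    return (fr[:, :, None] * fe[:, None, :]).reshape(len(r), -1)

r = rng.uniform(-1, 1, 2000)
e = rng.uniform(-1, 1, 2000)
b = r + e                          # body-centred coordinate to recover

Phi = np.c_[population(r, e), np.ones(len(r))]  # features plus bias
w, *_ = np.linalg.lstsq(Phi, b, rcond=None)

rt = rng.uniform(-1, 1, 200)
et = rng.uniform(-1, 1, 200)
pred = np.c_[population(rt, et), np.ones(200)] @ w
err = np.abs(pred - (rt + et)).mean()
print(err)
```

The point of the multiplicative binding is that a purely linear readout suffices downstream: the product population forms a basis over the joint (r, e) space, so reference-frame remappings reduce to relearning one set of readout weights.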
  • ABSTRACT: Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target object in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While existing research often develops such abilities individually, in this work we integrate a number of computational models into a unified framework and implement it on a humanoid torso. To achieve this ambitious goal, we propose a cognitive architecture that connects models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by directly interacting with the environment, through a mutual adaptation of perception and action. The result is a robot capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can work separately or cooperate for more structured and effective behaviors.
    IEEE Transactions on Autonomous Mental Development 12/2014; DOI:10.1109/TAMD.2014.2332875 · 1.35 Impact Factor
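The pipeline this abstract outlines (recognize a target, estimate its location by gazing, then drive the arm to it) can be caricatured as a composition of stages; every function below is a toy stand-in for the corresponding cortical model, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Target:
    label: str
    position: tuple  # (x, y, z) in a body-centred frame

def recognize(scene, wanted="ball"):
    # Toy recognizer: pick the first object whose label matches.
    return next(o for o in scene if o.label == wanted)

def gaze_at(target):
    # Oculomotor stage; in the paper, fixating also refines the 3-D estimate.
    return target.position

def reach(position, gain=0.5, steps=40):
    # Iterative proportional reaching controller toward the target.
    hand = [0.0, 0.0, 0.0]
    for _ in range(steps):
        hand = [h + gain * (p - h) for h, p in zip(hand, position)]
    return hand

scene = [Target("cube", (0.1, 0.3, 0.2)), Target("ball", (0.4, -0.2, 0.3))]
hand = reach(gaze_at(recognize(scene)))
print(hand)
```

The stages compose but remain separable, mirroring the abstract's claim that the tasks can work in isolation or cooperate in more structured behaviors.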
