Article

Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching.

IEEE Transactions on Autonomous Mental Development 01/2011; 3:43-53.
Source: DBLP

ABSTRACT Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of a humanoid robot's eye and arm movements. Simulated results confirm that mixed neural populations, such as those found in the particular brain areas modeled in this work, are especially suitable for the problem at hand. In the final experimental setup, through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the peripersonal space to be contextually represented through different vision and motor parameters, is never made explicit, but rather emerges from the interaction of the agent with the environment.
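As a rough illustration of the radial basis function idea described in the abstract, the following Python sketch maps a gaze (oculomotor) configuration to an arm joint configuration and is trained online from coordinated gaze-and-reach samples. The class and variable names, the dimensions, and the delta-rule update are illustrative assumptions, not the architecture reported in the article.

# Minimal sketch of a radial-basis-function sensorimotor map (illustrative only).
# It learns a mapping from a gaze configuration (e.g. pan, tilt, vergence) to an
# arm joint configuration, from samples gathered while the agent fixates and
# reaches the same target.
import numpy as np

class RBFSensorimotorMap:
    def __init__(self, centers, sigma=0.2, out_dim=4, lr=0.05):
        self.centers = np.asarray(centers)                # RBF centers in gaze space
        self.sigma = sigma                                # Gaussian width of each unit
        self.W = np.zeros((len(self.centers), out_dim))   # linear readout weights
        self.lr = lr                                      # learning rate for online updates

    def _activations(self, gaze):
        d2 = np.sum((self.centers - np.asarray(gaze)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, gaze):
        # Direct transformation: gaze configuration -> arm joint configuration.
        return self._activations(gaze) @ self.W

    def update(self, gaze, arm_joints):
        # Online delta-rule update from one coordinated gaze-and-reach sample.
        a = self._activations(gaze)
        error = np.asarray(arm_joints) - a @ self.W
        self.W += self.lr * np.outer(a, error)

# Usage: RBF centers on a grid over a normalized (pan, tilt, vergence) gaze space,
# mapped to a hypothetical 4-DOF arm posture.
grid = np.linspace(0.0, 1.0, 5)
centers = [[p, t, v] for p in grid for t in grid for v in grid]
smap = RBFSensorimotorMap(centers)
smap.update([0.4, 0.6, 0.3], [0.1, 0.7, 0.2, 0.5])
print(smap.predict([0.4, 0.6, 0.3]))

The inverse transformation (arm posture to gaze) could be trained symmetrically with a second readout over the same hidden units, which is one way to read the paper's claim that a single shared map supports both directions.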

  • "Second, this representation is directly linked with the visual search: after a visually detected object is fixated, the reachability information can be retrieved without the need for any additional transformation. Moreover, the gaze configuration can be used to directly trigger the reaching movement, as described in the literature [20] [21] [22] [23] [24]. Furthermore, previous works provide a discrete representation of the space (i.e. grid of voxels)."
    ABSTRACT: We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation the Reachable Space Map. Interestingly, the robot can use this map to: i) estimate the reachability of a visually detected object (i.e. judge whether the object can be reached, and how well, according to some performance metric) and ii) modify its body posture or its position with respect to the object to achieve a better reach. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three respects: the robot workspace is described using a gaze-centered motor representation; the map is built incrementally during the execution of goal-directed actions; and learning is autonomous and online. We implement our strategy on the 48-DOF humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole body (i.e. head, eyes, arm, waist, legs).
    Robotics and Autonomous Systems 04/2014; 62(4). DOI:10.1016/j.robot.2013.12.011 (a sketch of such a gaze-keyed reachability map follows this list)
  • "In their simulation, the SOMs estimate the relative arm position with respect to the face for a visuo-tactile face representation, where the synaptic links of the most contiguous visual and tactile neurons are reinforced over time. Moreover, Chinellato et al. [15] follow the model proposed by Pouget and Deneve [13], which exploits the gain-field mechanism for multimodal integration. In a computer simulation of an eye-hand system, they use radial basis function networks (RBFs) for visuomotor transformations, for gazing and for reaching"
    ABSTRACT: Seeing is not done through the eyes alone; it involves the integration of other modalities, such as auditory, proprioceptive, and tactile information, to locate objects, persons, and also the limbs. We hypothesize that the neural mechanism of gain-field modulation, which has been found to perform coordinate transformations between modalities in the superior colliculus and in the parietal area, plays a key role in building such a unified perceptual world. In experiments with a head-neck-eye robot equipped with a camera and microphones, we study how gain-field modulation in neural networks can transcribe one modality's reference frame into another (e.g., audio signals into eye-centered coordinates). It follows that each modality influences the estimate of a stimulus position (multimodal enhancement). This can be used, for example, to map sound signals into retinal coordinates for audio-visual speech perception.
    12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012); 11/2012 (a gain-field modulation sketch follows this list)
  • "Second, this representation is directly linked with the visual search: after a visually detected object is fixated, the reachability information can be retrieved without the need for any additional transformation. Moreover, the gaze configuration can be used to directly trigger the reaching movement, as described in the literature [17]–[21]. Furthermore, previous works provide a discrete representation of the space (i.e. grid of voxels)."
    ABSTRACT: In this paper we describe how a humanoid robot can learn a representation of its own reachable space from motor experience: a Reachable Space Map. The map provides information about the reachability of a visually detected object (i.e. a 3D point in space). We propose a bio-inspired solution in which the map is built in a gaze-centered reference frame: the position of a point in space is encoded by the motor configuration of the robot's head and eyes that brings that point into fixation. We provide experimental results in which a simulated humanoid robot learns this map autonomously, and we discuss how the map can be used for planning whole-body and bimanual reaching.
    4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2012); 01/2012
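The Reachable Space Map described in the first and third entries above keys reachability to the gaze configuration that fixates a point and updates it incrementally from reach outcomes. The following minimal Python sketch captures that idea; the names, the discretization, and the update rule are assumptions for illustration, not the authors' implementation.

# Minimal sketch of a gaze-keyed reachability map, in the spirit of the
# Reachable Space Map in the cited abstracts. A fixated point is indexed by the
# discretized head/eye configuration that fixates it, and the stored score is
# refined incrementally after each goal-directed reach.
from collections import defaultdict

class ReachableSpaceMap:
    def __init__(self, bin_size=0.1, alpha=0.3):
        self.bin_size = bin_size            # resolution of the gaze discretization
        self.alpha = alpha                  # smoothing factor for incremental updates
        self.scores = defaultdict(float)    # gaze bin -> estimated reachability in [0, 1]

    def _key(self, gaze_config):
        # Discretize the gaze configuration (e.g. neck pan/tilt, eye pan/tilt, vergence).
        return tuple(round(g / self.bin_size) for g in gaze_config)

    def update(self, gaze_config, reach_score):
        # Incremental update after a reach towards the currently fixated point.
        k = self._key(gaze_config)
        self.scores[k] += self.alpha * (reach_score - self.scores[k])

    def reachability(self, gaze_config):
        # Reachability of the fixated point, read out with no extra transformation.
        return self.scores[self._key(gaze_config)]

# Usage: after fixating an object and attempting a reach, store the outcome.
rsm = ReachableSpaceMap()
rsm.update((0.12, -0.05, 0.30), reach_score=0.8)   # hypothetical gaze configuration
print(rsm.reachability((0.12, -0.05, 0.30)))

Because the key is the gaze configuration itself, looking at an object is enough to retrieve its reachability, which is the point the quoted passages emphasize.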
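The gain-field mechanism mentioned in the second entry combines a retinotopic population with an eye-position population multiplicatively, so that a downstream readout can recover a head-centered location. The sketch below illustrates this under assumed population sizes, tuning widths, and a simple voting readout; none of it is taken from the cited paper.

# Minimal sketch of gain-field modulation for a coordinate transformation:
# each retinotopic unit is multiplicatively modulated by an eye-position unit,
# and a linear vote over the product recovers a head-centered estimate.
import numpy as np

def population(preferred, value, sigma):
    # Gaussian population code: activity of units with the given preferred values.
    return np.exp(-((preferred - value) ** 2) / (2.0 * sigma ** 2))

retinal_pref = np.linspace(-30.0, 30.0, 21)      # preferred retinal positions (deg)
eye_pref = np.linspace(-30.0, 30.0, 21)          # preferred eye positions (deg)

def gain_field(retinal_pos, eye_pos, sigma=8.0):
    # Outer product of the two population codes: the gain of each retinal unit
    # depends on eye position, as in parietal gain-field neurons.
    r = population(retinal_pref, retinal_pos, sigma)
    e = population(eye_pref, eye_pos, sigma)
    return np.outer(r, e)

def head_centered_estimate(field):
    # Simple readout: each unit votes for (preferred retinal + preferred eye) position.
    votes = retinal_pref[:, None] + eye_pref[None, :]
    return float(np.sum(field * votes) / np.sum(field))

# A target at +10 deg on the retina while the eyes are rotated -5 deg
# should be read out near +5 deg in head-centered coordinates.
print(head_centered_estimate(gain_field(10.0, -5.0)))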