Article

Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching.

IEEE Transactions on Autonomous Mental Development 01/2011; 3:43-53.
Source: DBLP

ABSTRACT Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained through the coordinated control of a humanoid robot's eye and arm movements. Simulated results confirm that mixed neural populations, such as those found in some of the brain areas modeled in this work, are especially suitable for the problem at hand. In the final experimental setup, through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which makes it possible to represent the peripersonal space contextually through different vision and motor parameters, is never made explicit; rather, it emerges through the agent's interaction with the environment.
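The abstract describes direct and inverse transformations learned on a radial basis function framework, but gives no implementation detail here. Purely as a hedged sketch of the general idea, and not the authors' code, the snippet below maps an oculomotor configuration (pan, tilt, vergence) to arm joint angles through a layer of Gaussian basis functions; the dimensions, the batch least-squares training rule, and all names are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): a radial basis
# function map linking an oculomotor representation (pan, tilt, vergence)
# to an arm joint-space representation, trained from paired samples
# gathered while the robot looks at and reaches the same target.
import numpy as np

class RBFSensorimotorMap:
    def __init__(self, centers, sigma=0.3):
        self.centers = np.asarray(centers, dtype=float)  # hidden units tiling gaze space
        self.sigma = sigma
        self.weights = None  # output weights, set by fit()

    def _activations(self, gaze):
        # Gaussian activation of each hidden unit for the current gaze configuration
        d2 = np.sum((self.centers - np.asarray(gaze, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, gaze_samples, joint_samples, reg=1e-3):
        # One possible training rule: regularized least squares on the output
        # weights; the cited work instead learns during exploratory movements.
        A = np.stack([self._activations(g) for g in gaze_samples])
        ridge = reg * np.eye(A.shape[1])
        self.weights = np.linalg.solve(A.T @ A + ridge,
                                       A.T @ np.asarray(joint_samples, dtype=float))

    def gaze_to_joints(self, gaze):
        # Direct transformation: predict the arm joint configuration that
        # reaches the point currently being fixated.
        return self._activations(gaze) @ self.weights
```

A second map of the same form, trained in the opposite direction, would give the inverse transformation from joint space back to gaze angles mentioned in the abstract.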

  • Source
    • "Second, this representation is directly linked with the visual search: after a visually detected object is fixated the Reachability information can be retrieved without the need of any additional transformation. Moreover, the gaze configuration can be used to directly trigger the reaching movement, as described in the literature [20] [21] [22] [23] [24]. Furthermore, previous works provide a discrete representation of the space (i.e. "
    ABSTRACT: We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation Reachable Space Map. Interestingly, the robot can use this map to: i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation, the map is built incrementally during the execution of goal-directed actions, learning is autonomous and online. We implement our strategy on the 48-DOFs humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole-body (i.e. head, eyes, arm, waist, legs).
    Robotics and Autonomous Systems 04/2014; 62(4). DOI:10.1016/j.robot.2013.12.011
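The Reachable Space Map described in the abstract above is keyed by gaze configurations rather than Cartesian coordinates. As a rough, hypothetical illustration of that idea (the binning, scoring, and update rule below are assumptions, not the cited paper's method), such a map could be stored as a table indexed by the fixation configuration and updated online after every reaching attempt:

```python
# Hypothetical sketch of a gaze-centered Reachable Space Map: the key is the
# head/eye configuration that fixates a point, the value is a running
# estimate of how well that point can be reached.
from collections import defaultdict

class ReachableSpaceMap:
    def __init__(self, bin_size=0.05):
        self.bin_size = bin_size          # resolution in gaze-angle space (rad)
        self.scores = defaultdict(float)  # gaze bin -> reachability estimate
        self.counts = defaultdict(int)

    def _key(self, gaze_config):
        # Discretize the gaze configuration into a hashable bin index
        return tuple(round(q / self.bin_size) for q in gaze_config)

    def update(self, gaze_config, reach_error):
        # Called after a goal-directed reaching attempt: convert the final
        # hand-target error into a score in (0, 1] and average it in.
        key = self._key(gaze_config)
        score = 1.0 / (1.0 + reach_error)
        self.counts[key] += 1
        self.scores[key] += (score - self.scores[key]) / self.counts[key]  # running mean

    def reachability(self, gaze_config):
        # Query after fixating an object; the gaze configuration itself is the index.
        return self.scores.get(self._key(gaze_config), 0.0)
```

Because the key is the gaze configuration itself, querying the map after fixating an object needs no additional coordinate transformation, which is the property emphasized in the quoted passage.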
  • Source
    • "Second, this representation is directly linked with the visual search: after a visually detected object is fixated the reachability information can be retrieved without the need of any additional transformation. Moreover, the gaze configuration can be used to directly trigger the reaching movement, as described in the literature [17]–[21]. Furthermore, previous works provide a discrete representation of the space (i.e. grid of voxels). "
    ABSTRACT: In this paper we describe how a humanoid robot can learn a representation of its own reachable space from motor experience: a Reachable Space Map. The map provides information about the reachability of a visually detected object (i.e. a 3D point in space). We propose a bio-inspired solution in which the map is built in a gaze-centered reference frame: the position of a point in space is encoded with the motor configuration of the robot head and eyes which allows the fixation of that point. We provide experimental results in which a simulated humanoid robot learns this map autonomously and we discuss how the map can be used for planning whole-body and bimanual reaching.
    Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS International Conference on; 01/2012
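The gaze-centered encoding mentioned above (a 3D point represented by the head/eye configuration that fixates it) can be sketched geometrically. The function below is only an illustrative assumption, with a made-up head-centered frame and eye baseline, not the cited paper's kinematics:

```python
# Hypothetical geometric sketch of the gaze-centered encoding: a 3D point
# (head-centered frame, metres, z forward, x right, y up) is represented by
# the pan/tilt/vergence angles that would fixate it.
import math

def point_to_gaze(x, y, z, baseline=0.068):
    # pan/tilt of the cyclopean gaze direction toward the point
    pan = math.atan2(x, z)
    tilt = math.atan2(y, math.hypot(x, z))
    # vergence: angle between the two eyes' lines of sight on the target
    dist = math.sqrt(x * x + y * y + z * z)
    vergence = 2.0 * math.atan2(baseline / 2.0, dist)
    return pan, tilt, vergence
```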
  • Source
    • "Second, this representation is directly linked with the visual search: after a visually detected object is fixated the reachability information can be retrieved without the need of any additional transformation. Moreover, the gaze configuration can be used to directly trigger the reaching movement, as described in the literature [13]–[17]. Furthermore , previous works provide a discrete representation of the space (i.e. grid of voxels). "
    ABSTRACT: We describe an interactive learning strategy that enables a humanoid robot to build a representation of its workspace: we call it a Reachable Space Map. The robot learns this map autonomously and online during the execution of goal-directed reaching movements; reaching control is based on kinematic models that are learned online as well. The map can be used to estimate the reachability of a fixated object and to plan preparatory movements (e.g. bending or rotating the waist) that improve the effectiveness of the subsequent reaching action. Three main concepts make our solution innovative with respect to previous works: the use of a gaze-centered motor representation to describe the robot workspace, the primary role of action in building and representing knowledge (i.e. interactive learning), the realization of autonomous online learning. We evaluate our strategy by learning the workspace of a simulated humanoid robot and we show how this knowledge can be exploited to plan and execute complex actions, like whole-body bimanual reaching.
    Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on; 01/2012
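As a speculative usage sketch of how such a map could support the preparatory movements mentioned above, the helper below compares the predicted reachability of a fixated target under a few candidate waist rotations and returns the most promising one. It assumes a map object like the ReachableSpaceMap sketch earlier, and shift_gaze_for_waist is a hypothetical kinematic helper, not something taken from the cited papers:

```python
# Speculative usage of a learned reachability map: pick the preparatory
# waist rotation that maximizes the predicted reachability of a fixated target.
def plan_preparatory_rotation(space_map, gaze_config, candidate_waist_angles,
                              shift_gaze_for_waist):
    best_angle, best_score = 0.0, space_map.reachability(gaze_config)
    for angle in candidate_waist_angles:
        # Hypothetical helper: gaze configuration of the same target
        # after rotating the waist by `angle`.
        adjusted_gaze = shift_gaze_for_waist(gaze_config, angle)
        score = space_map.reachability(adjusted_gaze)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```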