Seeing or not seeing where your hands are

Department of Psychology, University of Bologna, Italy.
Experimental Brain Research 05/2000; 131(4):458-67. DOI: 10.1007/s002219900264


Previous findings have demonstrated the existence of a visual peripersonal space centered on the hand in humans and its modulatory effects on tactile perception: a strong modulatory effect of vision on touch was found when a visual stimulus was presented near the hand, whereas only a weak effect was found when the visual stimulus was presented far from the hand. The aim of the present study was to verify whether such cross-modal links between touch and vision in the peripersonal space centered on the hand are mediated by proprioceptive signals specifying the current hand position, or whether they directly reflect an interaction between the two sensory modalities, i.e., vision and touch. To this aim, cross-modal effects were studied in two experiments: one in which patients could see their hands and one in which vision of their hands was prevented. The results showed strong modulatory effects of vision on touch perception when the visual stimulus was presented near the seen hand, and only mild effects when vision of the hand was prevented. These findings are explained by referring to the activity of bimodal neurons in the premotor and parietal cortex of the macaque, which have tactile receptive fields on the hand and corresponding visual receptive fields in the space immediately adjacent to the tactile fields. One important feature of these bimodal neurons is that their responsiveness to visual stimuli delivered near the body part is reduced, or even extinguished, when the view of that body part is prevented. This implies that, at least for the hand, vision of the hand is crucial for determining the spatial mapping between vision and touch that takes place in peripersonal space. In contrast, proprioceptive signals specifying the current hand position in space do not seem to be relevant in determining the cross-modal interaction between vision and touch.

    • "However, the degree to which a contralesional event is extinguished by an ipsilesional one depends on a number of spatial and postural factors. For example, whereas a rightvisual-field event can extinguish a left-hand tactile one, in some right-brain-injured patients this extinction is markedly reduced if the unstimulated right hand is moved away from the right visual event (di Pellegrino et al., 1997; Làdavas, di Pellegrino, Farnè, & Zeloni, 1998), yet it returns if a false/rubber hand is now put in the empty space close to the right visual event (Farnè, Pavani, Meneghello, & Làdavas, 2000). Extinction is abolished if the head and eyes are turned so that the right-visual-field event is now located close to the previously extinguished left tactile event (Kennett, Rorden, Husain, & Driver, 2010). "
    ABSTRACT: We investigated the effects of seen and unseen within-hemifield posture changes on crossmodal visual-tactile links in covert spatial attention. In all experiments, a spatially nonpredictive tactile cue was presented to the left or the right hand, with the two hands placed symmetrically across the midline. Shortly after a tactile cue, a visual target appeared at one of two eccentricities within either hemifield. For half of the trial blocks, the hands were aligned with the inner visual target locations, and for the remainder, the hands were aligned with the outer target locations. In Experiments 1 and 2, the inner and outer eccentricities were 17.5° and 52.5°, respectively. In Experiment 1, the arms were completely covered, and visual up-down judgments were better when the target appeared on the same side as the preceding tactile cue. Cueing effects were not significantly affected by hand or target alignment. In Experiment 2, the arms were in view, and now some target responses were affected by cue alignment: cueing for outer targets was significant only when the hands were aligned with them. In Experiment 3, we tested whether any unseen posture changes could alter the cueing effects, by widely separating the inner and outer target eccentricities (now 10° and 86°). In this case, hand alignment did affect some of the cueing effects: cueing for outer targets was now significant only when the hands were in the outer position. Although these results confirm that proprioception can, in some cases, influence tactile-visual links in exogenous spatial attention, they also show that spatial precision is severely limited, especially when posture is unseen.
    Attention, Perception, & Psychophysics 01/2014; 76(4). DOI: 10.3758/s13414-013-0484-3
    • "The ability to detect the position of a limb from proprioceptive information alone is poor (Graziano 1999; Làdavas et al. 2000). When an arm is moved passively to a new location, such that its position can only be identified by proprioceptive information about joint position and muscle length, participants are significantly less accurate at tracking the arm compared with when a target light is attached to the hand (Mather and Lackner 1981). "
    ABSTRACT: How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performance for discrete movements, which are associated with a stronger efference copy, with performance for continuous movements. Sensitivity to asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective than when the view was reversed or inverted. Further, we found differences in performance between dominant- and non-dominant-hand finger movements across continuous and discrete movements. Performance varied with the viewpoint from which the visual feedback was presented and with the efferent component, such that optimal performance was obtained when the presentation was in the normal, natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body. (A minimal sketch of the d' computation for this kind of two-interval task follows this entry.)
    Experimental Brain Research 08/2012; 222(4):389-97. DOI: 10.1007/s00221-012-3224-3
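    The sensitivity measure d' above comes from signal detection theory. For an unbiased observer in a two-interval forced-choice task like the delay-detection task described (one interval with minimum delay, one delayed), d' can be recovered from the proportion of correct responses Pc via the standard relation d' = sqrt(2) * z(Pc). The sketch below illustrates that textbook relation only; the function name and trial counts are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch: d' for an unbiased two-interval forced-choice (2IFC)
    # delay-detection task, using the standard relation d' = sqrt(2) * z(Pc).
    # Assumption: unbiased observer; the counts below are illustrative only.
    import numpy as np
    from scipy.stats import norm

    def dprime_2ifc(n_correct: int, n_trials: int) -> float:
        # Log-linear correction keeps Pc strictly inside (0, 1) so the
        # inverse-normal transform z() stays finite at 0% or 100% correct.
        pc = (n_correct + 0.5) / (n_trials + 1.0)
        return float(np.sqrt(2.0) * norm.ppf(pc))

    # Example: the delayed interval is correctly identified on 42 of 50 trials.
    print(round(dprime_2ifc(42, 50), 2))  # ~= 1.37
    ```

    A detection threshold can then be read off as the delay at which d' crosses a fixed criterion (e.g., d' = 1), one common convention in psychophysics.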
    • "Monitoring the hand [19] and objects [20] within peripersonal space [21], in particular, appears to influence motor behavior and performance. Recent research demonstrated that subjects reaching to a target in the presence of obstacles collided with virtual objects as often as forty percent of the time in some conditions [22], whereas testing avoidance of real obstacles resulted in collisions in less than one percent of the movements [23] [24] indicating that online movement planning and execution changes in the presence of real objects. "
    ABSTRACT: Many patients experience severe motor, proprioceptive, and tactile sensory loss following central or peripheral nervous system injury, such as Guillain-Barré syndrome (GBS). For many, no traditional therapies are available, and patients fail to use the hand and arm, dramatically affecting their quality of life. Our project investigates technology-assisted protocols to help re-calibrate body perceptions and improve sensory-dependent motor skills. We designed, built, and tested an easy-to-use system to provide technology assistance to a variety of underserved patients and therapists. The Sensory Motor Training Station (SMTS) accommodates the patient's lost sensory and motor skills and is used to train cognitive, sensory, motor, and proprioceptive skills. Virtual reality (VR) is used with immersive virtual limbs and real objects to increase the sense of involvement and to provide tactile experiences in a real-world integrated arm-and-hand task. Robot assistance, in as-needed or transparent mode, is provided to overcome patient weakness and to promote plasticity through practice. We trained a person suffering from GBS; the patient successfully exercised, and skills were assessed using the system. SMTS can easily be adapted to accommodate the left or right limb, heterogeneous patients, and individual cognitive, sensory, and motor issues. Results revealed that patient performance varies in each sensory and motor training condition; performance improved in the presence of real objects and also during voluntary motor participation in the exercises, facilitated by the robot's transparent-mode support against gravity and friction. Our multi-sensory technology-assistance system provided exercise and assessment for both upper limbs in a real-world integrated hand-and-arm task.
    IASTED Biomedical Engineering; 02/2012