Article

Two hands, one perception: how bimanual haptic information is combined by the brain

Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy.
Journal of Neurophysiology 01/2012; 107(2):544-550. DOI: 10.1152/jn.00756.2010
Source: PubMed

ABSTRACT: Humans routinely use both of their hands to gather information about the shape and texture of objects. Yet the mechanisms by which the brain combines haptic information from the two hands to achieve a unified percept are unclear. This study systematically measured the haptic precision of humans exploring a virtual curved object contour with one or both hands, to understand whether the brain integrates haptic information from the two hemispheres. Bayesian perception theory predicts that redundant information from both hands should improve haptic estimates; exploring an object with two hands should therefore yield haptic precision superior to unimanual exploration. A bimanual robotic manipulandum passively moved the hands of 20 blindfolded, right-handed adult participants along virtual curved contours. Subjects indicated which of two stimuli of different curvature was more "curved" (forced choice). Contours were explored uni- or bimanually at two orientations (toward or away from the body midline), and the respective psychophysical discrimination thresholds were computed. First, subjects showed a tendency for one hand to be more sensitive than the other, with most subjects exhibiting a left-hand bias. Second, bimanual thresholds were mostly within the range of the corresponding unimanual thresholds and were not predicted by a maximum-likelihood estimation (MLE) model. Third, bimanual curvature perception tended to be biased toward the motorically dominant hand, not toward the haptically more sensitive left hand. Two-handed exploration did not necessarily improve haptic sensitivity. We found no evidence that haptic information from both hands is integrated using an MLE mechanism. Rather, the results are indicative of a process of "sensory selection", in which information from the dominant right hand is used even though the left, nondominant hand may yield more precise haptic estimates.
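
As context for the MLE prediction tested here: under maximum-likelihood (reliability-weighted) cue combination, each hand's estimate is weighted by its reliability, and the variance of the combined estimate is the product of the unimanual variances divided by their sum, so the predicted bimanual threshold is never worse than the better unimanual threshold. The sketch below (Python) illustrates that prediction; the unimanual threshold values are hypothetical placeholders, not data from the study.

    import numpy as np

    def mle_bimanual_threshold(sigma_left, sigma_right):
        # Reliability-weighted (MLE) combination of two unbiased Gaussian cues:
        #   sigma_bi^2 = sigma_L^2 * sigma_R^2 / (sigma_L^2 + sigma_R^2) <= min(sigma_L^2, sigma_R^2)
        w_left = sigma_right**2 / (sigma_left**2 + sigma_right**2)  # weight given to the left-hand cue
        sigma_bi = np.sqrt(sigma_left**2 * sigma_right**2 /
                           (sigma_left**2 + sigma_right**2))
        return sigma_bi, w_left, 1.0 - w_left

    # Hypothetical unimanual curvature-discrimination thresholds (arbitrary units):
    sigma_bi, w_l, w_r = mle_bimanual_threshold(sigma_left=0.8, sigma_right=1.2)
    print(f"predicted bimanual threshold {sigma_bi:.2f}; cue weights L={w_l:.2f}, R={w_r:.2f}")

The finding that measured bimanual thresholds mostly fell within the range of the two unimanual thresholds, rather than below both, is what argues against this integration scheme and in favor of "sensory selection".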

  • ABSTRACT: We reach for and grasp different-sized objects numerous times per day. Most of these movements are visually-guided, but some are guided by the sense of touch (i.e. haptically-guided), such as reaching for your keys in a bag, or for an object in a dark room. A marked right-hand preference has been reported during visually-guided grasping, particularly for small objects. However, little is known about hand preference for haptically-guided grasping. Recently, a study has shown a reduction in right-hand use in blindfolded individuals, and an absence of hand preference if grasping was preceded by a short haptic experience. These results suggest that vision plays a major role in hand preference for grasping. If this were the case, then one might expect congenitally blind (CB) individuals, who have never had a visual experience, to exhibit no hand preference. Two novel findings emerge from the current study. First, contrary to our expectation, CB individuals used their right hand during haptically-guided grasping to the same extent as visually-unimpaired (VU) individuals did during visually-guided grasping. Second, object size affected hand use in an opposite manner for haptically- versus visually-guided grasping: big objects were more often picked up with the right hand during haptically-guided grasping, but less often during visually-guided grasping. This result highlights the different demands that object features pose on the two sensory systems. Overall, the results demonstrate that hand preference for grasping is independent of visual experience, and they suggest a left-hemisphere specialization for the control of grasping that goes beyond sensory modality.
    PLoS ONE 10/2014; 9(10). DOI: 10.1371/journal.pone.0110175
  • ABSTRACT: Although motor actions can profoundly affect the perceptual interpretation of sensory inputs, it is not known whether the combination of sensory and movement signals occurs only for sensory surfaces undergoing movement or whether it is a more general phenomenon. In the haptic modality, the independent movement of multiple sensory surfaces poses a challenge to the nervous system when combining the tactile and kinesthetic signals into a coherent percept. When exploring a stationary object, the tactile and kinesthetic signals come from the same hand. Here we probe the internal structure of haptic combination by directing the two signal streams to separate hands: one hand moves but receives no tactile stimulation, while the other hand feels the consequences of the first hand's movement but remains still. We find that both discrete and continuous tactile and kinesthetic signals are combined as if they came from the same hand. This combination proceeds by direct coupling or transfer of the kinesthetic signal from the moving to the feeling hand, rather than assuming the displacement of a mediating object. The combination of signals is due to perception rather than inference, because a small temporal offset between the signals significantly degrades performance. These results suggest that the brain simplifies the complex coordinate transformation task of remapping sensory inputs to take into account the movements of multiple body parts in haptic perception, and they show that the effects of action are not limited to moving sensors.
    Proceedings of the National Academy of Sciences 01/2015; 112(2):619-624. DOI: 10.1073/pnas.1419539112
  • ABSTRACT: Is there any difference between matching the position of the hands by asking subjects to move them to the same spatial location or to mirror-symmetric locations with respect to the body midline? If the motion of the hands were planned in extrinsic space, the mirror-symmetric task would imply an additional challenge, because the coordinates of the target would have to be flipped to the other side of the workspace. Conversely, if planning were done in intrinsic coordinates, moving both hands to the same spot in the workspace would require computing different joint angles for each arm. Even if both representations were available to the subjects, the two tasks might lead to different results, providing some cues about the organization of the "body schema". To answer these questions, the middle fingertip of the non-dominant hand of a population of healthy subjects was passively moved by a manipulandum to 20 different target locations. Subjects matched these positions with the middle fingertip of their dominant hand. For most subjects, matching accuracy was higher in the extrinsic modality in terms of both systematic error and variability, even for the target locations in which the configuration of the arms was the same for both modalities. This suggests that the matching performance of the subjects could be determined not only by proprioceptive information but also by the cognitive representation of the task: expressing the goal as reaching for the physical location of the hand in space is apparently more effective than asking subjects to match the proprioceptive representation of joint angles.
    Frontiers in Human Neuroscience 02/2015; 9:72. DOI: 10.3389/fnhum.2015.00072
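
The last abstract above hinges on the difference between extrinsic (spatial) and intrinsic (joint-angle) target representations. The sketch below makes that geometric point concrete with a hypothetical planar two-link arm; the segment lengths, shoulder positions, and target are made-up values, not the authors' setup. Matching the same point in space requires a different joint configuration for each arm, whereas matching mirror-symmetric points corresponds to mirror-image (anatomically identical) joint configurations.

    import numpy as np

    L1, L2 = 0.30, 0.33  # illustrative upper-arm and forearm lengths (m)

    def joint_angles(x, y, elbow_sign=1.0):
        # Planar two-link inverse kinematics; target (x, y) is given relative to the shoulder.
        c = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
        elbow = elbow_sign * np.arccos(np.clip(c, -1.0, 1.0))
        shoulder = np.arctan2(y, x) - np.arctan2(L2 * np.sin(elbow), L1 + L2 * np.cos(elbow))
        return np.degrees([shoulder, elbow])

    left_sh, right_sh = np.array([-0.20, 0.0]), np.array([0.20, 0.0])  # shoulders, symmetric about the midline x = 0
    target = np.array([-0.10, 0.35])   # reference position of the non-dominant (left) hand
    mirror = np.array([0.10, 0.35])    # its reflection across the body midline

    print("left arm           ", joint_angles(*(target - left_sh),  elbow_sign=+1))
    print("right arm, same pt ", joint_angles(*(target - right_sh), elbow_sign=-1))  # a different configuration
    print("right arm, mirror  ", joint_angles(*(mirror - right_sh), elbow_sign=-1))  # the left arm's configuration, mirrored

In the mirror condition, the right arm's shoulder angle is 180° minus the left arm's and the elbow angle is sign-flipped, i.e. the same anatomical configuration; in the same-point condition no such correspondence holds.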