CNS activation and regional connectivity during pantomime observation: no engagement of the mirror neuron system for deaf signers.

Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, San Diego, CA 92120, USA.
NeuroImage (Impact Factor: 6.25). 09/2009; 49(1):994-1005. DOI: 10.1016/j.neuroimage.2009.08.001
Source: PubMed

ABSTRACT: Deaf signers have extensive experience using their hands to communicate. Using fMRI, we examined the neural systems engaged during the perception of manual communication in 14 deaf signers and 14 hearing non-signers. Participants passively viewed blocked video clips of pantomimes (e.g., peeling an imaginary banana) and action verbs in American Sign Language (ASL) that were rated as meaningless by non-signers (e.g., TO-DANCE). In contrast to visual fixation, pantomimes strongly activated fronto-parietal regions (the mirror neuron system, MNS) in hearing non-signers, but only bilateral middle temporal regions in deaf signers. When contrasted with ASL verbs, pantomimes selectively engaged inferior and superior parietal regions in hearing non-signers, but right superior temporal cortex in deaf signers. The perception of ASL verbs recruited similar regions as pantomimes for deaf signers, with some evidence of greater involvement of left inferior frontal gyrus for ASL verbs. Functional connectivity analyses with left hemisphere seed voxels (ventral premotor, inferior parietal lobule, fusiform gyrus) revealed robust connectivity with the MNS for the hearing non-signers. Deaf signers exhibited functional connectivity with the right hemisphere, not observed in the hearing group, for the fusiform gyrus seed voxel. We suggest that life-long experience with manual communication, and/or auditory deprivation, may alter regional connectivity and brain activation when viewing pantomimes. We conclude that the lack of activation within the MNS for deaf signers does not support an account of human communication that depends upon automatic sensorimotor resonance between perception and action.

  • Source
    ABSTRACT: The capacity for language is arguably the most remarkable innovation of the human brain. A relatively recent interpretation proposes that part of the language-related circuits were co-opted from circuitry involved in hand control: the mirror neuron system (MNS), involved both in the perception and in the execution of voluntary grasping actions. A less radical view is that in early humans, communication was opportunistic and multimodal, using signs, vocalizations, or whatever means were available to transmit social information. However, one point that is not yet clear under either perspective is how learned communication acquired a semantic property, thereby allowing us to name objects and eventually describe our surrounding environment. Here we suggest a scenario involving both manual gestures and learned vocalizations that led to the development of a primitive form of conventionalized reference. This proposal is based on comparative evidence gathered from other species and on neurolinguistic evidence in humans, which points to a crucial role for vocal learning in the early development of language. Firstly, the capacity to direct the attention of others to a common object may have been crucial for developing a consensual referential system. Pointing, which is a ritualized grasping gesture, may have been crucial to this end. Vocalizations also served to generate joint attention among conversants, especially when combined with gaze direction. Another contributing element was the development of pantomimic actions resembling events or animals. In conjunction with this mimicry, the development of plastic neural circuits that support complex, learned vocalizations was probably a significant factor in the evolution of conventionalized semantics in our species. Thus, vocal imitations of sounds, as in onomatopoeias (words whose sound resembles their meaning), are possibly supported by mirror system circuits, and may have been relevant in the acquisition of early meanings.
    Frontiers in Human Neuroscience 01/2014; 8:605. · 2.91 Impact Factor
  •
    ABSTRACT: To address previous controversies over whether hand movements and gestures are linked to mental concepts or solely to the process of speaking, in the present study we investigate the neuropsychological functions of the entire spectrum of unimanual and bimanual hand movements and gestures when they either accompany speaking or act as the only means to communicate in the absence of speech. The results showed that hand movement activity, across all types of hand movements and gestures, stayed constant with and without speaking. The analysis of the structure of hand movements showed that executions shifted from in-space hand movements with a phase structure during the condition without speech to more irregular on-body hand movements without a phase structure during the co-speech condition. The gestural analysis revealed that pantomime gestures increase under conditions without speech, whereas emotional motions and subject-oriented actions primarily occur when speaking. The present results provide evidence that overall hand movement activity does not differ between co-speech conditions and conditions without speech, but that the hands adopt different neuropsychological functions. We conclude that the hands primarily externalize mental concepts in conditions without speaking, but that their use shifts to more self-regulation and to endorsing verbal output with emotional connotations when they accompany speech.
    Journal of Cognitive Psychology 10/2014; 26(7):740-753. · 1.20 Impact Factor
  • Source
    ABSTRACT: The phenotype of autism involves heterogeneous adaptive traits (strengths vs. disabilities), different domains of alterations (social vs. non-social), and various associated genetic conditions (syndromic vs. nonsyndromic autism). Three observations suggest that alterations in experience-dependent plasticity are an etiological factor in autism: (1) the main cognitive domains enhanced in autism are controlled by the most plastic cortical brain regions, the multimodal association cortices; (2) autism and sensory deprivation share several features of cortical and functional reorganization; and (3) genetic mutations and/or environmental insults involved in autism all appear to affect developmental synaptic plasticity, and mostly lead to its upregulation. We present the Trigger-Threshold-Target (TTT) model of autism to organize these findings. In this model, genetic mutations trigger brain reorganization in individuals with a low plasticity threshold, mostly within regions sensitive to cortical reallocations. These changes account for the cognitive enhancements and reduced social expertise associated with autism. Enhanced but normal plasticity may underlie non-syndromic autism, whereas syndromic autism may occur when a triggering mutation or event produces an altered plastic reaction, also resulting in intellectual disability and dysmorphism in addition to autism. Differences in the target of brain reorganization (perceptual vs. language regions) account for the main autistic subgroups. In light of this model, future research should investigate how individual and sex-related differences in synaptic/regional brain plasticity influence the occurrence of autism.
    Neuroscience & Biobehavioral Reviews 08/2014; · 10.28 Impact Factor
