The role of premotor cortex in speech perception: Evidence from fMRI and rTMS

Ahmanson-Lovelace Brain Mapping Center, Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA.
Journal of Physiology-Paris, 05/2008; 102(1-3):31-34. DOI: 10.1016/j.jphysparis.2008.03.003
Source: PubMed


This article discusses recent functional magnetic resonance imaging (fMRI) and repetitive transcranial magnetic stimulation (rTMS) data that suggest a direct involvement of premotor cortical areas in speech perception. These new data map well onto psychological theories advocating an active role for motor structures in the perception of speech sounds. It is proposed that the perception of speech is enabled, at least in part, by a process that simulates speech production.

    • "The inferior frontal gyrus is thought to be responsible for mapping auditory signals onto articulatory gestures (Myers et al., 2009; Lee et al., 2012; Chevillet et al., 2013). It has been suggested that the role of the inferior frontal gyrus is defined by the linkage between motor observation and imitation , which allows for abstraction of articulatory gestures from the auditory signals, along with the motor cortex and the insula (Ackermann and Riecker, 2004, 2010; Molnar-Szakacs et al., 2005; Pulvermüller, 2005; Pulvermüller et al., 2005, 2006; Skipper et al., 2005; Galantucci et al., 2006; Meister et al., 2007; Iacoboni, 2008; Kilner et al., 2009; Pulvermüller and Fadiga, 2010). On the other hand, both fMRI and transcranial magnetic stimulation (TMS) studies have indicated a functional heterogeneity within the inferior frontal cortex, which includes semantic processing (Homae et al., 2002; Devlin et al., 2003; Gough et al., 2005). "
    ABSTRACT: Foreign-accented speech often presents a challenging listening condition. In addition to deviations from the target speech norms related to the inexperience of the nonnative speaker, listener characteristics may play a role in determining intelligibility levels. We have previously shown that an implicit visual bias for associating East Asian faces and foreignness predicts the listeners' perceptual ability to process Korean-accented English audiovisual speech (Yi et al., 2013). Here, we examine the neural mechanism underlying the influence of listener bias to foreign faces on speech perception. In a functional magnetic resonance imaging (fMRI) study, native English speakers listened to native- and Korean-accented English sentences, with or without faces. The participants' Asian-foreign association was measured using an implicit association test (IAT), conducted outside the scanner. We found that foreign-accented speech evoked greater activity in the bilateral primary auditory cortices and the inferior frontal gyri, potentially reflecting greater computational demand. Higher IAT scores, indicating greater bias, were associated with increased BOLD response to foreign-accented speech with faces in the primary auditory cortex, the early node for spectrotemporal analysis. We conclude the following: (1) foreign-accented speech perception places greater demand on the neural systems underlying speech perception; (2) the face of the talker can exaggerate the perceived foreignness of foreign-accented speech; (3) implicit Asian-foreign association is associated with decreased neural efficiency in early spectrotemporal processing.
    Frontiers in Human Neuroscience, 10/2014; 8:768. DOI: 10.3389/fnhum.2014.00768
    • "According to this theory, during speech perception, motor primitives are activated as a result of an acoustically evoked motor resonance. This theory is supported by the observations that passive listening to syllables involves motor and premotor areas (Fadiga, Craighero, Buccino, & Rizzolatti, 2002; Iacoboni, 2008; Pulvermüller et al., 2006) and that the presupplementary motor area is involved in the perception of degraded speech (Adank & Devlin, 2010; Shahin, Bishop, & Miller, 2009). "
    ABSTRACT: Dyslexia is a language-based neurodevelopmental disorder. It is characterized as a persistent deficit in reading and spelling. These difficulties have been shown to result from an underlying impairment of the phonological component of language, possibly also affecting speech perception. Although there is little evidence for such a deficit under optimal, quiet listening conditions, speech perception difficulties in adults with dyslexia are often reported under more challenging conditions, such as when speech is masked by noise. Previous studies have shown that these difficulties are more pronounced when the background noise is speech and when little spatial information is available to facilitate differentiation between target and background sound sources. In this study, we investigated the neuroimaging correlates of speech-in-speech perception in typical readers and participants with dyslexia, focusing on the effects of different listening configurations. Fourteen adults with dyslexia and 14 matched typical readers performed a subjective intelligibility rating test with single words presented against concurrent speech during functional magnetic resonance imaging (fMRI) scanning. Target words were always presented with a four-talker background in one of three listening configurations: Dichotic, Binaural or Monaural. The results showed that in the Monaural configuration, in which no spatial information was available and energetic masking was maximal, intelligibility was severely decreased in all participants, and this effect was particularly strong in participants with dyslexia. Functional imaging revealed that in this configuration, participants partially compensate for their poorer listening abilities by recruiting several areas in the cerebral networks engaged in speech perception. In the Binaural configuration, participants with dyslexia achieved the same performance level as typical readers, suggesting that they were able to use spatial information when available. This result was, however, associated with increased activation in the right superior temporal gyrus, suggesting the need to reallocate neural resources to overcome speech-in-speech difficulties. Taken together, these results provide further understanding of the speech-in-speech perception deficit observed in dyslexia.
    Neuropsychologia, 06/2014; 60(1). DOI: 10.1016/j.neuropsychologia.2014.05.016
    • "Many researchers have proposed that speech intelligibility is enhanced by visual speech cues because the information available in the visible gestures activates motor representations that can be used to constrain auditory speech perception. Specifically, researchers hypothesize that certain brain regions internally model and simulate speech production and that these internal models are used to recover vocal tract shape information inherent in the speech signal (Callan et al., 2003, 2004a; Wilson and Iacoboni, 2006; Iacoboni and Wilson, 2006; Skipper et al., 2007a,b; Iacoboni, 2008; Poeppel et al., 2008; Rauschecker and Scott, 2009; Rauschecker, 2011). Internal models are a wellknown concept in the motor control literature, and are believed to be used by the brain to simulate the input/output characteristics , or their inverses, of the motor control system (Kawato, 1999). "
    ABSTRACT: Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex has been shown to be active during both observation and execution of action (‘Mirror System’ properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker’s articulating face and heard her voice), visual only (only saw the speaker’s articulating face), and audio only (only heard the speaker’s voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the premotor cortex involved with multisensory and modality-specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the fMRI analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and premotor cortex. The left ventral inferior premotor cortex showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the premotor cortex are involved with mapping unimodal (in this case, visual) sensory features of the speech signal onto articulatory speech gestures.
    Frontiers in Psychology, 04/2014; 5:389. DOI: 10.3389/fpsyg.2014.00389