Individual differences in premotor and motor recruitment during speech perception

Medical Research Council, Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, England, UK.
Neuropsychologia (Impact Factor: 3.45). 04/2012; 50(7):1380-92. DOI: 10.1016/j.neuropsychologia.2012.02.023
Source: PubMed

ABSTRACT Although activity in premotor and motor cortices is commonly observed in neuroimaging studies of spoken language processing, the degree to which this activity is an obligatory part of everyday speech comprehension remains unclear. We hypothesised that, rather than being a unitary phenomenon, the neural response to speech perception in motor regions would differ across listeners as a function of individual cognitive ability. To examine this possibility, we used functional magnetic resonance imaging (fMRI) to investigate the neural processes supporting speech perception by comparing active listening to pseudowords with matched tasks that involved reading aloud or repetition, all compared to acoustically matched control stimuli and matched baseline tasks. At a whole-brain level there was no evidence for recruitment of regions in premotor or motor cortex during speech perception. A focused region of interest analysis similarly failed to identify significant effects, although a subset of regions approached significance, with notable variability across participants. We then used performance on a battery of behavioural tests that assessed meta-phonological and verbal short-term memory abilities to investigate the reasons for this variability, and found that individual differences, in particular in the repetition of low-phonotactic-probability pseudowords, predicted participants' neural activation within regions in premotor and motor cortices during speech perception. We conclude that normal listeners vary in the degree to which they recruit premotor and motor cortex as a function of short-term memory ability. This is consistent with a resource-allocation approach in which recruitment of the dorsal speech processing pathway depends on both individual abilities and specific task demands.


Available from: Matthew H Davis, Jan 14, 2014
  • ABSTRACT: Models propose an auditory-motor mapping via a left-hemispheric dorsal speech-processing stream, yet its detailed contributions to speech perception and production are unclear. Using fMRI-navigated repetitive transcranial magnetic stimulation (rTMS), we virtually lesioned left dorsal stream components in healthy human subjects and probed the consequences on speech-related facilitation of articulatory motor cortex (M1) excitability, as indexed by increases in motor-evoked potential (MEP) amplitude of a lip muscle, and on speech processing performance in phonological tests. Speech-related MEP facilitation was disrupted by rTMS of the posterior superior temporal sulcus (pSTS), the sylvian parieto-temporal region (SPT), and by double-knock-out but not individual lesioning of the pars opercularis of the inferior frontal gyrus (pIFG) and the dorsal premotor cortex (dPMC), and not by rTMS of the ventral speech-processing stream or an occipital control site. rTMS of the dorsal stream, but not of the ventral stream or the occipital control site, caused deficits specifically in the processing of fast transients of the acoustic speech signal. Performance of syllable and pseudoword repetition correlated with speech-related MEP facilitation, and this relation was abolished with rTMS of pSTS, SPT, and pIFG. Findings provide direct evidence that auditory-motor mapping in the left dorsal stream causes reliable and specific speech-related MEP facilitation in left articulatory M1. The left dorsal stream targets the articulatory M1 through pSTS and SPT, constituting essential posterior input regions, and in parallel via frontal pathways through pIFG and dPMC. Finally, engagement of the left dorsal stream is necessary for processing of fast transients in the auditory signal.
  • ABSTRACT: A fundamental goal of the human auditory system is to map complex acoustic signals onto stable internal representations of the basic sound patterns of speech. Phonemes and the distinctive features that they comprise constitute the basic building blocks from which higher-level linguistic representations, such as words and sentences, are formed. Although the neural structures underlying phonemic representations have been well studied, there is considerable debate regarding frontal-motor cortical contributions to speech as well as the extent of lateralization of phonological representations within auditory cortex. Here we used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis to investigate the distributed patterns of activation that are associated with the categorical and perceptual similarity structure of 16 consonant exemplars in the English language used in Miller and Nicely's (1955) classic study of acoustic confusability. Participants performed an incidental task while listening to phonemes in the MRI scanner. Neural activity in bilateral anterior superior temporal gyrus and supratemporal plane was correlated with the first two components derived from a multidimensional scaling analysis of a behaviorally derived confusability matrix. We further showed that neural representations corresponding to the categorical features of voicing, manner of articulation, and place of articulation were widely distributed throughout bilateral primary, secondary, and association areas of the superior temporal cortex, but not motor cortex. Although classification of phonological features was generally bilateral, we found that multivariate pattern information was moderately stronger in the left compared with the right hemisphere for place but not for voicing or manner of articulation.
    The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 01/2015; 35(2):634. DOI:10.1523/JNEUROSCI.2454-14.2015 · 6.75 Impact Factor
  • ABSTRACT: This study investigates the role of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency on the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA × SES; AoA × L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds.
    Brain and Language 12/2014; 141C:35-49. DOI:10.1016/j.bandl.2014.11.005 · 3.31 Impact Factor