Parametric merging of MEG and fMRI reveals spatiotemporal differences in cortical processing of spoken words and environmental sounds in background noise.
ABSTRACT: There is increasing interest in integrating electrophysiological and hemodynamic measures to characterize spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, we used parametric stimulus manipulation in magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) recordings in the same subjects to study the effects of noise on the processing of spoken words and environmental sounds. The added noise influenced MEG response strengths in the bilateral supratemporal auditory cortex, at different times for the different stimulus types. Specifically for spoken words, the effect of noise on the electrophysiological response was remarkably nonlinear. We therefore used the single-subject MEG responses to construct a parametrization for fMRI data analysis and obtained notably higher sensitivity than with conventional stimulus-based parametrization. The fMRI results showed that partly different temporal areas were involved in noise-sensitive processing of words and environmental sounds. These results indicate that cortical processing of sounds in background noise is stimulus specific in both timing and location, and they provide a new, functionally meaningful platform for combining information obtained with electrophysiological and hemodynamic measures of brain function.
- Source available from: Ann R Bradlow
ABSTRACT: Some children with learning problems (LP) experience speech-sound perception deficits that worsen in background noise. The first goal was to determine whether these impairments are associated with abnormal neurophysiologic representation of speech features in noise reflected at brain-stem and cortical levels. The second goal was to examine the perceptual and neurophysiological benefits provided to an impaired system by acoustic cue enhancements. Behavioral speech perception measures (just noticeable difference scores), auditory brain-stem responses, frequency-following responses and cortical-evoked potentials (P1, N1, P1', N1') were studied in a group of LP children and compared to responses in normal children. We report abnormalities in the fundamental sensory representation of sound at brain-stem and cortical levels in the LP children when speech sounds were presented in noise, but not in quiet. Specifically, the neurophysiologic responses from these LP children displayed a different spectral pattern and lacked precision in the neural representation of key stimulus features. Cue enhancement benefited both behavioral and neurophysiological responses. Overall, these findings contribute to our understanding of the preconscious biological processes underlying perception deficits and may assist in the design of effective intervention strategies.
Clinical Neurophysiology 06/2001; 112(5):758-67.
-
ABSTRACT: Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.
Cerebral Cortex 06/2000; 10(5):512-28.
-
ABSTRACT: The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.
Cerebral Cortex 11/2005; 15(10):1621-31.