Broca's area and the discrimination of frequency transitions: a functional MRI study.
ABSTRACT The left inferior frontal lobe has traditionally been viewed as a "language area," although its involvement in the discrimination of rapid nonverbal frequency changes has also been shown. Using functional MRI, we studied seven healthy adults during discrimination of relatively slow (200 ms) tonal frequency glides. Compared to a control task, in which subjects responded indiscriminately to white-noise bursts, tonal discrimination was associated with bilateral superior and middle temporal and medial frontal activations. Inferior frontal activations were bilateral but stronger on the left. In contrast to previous studies that compared discrimination of slow frequency changes to rest, our results suggest that such discriminations, when compared to an auditory control task, activate the left inferior frontal gyrus. Our findings are consistent with the participation of Broca's area in nonlinguistic processes beyond its known roles in semantic, syntactic, and phonological functions.
- Available from: David L Woods
ABSTRACT: We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to the active and passive processing of the pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed to reveal any systematic differences between the AC surface locations of these activations by analyzing the activation loci statistically using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to the median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to the pitch-processing loci (excluding the processing of infrequent pitch changes). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of the mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region.
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas not involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location.
Hearing Research 08/2013; 307. DOI:10.1016/j.heares.2013.08.001 · 2.85 Impact Factor
ABSTRACT: Language perception comprises mechanisms of perception and discrimination of auditory stimuli. An important component of auditory perception and discrimination concerns auditory objects. Many interesting auditory objects in our environment are of relatively long duration; however, the temporal window of integration of the auditory cortex neurons processing these objects is very limited. Thus, it is necessary to make active use of short-term memory in order to construct and temporarily store long-duration objects. We sought to understand the mechanisms by which the brain manipulates long-duration tonal patterns, temporarily stores the segments of those patterns, and integrates them into an auditory object. We extended a previously constructed model of auditory recognition of short-duration tonal patterns by expanding the prefrontal cortically-based short-term memory module of the previous model into a memory buffer with multiple short-term memory submodules and by adding a gating module. The gating module distributes the segments of the input pattern to separate locations of the extended prefrontal cortex in an orderly fashion, allowing a subsequent comparison of the stored segments against the segments of a second pattern. In addition to simulating behavioral data and the electrical activity of neurons, our model also produces simulations of the blood oxygen level dependent (BOLD) signal as obtained in fMRI studies. These simulations provided predictions that we tested in an fMRI experiment with normal volunteers, using the same task and stimuli similar to those of the model. We compared the simulated data with the experimental values and found that activity in two brain areas, the right precentral gyrus and the left medial frontal gyrus, correlated well with our simulations of the memory gating module.
Other fMRI studies of auditory perception and discrimination have also found activation of those areas in similar tasks, providing further support for our findings.
Journal of Integrative Neuroscience 01/2009; 7(4):501-27. DOI:10.1142/S021963520800199X · 1.12 Impact Factor
ABSTRACT: How do listeners process the speech signal to extract acoustic cues and recover phonetic information? More than 50 years after the appearance of the motor theory of speech perception, recent neurophysiological discoveries challenge the view that speech perception relies only on perceptual auditory mechanisms and suggest that the motor system is also crucial for speech recognition. The aim of the present chapter is to review and discuss these findings in an attempt to define what could be a "common language of perception and action."
Revue Française de Linguistique Appliquée 01/2008; 13(2):9-22. · 0.08 Impact Factor