Binder, J.R., et al. Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex 10(5), 512–528 (June 2000). DOI: 10.1093/cercor/10.5.512
Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.
- "Furthermore, the acquisition of PET resting-state data took place in a relatively silent environment, while fMRI-based resting-state data acquisitions are always accompanied by intense scanner noise. This might explain the additional activation in the superior temporal gyrus, present in the fMRI [e.g., Binder et al., 2000; Rimol et al., 2005; Wong et al., 2008] but not in the FDG-PET data. "
ABSTRACT: Over the last decade, the brain's default-mode network (DMN) and its function have attracted a lot of attention in the field of neuroscience. However, the exact underlying mechanisms of DMN functional connectivity, or more specifically, the blood-oxygen level-dependent (BOLD) signal, are still incompletely understood. In the present study, we combined 2-deoxy-2-[18F]fluoroglucose positron emission tomography (FDG-PET), proton magnetic resonance spectroscopy (¹H-MRS), and resting-state functional magnetic resonance imaging (rs-fMRI) to investigate more directly the association between local glucose consumption, local glutamatergic neurotransmission, and DMN functional connectivity during rest. The results of the correlation analyses using the dorsal posterior cingulate cortex (dPCC) as seed region showed spatial similarities between fluctuations in FDG-uptake and fluctuations in BOLD signal. More specifically, in both modalities the same DMN areas in the inferior parietal lobe, angular gyrus, precuneus, middle, and medial frontal gyrus were positively correlated with the dPCC. Furthermore, we could demonstrate that local glucose consumption in the medial frontal gyrus, PCC, and left angular gyrus was associated with functional connectivity within the DMN. We did not, however, find a relationship between glutamatergic neurotransmission and functional connectivity. In line with very recent findings, our results lend further support to a close association between local metabolic activity and functional connectivity and provide further insights towards a better understanding of the underlying mechanism of the BOLD signal. Hum Brain Mapp 00:000–000, 2015.
- "2004; Van Petten and Rheinfelder, 1995) and for environmental sound targets primed by words, pictures, or other environmental sounds (Aramaki et al., 2010; Cummings et al., 2006, 2008; Cummings and Čeponienė, 2010; Daltrozzo and Schön, 2009; Orgs et al., 2008; Orgs et al., 2006; Plante et al., 2000; Schirmer et al., 2011; Schön et al., 2010; Van Petten and Rheinfelder, 1995). Indeed, several studies of N400 priming effects using bimodal (visual/auditory) stimulus presentation have found similar scalp distributions for the N400 priming effects to words and environmental sounds across multiple ages (Cummings et al., 2006, 2008; Cummings and Čeponienė, 2010; Orgs et al., 2007). Finally, functional imaging results have shown activation to both word and environmental sound stimuli in areas commonly thought of as language specific: left inferior frontal and superior temporal regions (Binder et al., 2000; Leech and Saygin, 2011; Price et al., 2005; Thierry et al., 2003; Tranel et al., 2003), and similar neural networks have been implicated in the semantic processing of speech and musical sounds (Koelsch, 2005; Koelsch et al., 2004; Steinbeis and Koelsch, 2008). Despite these similarities, there are some important differences between words and environmental sounds. "
ABSTRACT: In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), or that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300–700 ms, such that both Near and Far Violations exhibited significant N400 effects; however, Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300–400 ms) and late (500–700 ms) time windows, though a graded pattern similar to that of words was seen in the mid-latency time window (400–500 ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds.
- "The intraparietal area is associated with the storing of information (Koelsch et al., 2009; Baldo & Dronkers, 2006), although it is probably not the site of storage itself (Sreenivasan et al., 2014; Magen et al., 2009). The posterior superior temporal region is associated with analysis of temporal auditory structures at different levels of complexity (Obleser & Kotz, 2010; Friederici et al., 2009; Davis & Johnsrude, 2003; Binder et al., 2000). Figure 4C and D shows a reduction in activity in these areas between the first and third blocks. "
ABSTRACT: Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in the left intraparietal area, which is involved in stimulus retention, and in the posterior superior-temporal area, which is involved in representing auditory regularities. We propose that this joint reduction reflects a reduction of the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.