Hemispheric asymmetry for spectral and temporal processing in the human antero‐lateral auditory belt cortex

Faculty of Biosciences, University of Leipzig, Germany.
European Journal of Neuroscience (Impact Factor: 3.67). 10/2005; 22(6):1521-8. DOI: 10.1111/j.1460-9568.2005.04315.x
Source: PubMed

ABSTRACT The present study investigates the acoustic basis of the hemispheric asymmetry for the processing of speech and music. Experiments on this question ideally involve stimuli that are perceptually unrelated to speech and music, but contain acoustic characteristics of both. Stimuli in previous studies were derived from speech samples or tonal sequences. Here we introduce a new class of noise-like sound stimuli with no resemblance to speech or music that permit independent parametric variation of spectral and temporal acoustic complexity. Using these stimuli in a functional MRI experiment, we test the hypothesis of a hemispheric asymmetry for the processing of spectral and temporal sound structure by seeking cortical areas in which the blood oxygen level dependent (BOLD) signal covaries with the number of simultaneous spectral components (spectral complexity) or the temporal modulation rate (temporal complexity) of the stimuli. BOLD responses from the left and right Heschl's gyrus (HG) and part of the right superior temporal gyrus covaried with the spectral parameter, whereas covariation analysis for the temporal parameter highlighted an area on the left superior temporal gyrus. The portion of the superior temporal gyrus in which asymmetrical responses are apparent corresponds to the antero-lateral auditory belt cortex, which has been implicated in spectral integration in animal studies. Our results support a similar function of the anterior auditory belt in humans. The findings indicate that asymmetrical processing of complex sounds in the cerebral hemispheres depends not on semantic, but rather on acoustic stimulus characteristics.
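The independent parametric variation described in the abstract can be illustrated with a minimal synthesis sketch. This is a hypothetical illustration, not the authors' stimulus code: the study used noise-like sounds, whereas this example builds a random-phase tone complex in which spectral complexity is the number of simultaneous components and temporal complexity is the sinusoidal amplitude-modulation rate.

```python
import numpy as np

def make_stimulus(n_components, mod_rate_hz, dur_s=1.0, fs=44100,
                  f_lo=300.0, f_hi=4000.0, seed=0):
    """Synthesize a stimulus whose spectral complexity is set by the
    number of simultaneous components and whose temporal complexity is
    set by the amplitude-modulation rate (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    # Log-spaced carrier frequencies -> "spectral complexity" parameter.
    freqs = np.geomspace(f_lo, f_hi, n_components)
    sig = np.zeros_like(t)
    for f in freqs:
        phase = rng.uniform(0.0, 2.0 * np.pi)
        sig += np.sin(2.0 * np.pi * f * t + phase)
    # Sinusoidal amplitude modulation -> "temporal complexity" parameter.
    env = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_rate_hz * t))
    sig *= env
    return sig / np.max(np.abs(sig))  # normalize to [-1, 1]
```

Because the two parameters enter the synthesis independently, one can covary the BOLD signal against either while holding the other fixed, which is the logic of the covariation analysis described above.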

Available from: D. Y von Cramon, Jun 25, 2015
    ABSTRACT: This study combines functional and structural magnetic resonance imaging to test the "asymmetric sampling in time" (AST) hypothesis, which makes assertions about the symmetrical and asymmetrical representation of speech in the primary and nonprimary auditory cortex. Twenty-three volunteers participated in this parametric clustered-sparse fMRI study. The availability of slowly changing acoustic cues in spoken sentences was systematically reduced over continuous segments of varying lengths (100, 150, 200, 250 ms) by utilizing local time-reversion. As predicted by the hypothesis, functional lateralization in Heschl's gyrus was not observed. Lateralization in the planum temporale and posterior superior temporal gyrus shifted towards the right hemisphere with decreasing suprasegmental temporal integrity. Cortical thickness of the planum temporale was measured automatically. Participants with an L > R cortical thickness performed better on the in-scanner auditory pattern-matching task. Taken together, these findings support the AST hypothesis and provide substantial novel insight into the division of labor between left and right nonprimary auditory cortex during comprehension of spoken utterances. In addition, the present data lend support to a structural-behavioral relationship in the nonprimary auditory cortex.
    Human Brain Mapping 04/2014; 35(4). DOI:10.1002/hbm.22291 · 6.92 Impact Factor
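The local time-reversion manipulation used in the study above (reversing the waveform within consecutive fixed-length segments while preserving the global segment order) can be sketched as follows. This is an illustrative implementation under simple assumptions, not the authors' code:

```python
import numpy as np

def local_time_reverse(signal, fs, segment_ms):
    """Reverse the waveform inside consecutive segments of fixed length
    while keeping the segments themselves in their original order.
    Degrades local (segmental) structure but preserves the global
    (suprasegmental) envelope of the utterance."""
    seg_len = int(round(fs * segment_ms / 1000.0))
    out = np.asarray(signal, dtype=float).copy()
    for start in range(0, len(out), seg_len):
        # The final segment may be shorter; slicing handles that safely.
        out[start:start + seg_len] = out[start:start + seg_len][::-1]
    return out
```

Applying the function twice with the same segment length recovers the original signal, since segment-wise reversal is its own inverse; longer `segment_ms` values disrupt progressively slower acoustic cues, which is the parametric manipulation the study exploits.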
    ABSTRACT: We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to the active and passive processing of the pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed to reveal any systematic differences between the AC surface locations of these activations by analyzing the activation loci statistically using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to the median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region.
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas not involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location.
    Hearing research 08/2013; 307. DOI:10.1016/j.heares.2013.08.001 · 2.85 Impact Factor