Attention Improves Population-Level Frequency Tuning in Human Auditory Cortex

Institute for Biomagnetism and Biosignalanalysis, University Hospital, University of Muenster, 48149 Muenster, Germany.
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience (Impact Factor: 6.34). 10/2007; 27(39):10383-90. DOI: 10.1523/JNEUROSCI.2963-07.2007
Source: PubMed


Attention improves auditory performance in noisy environments by enhancing the processing of task-relevant stimuli ("gain"), by suppressing task-irrelevant information ("sharpening"), or both. In the present study, we investigated the effect of focused auditory attention on population-level frequency tuning in human auditory cortex by means of magnetoencephalography. Using complex stimuli consisting of a test tone superimposed on different band-eliminated noises during active-listening or distracted-listening conditions, we observed that focused auditory attention caused not only gain but also sharpening of frequency tuning in human auditory cortex, as reflected by the N1m auditory evoked response. This combination of gain and sharpening in the auditory cortex may contribute to better auditory performance during focused auditory attention.

Available from: Henning Teismann, Jan 06, 2016
  • Source
    • "The transient evoked component, such as N1, is known to be modulated by the parameters of stimulus properties. It has been suggested that the characteristics of these components also vary, depending on attention allocation (e.g., Lange et al., 2003; Okamoto et al., 2007; Gontier et al., 2013; Picton, 2013) to sensory signals. When one compares the lengths of two neighboring time intervals, T1 and T2, the participant might judge the lengths of two intervals by focusing on the temporal location of the tone marking the end of T1 (and simultaneously marking the beginning of T2). "
    ABSTRACT: Brain activity related to time estimation processes in humans was analyzed using a perceptual phenomenon called auditory temporal assimilation. In a typical stimulus condition, two neighboring time intervals (T1 and T2 in this order) are perceived as equal even when the physical lengths of these time intervals are considerably different. Our previous event-related potential (ERP) study demonstrated that a slow negative component (SNCt) appears in the right-frontal brain area (around the F8 electrode) after T2, which is associated with judgment of the equality/inequality of T1 and T2. In the present study, we conducted two ERP experiments to further confirm the robustness of the SNCt. The stimulus patterns consisted of two neighboring time intervals marked by three successive tone bursts. Thirteen participants only listened to the patterns in the first session, and judged the equality/inequality of T1 and T2 in the next session. Behavioral data showed typical temporal assimilation. The ERP data revealed that three components (N1; contingent negative variation, CNV; and SNCt) emerged related to the temporal judgment. The N1 appeared in the central area, and its peak latencies corresponded to the physical timing of each marker onset. The CNV component appeared in the frontal area during T2 presentation, and its amplitude increased as a function of T1. The SNCt appeared in the right-frontal area after the presentation of T1 and T2, and its magnitude was larger for the temporal patterns causing perceptual inequality. The SNCt was also correlated with the perceptual equality/inequality of the same stimulus pattern, and continued up to about 400 ms after the end of T2. These results suggest that the SNCt can be a signature of equality/inequality judgment, which derives from the comparison of the two neighboring time intervals.
    Full-text · Article · Sep 2014 · Frontiers in Psychology
  • Source
    • "The spatial localization accuracy of EEG is, however, relatively low and thus it was not possible to determine decisively whether the observed short-term plasticity effects originated from the auditory-cortical areas, or whether, for example, putative frontal cortical contributions to the N100 response measured with EEG [92] contributed to the findings. MEG, offering better spatial localization accuracy than EEG, has been utilized in subsequent studies to show that there is either a combination of increased gain and receptive-field reshaping [93, 94] or relatively pure receptive-field reshaping effects [88, 95] that modulate, during selective-attention, the auditory-cortical response that is elicited ∼100 ms from sound onset. Importantly, these short-term plasticity effects have been observed to correlate with behavioral discrimination accuracy [88, 91, 93]. "
ABSTRACT: The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, "short-term plasticity", might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.
    Full-text · Article · Jan 2014 · Neural Plasticity
  • Source
    • "Although executive function begins to develop in early childhood (Marcovitch and Zelazo, 2009; Zelazo et al., 2003), recent evidence from Bialystok (2006, 2011), Bialystok et al. (2008, 2009) suggests that the experiences generated by bilingualism have a positive effect on linguistic and cognitive processing. Our results support the notion that experience with non-native speech improves other aspects of cognitive processing as the recruitment of executive brain regions in older bilingual children is an alternative way to manipulate perceptual information (Archila-Suerte et al., 2011; Okamoto et al., 2007; Tallal and Gaab, 2006). An area of future study is the continued development of non-native speech perception in adolescent bilinguals who have been exposed to the second language sequentially. "
ABSTRACT: The goal of the present study is to reveal how the neural mechanisms underlying non-native speech perception change throughout childhood. In a pre-attentive listening fMRI task, English monolingual and Spanish-English bilingual children - divided into groups of younger (6-8 years) and older children (9-10 years) - were asked to watch a silent movie while several English syllable combinations were played through a pair of headphones. Two additional groups of monolingual and bilingual adults were included in the analyses. Our results show that the neural mechanisms supporting speech perception throughout development differ in monolinguals and bilinguals. While monolinguals recruit perceptual areas (i.e., superior temporal gyrus) in early and late childhood to process native speech, bilinguals recruit perceptual areas (i.e., superior temporal gyrus) in early childhood and higher-order executive areas in late childhood (i.e., bilateral middle frontal gyrus and bilateral inferior parietal lobule, among others) to process non-native speech. The findings support the Perceptual Assimilation Model and the Speech Learning Model and suggest that the neural system processes phonological information differently depending on the stage of L2 speech learning.
    Full-text · Article · Nov 2012 · NeuroImage