Article

Effects of attention on neuroelectric correlates of auditory stream segregation

Baycrest Centre for Geriatric Care, University of Toronto, Canada.
Journal of Cognitive Neuroscience (Impact Factor: 4.69). 02/2006; 18(1):1-13. DOI: 10.1162/089892906775250021
Source: PubMed

ABSTRACT: A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams, or watched a muted movie. The stimuli were pure-tone ABA- patterns that repeated for 10.8 sec with a stimulus onset asynchrony of 100 msec between A and B tones, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and "-" was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with the frequency separation (Δf) and correlated with the perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA- patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than at left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: automatic segregation of sounds and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
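The ABA- paradigm described in the abstract is fully parametrized (A fixed at 500 Hz; B at 500, 625, 750, or 1000 Hz; 100-msec SOA per slot; 10.8-sec trials), so the stimulus can be sketched in a few lines of code. The abstract does not state the tone duration or sample rate, so the 50-ms tones and 44.1-kHz rate below are illustrative assumptions, not the study's actual values.

```python
import numpy as np

def make_aba_sequence(b_freq, a_freq=500.0, soa=0.100,
                      tone_dur=0.050, total_dur=10.8, fs=44100):
    """Return a mono waveform of repeating ABA- triplets.

    Each slot (A, B, A, silence) lasts one SOA (100 ms); the
    pattern repeats until total_dur seconds are filled.
    """
    def tone(freq):
        t = np.arange(int(tone_dur * fs)) / fs
        return np.sin(2 * np.pi * freq * t)

    slot = int(soa * fs)                      # samples per 100-ms slot
    n_triplets = int(total_dur / (4 * soa))   # 27 triplets in 10.8 s
    out = np.zeros(n_triplets * 4 * slot)
    for i in range(n_triplets):
        base = i * 4 * slot
        # fill slots A, B, A; the fourth slot stays silent ("-")
        for j, f in enumerate((a_freq, b_freq, a_freq)):
            start = base + j * slot
            out[start:start + len(tone(f))] += tone(f)
    return out

# B-tone frequencies used in the study (Δf of 0, ~4, ~7, and 12 semitones)
sequences = {f: make_aba_sequence(f) for f in (500, 625, 750, 1000)}
```

In a real experiment the tones would also be gated with onset/offset ramps to avoid spectral splatter; that detail is omitted here for brevity.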

Available from: Terence W Picton, Apr 15, 2015
  • Source
    • "Although subjective reports and neural measures of streaming have often been collected simultaneously (Cusack, 2005; Dykstra et al., 2011; Gutschalk et al., 2005; Hill et al., 2012; Snyder et al., 2006; Szalárdy et al., 2013), to our knowledge only one previous study has directly linked an objective measure of streaming with concurrent percept reports. Participants in Billig et al. (2013) heard sequences of repeated syllables that could be perceived as integrated or segregated due to spectral differences between the initial /s/ sound and the remainder (such as "stone" vs. "s" + "dohne")."
    DESCRIPTION: Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of “ABA-” triplets, where “A” and “B” were tones of different frequencies and “-” was a silent gap. Segregation was more frequently reported, and rhythmically-deviant triplets less well-detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple “covert attention” account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the co-dependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes.
  • Source
    • "However, the current study is among the first to our knowledge (also see Rojas et al., 2007) showing that lateral suppression in particular is impaired in SZ, which is an important finding because lateral suppression is likely to be a neurophysiological capability that is vital for more precise coding of auditory features (Chen and Jen, 2000; Wang et al., 2002). The finding of impaired P2 lateral suppression in particular could be related to the fact that P2 is particularly involved in processing spectral information in healthy individuals (Shahin et al., 2005, 2007; Snyder et al., 2006, 2009). Thus, this impairment should be considered a candidate for explaining at least some of the widely reported auditory deficits observed in SZ. "
    ABSTRACT: Well-documented auditory processing deficits such as impaired frequency discrimination and reduced suppression of auditory brain responses in schizophrenia (SZ) may contribute to abnormal auditory functioning in everyday life. Lateral suppression of non-stimulated neurons by stimulated neurons has not been extensively assessed in SZ and likely plays an important role in precise encoding of sounds. Therefore, this study evaluated whether lateral suppression of activity in auditory cortex is impaired in SZ.
    Schizophrenia Research 01/2015; 162(1-3). DOI:10.1016/j.schres.2014.12.032 · 4.43 Impact Factor
  • Source
    • "A multi-talker listening situation represents a special case of acoustic degradation, as the segregation of the auditory scene requires the operation of a complex set of perceptual and attentional processes (Bronkhorst, 2000). ERP evidence indicates that both bottom-up peripheral mechanisms and top-down modulation play a role in auditory scene analysis (Alain et al., 2005; Snyder et al., 2006). A recent electroencephalography (EEG) study examined the dynamic cortical response to two continuous dichotically presented speech signals using auditory evoked spread spectrum analysis (AESPA; Power et al., 2012), and found that selective attention to one signal yielded a peak of activity at a latency of ~200 ms (cf. "
    ABSTRACT: The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at a SNR of −12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the le/RH produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the re/LH. The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning.
    Neuropsychologia 10/2014; 65. DOI:10.1016/j.neuropsychologia.2014.10.016 · 3.45 Impact Factor