Article

Integration and segregation in auditory scene analysis.

Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461, USA.
The Journal of the Acoustical Society of America (Impact Factor: 1.56). 04/2005; 117(3 Pt 1):1285-98. DOI: 10.1121/1.1854312
Source: PubMed

ABSTRACT The mismatch negativity (MMN), a component of event-related brain potentials, indexes sound change detection without requiring the listener to attend to the sounds. Previous MMN assessments of the neural correlates of auditory scene analysis have demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place within the already formed streams. This arrangement would allow the flexibility required to identify changing within-stream sound patterns, which is needed to appreciate music or comprehend speech.
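
The MMN referred to throughout is conventionally quantified as a difference wave: the averaged ERP to deviant sounds minus the averaged ERP to standard sounds, with a negative peak roughly 100-250 ms after deviance onset. The sketch below illustrates that computation for a single electrode; the sampling rate, epoch length, and latency window are illustrative assumptions, not parameters taken from this study.

```python
# Minimal sketch of the deviant-minus-standard difference wave used to
# quantify MMN. Sampling rate, epoch timing, and the 100-250 ms search
# window are illustrative placeholders, not values from the study.
import numpy as np

FS = 500                      # sampling rate in Hz (assumed)
T0 = 0.1                      # epochs assumed to start 100 ms pre-stimulus

def mmn_difference_wave(standard_epochs, deviant_epochs):
    """Average each condition across trials and subtract standards from deviants.

    Both inputs: arrays of shape (n_trials, n_samples) for one electrode (e.g. Fz).
    """
    standard_erp = standard_epochs.mean(axis=0)
    deviant_erp = deviant_epochs.mean(axis=0)
    return deviant_erp - standard_erp

def mmn_peak(difference_wave, fs=FS, t0=T0, window=(0.10, 0.25)):
    """Return latency (s) and amplitude of the most negative point in the window."""
    times = np.arange(difference_wave.size) / fs - t0
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.flatnonzero(mask)[np.argmin(difference_wave[mask])]
    return times[idx], difference_wave[idx]

# Example with simulated epochs (300 samples = 600 ms at 500 Hz).
rng = np.random.default_rng(0)
standards = rng.normal(0.0, 1.0, size=(200, 300))
deviants = rng.normal(0.0, 1.0, size=(40, 300))
latency, amplitude = mmn_peak(mmn_difference_wave(standards, deviants))
print(f"MMN peak: {amplitude:.2f} uV at {latency * 1000:.0f} ms")
```

Because the MMN is derived from passively recorded responses, the same difference-wave logic applies whether the deviant is a simple feature change or, as in this study, a violation of a within-stream sound pattern.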

  •
    ABSTRACT: ERPs and behavioral responses were measured to assess how task-irrelevant sounds interact with task processing demands and affect the ability to monitor and track multiple sound events. Participants listened to four-tone sequential frequency patterns and responded to frequency pattern deviants (reversals of the pattern). Irrelevant tone feature patterns (duration and intensity) and respective pattern deviants were presented together with frequency patterns and frequency pattern deviants in separate conditions. Responses to task-relevant and task-irrelevant feature pattern deviants were used to test processing demands for irrelevant sound input. Behavioral performance was significantly better when there were no distracting feature patterns. Errors primarily occurred in response to the to-be-ignored feature pattern deviants. Task-irrelevant elicitation of ERP components was consistent with the error analysis, indicating a level of processing for the irrelevant features. Task-relevant elicitation of ERP components was consistent with behavioral performance, demonstrating a "cost" of performance when there were two feature patterns presented simultaneously. These results provide evidence that the brain tracked the irrelevant duration and intensity feature patterns, affecting behavioral performance. Overall, our results demonstrate that irrelevant informational streams are processed at a cost, which may be considered a type of multitasking: an ongoing, automatic processing of task-irrelevant sensory events. © 2015 Society for Psychophysiological Research.
    Psychophysiology 05/2015; DOI:10.1111/psyp.12446 · 3.18 Impact Factor
  •
    ABSTRACT: Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals, over a range of time scales from milliseconds to seconds, renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant to hearing in background noise, focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied independently, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities that typically go unrecognized.
    International Journal of Psychophysiology 06/2014; 95(2). DOI:10.1016/j.ijpsycho.2014.06.010 · 2.65 Impact Factor
  •
    ABSTRACT: Stream segregation is the process by which the auditory system disentangles the mixture of sound inputs into discrete sources that cohere across time. The length of time required for this to occur is termed the "buildup" period. In the current study, we used the buildup period as an index of how quickly sounds are segregated into constituent parts. Specifically, we tested the hypothesis that stimulus context impacts the timing of the buildup and, therefore, affects when stream segregation is detected. To measure the timing of the buildup we recorded the Mismatch Negativity component (MMN) of event-related brain potentials (ERPs), during passive listening, to determine when the streams were neurophysiologically segregated. In each condition, a pattern of repeating low (L) and high (H) tones (L-L-H) was presented in trains of stimuli separated by silence, with the H tones forming a simple intensity oddball paradigm and the L tones serving as distractors. To determine the timing of the buildup, probe tones occurred in two positions of the trains, early (within the buildup period) and late (past the buildup period). The context was manipulated by presenting roving vs. non-roving frequencies across trains in two conditions. MMNs were elicited by intensity probe tones in the Non-Roving condition (early and late positions) and the Roving condition (late position only) indicating that neurophysiologic segregation occurred faster in the Non-Roving condition. This suggests a shorter buildup period when frequency was repeated from train to train. Overall, our results demonstrate that the dynamics of the environment influence the way in which the auditory system extracts regularities from the input. The results support the hypothesis that the buildup to segregation is highly dependent upon stimulus context and that the auditory system works to maintain a consistent representation of the environment when no new information suggests that reanalyzing the scene is necessary.
    Frontiers in Neuroscience 04/2014; 8:93. DOI:10.3389/fnins.2014.00093
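
The buildup-period study directly above manipulates context by either repeating (non-roving) or changing (roving) the tone frequencies from train to train, with an intensity deviant on the H tones serving as the probe. The sketch below lays out one way such L-L-H trains could be assembled; the frequency set, intensity values, train length, and probe positions are placeholders for illustration, not the study's actual parameters.

```python
# Schematic construction of the L-L-H train paradigm described above:
# H tones carry an intensity oddball, L tones are distractors, and the
# probe (intensity deviant) falls either early or late in a train.
# Frequencies, intensities, and train lengths are illustrative only.
import random

def build_train(low_hz, high_hz, n_triplets=12, probe_triplet=3,
                standard_db=70, deviant_db=80):
    """Return one train as a list of (frequency_hz, intensity_db, role) tuples."""
    train = []
    for i in range(n_triplets):
        train.append((low_hz, standard_db, "L"))
        train.append((low_hz, standard_db, "L"))
        h_db = deviant_db if i == probe_triplet else standard_db
        role = "H-probe" if i == probe_triplet else "H"
        train.append((high_hz, h_db, role))
    return train

def build_block(n_trains=20, roving=True, early=3, late=9, seed=0):
    """Build a block of trains; roving draws a new frequency pair per train."""
    rng = random.Random(seed)
    trains = []
    for t in range(n_trains):
        if roving or t == 0:
            low_hz = rng.choice([400, 500, 600])   # assumed frequency set
            high_hz = low_hz * 2                   # assumed L/H separation
        probe = early if t % 2 == 0 else late      # alternate probe position
        trains.append(build_train(low_hz, high_hz, probe_triplet=probe))
    return trains

non_roving = build_block(roving=False)   # same frequencies in every train
roving = build_block(roving=True)        # frequencies change train to train
print(len(non_roving), "trains;", len(non_roving[0]), "tones per train")
```

In the non-roving block the frequency pair is drawn once and reused across trains; that contextual repetition is what the study links to a shorter buildup period before segregation is detected.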
