Article

Integration and segregation in auditory scene analysis.

Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461, USA.
The Journal of the Acoustical Society of America (Impact Factor: 1.56). 04/2005; 117(3 Pt 1):1285-98. DOI: 10.1121/1.1854312
Source: PubMed

ABSTRACT Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, as needed to appreciate music or comprehend speech.

    • "g . , Duncan & Humphreys , 1989 ; Winkler , Takegata , & Sussman , 2005 ) , several studies investigating auditory feature binding found that it can occur even in the absence of focused attention ( Gomes , Bernstein , Ritter , Vaughan , & Miller , 1997 ; Sussman , Gomes , Nousak , Ritter , & Vaughan , 1998 ; Takegata , Huotilainen , Rinne , Näätänen , & Winkler , 2001 ; Takegata , Paavilainen , Näätänen , & Winkler , 1999 ; Takegata et al . , 2005 ; Winkler et al . "
    ABSTRACT: Communication by sounds requires that the communication channels (i.e., speech/speakers and other sound sources) have been established. This allows one to separate concurrently active sound sources, to track their identity, to assess the type of message arriving from them, and to decide whether and when to react (e.g., reply to the message). We propose that these functions rely on a common generative model of the auditory environment. This model predicts upcoming sounds on the basis of representations describing temporal/sequential regularities. Predictions help to identify the continuation of previously discovered sound sources and to detect the emergence of new sources, as well as changes in the behavior of known ones. The model produces auditory event representations, which provide a full sensory description of the sounds, including their relation to the auditory context and to the current goals of the organism. Event representations can be consciously perceived and can serve as objects in various cognitive operations.
    Brain and Language 07/2015; 148:1-22. DOI:10.1016/j.bandl.2015.05.003 · 3.31 Impact Factor
    • "Panesse et al. showed that sound streams segregated by tone frequency were held in memory outside the focus of attention, even if the cognitive load may have reduced further processing of the distinct frequency streams. The efficiency of the segregation and maintenance of distinct auditory streams in memory is especially important, given the transient nature of auditory stimuli (Sussman, 2005). If unattended auditory signals are not unraveled immediately , there is no more opportunity to recapture and process the sounds: the physical input is gone. "
    ABSTRACT: ERPs and behavioral responses were measured to assess how task-irrelevant sounds interact with task processing demands and affect the ability to monitor and track multiple sound events. Participants listened to four-tone sequential frequency patterns and responded to frequency pattern deviants (reversals of the pattern). Irrelevant tone feature patterns (duration and intensity) and their respective pattern deviants were presented together with the frequency patterns and frequency pattern deviants in separate conditions. Responses to task-relevant and task-irrelevant feature pattern deviants were used to test processing demands for irrelevant sound input. Behavioral performance was significantly better when there were no distracting feature patterns. Errors primarily occurred in response to the to-be-ignored feature pattern deviants. Task-irrelevant elicitation of ERP components was consistent with the error analysis, indicating a level of processing for the irrelevant features. Task-relevant elicitation of ERP components was consistent with behavioral performance, demonstrating a "cost" of performance when two feature patterns were presented simultaneously. These results provide evidence that the brain tracked the irrelevant duration and intensity feature patterns, affecting behavioral performance. Overall, our results demonstrate that irrelevant informational streams are processed at a cost, which may be considered a type of multitasking: an ongoing, automatic processing of task-irrelevant sensory events.
    Psychophysiology 05/2015; DOI:10.1111/psyp.12446 · 3.18 Impact Factor
    • "In summary, not only the MMN approach (Sussman, 2005; Yabe et al., 2001), but also the oscillatory speech-tracking approach can be used to test temporal structure processing of several simultaneous speech streams and allows one to distinguish between temporal structure processing and selective attention effects on speech processing in noise. "
    ABSTRACT: Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals, over a range of time scales from milliseconds to seconds, renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant to hearing in background noise, focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied independently, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing across these three areas of investigation, we aim to highlight similarities that are typically not recognized.
    International Journal of Psychophysiology 06/2014; 95(2). DOI:10.1016/j.ijpsycho.2014.06.010 · 2.65 Impact Factor
