Article

Integration and segregation in auditory scene analysis

Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461, USA.
The Journal of the Acoustical Society of America (Impact Factor: 1.5). 04/2005; 117(3 Pt 1):1285-98. DOI: 10.1121/1.1854312
Source: PubMed

ABSTRACT

Assessment of the neural correlates of auditory scene analysis, using an index of sound-change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already-formed streams. This would allow the flexibility required to identify changing within-stream sound patterns, which is needed to appreciate music or comprehend speech.
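For concreteness, the following Python sketch (an illustration only, with assumed frequencies, durations, and deviant probabilities rather than the article's actual stimulus parameters) builds the kind of interleaved tone sequence used in such studies: a repeating four-tone pattern in a low-frequency region alternates with tones in a distant high-frequency region, so the within-stream pattern, and its occasional reversals, becomes detectable only once the two frequency regions are treated as separate streams.

```python
# Illustrative sketch (not the article's actual stimuli): builds an
# interleaved tone sequence in which a repeating pattern exists only
# within the low-frequency stream. In the mixed sequence the pattern
# is hidden; it emerges once the two frequency regions are segregated.
# All parameters below are assumptions for the demo.
import numpy as np

SR = 44100                   # sample rate (Hz)
TONE_DUR = 0.08              # tone duration (s)
LOW = [440, 494, 554, 587]   # hypothetical within-stream pattern (Hz)
HIGH = 1760                  # distractor stream frequency (Hz)

def tone(freq, dur=TONE_DUR, sr=SR):
    """Synthesize a sine tone with a 5 ms raised-cosine on/off ramp."""
    t = np.arange(int(sr * dur)) / sr
    y = np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * sr)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def build_sequence(n_cycles=20, deviant_prob=0.1,
                   rng=np.random.default_rng(0)):
    """Interleave the low-stream pattern with high tones (A-B-A-B...).
    On a random minority of cycles the low pattern is reversed,
    mimicking the within-stream pattern deviants described above."""
    chunks = []
    for _ in range(n_cycles):
        pattern = LOW[::-1] if rng.random() < deviant_prob else LOW
        for f in pattern:
            chunks.append(tone(f))      # low-stream tone
            chunks.append(tone(HIGH))   # interleaved high-stream tone
    return np.concatenate(chunks)

sequence = build_sequence()
```

Played back at sample rate SR, the low tones group into their own stream when the frequency separation is large, which is what makes the embedded pattern (and its reversals) audible at all.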

    • "g . , Duncan & Humphreys , 1989 ; Winkler , Takegata , & Sussman , 2005 ) , several studies investigating auditory feature binding found that it can occur even in the absence of focused attention ( Gomes , Bernstein , Ritter , Vaughan , & Miller , 1997 ; Sussman , Gomes , Nousak , Ritter , & Vaughan , 1998 ; Takegata , Huotilainen , Rinne , Näätänen , & Winkler , 2001 ; Takegata , Paavilainen , Näätänen , & Winkler , 1999 ; Takegata et al . , 2005 ; Winkler et al . "
    ABSTRACT: Communication by sounds requires that the communication channels (i.e., speech/speakers and other sound sources) have been established. This makes it possible to separate concurrently active sound sources, to track their identity, to assess the type of message arriving from them, and to decide whether and when to react (e.g., to reply to the message). We propose that these functions rely on a common generative model of the auditory environment. This model predicts upcoming sounds on the basis of representations describing temporal/sequential regularities. Predictions help to identify the continuation of previously discovered sound sources and to detect the emergence of new sources, as well as changes in the behavior of known ones. The model produces auditory event representations that provide a full sensory description of the sounds, including their relation to the auditory context and to the current goals of the organism. Event representations can be consciously perceived and serve as objects in various cognitive operations.
    Article · Jul 2015 · Brain and Language
    • "Panesse et al. showed that sound streams segregated by tone frequency were held in memory outside the focus of attention, even if the cognitive load may have reduced further processing of the distinct frequency streams. The efficiency of the segregation and maintenance of distinct auditory streams in memory is especially important, given the transient nature of auditory stimuli (Sussman, 2005). If unattended auditory signals are not unraveled immediately , there is no more opportunity to recapture and process the sounds: the physical input is gone. "
    ABSTRACT: ERPs and behavioral responses were measured to assess how task-irrelevant sounds interact with task processing demands and affect the ability to monitor and track multiple sound events. Participants listened to four-tone sequential frequency patterns and responded to frequency-pattern deviants (reversals of the pattern). Irrelevant tone-feature patterns (duration and intensity) and their respective pattern deviants were presented together with the frequency patterns and frequency-pattern deviants in separate conditions. Responses to task-relevant and task-irrelevant feature-pattern deviants were used to test processing demands for irrelevant sound input. Behavioral performance was significantly better when there were no distracting feature patterns. Errors primarily occurred in response to the to-be-ignored feature-pattern deviants. Task-irrelevant elicitation of ERP components was consistent with the error analysis, indicating a level of processing for the irrelevant features. Task-relevant elicitation of ERP components was consistent with behavioral performance, demonstrating a "cost" of performance when two feature patterns were presented simultaneously. These results provide evidence that the brain tracked the irrelevant duration and intensity feature patterns, affecting behavioral performance. Overall, our results demonstrate that irrelevant informational streams are processed at a cost, which may be considered a type of multitasking: an ongoing, automatic processing of task-irrelevant sensory events. (A minimal illustrative sketch of this two-pattern design appears after the citation list below.)
    Article · May 2015 · Psychophysiology
    • "In a nutshell (for a detailed review see Moore and 131 Gockel, 2002), the segregation of a complex audio signal into streams can occur on the basis of 132 many different acoustic cues (Van Noorden, 1975); it is assumed to rely on processes at multiple 133 levels of the auditory system; and it reflects a number of different processes, some of which are 134 stimulus-driven while others are of more general cognitive nature, i.e., involving attention 135 and/or knowledge (Bregman, 1994). 136 Electrophysiological indices of auditory stream segregation have been detected in several 137 approaches (Sussman, 2005; Sussman, Horváth, Winkler, & Orr, 2007; Winkler, Takegata, & 138 Sussman, 2005; Yabe, et al., 2001; for an overview see Snyder and Alain, 2007). One line of 139 research focused on the Mismatch Negativity (MMN) as neural index for a distinct perceptional 140 state of stream segregation by constructing tone sequences such that only a perceptual 141 segregation into two streams would allow a MMN-generating sound pattern to emerge. "

    Article · Jan 2015 · Psychomusicology: Music, Mind, and Brain
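The following Python sketch encodes the two-feature design described in the Psychophysiology abstract quoted above: each tone carries a task-relevant frequency drawn from a repeating four-tone pattern and a task-irrelevant duration drawn from an independent pattern, and deviants reverse a pattern's order for one cycle. All frequencies, durations, and deviant rates here are assumed values for illustration; the study's actual stimulus parameters are not reproduced.

```python
# Illustrative sketch of a two-feature pattern design (assumed
# parameters, not the study's). The task-relevant feature is tone
# frequency; the task-irrelevant feature is tone duration. Each runs
# its own repeating four-step pattern, and a deviant reverses one
# pattern's order for a single cycle.
import numpy as np

FREQ_PATTERN = [500, 600, 700, 800]   # Hz, task-relevant (hypothetical)
DUR_PATTERN = [50, 100, 150, 200]     # ms, task-irrelevant (hypothetical)

def make_trials(n_cycles=50, p_dev=0.08, rng=np.random.default_rng(1)):
    """Return a flat list of tone events; each cycle independently
    reverses the frequency and/or duration pattern with probability
    p_dev, producing task-relevant and task-irrelevant deviants."""
    trials = []
    for _ in range(n_cycles):
        freqs = FREQ_PATTERN[::-1] if rng.random() < p_dev else FREQ_PATTERN
        durs = DUR_PATTERN[::-1] if rng.random() < p_dev else DUR_PATTERN
        for f, d in zip(freqs, durs):
            trials.append({"freq_hz": f, "dur_ms": d,
                           "freq_deviant": freqs is not FREQ_PATTERN,
                           "dur_deviant": durs is not DUR_PATTERN})
    return trials

trials = make_trials()
```

Counting events with `dur_deviant` set gives the number of to-be-ignored deviants on which a participant must withhold responses, the error type the abstract reports as the main source of the performance cost.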