Auditory temporal edge detection in human auditory cortex

Equipe Audition, Laboratoire de Psychologie de la Perception, CNRS (UMR 8158) Université Paris Descartes and Ecole Normale Supérieure, France.
Brain Research (Impact Factor: 2.84). 07/2008; 1213:78-90. DOI: 10.1016/j.brainres.2008.03.050
Source: PubMed


Auditory objects are detected if they differ acoustically from the ongoing background. In simple cases, the appearance or disappearance of an object involves a transition in power, or frequency content, of the ongoing sound. However, it is more realistic that the background and object possess substantial non-stationary statistics, and the task is then to detect a transition in the pattern of ongoing statistics. How does the system detect and process such transitions? We use magnetoencephalography (MEG) to measure early auditory cortical responses to transitions between constant tones, regularly alternating, and randomly alternating tone-pip sequences. Such transitions embody key characteristics of natural auditory temporal edges. Our data demonstrate that the temporal dynamics and response polarity of the neural temporal-edge-detection processes depend in specific ways on the generalized nature of the edge (the context preceding and following the transition) and suggest that distinct neural substrates in core and non-core auditory cortex are recruited depending on the kind of computation (discovery of a violation of regularity, vs. the detection of a new regularity) required to extract the edge from the ongoing fluctuating input entering a listener's ears.

Available from: Jonathan Z Simon
  • Source
    • "Such differences in spatial distribution, which are probably explained by the fast and regular presentation rate, could reflect the distinct neuronal generators involved in the processing of local and global deviations [Lütkenhöner, 2003]. The absence of earlier activity preceding the global MMNm is in line with previous findings showing that sound transitions from a regular sequence to a constant pure-tone elicited a peak at 160 ms after transition but no earlier activity in the P50 time range, suggesting that early mechanisms of deviance detection may be limited by the kinds of regularity they can compute [Chait et al., 2008]. Based on the lack of global deviance-related effects in the time range of the MLR, we suggest that very early deviance detection mechanisms work, at least for frequency [Leung et al., 2012], at the feature level [Alho et al., 2012; Althen et al., 2011; Grimm et al., 2012; Leung et al., 2013], and reflect an early stage prior to feature combination, sequential grouping, or the extraction of regularities based on the interrelationship between sounds. "
    ABSTRACT: Our auditory system is able to encode acoustic regularity of growing levels of complexity to model and predict incoming events. Recent evidence suggests that early indices of deviance detection in the time range of the middle-latency responses (MLR) precede the mismatch negativity (MMN), a well-established error response associated with deviance detection. While studies suggest that only the MMN, but not early deviance-related MLR, underlies complex regularity levels, it is not clear whether these two mechanisms interplay during scene analysis by encoding nested levels of acoustic regularity, and whether neuronal sources underlying local and global deviations are hierarchically organized. We registered magnetoencephalographic evoked fields to rapidly presented four-tone local sequences containing a frequency change. Temporally integrated local events, in turn, defined global regularities, which were infrequently violated by a tone repetition. A global magnetic mismatch negativity (MMNm) was obtained at 140-220 ms when breaking the global regularity, but no deviance-related effects were shown in early latencies. Conversely, Nbm (45-55 ms) and Pbm (60-75 ms) deflections of the MLR, and an earlier MMNm response at 120-160 ms, responded to local violations. Distinct neuronal generators in the auditory cortex underlay the processing of local and global regularity violations, suggesting that nested levels of complexity of auditory object representations are represented in separate cortical areas. Our results suggest that the different processing stages and anatomical areas involved in the encoding of auditory representations, and the subsequent detection of their violations, are hierarchically organized in the human auditory cortex.
    Human Brain Mapping 11/2014; 35(11). DOI:10.1002/hbm.22582 · 5.97 Impact Factor
  • Source
    • "Indeed MEG brain imaging experiments with such stimuli (Chait et al., 2008) demonstrate that the auditory cortex detects the emergence of regular patterns automatically, and rapidly, even in the absence of directed attention (when listeners are actively engaged in an unrelated task). The point of detection, measured as the first brain response to the transition, occurs roughly a cycle and a half after the nominal transition time (a similar estimate is also obtained by measuring behavioral detection time; see below). "
    ABSTRACT: We investigated how listeners perceive the temporal relationship of a light flash and a complex acoustic signal. The stimulus mimics ubiquitous events in busy scenes which are manifested as a change in the pattern of on-going fluctuation. Detecting pattern emergence inherently requires integration over time, resulting in such events being detected later than when they occurred. How does delayed detection time affect the perception of such events relative to other events in the scene? To model these situations, we use rapid sequences of tone pips with a time-frequency pattern that changes from random to regular ("RAND-REG") or vice versa ("REG-RAND"). REG-RAND transitions are detected rapidly, but RAND-REG transitions take longer to detect (∼880 ms post nominal transition). Using a Temporal Order Judgment task, we instructed subjects to indicate whether the flash appeared before or after the acoustic transition. The point of subjective simultaneity between the flash and RAND-REG does not occur at the point of detection (∼880 ms post nominal transition) but ∼470 ms closer to the nominal acoustic transition. In a second experiment we halved the tone pip duration. The resulting pattern of performance was qualitatively similar to that in Experiment 1, but scaled by half. Our results indicate that the brain possesses mechanisms that survey the proximal history of an on-going stimulus and automatically adjust perception so as to compensate for prolonged detection time, thus producing more accurate representations of scene dynamics. However, this readjustment is not complete.
    Frontiers in Psychology 10/2012; 3:396. DOI:10.3389/fpsyg.2012.00396 · 2.80 Impact Factor
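The RAND-REG stimuli described in the abstract above can be sketched in a few lines: a segment of randomly drawn tone-pip frequencies followed by a repeating (regular) cycle. This is a minimal illustration only; the frequency pool, cycle length, and segment lengths below are assumptions for demonstration, not the study's actual parameters.

```python
import random

def rand_reg_sequence(pool, cycle_len, n_rand, n_cycles, seed=0):
    """Return a list of tone-pip frequencies forming a RAND-REG transition:
    `n_rand` pips drawn at random from `pool`, followed by a fixed cycle of
    `cycle_len` distinct frequencies repeated `n_cycles` times."""
    rng = random.Random(seed)
    # Random segment: pips drawn with replacement from the pool.
    rand_part = [rng.choice(pool) for _ in range(n_rand)]
    # Regular segment: one fixed cycle of distinct frequencies, repeated.
    cycle = rng.sample(pool, cycle_len)
    reg_part = cycle * n_cycles
    return rand_part + reg_part

# Hypothetical 20-frequency pool, semitone-spaced from 200 Hz.
pool = [200 * (2 ** (i / 12)) for i in range(20)]
seq = rand_reg_sequence(pool, cycle_len=10, n_rand=30, n_cycles=3)
```

Reversing the two segments yields the complementary REG-RAND stimulus; the nominal transition falls at the boundary between the segments, while behavioral and neural detection lag it by the integration time discussed above.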
  • Source
    • "Disappearance detection in the present stimuli requires a ‘smarter’, ‘second order transient’ detection mechanism, capable of acquiring the temporal patterning of the on-going sound and signalling when those rules are violated, e.g. when an expected tone pip fails to arrive. The existence of such ‘smart’ offset detection mechanisms (albeit in the context of a single sequence rather than several concurrent sources), operating automatically irrespective of listeners' attentional focus, has been demonstrated in several recent human brain imaging studies [23]–[25], and it has been hypothesized that they might play a role in scene change detection. The animal electrophysiology literature has largely focused on offset responses to simpler sounds (long pure tones); however, there is some evidence (e.g. "
    ABSTRACT: The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an "early warning device," rapidly directing attention to new events. Here, we investigate listeners' sensitivity to changes in complex acoustic scenes: what makes certain events "pop out" and grab attention while others remain unnoticed? We use artificial "scenes" populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack semantic attributes, which may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between "appear" and "disappear" events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling; response times are short, with little effect of scene size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene size; response times are slow; and even when a change is detected, the changed component is rarely successfully identified. We also measured change detection performance when a noise or silent gap was inserted at the time of change, or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection. Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used, suggesting a robust perceptual representation of item appearance.
    PLoS ONE 09/2012; 7(9):e46167. DOI:10.1371/journal.pone.0046167 · 3.23 Impact Factor