Visual short-term memory load affects sensory processing of irrelevant sounds in human auditory cortex

Apperception Cortical Dynamics (ACD), Department of Psychology, P.O. Box 9, University of Helsinki, FIN-00014 Helsinki, Finland.
Cognitive Brain Research (Impact Factor: 3.77). 08/2003; 17(2):358-67. DOI: 10.1016/S0926-6410(03)00137-X
Source: PubMed


We used whole-head magnetoencephalography (MEG) to investigate neural activity in human auditory cortex elicited by irrelevant tones while the subjects were engaged in a short-term memory task presented in the visual modality. As compared to a no-memory-task condition, memory load enhanced the amplitude of the auditory N1m response. In addition, the N1m amplitude depended on the phase of the memory task, with larger response amplitudes observed during encoding than during retention. Further, these amplitude modulations were accompanied by anterior-posterior shifts in N1m source locations. The results show that a memory task for visually presented stimuli alters sensory processing in human auditory cortex, even when subjects are explicitly instructed to ignore any auditory stimuli. Thus, task demands requiring attentional allocation and short-term memory appear to produce interactions between the visual and auditory brain areas that carry out the processing of stimulus features.



    • "also Weisz & Schlittmeier, 2006). In other neurophysiological ISE experiments, distractor presentation continued during retention (Bell et al., 2010; Campbell et al., 2007; Valtonen et al., 2003) or was even restricted to retention (Campbell et al., 2003, 2007; Kopp et al., 2004, 2006). A question arises as to whether the observed ISE depends on the timing of irrelevant sound presentation. "
    ABSTRACT: The irrelevant sound effect (ISE) describes reduced verbal short-term memory in the presence of irrelevant changing-state sounds, which consist of different and distinct auditory tokens. Steady-state sounds lack such changing-state features and do not impair performance. An EEG experiment (N=16) explored the distinguishing neurophysiological aspects of detrimental changing-state speech (3-token sequence) compared to ineffective steady-state speech (1-token sequence) on serial recall performance. We analyzed evoked and induced activity related to the memory items as well as spectral activity during the retention phase. The main finding is that the behavioral sound effect was exclusively reflected by attenuated token-induced gamma activation, most pronounced between 50 and 60 Hz and 50-100 ms post-stimulus onset. Changing-state speech seems to disrupt a behaviorally relevant ongoing process during target presentation (e.g., the serial binding of the items).
    Psychophysiology 12/2011; 48(12):1669-80. DOI:10.1111/j.1469-8986.2011.01263.x · 3.18 Impact Factor
    • "statistical significance. Support for the independence between visual and auditory resource cannot be based on support of the null hypothesis as in the current experiment, as it is always possible that an effect in audition could be observed by further increasing the difficulty of the visual task (Valtonen et al., 2003). As above, it is perhaps worth considering what a more detailed analysis might reveal in terms of the interactions between auditory and visual stimulation. "
    ABSTRACT: Artwork can often pique the interest of the viewer or listener as a result of the ambiguity or instability contained within it. Our engagement with uncertain sensory experiences might have its origins in early cortical responses, in that perceptually unstable stimuli might preclude neural habituation and maintain activity in early sensory areas. To assess this idea, participants engaged with an ambiguous visual stimulus wherein two squares alternated with one another in simultaneously opposing vertical and horizontal locations relative to fixation (i.e., stroboscopic alternating motion; von Schiller, 1933). On each trial, participants were invited to interpret the movement of the squares in one of five ways: traditional vertical or horizontal motion, novel clockwise or counter-clockwise motion, and a free-view condition in which participants were encouraged to switch the direction of motion as often as possible. Behavioral reports of perceptual stability showed clockwise and counter-clockwise motion to possess an intermediate level of stability compared to relatively stable vertical and horizontal motion and the relatively unstable motion perceived during free-view conditions. Early visual evoked components recorded at parietal-occipital sites, such as C1, P1, and N1, modulated as a function of visual intention. At both the group and individual level, increased perceptual instability was related to increased negativity in all three of these early visual neural responses. Engagement with increasingly ambiguous input may partly result from the underlying exaggerated neural response to it. The study underscores the utility of combining neuroelectric recording with the presentation of perceptually multi-stable yet physically identical stimuli in revealing brain activity associated with the purely internal process of interpreting and appreciating the sensory world that surrounds us.
    Frontiers in Human Neuroscience 08/2011; 5:73. DOI:10.3389/fnhum.2011.00073 · 3.63 Impact Factor
    • "This, in turn, results in a clear dissociation between the sensory magnitude and the N1 amplitude (Picton, Goodman, & Bryce, 1970; Picton, Woods, & Proulx, 1978; Pratt & Sohmer, 1977). In a similar vein, Woods and Elmasian (1986) observed that the strong attenuation of the N1 amplitude at the beginning of a stimulus block is not directly related to loudness (see also Donald, 1979), but rather to its attention-catching properties or disruptiveness (Campbell, 2005; Campbell et al., 2003, 2005; Rinne et al., 2006; Valtonen et al., 2003). For the same reason, the N1 generator process does not seem to be involved in feature integration. "
    ABSTRACT: In this review, we present a model of the brain events leading to conscious perception in audition. It represents an updated version of Näätänen's previous model of automatic and attentive central auditory processing. The revised model is based mainly on the mismatch negativity (MMN) and N1 indices of automatic processing, the processing negativity (PN) index of selective attention, and their magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) equivalents. Special attention is paid to determining the neural processes that might underlie conscious perception and the borderline between automatic and attention-dependent processes in audition.
    Psychophysiology 09/2010; 48(1):4-22. DOI:10.1111/j.1469-8986.2010.01114.x · 3.18 Impact Factor