Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14(9), 400-410.

Department of Cognitive Psychology and Ergonomics, University of Twente, P.O. Box 215, 7500 AE Enschede, The Netherlands.
DOI: 10.1016/j.tics.2010.06.008
Source: PubMed

ABSTRACT: Multisensory integration has often been characterized as an automatic process. Recent findings indicate that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by crossmodal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in the current understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant, findings.

    • "Talsma et al. (2010) review research demonstrating synergistic effects on attentional processes when visual, auditory , and/or tactile inputs are synchronized. Of particular relevance to the present study is their general conclusion that Btemporally and spatially aligned sensory inputs in different modalities have a higher likelihood to be favored for further processing, and thus to capture an individual's attention, than do stimuli that are not aligned^ (Talsma et al. 2010, p. 400). For example, Van der Burg et al. (2009) found participants' search times for targets in highly distracting visual environments were substantially improved when the target's color change was accompanied by a tactile signal. "
    ABSTRACT: Gesturally controlled information and communication technologies, such as tablet devices, are becoming increasingly popular tools for teaching and learning. Based on the theoretical frameworks of cognitive load and embodied cognition, this study investigated the impact of explicit instructions to trace out elements of tablet-based worked examples on mathematical problem-solving. Participants were 61 primary school children (8-11 years), who studied worked examples on an iPad either by tracing temperature graphs with their index finger or without such tracing. Results confirmed the main hypothesis that finger tracing as a form of biologically primary knowledge would support the construction of biologically secondary knowledge needed to understand temperature graphs. Children in the tracing condition achieved higher performance on transfer test questions. The theoretical and practical implications of the results are discussed.
    Educational Psychology Review, 09/2015; 27(3). DOI: 10.1007/s10648-015-9315-5
    • "ed to com - parable processing gains . Moreover , their effects combined linearly , suggesting that audio - visual synchrony and fea - ture - selective attention can act in parallel to influence neu - ral stimulus representations . Our results add to the grow - ing literature on the interplay of attention and multisensory integration ( reviewed in Talsma et al . 2010 ) and may have practical implications for the design of multisensory brain – computer interfaces ( An et al . 2014 ) ."
    ABSTRACT: Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
    Experimental Brain Research, 08/2015. DOI: 10.1007/s00221-015-4392-8
    • "While cueing and search paradigms differ in many ways (e.g., role of temporal vs. spatial correspondences between the two modalities), they both rely on spatially localized low-level changes in the sensory input. Indeed, one possible mechanism generating these crossmodal interactions is that the between-modalities (spatial and/or temporal) correspondence of the physical change makes the target location more salient via bottom–up, stimulus-driven attention control (e.g., Van der Burg et al., 2008; see also Talsma et al., 2010, for review). However, high-level factors can also contribute to crossmodal influences on visuo-spatial processing. "
    ABSTRACT: Previous studies have shown that multisensory stimuli can contribute to attention control. Here we investigate whether irrelevant audio-visual stimuli can affect the processing of subsequent visual targets, in the absence of any direct bottom-up signals generated by low-level sensory changes and any goal-related associations between the multisensory stimuli and the visual targets. Each trial included two pictures (cat/dog), one in each visual hemifield, and a central sound that was semantically congruent with one of the two pictures (i.e., either a "meow" or a "woof" sound). These irrelevant audio-visual stimuli were followed by a visual target that appeared where either the congruent or the incongruent picture had been presented (valid/invalid trials). The visual target was a Gabor patch requiring an orientation discrimination judgment, allowing us to uncouple the visual task from the audio-visual stimuli. Behaviourally, we found lower performance on invalid than on valid trials, but only when task demands were high (Gabor target presented together with a Gabor distractor vs. Gabor target alone). The fMRI analyses revealed greater activity for invalid than for valid trials in the dorsal and ventral fronto-parietal attention networks. The dorsal network was recruited irrespective of task demands, while the ventral network was recruited only when task demands were high and target discrimination required additional top-down control. We propose that crossmodal semantic congruence generates a processing bias associated with the location of the congruent picture, and that presentation of the visual target on the opposite side required updating these processing priorities. We relate the activation of the attention networks to these updating operations. We conclude that the fronto-parietal networks mediate the influence of crossmodal semantic congruence on visuo-spatial processing, even in the absence of any low-level sensory cue and any goal-driven task associations.
    Frontiers in Integrative Neuroscience, 07/2015; 9:45. DOI: 10.3389/fnint.2015.00045