Article

The multifaceted interplay between attention and multisensory integration.

Department of Cognitive Psychology and Ergonomics, University of Twente, P.O. Box 215, 7500 AE Enschede, The Netherlands.
Trends in Cognitive Sciences (Impact Factor: 16.01). 09/2010; 14(9):400-10. DOI: 10.1016/j.tics.2010.06.008
Source: PubMed

ABSTRACT: Multisensory integration has often been characterized as an automatic process. Recent findings indicate that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by crossmodal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in the current understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant, findings.

  • ABSTRACT: Capacity limitations of attentional resources allow only a fraction of sensory inputs to enter our awareness. Most prominently, in the attentional blink the observer often fails to detect the second of two rapidly successive targets presented in a sequence of distractor items. To investigate how auditory inputs enable a visual target to escape the attentional blink, this study presented the visual letter targets T1 and T2 together with phonologically congruent or incongruent spoken letter names. First, a congruent relative to an incongruent sound at T2 rendered visual T2 more visible. Second, this T2 congruency effect was amplified when the sound was congruent at T1, as indicated by a T1 congruency × T2 congruency interaction. Critically, these effects were observed both when the sounds were presented in synchrony with the visual target letters and when they preceded them, suggesting that the sounds may increase visual target identification via multiple mechanisms such as audiovisual priming or decisional interactions. Our results demonstrate that a sound around the time of T2 increases subjects' awareness of the visual target as a function of T1 and T2 congruency. Consistent with Bayesian causal inference, the brain may thus combine (1) prior congruency expectations based on T1 congruency and (2) phonological congruency cues provided by the audiovisual inputs at T2 to infer whether auditory and visual signals emanate from a common source and should hence be integrated for perceptual decisions.
    Frontiers in Integrative Neuroscience 09/2014; 8:70.
    [A generic form of the causal-inference computation described here is sketched after this list.]
  • ABSTRACT: While aging can lead to significant declines in perceptual and cognitive function, the effects of age on multisensory integration, the process by which the brain combines information across the senses, are less clear. Recent reports suggest that older adults are susceptible to the sound-induced flash illusion (Shams et al., 2000) across a much wider range of temporal asynchronies than younger adults (Setti et al., 2011). To assess whether this cost for multisensory integration is a general phenomenon of combining asynchronous audiovisual input, we compared the time courses of two variants of the sound-induced flash illusion in young and older adults: the fission illusion, where one flash accompanied by two beeps appears as two flashes, and the fusion illusion, where two flashes accompanied by one beep appear as one flash. Twenty-five younger (18-30 years) and older (65+ years) adults were required to report whether they perceived one or two flashes, whilst ignoring irrelevant auditory beeps, in bimodal trials where auditory and visual stimuli were separated by one of six stimulus onset asynchronies (SOAs). There was a marked difference in the pattern of results for the two variants of the illusion. In conditions known to produce the fission illusion, older adults were significantly more susceptible to the illusion at longer SOAs compared to younger participants. In contrast, the performance of the younger and older groups was almost identical in conditions known to produce the fusion illusion. This surprising difference between sound-induced fission and fusion in older adults suggests dissociable age-related effects in multisensory integration, consistent with the idea that these illusions are mediated by distinct neural mechanisms.
    Frontiers in Aging Neuroscience 01/2014; 6:250. (Impact Factor: 5.20)
  • ABSTRACT: For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multisensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities were proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift in the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It provides new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
    PLoS ONE 01/2014; 9(10):e111070. (Impact Factor: 3.53)
    [A back-of-the-envelope check of the chance level and information transfer rate reported here is sketched after this list.]
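
The first abstract above appeals to Bayesian causal inference. As a minimal sketch, and not the specific model fitted in that study, the standard two-cause formulation can be written with generic symbols (none of which are taken from the paper): let C = 1 denote a common audiovisual source and C = 2 independent sources; given auditory and visual evidence x_A and x_V, the posterior probability of a common source is

    P(C{=}1 \mid x_A, x_V) = \frac{P(x_A, x_V \mid C{=}1)\, P(C{=}1)}{P(x_A, x_V \mid C{=}1)\, P(C{=}1) + P(x_A, x_V \mid C{=}2)\, \bigl(1 - P(C{=}1)\bigr)}

In the terms of that study, the prior P(C=1) would be raised or lowered by the congruency experienced at T1, the likelihood terms would reflect the phonological congruency of the audiovisual input at T2, and a high posterior favours integrating the sound and the letter for the perceptual decision.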
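
The accuracy, chance level and speed quoted in the last abstract can be sanity-checked with a short calculation. The sketch below is illustrative only: it assumes a 6 x 6 symbol matrix (36 classes), which is common for ERP spellers but is not stated in the abstract, and it uses the standard Wolpaw formula for bits per selection; the function names are ours.

import math

def chance_level(n_symbols: int) -> float:
    """Probability of picking the correct symbol by guessing."""
    return 1.0 / n_symbols

def bits_per_selection(accuracy: float, n_symbols: int) -> float:
    """Wolpaw information transfer per selection, in bits."""
    n, p = n_symbols, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

n = 36        # assumed 6 x 6 speller matrix (not given in the abstract)
acc = 0.877   # mean online accuracy reported for the Parallel-Speller
spm = 1.65    # reported spelling speed, symbols per minute

print(f"chance level: {chance_level(n):.1%}")                    # ~2.8%, i.e. below 3%
print(f"bits per selection: {bits_per_selection(acc, n):.2f}")   # ~4.0 bits
print(f"ITR: {bits_per_selection(acc, n) * spm:.2f} bits/min")   # ~6.6 bits/min

With these assumptions the chance level is about 2.8%, consistent with the reported "<3%", and 87.7% accuracy at 1.65 symbols per minute corresponds to roughly 6.6 bits per minute.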

Full-text (2 Sources)

Download
76 Downloads
Available from
May 22, 2014