Article

Multisensory Integration Affects Visuo-Spatial Working Memory

Department of Psychology, Sapienza University of Rome, 00185 Rome, Italy.
Journal of Experimental Psychology: Human Perception and Performance (Impact Factor: 3.11). 05/2011; 37(4):1099-1109. DOI: 10.1037/a0023513
Source: PubMed

ABSTRACT In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially nonpredictive visual, auditory, and audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued and uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, whereas auditory cues had no direct effect on VSWM. Finally, spatially congruent multisensory cues produced a larger attentional effect in VSWM than unimodal visual cues, a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role played by multisensory (audiovisual) cues.
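
A concrete way to read the cued/uncued comparison: the cueing effect for each cue modality is simply memory performance on cued trials minus performance on uncued trials. The short Python sketch below illustrates that arithmetic; it is not the authors' analysis code, and every accuracy value in it is hypothetical.

    # Minimal sketch (not from the paper): the cueing effect per cue modality,
    # computed as cued-trial accuracy minus uncued-trial accuracy.
    # All numbers are hypothetical placeholders.
    HYPOTHETICAL_ACCURACY = {
        # modality: (mean accuracy, cued trials; mean accuracy, uncued trials)
        "visual": (0.78, 0.71),
        "auditory": (0.74, 0.73),
        "audiovisual": (0.81, 0.70),
    }

    for modality, (cued, uncued) in HYPOTHETICAL_ACCURACY.items():
        effect = cued - uncued  # positive = attention was drawn to the cued location
        print(f"{modality:>11}: cueing effect = {effect:+.2f}")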

Cited by:
    • "However, in sharp contrast with the evidence from previous crossmodal attention studies, the authors failed to observe significant auditory (to visual) cueing effects. Botta et al. (2011) proposed that the " anomalous " absence of the auditory effect in their study might have been due to a failed perceptual association between the unique auditory object (the cue) and the multiple visual objects (the memory array). In fact, typical crossmodal audio-visual attentional paradigms imply a one-to-one relation between the cue and the target, while in Botta et al.'s study the presentation of a single auditory cue at a given location was followed by the presentation of 4 or 6 stimuli(4 or 6 coloured squares) distributed across the visual field. "
    ABSTRACT: Audiovisual links in spatial attention have been reported in many previous studies. However, the effectiveness of auditory spatial cues in biasing the encoding of information into visuo-spatial working memory (VSWM) remains relatively unexplored. In this study, we addressed this issue by combining a cueing paradigm with a change detection task in VSWM. Moreover, we manipulated the perceptual organization of the to-be-remembered visual stimuli. We hypothesized that the auditory effect on VSWM would depend on the perceptual association between the auditory cue and the visual probe. The results showed, for the first time, a significant auditory attentional bias in VSWM. However, the effect was observed only when the to-be-remembered visual stimuli were organized into two distinct visual objects. We propose that these results shed new light on audio-visual crossmodal links in spatial attention, suggesting that, beyond spatio-temporal contingency, the likelihood of perceptual association between the auditory cue and the visual target can have a large impact on crossmodal attentional biases.
    Acta Psychologica (Impact Factor: 2.19). 06/2013; 144(1):104-111. DOI: 10.1016/j.actpsy.2013.05.010
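
    Change-detection performance in VSWM studies is often summarized with a Cowan-style capacity estimate, K = set size × (hit rate − false-alarm rate) (Cowan, 2001). The abstract does not state which dependent measure this study used, so the Python sketch below is only an assumption-laden illustration of how an auditory cueing bias could show up in such an estimate; the data values are hypothetical.

        # Sketch assuming a Cowan-style capacity estimate for a change-detection
        # task: K = set_size * (hit_rate - false_alarm_rate). Illustrative only;
        # the paper's actual dependent measure is not specified in the abstract.
        def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
            """Estimated number of items held in visuo-spatial working memory."""
            return set_size * (hit_rate - false_alarm_rate)

        # Hypothetical data: an auditory cue that biases encoding should raise
        # the hit rate, and hence K, on cued relative to uncued trials.
        k_cued = cowan_k(set_size=6, hit_rate=0.80, false_alarm_rate=0.15)
        k_uncued = cowan_k(set_size=6, hit_rate=0.70, false_alarm_rate=0.15)
        print(f"K cued = {k_cued:.2f}, K uncued = {k_uncued:.2f}, "
              f"cueing bias = {k_cued - k_uncued:.2f}")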
    • "As the sensitivity of different sensory systems varies with respect to the spatial and non-spatial information they encode, converging sensory information is likely to provide the most reliable means of prioritising multimodal objects for action or further analysis. This increase in the precision with which bimodal compared to unimodal cues are represented may also explain the previous findings that bimodal cues are more resistant to concurrent task load and more effective in biasing access to working memory (Botta et al. 2011; Santangelo and Spence 2007). Although further studies are required to determine whether separate unimodal orienting responses are combined in a statistically optimal way, our data suggest perception and attention may integrate multimodal information using similar rules. "
    ABSTRACT: This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
    Experimental Brain Research (Impact Factor: 2.17). 10/2012; 222(1-2):11-20. DOI: 10.1007/s00221-012-3191-8
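
    The point of subjective simultaneity (PSS) reported here is the stimulus onset asynchrony at which the two targets are judged to have come first equally often; it is conventionally estimated by fitting a cumulative Gaussian to the temporal order judgement data. The Python sketch below follows that convention under my own assumptions; the response proportions are hypothetical, not the study's data. Note also how the reported result follows from the optimality claim: if the visual and auditory cues are equally reliable, a reliability-weighted (maximum-likelihood) combination assigns each a weight of one half, so the predicted bimodal bias is exactly the average of the two unimodal biases, which is what was observed.

        # Sketch: estimating the PSS from temporal-order-judgement data by
        # fitting a cumulative Gaussian. The fitting approach is conventional
        # for TOJ tasks; the data below are hypothetical, not from the study.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(soa, pss, sigma):
            """P('cued-side target judged first') as a function of SOA in ms.
            Positive SOA = the cued-side target physically led."""
            return norm.cdf(soa, loc=pss, scale=sigma)

        soas = np.array([-90, -60, -30, 0, 30, 60, 90])                 # ms
        p_first = np.array([0.08, 0.15, 0.35, 0.62, 0.80, 0.92, 0.97])  # hypothetical

        (pss, sigma), _ = curve_fit(psychometric, soas, p_first, p0=[0.0, 40.0])
        # A negative PSS means the cued-side target can lag and still be judged
        # first, i.e. a spatiotemporal bias towards the cued location.
        print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")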
    ABSTRACT: The function of consciousness was explored in two contexts of audio-visual speech: cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash-suppressed lip streams co-occurred with speech sounds matching one of these streams. A visual target was then presented at either the audio-visually congruent or incongruent location. Target recognition differed for congruent versus incongruent trials, and the nature of this difference depended on the probabilities of a target appearing at these respective locations. Thus, even though the lip streams were never consciously perceived, they were nevertheless meaningfully integrated with the consciously perceived sounds, and participants learned to guide their attention according to statistical regularities between targets and these unconsciously perceived cross-modal cues. In Experiment 4, McGurk stimuli were presented in which the lip streams were either flash suppressed (4a) or unsuppressed (4b), and the McGurk effect was found to vanish under flash suppression. Overall, these results suggest a simple yet fundamental principle regarding the function of consciousness in multisensory integration: cross-modal effects can occur in the absence of consciousness, but the influencing modality must be consciously perceived for its information to cross modalities.
    Cognition (Impact Factor: 3.63). 09/2012; 125(3):353-364. DOI: 10.1016/j.cognition.2012.08.003