Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments

The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine, Van Etten Building, 1C, 1225 Morris Park Avenue, Bronx, NY 10461, USA.
Experimental Brain Research 04/2011; 213(2-3):167-174. DOI: 10.1007/s00221-011-2670-7


It is well established that sounds can enhance visual-target detection, but the mechanisms that govern these cross-sensory effects, as well as the neural pathways involved, are largely unknown. Here, we tested behavioral predictions stemming from the neurophysiologic and neuroanatomic literature. Participants detected near-threshold visual targets presented either at central fixation or peripheral to central fixation that were sometimes paired with sounds that originated from widely misaligned locations (up to 104° from the visual target). Our results demonstrate that co-occurring sounds improve the likelihood of visual-target detection (1) regardless of retinal eccentricity and (2) despite wide audiovisual misalignments. With regard to the first point, these findings suggest that auditory facilitation of visual-target detection is unlikely to operate through previously described corticocortical pathways from auditory cortex that predominantly terminate in regions of visual cortex that process peripheral visual space. With regard to the second point, auditory facilitation of visual-target detection seems to operate through a spatially non-specific modulation of visual processing.

    • "It has since become clear that these early integration effects have significant impact on behavior in terms of speeded responses to multisensory inputs (see Sperdin et al., 2009). Similarly, Fiebelkorn et al. (2011) demonstrated behavioral AV integration effects where auditory inputs facilitated visual target detection regardless of retinal eccentricity and large misalignments of the audiovisual stimulus pairings. Teder-Sälejärvi et al. (2005) also examined the effect of spatial alignment on multisensory AV interactions. "
    ABSTRACT: Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
    Frontiers in Psychology 08/2015; 6:1068. DOI: 10.3389/fpsyg.2015.01068
    • "k participants could simply have attended to one area of the experimental set - up ; resulting in possible ceiling effects for this task . Finally , spatial congruency effects were not part of the original hy - pothesis as previous studies have found that spatial congruity did not impact detection performance for redundant audio - visual stimuli ( Fiebelkorn et al . , 2011 ; Girard et al . , 2013 ) . However , analysis of accuracy performance to co - located audio - visual stimuli relative to mis - located audio - visual for A t V t CON targets in the object identification task revealed that accuracy was higher when targets were spatially co - located . This suggests that spatial location impacted on obje"
    ABSTRACT: We investigated age-related effects in cross-modal interactions using tasks assessing spatial perception and object perception. Specifically, an audio-visual object identification task and an audio-visual object localisation task were used to assess putatively distinct perceptual functions in four age groups: children (8-11 years), adolescents (12-14 years), young adults and older adults. Participants were required to either identify or locate target objects. Targets were specified as unisensory (visual/auditory) or multisensory (audio-visual congruent/audio-visual incongruent) stimuli. We found age-related effects in performance across both tasks. Both children and older adults were less accurate at locating objects than adolescents or young adults. Children were also less accurate at identifying objects relative to young adults, but performance did not differ between young adults, adolescents and older adults. A greater cost in accuracy for audio-visual incongruent relative to audio-visual congruent targets was found for older adults, children and adolescents compared with young adults. However, we failed to find a benefit in performance for any age group in either the identification or localisation task for audio-visual congruent targets relative to visual-only targets. Our findings suggest that visual information dominated when identifying or localising audio-visual stimuli. Furthermore, on the basis of our results, object identification and object localisation abilities seem to mature late in development, and spatial abilities may be more prone to decline with age than object identification abilities. In addition, the results suggest that multisensory facilitation may require more sensitive measures to reveal differences in cross-modal interactions across higher-level perceptual tasks.
    Multisensory Research 04/2015; 28(1-2):111-151. DOI: 10.1163/22134808-00002479
    • "Under natural conditions, early auditory-to-visual information transfer may serve to improve the detection of visual events although it seems to work in a quite unspecific manner with respect to both the location of the visual event in the visual field and cross-modal spatial congruence or incongruence (Fiebelkorn et al., 2011). Furthermore, spatially irrelevant sounds presented shortly before visual targets may speed up reaction times, even in the absence of any specific predictive value (Keetels and Vroomen, 2011). "
    ABSTRACT: In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind subjects are more resistant to backward masking than sighted subjects, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal rate of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
    Frontiers in Psychology 08/2013; 4:530. DOI: 10.3389/fpsyg.2013.00530