Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments

The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine, Van Etten Building, 1C, 1225 Morris Park Avenue, Bronx, NY 10461, USA.
Experimental Brain Research (Impact Factor: 2.04). 04/2011; 213(2-3):167-74. DOI: 10.1007/s00221-011-2670-7
Source: PubMed


It is well established that sounds can enhance visual-target detection, but the mechanisms that govern these cross-sensory effects, as well as the neural pathways involved, are largely unknown. Here, we tested behavioral predictions stemming from the neurophysiologic and neuroanatomic literature. Participants detected near-threshold visual targets presented either at central fixation or peripheral to central fixation that were sometimes paired with sounds that originated from widely misaligned locations (up to 104° from the visual target). Our results demonstrate that co-occurring sounds improve the likelihood of visual-target detection (1) regardless of retinal eccentricity and (2) despite wide audiovisual misalignments. With regard to the first point, these findings suggest that auditory facilitation of visual-target detection is unlikely to operate through previously described corticocortical pathways from auditory cortex that predominantly terminate in regions of visual cortex that process peripheral visual space. With regard to the second point, auditory facilitation of visual-target detection seems to operate through a spatially non-specific modulation of visual processing.



Available from: John J. Foxe
    • "…al for processing such natural stimuli by enhancing and resetting low-frequency (delta and theta-band) spontaneous oscillations (Schroeder and Lakatos, 2009). There is also evidence from human research that a phase-resetting mechanism operating in a rhythmic context can increase processing efficiency, resulting in enhanced detection (Fiebelkorn et al., 2011; Ten Oever et al., 2014) and neuronal responses (Thorne et al., 2011; Romei et al., 2012; see discussion in Lakatos et al., 2013b). For example, Thorne et al. (2011) showed that phase-reset elicited by a visual stimulus increased the probability of auditory target detection when a subsequent auditory input arrived wi…"
    ABSTRACT: The brain's fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts.
    Frontiers in Psychology 10/2015; 6. DOI:10.3389/fpsyg.2015.01663 · 2.80 Impact Factor
    • "It has since become clear that these early integration effects have significant impact on behavior in terms of speeded responses to multisensory inputs (see Sperdin et al., 2009). Similarly, Fiebelkorn et al. (2011) demonstrated behavioral AV integration effects where auditory inputs facilitated visual target detection regardless of retinal eccentricity and large misalignments of the audiovisual stimulus pairings. Teder-Sälejärvi et al. (2005) also examined the effect of spatial alignment on multisensory AV interactions. "
    ABSTRACT: Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
    Frontiers in Psychology 08/2015; 6:1068. DOI:10.3389/fpsyg.2015.01068 · 2.80 Impact Factor
    • "…k participants could simply have attended to one area of the experimental set-up, resulting in possible ceiling effects for this task. Finally, spatial congruency effects were not part of the original hypothesis, as previous studies have found that spatial congruity did not impact detection performance for redundant audio-visual stimuli (Fiebelkorn et al., 2011; Girard et al., 2013). However, analysis of accuracy performance to co-located audio-visual stimuli relative to mis-located audio-visual for AtVt CON targets in the object identification task revealed that accuracy was higher when targets were spatially co-located. This suggests that spatial location impacted on obje…"
    ABSTRACT: We investigated age-related effects in cross-modal interactions using tasks assessing spatial perception and object perception. Specifically, an audio-visual object identification task and an audio-visual object localisation task were used to assess putatively distinct perceptual functions in four age groups: children (8-11 years), adolescents (12-14 years), young adults, and older adults. Participants were required to either identify or locate target objects. Targets were specified as unisensory (visual/auditory) or multisensory (audio-visual congruent/audio-visual incongruent) stimuli. We found age-related effects in performance across both tasks. Both children and older adults were less accurate at locating objects than adolescents or young adults. Children were also less accurate at identifying objects relative to young adults, but performance did not differ between young adults, adolescents, and older adults. A greater cost in accuracy for audio-visual incongruent relative to audio-visual congruent targets was found for older adults, children, and adolescents relative to young adults. However, we failed to find a benefit in performance for any age group in either the identification or localisation task for audio-visual congruent targets relative to visual-only targets. Our findings suggest that visual information dominated when identifying or localising audio-visual stimuli. Furthermore, on the basis of our results, object identification and object localisation abilities seem to mature late in development, and spatial abilities may be more prone to decline with age than object identification abilities. In addition, the results suggest that multisensory facilitation may require more sensitive measures to reveal differences in cross-modal interactions across higher-level perceptual tasks.
    Multisensory research 04/2015; 28(1-2):111-151. DOI:10.1163/22134808-00002479 · 0.78 Impact Factor