Article

Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments

The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine, Van Etten Building, 1C, 1225 Morris Park Avenue, Bronx, NY 10461, USA.
Experimental Brain Research (Impact Factor: 2.04). 04/2011; 213(2-3):167-74. DOI: 10.1007/s00221-011-2670-7
Source: PubMed

ABSTRACT

It is well established that sounds can enhance visual-target detection, but the mechanisms that govern these cross-sensory effects, as well as the neural pathways involved, are largely unknown. Here, we tested behavioral predictions stemming from the neurophysiologic and neuroanatomic literature. Participants detected near-threshold visual targets presented either at central fixation or peripheral to central fixation that were sometimes paired with sounds that originated from widely misaligned locations (up to 104° from the visual target). Our results demonstrate that co-occurring sounds improve the likelihood of visual-target detection (1) regardless of retinal eccentricity and (2) despite wide audiovisual misalignments. With regard to the first point, these findings suggest that auditory facilitation of visual-target detection is unlikely to operate through previously described corticocortical pathways from auditory cortex that predominantly terminate in regions of visual cortex that process peripheral visual space. With regard to the second point, auditory facilitation of visual-target detection seems to operate through a spatially non-specific modulation of visual processing.
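
Detection benefits of this kind are typically quantified with signal-detection theory. As a concrete illustration, the sketch below computes detection sensitivity (d′ = z(hit rate) − z(false-alarm rate)) for visual-only versus sound-paired trials; the trial counts are hypothetical, chosen only to mimic the reported pattern of facilitation, and this is not the authors' analysis pipeline.

```python
# Minimal signal-detection sketch: does pairing a sound with a
# near-threshold visual target raise d'? All counts are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for a near-threshold detection task.
visual_only = d_prime(hits=52, misses=48, false_alarms=9, correct_rejections=91)
audiovisual = d_prime(hits=67, misses=33, false_alarms=11, correct_rejections=89)
print(f"d' visual-only:  {visual_only:.2f}")   # lower sensitivity
print(f"d' sound-paired: {audiovisual:.2f}")   # higher sensitivity
```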

Full text available from: John J Foxe

CITED BY
    • "al for processing such natural stimuli by enhancing and resetting low - frequency ( delta and theta - band ) spontaneous oscillations ( Schroeder and Lakatos , 2009 ) . There is also evidence from human research that a phase - resetting mechanism operating in a rhythmic context can increase processing efficiency ; resulting in enhanced detection ( Fiebelkorn et al . , 2011 ; Ten Oever et al. , 2014 ) and neuronal responses ( Thorne et al . , 2011 ; Romei et al . , 2012 ; see discussion in Lakatos et al . , 2013b ) . For example , Thorne et al . ( 2011 ) showed that phase - reset elicited by a visual stimulus increased the probability of auditory target detection when a subsequent auditory input arrived wi"
    ABSTRACT: The brain's fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts.
    Article · Oct 2015 · Frontiers in Psychology
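
The blocked design described in the abstract above hinges on the contrast between fixed and jittered inter-stimulus intervals (ISIs). The sketch below generates one stream of each type, matched in average presentation rate; the tone count and interval values are assumptions for illustration, not parameters of the cited study.

```python
# Illustrative stimulus timing: a rhythmic stream (fixed ISI) vs. a
# random stream (jittered ISI) with the same average rate.
# All parameter values are assumptions, not those of the cited study.
import numpy as np

rng = np.random.default_rng(0)
n_tones = 20
mean_isi = 0.8  # seconds (assumed)

# Rhythmic stream: every inter-stimulus interval equals the mean.
rhythmic_onsets = np.arange(n_tones) * mean_isi

# Random stream: ISIs drawn uniformly around the same mean, so both
# streams deliver tones at the same average rate.
random_isis = rng.uniform(0.4, 1.2, size=n_tones - 1)
random_onsets = np.concatenate(([0.0], np.cumsum(random_isis)))

print("rhythmic onsets (s):", np.round(rhythmic_onsets[:5], 2))
print("random onsets   (s):", np.round(random_onsets[:5], 2))
```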
    • "It has since become clear that these early integration effects have significant impact on behavior in terms of speeded responses to multisensory inputs (see Sperdin et al., 2009). Similarly, Fiebelkorn et al. (2011) demonstrated behavioral AV integration effects where auditory inputs facilitated visual target detection regardless of retinal eccentricity and large misalignments of the audiovisual stimulus pairings. Teder-Sälejärvi et al. (2005) also examined the effect of spatial alignment on multisensory AV interactions. "
    ABSTRACT: Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between simultaneous multisensory VS responses and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
    Article · Aug 2015 · Frontiers in Psychology
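
A multisensory reaction-time speedup like the one reported above is commonly tested against Miller's (1982) race-model inequality, which caps the facilitation that a simple race between independent unisensory channels could produce: P(RT_VS ≤ t) ≤ P(RT_V ≤ t) + P(RT_S ≤ t). The sketch below applies that test to simulated reaction times; the distributions are assumptions, and the race-model test is a standard tool here rather than necessarily the cited study's exact analysis.

```python
# Race-model inequality (Miller, 1982) on simulated reaction times.
# A violation (multisensory CDF above the summed unisensory CDFs)
# implies genuine integration rather than a race between channels.
import numpy as np

rng = np.random.default_rng(1)
rt_v = rng.normal(420, 50, 1000)   # visual-only RTs in ms (assumed)
rt_s = rng.normal(440, 55, 1000)   # somatosensory-only RTs in ms (assumed)
rt_vs = rng.normal(370, 45, 1000)  # multisensory RTs in ms (assumed)

def ecdf(sample, t):
    """Empirical CDF: fraction of RTs at or below each time point t."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(250, 600, 50)
bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_s, t), 1.0)  # race-model bound
violation = ecdf(rt_vs, t) - bound  # positive values violate the bound

print(f"max violation: {violation.max():.3f} "
      f"at t = {t[violation.argmax()]:.0f} ms")
```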
    • "Under natural conditions, early auditory-to-visual information transfer may serve to improve the detection of visual events although it seems to work in a quite unspecific manner with respect to both the location of the visual event in the visual field and cross-modal spatial congruence or incongruence (Fiebelkorn et al., 2011). Furthermore, spatially irrelevant sounds presented shortly before visual targets may speed up reaction times, even in the absence of any specific predictive value (Keetels and Vroomen, 2011). "
    ABSTRACT: In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal rate of about 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
    Article · Aug 2013 · Frontiers in Psychology
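
For scale, the speaking rates quoted in the abstract above imply roughly a 2.7-fold time compression relative to normal speech, leaving about 60 ms per syllable:

```python
# Quick arithmetic on the speaking rates quoted above.
normal_rate = 6.0       # syllables per second (typical speech)
ultra_fast_rate = 16.0  # syllables per second (trained blind listeners)

print(f"compression factor: {ultra_fast_rate / normal_rate:.1f}x")  # ~2.7x
print(f"per-syllable duration: {1000 / ultra_fast_rate:.0f} ms")    # ~62 ms
```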