Article

Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments.

The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine, Van Etten Building, 1C, 1225 Morris Park Avenue, Bronx, NY 10461, USA.
Experimental Brain Research (Impact Factor: 2.22). 04/2011; 213(2-3):167-74. DOI: 10.1007/s00221-011-2670-7
Source: PubMed

ABSTRACT: It is well established that sounds can enhance visual-target detection, but the mechanisms that govern these cross-sensory effects, as well as the neural pathways involved, are largely unknown. Here, we tested behavioral predictions stemming from the neurophysiologic and neuroanatomic literature. Participants detected near-threshold visual targets presented either at central fixation or at locations peripheral to fixation, sometimes paired with sounds originating from widely misaligned locations (up to 104° from the visual target). Our results demonstrate that co-occurring sounds improve the likelihood of visual-target detection (1) regardless of retinal eccentricity and (2) despite wide audiovisual misalignments. With regard to the first point, these findings suggest that auditory facilitation of visual-target detection is unlikely to operate through previously described corticocortical pathways from auditory cortex that predominantly terminate in regions of visual cortex representing peripheral visual space. With regard to the second point, auditory facilitation of visual-target detection appears to operate through a spatially non-specific modulation of visual processing.
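
To make the behavioral logic concrete, here is one way such auditory facilitation could be quantified: hit rates for near-threshold targets with and without a co-occurring sound, broken down by retinal eccentricity and audiovisual misalignment. This is a minimal sketch; the condition levels, trial counts, probabilities, and simulated data below are illustrative assumptions, not the study's actual design or results.

```python
# Minimal sketch (not the authors' actual pipeline): quantify auditory
# facilitation of near-threshold visual detection as a hit-rate gain, broken
# down by retinal eccentricity and audiovisual misalignment. All data simulated.
import numpy as np

rng = np.random.default_rng(0)

eccentricities = [0, 10, 20]     # degrees from fixation (hypothetical levels)
misalignments = [0, 52, 104]     # degrees between sound and visual target
n_trials = 200                   # trials per condition (hypothetical)

def simulate_hits(p_hit, n):
    """Simulate detection outcomes for n near-threshold target trials."""
    return rng.random(n) < p_hit

results = {}
for ecc in eccentricities:
    # Visual-only baseline: detection assumed to get harder with eccentricity.
    p_visual_only = 0.50 - 0.01 * ecc
    hits_visual = simulate_hits(p_visual_only, n_trials)
    for mis in misalignments:
        # Spatially non-specific facilitation: the same assumed boost is applied
        # at every misalignment, mirroring the abstract's interpretation.
        hits_audiovisual = simulate_hits(p_visual_only + 0.10, n_trials)
        results[(ecc, mis)] = hits_audiovisual.mean() - hits_visual.mean()

for (ecc, mis), gain in sorted(results.items()):
    print(f"eccentricity {ecc:>2} deg, misalignment {mis:>3} deg: "
          f"hit-rate gain = {gain:+.3f}")
```

Under the spatially non-specific account favored in the abstract, the hit-rate gain should be roughly flat across misalignment bins; the simulated facilitation term above builds that in by construction.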

  • ABSTRACT: Sensory perception is enhanced by the complementary information provided by our different sensory modalities, and even apparently task-irrelevant stimuli in one modality can facilitate performance in another. While perception in general comprises both the detection of sensory objects and their discrimination and recognition, most studies on audio-visual interactions have focused on one or the other of these aspects. However, previous evidence, neuroanatomical projections between early sensory cortices, and computational mechanisms suggest that sounds might differentially affect visual detection and discrimination, and do so differently at central and peripheral retinal locations. We performed an experiment to test this directly by probing the enhancement of visual detection and discrimination by auxiliary sounds at different visual eccentricities within the same subjects. Specifically, we quantified the enhancement provided by sounds that reduce the overall uncertainty about the visual stimulus beyond basic multisensory co-stimulation. This revealed a general trend for stronger enhancement at peripheral locations in both tasks, but a statistically significant effect only for detection and only at peripheral locations. Overall, this suggests that there are topographic differences in the auditory facilitation of basic visual processes and that these may differentially affect basic aspects of visual recognition.
    Frontiers in Integrative Neuroscience 01/2013; 7:52.
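
As a rough illustration of the enhancement measure that abstract describes, the sketch below contrasts uncertainty-reducing (informative) sound trials against plain audiovisual co-stimulation, separately for detection and discrimination at central and peripheral locations. The accuracy values are invented for illustration only and are not the study's data.

```python
# Illustrative only (invented accuracy values, not the study's data): an
# enhancement index contrasting informative-sound trials against plain
# audiovisual co-stimulation, separately for detection and discrimination
# at central and peripheral locations.

# accuracy[task][location] = (co-stimulation sound, uncertainty-reducing sound)
accuracy = {
    "detection":      {"central": (0.62, 0.64), "peripheral": (0.55, 0.63)},
    "discrimination": {"central": (0.70, 0.71), "peripheral": (0.66, 0.69)},
}

for task, locations in accuracy.items():
    for location, (costim, informative) in locations.items():
        # Enhancement beyond basic co-stimulation, as a proportion-correct gain.
        enhancement = informative - costim
        print(f"{task:>14} | {location:>10}: {enhancement:+.3f}")
```
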
  • ABSTRACT: The frequency of environmental vibrations is sampled by two of the major sensory systems, audition and touch, notwithstanding that these signals are transduced through very different physical media and entirely separate sensory epithelia. Psychophysical studies have shown that manipulating frequency in audition or touch can have a significant cross-sensory impact on perceived frequency in the other sensory system, pointing to intimate links between these senses during the computation of frequency. In this regard, the frequency of a vibratory event can be thought of as a multisensory perceptual construct. In turn, electrophysiological studies point to temporally early multisensory interactions that occur in hierarchically early sensory regions where convergent inputs from the auditory and somatosensory systems are found. A key question pertains to the level of processing at which the multisensory integration of featural information, such as frequency, occurs. Do the sensory systems calculate frequency independently before this information is combined, or is this feature calculated in an integrated fashion during preattentive sensory processing? The well-characterized mismatch negativity, an electrophysiological response that indexes preattentive detection of a change within the context of a regular pattern of stimulation, served as our dependent measure. High-density electrophysiological recordings were made in humans while they were presented with separate blocks of somatosensory, auditory, and audio-somatosensory "standards" and "deviants," where the deviant differed in frequency. Multisensory effects were identified beginning at ∼200 ms, with the multisensory mismatch negativity (MMN) significantly different from the sum of the unisensory MMNs. This provides compelling evidence for preattentive coupling between the somatosensory and auditory channels in the cortical representation of frequency.
    Journal of Neuroscience 10/2012; 32(44):15338-44. (Impact Factor: 6.91)
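
The additive-model comparison that abstract describes (multisensory MMN versus the sum of the unisensory MMNs) can be illustrated with synthetic difference waves. The sketch below is only a schematic of that logic; the amplitudes, latencies, and analysis window are made-up assumptions, not the recorded data.

```python
# Schematic of the additive-model logic: compare the audio-somatosensory MMN
# against the sum of the auditory and somatosensory MMNs in a window around
# 200 ms. Waveforms, amplitudes, and latencies here are synthetic.
import numpy as np

fs = 500                                 # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch from -100 to 500 ms

def mmn(peak_amp_uv, peak_latency=0.20, width=0.04):
    """Synthetic MMN difference wave (deviant minus standard): a negative
    Gaussian deflection peaking at `peak_latency` seconds."""
    return -peak_amp_uv * np.exp(-((t - peak_latency) ** 2) / (2 * width ** 2))

mmn_auditory = mmn(2.0)        # auditory-only MMN (made-up amplitude, in uV)
mmn_somato = mmn(1.5)          # somatosensory-only MMN (made up)
mmn_multi = mmn(4.5)           # audio-somatosensory MMN (made up)

window = (t >= 0.18) & (t <= 0.25)       # analysis window around 200 ms
summed = mmn_auditory + mmn_somato

mean_multi = mmn_multi[window].mean()
mean_sum = summed[window].mean()
# A more negative difference indicates a superadditive multisensory MMN
# under these synthetic settings.
print(f"multisensory MMN mean:        {mean_multi:.2f} uV")
print(f"sum of unisensory MMN means:  {mean_sum:.2f} uV")
print(f"multisensory minus summed:    {mean_multi - mean_sum:.2f} uV")
```
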
  • ABSTRACT: In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is presumably linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
    Frontiers in Psychology 01/2013; 4:530. (Impact Factor: 2.80)
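
As a toy illustration of the phase-locking finding mentioned in that abstract, the sketch below computes inter-trial phase coherence (ITC) at a 16 Hz syllable rate from simulated single-trial signals. The sampling rate, trial count, noise, and jitter levels are arbitrary assumptions, not the MEG analysis itself.

```python
# Toy illustration (not the MEG analysis itself): inter-trial phase coherence
# (ITC) at a 16 Hz syllable rate, computed from simulated single-trial signals.
# Sampling rate, trial count, noise, and jitter levels are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)             # 2-second trials
n_trials = 60

def itc_at(rate_hz, phase_jitter_rad):
    """ITC at `rate_hz` for trials whose oscillation has the given phase jitter."""
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - rate_hz))
    phases = []
    for _ in range(n_trials):
        signal = np.cos(2 * np.pi * rate_hz * t + rng.normal(0, phase_jitter_rad))
        signal += rng.normal(0, 1.0, t.size)        # background noise
        phases.append(np.angle(np.fft.rfft(signal)[bin_idx]))
    # ITC is the length of the mean resultant vector of the per-trial phases.
    return np.abs(np.mean(np.exp(1j * np.array(phases))))

print(f"ITC at 16 Hz, phase-locked trials: {itc_at(16, 0.2):.2f}")
print(f"ITC at 16 Hz, jittered trials:     {itc_at(16, np.pi):.2f}")
```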
