Article

Acoustic facilitation of object movement detection during self-motion

Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA.
Proceedings of the Royal Society B: Biological Sciences. 02/2011; 278(1719):2840-7. DOI: 10.1098/rspb.2010.2757
Source: PubMed

ABSTRACT: In humans, as in most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
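
The flow-parsing mechanism invoked in the abstract amounts to estimating the retinal flow that self-motion alone would produce and subtracting it from the measured flow, so that any residual motion can be attributed to independently moving objects. A minimal sketch of that computation in Python (not the authors' code; the pinhole flow equations are standard, but the scene layout, depths and speeds below are made-up values):

    # Flow parsing as vector subtraction: observer translation produces a
    # radial flow field; an object moving in depth adds its own component.
    import numpy as np

    def translational_flow(x, y, depth, t, f=1.0):
        """Retinal flow at image points (x, y) for observer translation
        t = (tx, ty, tz), given each point's depth and focal length f."""
        tx, ty, tz = t
        u = (-f * tx + x * tz) / depth
        v = (-f * ty + y * tz) / depth
        return np.stack([u, v], axis=-1)

    # Nine objects at random image positions and depths, as in the stimulus:
    # one target moves in depth on its own, the others are static in the scene.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-0.3, 0.3, 9), rng.uniform(-0.3, 0.3, 9)
    depth = rng.uniform(2.0, 4.0, 9)

    self_motion = (0.0, 0.0, 1.0)                 # simulated forward translation
    retinal_flow = translational_flow(x, y, depth, self_motion)
    # Target (index 0) carries an extra, independent motion-in-depth component.
    retinal_flow[0] += translational_flow(x[:1], y[:1], depth[:1], (0.0, 0.0, -0.5))[0]

    # Flow parsing: subtract the flow predicted from the estimated self-motion;
    # only the target keeps an appreciable residual and is flagged as moving.
    residual = retinal_flow - translational_flow(x, y, depth, self_motion)
    speeds = np.linalg.norm(residual, axis=1)
    print("residual speeds:", speeds.round(3), "-> target index", int(np.argmax(speeds)))

The auditory benefit reported in the abstract would enter such a scheme as an additional, non-visual cue to the residual object component, which is one way to read the conclusion that flow parsing can operate on multisensory object representations.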

  • ABSTRACT: In this review, we evaluate the neurophysiological, neuropsychological, and psychophysical evidence relevant to the claim that multisensory information is processed differently depending on the region of space in which it happens to be presented. We discuss how the majority of studies of multisensory interactions in the depth plane that have been conducted to date have focused on visuotactile and audiotactile interactions in frontal peripersonal space and underline the importance of such multisensory interactions in defining peripersonal space. Based on our review of studies of multisensory interactions in depth, we question the extent to which peri- and extra-personal space (both frontal and rear) are characterized by differences in multisensory interactions (as evidenced by multisensory stimuli producing a different behavioral outcome as compared to unisensory stimulation). In addition to providing an overview of studies of multisensory interactions in different regions of space, our goal in writing this review has been to demonstrate that the various kinds of multisensory interactions that have been documented may follow very similar organizing principles. Multisensory interactions in depth that involve tactile stimuli are constrained by the fact that such stimuli typically need to contact the skin surface. Therefore, depth-related preferences of multisensory interactions involving touch can largely be explained in terms of their spatial alignment in depth and their alignment with the body. As yet, no such depth-related asymmetry has been observed in the case of audiovisual interactions. We therefore suggest that the spatial boundary of peripersonal space and the enhanced audiotactile and visuotactile interactions that occur in peripersonal space can be explained in terms of the particular spatial alignment of stimuli from different modalities with the body and that they likely reflect the result of prior multisensory experience.
    Neuropsychologia 12/2014; DOI: 10.1016/j.neuropsychologia.2014.12.007
  • ABSTRACT: Human perception, cognition, and action are supported by a complex network of interconnected brain regions. There is an increasing interest in measuring and characterizing these networks as a function of time and frequency, and inter-areal phase locking is often used to reveal these networks. This measure assesses the consistency of phase angles between the electrophysiological activity in two areas at a specific time and frequency. Non-invasively, the signals from which phase locking is computed can be measured with magnetoencephalography (MEG) and electroencephalography (EEG). However, due to the lack of spatial specificity of reconstructed source signals in MEG and EEG, inter-areal phase locking may be confounded by false positives resulting from crosstalk. Traditional phase locking estimates assume that no phase locking exists when the distribution of phase angles is uniform. However, this conjecture is not true when crosstalk is present. We propose a novel method to improve the reliability of the phase-locking measure by sampling phase angles from a baseline, such as from a prestimulus period or from resting-state data, and by contrasting this distribution against one observed during the time period of interest.
    Frontiers in Neuroinformatics 02/2013; 7:3. DOI: 10.3389/fninf.2013.00003
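
The method proposed in the phase-locking abstract above replaces the usual uniform-phase null hypothesis with an empirical baseline distribution. A rough illustration of that contrast in Python (toy signals and an arbitrary threshold, not the authors' implementation; in practice the data would first be band-pass filtered around the frequency of interest and the statistics handled more carefully):

    # Inter-areal phase locking across trials, evaluated against a baseline
    # window rather than against the assumption of uniformly distributed phases.
    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(sig_a, sig_b):
        """PLV at each time sample for two sets of trials, shape (n_trials, n_samples):
        consistency across trials of the instantaneous phase difference."""
        dphi = np.angle(hilbert(sig_a, axis=1)) - np.angle(hilbert(sig_b, axis=1))
        return np.abs(np.mean(np.exp(1j * dphi), axis=0))

    rng = np.random.default_rng(1)
    n_trials, n_samples, fs = 100, 600, 600       # toy data: 1 s at 600 Hz
    t = np.arange(n_samples) / fs

    # Two "areas" share a 10 Hz component only after t = 0.5 s (the task window),
    # plus a weaker common component throughout that mimics source crosstalk.
    crosstalk = 0.5 * np.sin(2 * np.pi * 10 * t)
    task_signal = np.where(t >= 0.5, np.sin(2 * np.pi * 10 * t), 0.0)
    area1 = crosstalk + task_signal + rng.standard_normal((n_trials, n_samples))
    area2 = crosstalk + task_signal + rng.standard_normal((n_trials, n_samples))

    plv = phase_locking_value(area1, area2)
    baseline, task = plv[:300], plv[300:]         # prestimulus vs. period of interest

    # Contrast against the baseline distribution instead of against uniformity:
    threshold = np.percentile(baseline, 95)
    print(f"baseline PLV {baseline.mean():.2f}, task PLV {task.mean():.2f}, "
          f"task samples above baseline 95th percentile: {int((task > threshold).sum())}")
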
  • ABSTRACT: The task of parceling perceived visual motion into self- and object motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the patterns of connectivity among them to investigate the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left hemisphere cluster is involved in mediating the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
    Experimental Brain Research 07/2012; 221(2):177-89. DOI: 10.1007/s00221-012-3159-8
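
Of the connectivity measures listed in the abstract above, the partial-correlation step is the easiest to make concrete: it asks which pairs of regions remain correlated after the signals of all other regions are regressed out, and it can be read off the inverse covariance matrix of the ROI time series. A small sketch in Python (synthetic time series and an arbitrary threshold, purely for illustration; the multivariate Granger causality part is not shown):

    # Partial correlations between ROI time series from the precision matrix.
    import numpy as np

    def partial_correlation(timeseries):
        """timeseries: (n_timepoints, n_rois). Returns the (n_rois, n_rois) matrix
        of pairwise correlations conditioned on all remaining ROIs."""
        precision = np.linalg.inv(np.cov(timeseries, rowvar=False))
        d = np.sqrt(np.diag(precision))
        pcorr = -precision / np.outer(d, d)
        np.fill_diagonal(pcorr, 1.0)
        return pcorr

    # Toy data: 6 "ROIs", 200 volumes; only ROIs 0-1 and 2-3 are directly coupled,
    # so the recovered network should contain roughly those two edges.
    rng = np.random.default_rng(2)
    ts = rng.standard_normal((200, 6))
    ts[:, 1] += 0.8 * ts[:, 0]
    ts[:, 3] += 0.8 * ts[:, 2]

    pcorr = partial_correlation(ts)
    edges = [(i, j) for i in range(6) for j in range(i + 1, 6) if abs(pcorr[i, j]) > 0.3]
    print(np.round(pcorr, 2))
    print("edges above threshold:", edges)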