Acoustic facilitation of object movement detection during self-motion

Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA.
Proceedings of the Royal Society B: Biological Sciences, 02/2011; 278(1719):2840-7. DOI: 10.1098/rspb.2010.2757
Source: PubMed


In humans, as in most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet when the observer is also moving, the retinal projections of the various motion components add to each other, and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene that simulated forward observer translation and contained nine identical textured objects. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
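The flow-parsing idea in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' model or stimulus: the pinhole geometry, the assumption that heading (focus of expansion) and relative depth are known, and all variable names are simplifications introduced here purely for illustration.

```python
import numpy as np

def self_motion_flow(points, foe, tz_over_z):
    """Predicted retinal flow from pure forward (translational) self-motion:
    each image point streams radially away from the focus of expansion (FOE)
    at a rate proportional to translation speed over that point's depth."""
    return tz_over_z[:, None] * (points - foe)

def parse_object_flow(observed_flow, points, foe, tz_over_z):
    """Flow parsing as subtraction: remove the estimated self-motion
    component from the observed retinal flow, leaving object motion."""
    return observed_flow - self_motion_flow(points, foe, tz_over_z)

# Toy scene: nine image points, one of which (index 3) carries its own motion.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(9, 2))   # image-plane coordinates
foe = np.zeros(2)                              # heading straight ahead
tz_over_z = np.full(9, 0.5)                    # assumed Tz/Z for every point

object_motion = np.zeros((9, 2))
object_motion[3] = [0.05, -0.20]               # the "target" object's independent motion

observed = self_motion_flow(points, foe, tz_over_z) + object_motion
recovered = parse_object_flow(observed, points, foe, tz_over_z)
print(np.argmax(np.linalg.norm(recovered, axis=1)))   # -> 3, the moving target
```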

    • "dashed vertical line; taken from Teneggi et al. (2013), with permission). 5 Whereas in these latter studies dynamic stimuli approached or receded from a static observer, the improvement of visual movement detection by congruent auditory motion signals during simulated self-motion would appear to suggest that multisensory (audiovisual) interactions can also be observed during self-motion (Calabro et al., 2011 "
    ABSTRACT: In this review, we evaluate the neurophysiological, neuropsychological, and psychophysical evidence relevant to the claim that multisensory information is processed differently depending on the region of space in which it happens to be presented. We discuss how the majority of studies of multisensory interactions in the depth plane that have been conducted to date have focused on visuotactile and audiotactile interactions in frontal peripersonal space and underline the importance of such multisensory interactions in defining peripersonal space. Based on our review of studies of multisensory interactions in depth, we question the extent to which peri- and extra-personal space (both frontal and rear) are characterized by differences in multisensory interactions (as evidenced by multisensory stimuli producing a different behavioral outcome as compared to unisensory stimulation). In addition to providing an overview of studies of multisensory interactions in different regions of space, our goal in writing this review has been to demonstrate that the various kinds of multisensory interactions that have been documented may follow very similar organizing principles. Multisensory interactions in depth that involve tactile stimuli are constrained by the fact that such stimuli typically need to contact the skin surface. Therefore, depth-related preferences of multisensory interactions involving touch can largely be explained in terms of their spatial alignment in depth and their alignment with the body. As yet, no such depth-related asymmetry has been observed in the case of audiovisual interactions. We therefore suggest that the spatial boundary of peripersonal space and the enhanced audiotactile and visuotactile interactions that occur in peripersonal space can be explained in terms of the particular spatial alignment of stimuli from different modalities with the body and that they likely reflect the result of prior multisensory experience.
    Neuropsychologia 12/2014; 70. DOI: 10.1016/j.neuropsychologia.2014.12.007
    • "Indeed, non-visual information about the speed (Fajen and Matthis, 2011) and direction (Fajen et al., in press) of self-motion also plays a role in recovering object motion in world coordinates. These findings and those of other researchers (e.g., Dyde and Harris, 2008; Calabro et al., 2011; MacNeilage et al., 2012; Warren et al., 2012) highlight the multisensory nature of the flow parsing problem. "
    ABSTRACT: Locomotion in complex, dynamic environments is an integral part of many daily activities, including walking in crowded spaces, driving on busy roadways, and playing sports. Many of the tasks that humans perform in such environments involve interactions with moving objects; that is, they require people to coordinate their own movement with the movements of other objects. A widely adopted framework for research on the detection, avoidance, and interception of moving objects is the bearing angle model, according to which observers move so as to keep the bearing angle of the object constant for interception and varying for obstacle avoidance. The bearing angle model offers a simple, parsimonious account of visual control but has several significant limitations and does not easily scale up to more complex tasks. In this paper, I introduce an alternative account of how humans choose actions and guide locomotion in the presence of moving objects. I show how the new approach addresses the limitations of the bearing angle model and accounts for a variety of behaviors involving moving objects, including (1) choosing whether to pass in front of or behind a moving obstacle, (2) perceiving whether a gap between a pair of moving obstacles is passable, (3) avoiding a collision while passing through single or multiple lanes of traffic, (4) coordinating speed and direction of locomotion during interception, (5) simultaneously intercepting a moving target while avoiding a stationary or moving obstacle, and (6) knowing whether to abandon the chase of a moving target. I also summarize data from recent studies that support the new approach.
    Frontiers in Behavioral Neuroscience 07/2013; 7:85. DOI: 10.3389/fnbeh.2013.00085
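The constant-bearing-angle criterion described in the abstract above can be checked with a small simulation. This is a toy sketch under assumptions of my own (2-D geometry, constant velocities, made-up numbers), not the model or data from the cited paper: for an observer and target on an interception course, the angle between the observer's heading and the line of sight to the target stays constant over time.

```python
import numpy as np

def bearing_angle(observer_pos, observer_vel, target_pos):
    """Signed angle (radians) between the observer's direction of travel
    and the line of sight to the target, wrapped to [-pi, pi)."""
    los = target_pos - observer_pos
    heading = np.arctan2(observer_vel[1], observer_vel[0])
    los_dir = np.arctan2(los[1], los[0])
    return (los_dir - heading + np.pi) % (2 * np.pi) - np.pi

# Constant-velocity observer and target on an interception course: they meet at (5, 0) after 5 s.
obs_pos, obs_vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
tgt_pos, tgt_vel = np.array([5.0, 5.0]), np.array([0.0, -1.0])

for t in range(5):
    angle = np.degrees(bearing_angle(obs_pos, obs_vel, tgt_pos))
    print(f"t={t}s  bearing angle = {angle:.1f} deg")   # stays ~45 deg throughout
    obs_pos = obs_pos + obs_vel
    tgt_pos = tgt_pos + tgt_vel
```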
    • "ROIs are adapted from Vaina et al. (2010); Calabro et al. (2011) "
    ABSTRACT: Human perception, cognition, and action are supported by a complex network of interconnected brain regions. There is an increasing interest in measuring and characterizing these networks as a function of time and frequency, and inter-areal phase locking is often used to reveal these networks. This measure assesses the consistency of phase angles between the electrophysiological activity in two areas at a specific time and frequency. Non-invasively, the signals from which phase locking is computed can be measured with magnetoencephalography (MEG) and electroencephalography (EEG). However, due to the lack of spatial specificity of reconstructed source signals in MEG and EEG, inter-areal phase locking may be confounded by false positives resulting from crosstalk. Traditional phase locking estimates assume that no phase locking exists when the distribution of phase angles is uniform. However, this conjecture is not true when crosstalk is present. We propose a novel method to improve the reliability of the phase-locking measure by sampling phase angles from a baseline, such as from a prestimulus period or from resting-state data, and by contrasting this distribution against one observed during the time period of interest.
    Frontiers in Neuroinformatics 02/2013; 7:3. DOI: 10.3389/fninf.2013.00003
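The inter-areal phase-locking measure discussed in the abstract above can be sketched in a few lines. This toy example is not the estimator proposed in the cited paper; it only shows the standard phase-locking value (the magnitude of the trial-averaged phase-difference vector) and how a baseline period with no genuine coupling still yields a small, non-zero value, which is the kind of baseline contrast the proposed method exploits.

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value: consistency of the phase difference between two
    signals across trials (1 = perfectly locked, near 0 = uniform)."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(1)
n_trials = 200

# Baseline: phase differences drawn uniformly (no genuine coupling).
base_a = rng.uniform(-np.pi, np.pi, n_trials)
base_b = rng.uniform(-np.pi, np.pi, n_trials)

# Period of interest: a consistent phase lag plus noise (genuine coupling).
task_a = rng.uniform(-np.pi, np.pi, n_trials)
task_b = task_a - 0.8 + rng.normal(0, 0.4, n_trials)

print(f"baseline PLV = {plv(base_a, base_b):.2f}, task PLV = {plv(task_a, task_b):.2f}")
# Contrasting the task value against a distribution of baseline values
# (e.g., by resampling baseline trials) is the kind of test the abstract proposes.
```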