Deficits and recovery in visuospatial memory during head motion after bilateral labyrinthine lesion.
ABSTRACT: To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
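The vergence deficit described above has a simple geometric basis: the binocular vergence angle needed to fixate a target depends on its distance, so any translation in depth changes the required angle, and a memory-guided response must be adjusted accordingly. A minimal sketch of that geometry (the interocular distance and all numbers are illustrative assumptions, not values from the study):

```python
import math

def vergence_deg(target_dist_m, interocular_m=0.06):
    """Vergence angle (deg) needed to binocularly fixate a target
    straight ahead at target_dist_m metres, for eyes separated by
    interocular_m metres (0.06 m is an illustrative value)."""
    return math.degrees(2.0 * math.atan(interocular_m / (2.0 * target_dist_m)))

# A remembered target 0.5 m away; the observer is passively translated
# 0.25 m toward it before the memory-guided response.
before = vergence_deg(0.50)
after = vergence_deg(0.50 - 0.25)
# The memory-guided response must converge by (after - before) degrees;
# this is the adjustment the lesioned animals failed to make.
print(before, after, after - before)
```

Because the required angle grows roughly hyperbolically as distance shrinks, the same translation demands a much larger vergence adjustment for near targets than for far ones.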
- Available from: Heather L Jenkin
ABSTRACT: Chuck Oman has been a guide and mentor for research in human perception and performance during space exploration for over 25 years. His research has provided a solid foundation for our understanding of how humans cope with the challenges and ambiguities of sensation and perception in space. In many of the environments associated with work in space, the human visual system must operate with unusual combinations of visual and other perceptual cues. On Earth, physical acceleration cues are normally available to assist the visual system in interpreting static and dynamic visual features. Here we consider two cases where the visual system is not assisted by such cues. Our first experiment examines perceptual stability when the normally available physical cues to linear acceleration are absent. Our second experiment examines perceived orientation when there is no assistance from the physically sensed direction of gravity. In both cases the effectiveness of vision is paradoxically reduced in the absence of physical acceleration cues. The reluctance to rely heavily on vision represents an important human factors challenge to efficient performance in the space environment.
Journal of Vestibular Research 01/2010; 20(1):25-30. DOI: 10.3233/VES-2010-0352
- ABSTRACT: There is considerable evidence that the encoding of intended actions in visual space is represented in dynamic, gaze-centered maps, such that each eye movement requires an internal updating of these representations. Here, we review results from our own experiments on human subjects that test the additional geometric constraints to the dynamic updating of these spatial maps during whole-body motion. Subsequently, we summarize evidence and present new analyses of how these spatial signals may be integrated with motor effector signals in order to generate the appropriate commands for action. Finally, we discuss neuroimaging experiments suggesting that the posterior parietal cortex and the dorsal premotor cortex play selective roles in this process.
Cortex 06/2008; 44(5):587-597. DOI: 10.1016/j.cortex.2007.06.001
- ABSTRACT: In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both.
Journal of Vision 11/2012; 12(12). DOI: 10.1167/12.12.8
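The geometric modeling mentioned in the last abstract can be illustrated with a toy calculation. A minimal sketch, assuming a simplified flat geometry: the observer translates laterally while the visible fixation light keeps gaze veridical, the internal translation estimate is scaled by a gain, and remembered eccentricity may be compressed toward the fovea. All names, parameter values, and the specific error formula here are our own illustrative assumptions, not the authors' model:

```python
import math

def updating_error_deg(T, d_fix, d_targ, gain=1.0, foveal_bias=0.0):
    """Predicted gaze-centered localization error (deg) for a remembered
    target after a lateral translation T (m), when the brain underestimates
    the translation (gain < 1) and/or compresses remembered eccentricity
    toward the fovea (foveal_bias > 0).

    Hypothetical geometry: the observer moves from x=0 to x=T; the fixation
    light at (0, d_fix) stays visible, so the actual gaze direction is
    veridical; the memorized target sits at (0, d_targ)."""
    gaze = math.atan2(-T, d_fix)                    # actual gaze azimuth (rad)
    true_ecc = math.atan2(-T, d_targ) - gaze        # true gaze-centered angle
    est_ecc = math.atan2(-gain * T, d_targ) - gaze  # update with scaled T
    remembered = (1.0 - foveal_bias) * est_ecc      # compress toward the fovea
    return math.degrees(remembered - true_ecc)

# Veridical updating (gain=1, no bias) predicts no error:
print(updating_error_deg(0.10, 1.0, 0.5))             # 0.0
# An underestimated translation biases the update, and the error size
# depends on target depth relative to fixation:
print(updating_error_deg(0.10, 1.0, 0.5, gain=0.8))   # target in front of fixation
print(updating_error_deg(0.10, 1.0, 2.0, gain=0.8))   # target behind fixation
```

In this toy model, a 10 cm translation with a gain of 0.8 produces an error of roughly 2.2 degrees for a target in front of fixation but only about 0.6 degrees for one behind it, reproducing the qualitative depth dependence the abstract describes.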