ABSTRACT: One of the hallmarks of an eye movement that follows Listing's law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually-guided eye movements in the regime of far viewing follow Listing's law (with the head still and upright), the question of its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing's law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists of compounding conventional ocular rotations in meridian planes, that is, in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually-guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compound rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domains.
PLoS ONE 04/2014; 9(4):e95234. · 3.53 Impact Factor
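The half-angle rule described above can be checked numerically. The sketch below, with assumed axis conventions (gaze along +x, Listing's plane spanned by y and z, torsion about x), takes a small vertical step of the eye at 30° horizontal eccentricity while staying within Listing's plane, and shows that the resulting angular velocity tilts out of Listing's plane by half the eccentricity. It is an illustrative computation, not the authors' model code.

```python
import math

# Eye orientations as unit quaternions (w, x, y, z); gaze along +x,
# Listing's plane spanned by the y (vertical) and z (horizontal) axes,
# so the torsional component is about x. Assumed conventions.

def quat_from_rotvec(rx, ry, rz):
    """Unit quaternion for a rotation vector (axis * angle, radians)."""
    angle = math.sqrt(rx*rx + ry*ry + rz*rz)
    if angle < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(angle / 2) / angle
    return (math.cos(angle / 2), rx*s, ry*s, rz*s)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

E = math.radians(30.0)   # horizontal gaze eccentricity (rotation about z)
d = 1e-6                 # small vertical step, still within Listing's plane

q1 = quat_from_rotvec(0.0, 0.0, E)
q2 = quat_from_rotvec(0.0, d, E)
dq = quat_mul(q2, quat_conj(q1))          # incremental rotation, space frame
wx, wy, wz = 2*dq[1], 2*dq[2], 2*dq[3]    # ~ angular velocity * dt

tilt = math.degrees(math.atan2(abs(wx), math.hypot(wy, wz)))
print(round(tilt, 2))   # ~15.0 deg = E/2: angular velocity tilts out of
                        # Listing's plane by half the gaze eccentricity
```

The torsional component `wx` is what a purely two-dimensional description of eye velocity would miss; it grows with eccentricity exactly as the half-angle rule predicts.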
ABSTRACT: One of the open questions in oculomotor control of visually guided eye movements is whether it is possible to smoothly track a target along a curvilinear path across the visual field without changing the torsional stance of the eye. We show in an experimental study of three-dimensional eye movements in subhuman primates (Macaca mulatta) that although the pursuit system is able to smoothly change the orbital orientation of the eye's rotation axis, the smooth ocular motion was interrupted every few hundred milliseconds by a small quick phase with amplitude <1.5° while the animal tracked a target along a circle or ellipse. Specifically, during circular pursuit of targets moving at different angular eccentricities (5°, 10°, and 15°) relative to straight ahead at spatial frequencies of 0.067 and 0.1 Hz, the torsional amplitude of the intervening quick phases was typically around 1° or smaller and changed direction for clockwise vs. counterclockwise tracking. Reverse computations of the eye rotation based on the recorded angular eye velocity showed that the quick phases facilitate the overall control of ocular orientation in the roll plane, thereby minimizing torsional disturbances of the visual field. On the basis of a detailed kinematic analysis, we suggest that quick phases during curvilinear smooth tracking serve to minimize deviations from Donders' law, which are inevitable due to the spherical configuration space of smooth eye movements.
Journal of Neurophysiology 06/2011; 106(5):2151-66. · 3.04 Impact Factor
ABSTRACT: Oscillating an animal out-of-phase simultaneously about the roll and pitch axes ("wobble") changes continuously the orientation of the head relative to gravity. For example, it may gradually change from nose-up, to ear-down, nose-down, ear-down, and back to nose-up. Rotations about the longitudinal axis ("spin") can change the orientation of the head relative to gravity in the same way, provided the axis is tilted from vertical. During both maneuvers, the otolith organs in the inner ear detect the change in head orientation relative to gravity, whereas the semicircular canals will only detect oscillations in velocity (wobble), but not any rotation at constant velocity (spin). Geometrically, the whole motion can be computed based on information about head orientation relative to gravity and the wobble velocity. We subjected monkeys (Macaca mulatta) to combinations of spin and wobble and found that the animals were always able to correctly estimate their spin velocity. Simulations of these results with an optimal Bayesian model of vestibular information processing suggest that the brain integrates gravity and velocity information based on a geometrically coherent three-dimensional representation of head-in-space motion.
Journal of Neuroscience 06/2011; 31(22):8093-101. · 6.75 Impact Factor
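The geometric claim above, that spin velocity is recoverable from gravity information even though the canals are silent during constant-velocity spin, can be illustrated with a small simulation. The numbers (60 deg/s spin about an axis tilted 30 deg from vertical) are hypothetical; the point is that the gravity vector expressed in head coordinates precesses about the head-fixed spin axis at exactly the spin rate.

```python
import math

def rodrigues(v, axis, angle):
    """Rotate vector v about a unit axis by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    dot = sum(a*b for a, b in zip(axis, v))
    cross = (axis[1]*v[2] - axis[2]*v[1],
             axis[2]*v[0] - axis[0]*v[2],
             axis[0]*v[1] - axis[1]*v[0])
    return tuple(v[i]*c + cross[i]*s + axis[i]*dot*(1 - c) for i in range(3))

def azimuth_about(axis, v):
    """Azimuth of v's component perpendicular to the axis (fixed basis)."""
    dot = sum(a*b for a, b in zip(axis, v))
    perp = tuple(v[i] - dot*axis[i] for i in range(3))
    e1 = (axis[2], 0.0, -axis[0])          # perpendicular to axis (axis not || y)
    n1 = math.sqrt(sum(c*c for c in e1))
    e1 = tuple(c/n1 for c in e1)
    e2 = (axis[1]*e1[2] - axis[2]*e1[1],   # axis x e1: completes the basis
          axis[2]*e1[0] - axis[0]*e1[2],
          axis[0]*e1[1] - axis[1]*e1[0])
    return math.atan2(sum(a*b for a, b in zip(e2, perp)),
                      sum(a*b for a, b in zip(e1, perp)))

tilt = math.radians(30.0)
axis = (math.sin(tilt), 0.0, math.cos(tilt))  # spin axis, fixed in both frames
g = (0.0, 0.0, -1.0)                          # gravity in space coordinates
spin = 60.0                                   # deg/s, assumed
dt = 0.01

# gravity in head coordinates at t = 0 and t = dt (head-to-space rotation
# is +angle about the axis, so expressing g in the head frame uses -angle)
g1 = rodrigues(g, axis, -math.radians(spin * 0.0))
g2 = rodrigues(g, axis, -math.radians(spin * dt))

rate = math.degrees(azimuth_about(axis, g2) - azimuth_about(axis, g1)) / dt
print(round(abs(rate), 1))   # 60.0 deg/s: spin velocity recovered from gravity
```

In other words, the otolith-derived gravity estimate plus the canal-derived wobble velocity are jointly sufficient to reconstruct the full motion, which is what the Bayesian model exploits.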
ABSTRACT: The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibuloocular reflex (VOR) during postrotatory tilt, tilt during optokinetic afternystagmus, and off-vertical axis rotation. The influence of otolith signals on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of responses varied almost identically as a function of gravity in these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-oculomotor responses occur as a consequence of an internal process of optimal motion estimation.
Journal of Neurophysiology 09/2010; 104(3):1370-81. · 3.04 Impact Factor
ABSTRACT: We present a method for recording eye-head movements with the magnetic search coil technique in a small external magnetic field. Since magnetic fields are typically non-linear except in a relatively small region in the center, small field frames have not been used for head-unrestrained experiments in oculomotor studies. Here we present a method for recording 3D eye movements that accounts for the magnetic non-linearities using the Biot-Savart law. We show that the recording errors can be significantly reduced by monitoring current head position and thereby taking the location of the eye in the external magnetic field into account.
Vision research 03/2010; 50(13):1203-13. · 2.29 Impact Factor
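The size of the non-linearity the abstract refers to can be estimated from the Biot-Savart result for a single circular field coil: on the coil axis, B(z) = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2)), which is only approximately uniform near the center. The coil radius and eye offset below are assumed values for illustration, not the dimensions of the actual frame.

```python
import math

def on_axis_field_ratio(R, z):
    """B(z)/B(0) for a circular loop of radius R at axial offset z,
    from the on-axis Biot-Savart solution (mu0*I terms cancel)."""
    return R**3 / (R**2 + z**2)**1.5

R = 0.5   # coil radius (m), assumed
z = 0.1   # eye displaced 10 cm from the frame center, assumed

ratio = on_axis_field_ratio(R, z)
print(round(100 * (1 - ratio), 1))   # ~5.7 percent field drop
```

A field change of this magnitude translates directly into a gain error in the coil signal, which is why monitoring head position and correcting for eye location in the field reduces the recording error.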
ABSTRACT: We investigated in normal human subjects how semicircular canal and otolith signals interact in the estimation of the subjective visual vertical after constant-velocity or constant-acceleration roll tilt. In the constant-velocity paradigm, subjects were rotated in darkness at +/-60 degrees/s for five complete cycles before being stopped in one of seven orientations ranging from 0 to +/-90 degrees (right/left ear down). In the constant-acceleration paradigm, subjects were rotated with an acceleration of +30 or -30 degrees/s^2 to the same seven end positions between -90 and +90 degrees, passing once through the upside-down position. The subjective visual vertical was assessed by measuring the setting of a luminous line that appeared at different test delays after the stop of rotation in otherwise complete darkness. The data suggest that gravitational jerk signals generated by otolith-semicircular canal interactions and/or carried by phasic otolith signals are responsible for the observed transient bias in the estimation of the subjective visual vertical. This transient bias depended on both rotation and tilt direction after constant-velocity rotations, but was almost abolished following constant-acceleration rotations.
Journal of Neurophysiology 06/2008; 100(2):657-69. · 3.04 Impact Factor
ABSTRACT: To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 +/- 0.28 (mean +/- SD) as compared with 0.51 +/- 0.33 for downward and 1.05 +/- 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 +/- 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
Journal of Neurophysiology 05/2008; 99(4):1799-809. · 3.04 Impact Factor
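The vergence-based updating ratio above can be made concrete with a little binocular geometry: after a forward translation the remembered flash is nearer, so a perfectly updated saccade must add the geometrically required vergence change. The interocular distance and the "measured" response below are assumed values for illustration only.

```python
import math

IPD = 0.06   # interocular distance (m), assumed

def vergence_deg(d):
    """Binocular vergence angle (deg) for a target at distance d (m)."""
    return 2 * math.degrees(math.atan(IPD / (2 * d)))

flash = 0.43   # flash distance from the cyclopean eye (m)
move = 0.10    # forward whole-body translation (m)

# ideal (required) vergence change after moving 10 cm toward the flash
ideal = vergence_deg(flash - move) - vergence_deg(flash)

# hypothetical recorded vergence change, set here to the reported mean
# forward/backward ratio of 1.12 purely to illustrate the computation
measured = 1.12 * ideal

ratio = measured / ideal   # 0 = no updating, 1 = perfect updating
print(round(ratio, 2))     # 1.12
```

A ratio slightly above 1, as reported for forward/backward movements, means subjects converged a bit more than the geometry strictly required.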
ABSTRACT: To investigate the role of noncommutative computations in the oculomotor system, three-dimensional (3D) eye movements were measured in seven healthy subjects using a memory-contingent vestibulooculomotor paradigm. Subjects had to fixate a luminous point target that appeared briefly at an eccentricity of 20 degrees in one of four diagonal directions in otherwise complete darkness. After a fixation period of approximately 1 s, the subject was moved through a sequence of two rotations about mutually orthogonal axes in one of two orders (30 degrees yaw followed by 30 degrees pitch and vice versa in upright and 30 degrees yaw followed by 20 degrees roll and vice versa in both upright and supine orientations). We found that the change in ocular torsion induced by consecutive rotations about the yaw and the pitch axis depended on the order of rotations as predicted by 3D rotation kinematics. Similarly, after rotations about the yaw and roll axis, torsion depended on the order of rotations but now due to the change in final head orientation relative to gravity. Quantitative analyses of these ocular responses revealed that the rotational vestibuloocular reflexes (VORs) in far vision closely matched the predictions of 3D rotation kinematics. We conclude that the brain uses an optimal VOR strategy with the restriction of a reduced torsional position gain. This restriction implies a limited oculomotor range in torsion and systematic tilts of the angular eye velocity as a function of gaze direction.
Journal of Neurophysiology 02/2008; 99(1):96-111. · 3.04 Impact Factor
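The order dependence exploited in the paradigm above follows directly from 3D rotation kinematics: 30 deg yaw then 30 deg pitch does not produce the same final orientation as 30 deg pitch then 30 deg yaw. The sketch below, with assumed axis conventions (yaw about z, pitch about y, rotations about space-fixed axes), computes the angle of the rotation separating the two final orientations.

```python
import math

def Rz(a):  # yaw
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Ry(a):  # pitch
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

a = math.radians(30.0)
yaw_then_pitch = matmul(Ry(a), Rz(a))   # yaw first, then pitch (space-fixed)
pitch_then_yaw = matmul(Rz(a), Ry(a))   # reversed order

# angle of the relative rotation between the two final orientations
D = matmul(yaw_then_pitch, transpose(pitch_then_yaw))
trace = D[0][0] + D[1][1] + D[2][2]
diff = math.degrees(math.acos((trace - 1) / 2))
print(round(diff, 1))   # ~15.4 deg: the two orders differ substantially
```

A commutative (vector-addition) model of orientation would predict zero difference; the roughly 15 deg discrepancy for two 30 deg rotations is what an ideal VOR, and the updating system, must account for.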
ABSTRACT: Visual stabilization of the retina during rotational head movements requires that in far vision the eyes rotate about the same axis as the head but in opposite direction with a gain close to unity (optimal strategy). To achieve this goal the vestibulo-oculomotor system must be able to independently control all three rotational degrees of freedom of the eye. Studies of the human rotational vestibulo-ocular reflexes (VOR) have shown that its spatial characteristics are best explained by a strategy that lies halfway between the optimal image stabilization and perfect compliance with Listing's law. Here we argue that these spatial characteristics are fully compatible with an optimal strategy under the condition of a restrained gain of the torsional velocity-to-position integration. One implication of this finding is that the rotational VORs must override the default operation mode of the ocular plant that, according to recent findings, mechanically favours movements obeying Listing's law.
Progress in brain research 02/2008; 171:199-206. · 4.19 Impact Factor
ABSTRACT: During constant-velocity rotation about a tilted axis (OVAR), the VOR and the rotation perception last indefinitely, but show a striking dependency on tilt angle. We show that, during OVAR, a variety of motions can account for the head motion relative to gravity. Some of these are in conflict with canal signals, but correspond to a lower angular velocity; we suggest that the brain performs a trade-off in order to select the best motion. We show that this theory explains the effect of tilt angle on velocity estimation during OVAR.
Progress in brain research 02/2008; 171:287-90. · 4.19 Impact Factor
ABSTRACT: As we move our bodies in space, we often undergo head and body rotations about different axes: yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
Journal of Neurophysiology 08/2007; 98(1):537-44. · 3.04 Impact Factor
ABSTRACT: Our ability to keep track of objects in the environment, even as we move, has been attributed to various cues including efference copies, vestibular signals, proprioception, and gravitational cues. However, the presence of some cues, such as gravity, may not be used to the same extent by different axes of motion (e.g., yaw vs. roll). We tested whether changes in gravitational cues can be used to improve visuospatial updating performance for yaw rotations as previously shown for roll. We found differences in updating for yaw and roll rotations in that yaw updating is not only associated with larger systematic errors but is also not facilitated by gravity in the same way as roll updating.
Journal of Neurophysiology 05/2006; 95(4):2692-7. · 3.04 Impact Factor
ABSTRACT: Self-motion disturbs the stability of retinal images by inducing optic flow. Objects of interest need to be fixated or tracked, yet these eye movements can infringe on the experienced retinal flow that is important for visual navigation. Separating the components of optic flow caused by an eye movement from those due to self-motion, as well as using optic flow for visual navigation while simultaneously maintaining visual acuity on near targets, represent key challenges for the visual system. Here we summarize recent advances in our understanding of how the visuomotor and vestibulomotor systems function and interact, given the complex task of compensating for instabilities of retinal images, which typically vary as a function of retinal location and differ for each eye.
ABSTRACT: Primates are able to localize a briefly flashed target despite intervening movements of the eyes, head, or body. This ability, often referred to as updating, requires extraretinal signals related to the intervening movement. With active roll rotations of the head from an upright position it has been shown that the updating mechanism is 3-dimensional, robust, and geometrically sophisticated. Here we examine whether such a rotational updating mechanism operates during passive motion both with and without inertial cues about head/body position in space. Subjects were rotated from either an upright or supine position, about a nasal-occipital axis, briefly shown a world-fixed target, rotated back to their original position, and then asked to saccade to the remembered target location. Using this paradigm, we tested subjects' abilities to update from various tilt angles (0, +/-30, +/-45, +/-90 degrees), to 8 target directions and 2 target eccentricities. In the upright condition, subjects accurately updated the remembered locations from all tilt angles independent of target direction or eccentricity. Slopes of directional errors versus tilt angle ranged from -0.011 to 0.15, and were significantly different from a slope of 1 (no compensation for head-in-space roll) and a slope of 0.9 (no compensation for eye-in-space roll). Because the eyes, head, and body were fixed throughout these passive movements, subjects could not use efference copies or neck proprioceptive cues to assess the amount of tilt, suggesting that vestibular signals and/or body proprioceptive cues suffice for updating. In the supine condition, where gravitational signals could not contribute, slopes ranged from 0.60 to 0.82, indicating poor updating performance. Thus information specifying the body's orientation relative to gravity is critical for maintaining spatial constancy and for distinguishing body-fixed versus world-fixed reference frames.
Journal of Neurophysiology 08/2005; 94(1):468-78. · 3.04 Impact Factor
ABSTRACT: We have examined the spatiotemporal characteristics of postrotatory eye velocity after roll and pitch off-vertical axis rotations (OVAR). Three rhesus monkeys were placed in one of 3 orientations on a 3-dimensional (3D) turntable: upright (90 degrees roll or pitch OVAR), 45 degrees nose-up (45 degrees roll OVAR), and 45 degrees left ear-down (45 degrees pitch OVAR). Subjects were then rotated at +/-60 degrees/s around the naso-occipital or interaural axis and stopped after 10 turns, in one of 7 final head orientations, each separated by 30 degrees. We found that postrotatory eye velocity showed horizontal-vertical components after roll OVAR and horizontal-torsional components after pitch OVAR that varied systematically as a function of final head orientation. The quantitative analysis suggests that, in contrast to the analogous yaw OVAR paradigm, a system of up to 3 real, gravity-dependent eigenvectors and eigenvalues determines the spatiotemporal characteristics of the residual eye velocities after roll and pitch OVAR. One of these eigenvectors closely aligned with gravity, whereas the other 2 determined the orientation of the earth-horizontal plane. We propose that the spatial characteristics of eye velocity after roll and pitch OVAR follow the physical constraints of stationary orientation in a gravitational field and reflect the brain's best estimate of head-in-space orientation within an internal representation of 3D space.
Journal of Neurophysiology 04/2005; 93(3):1633-46. · 3.04 Impact Factor
ABSTRACT: We have examined the characteristics of vergence-induced reduction of ocular counter-roll in near vision. Monkeys were trained to make convergent and divergent refixations with the head and body either upright or in various roll orientations. During near viewing requiring 17 degrees horizontal vergence, we found that static binocular torsion was suppressed by about 68% (averaged over both eyes, two monkeys and both near target locations). This result is in accordance with a previous study in which binocular torsion was quantified based on the displacement planes of eye positions in far and near viewing. Latency and duration of the change in torsional eye position depended (for each eye differently) on body roll and the depth plane of fixation. For instance, during convergent refixations in left-ear-down orientations, the latencies of the left eye were smaller and the durations were longer than those of the right eye. However, both eyes reached their final positions required to fixate the second visual target at roughly the same time. The different dynamics of the two eyes are explained by the fact that each eye rotated temporally when the eyes converged, a pattern named the binocular extension of Listing's law. Because the eyes start from or return to a common torsional value (normal ocular counter-roll) during convergent or divergent refixations, the required torsion differs between the two eyes. The brain compensates for these differences by adjusting the dynamics of each eye's movement.
European Journal of Neuroscience 02/2005; 21(2):549-55. · 3.67 Impact Factor
ABSTRACT: The aim of this study was to characterize the error pattern of continuously tracking the perceived earth-vertical during roll rotations from upright to right or left ear-down and from right or left ear-down to upright. We compared the tracking responses of two paradigms, which either continuously activated the otolith organs alone (constant-velocity tilt) or both the otolith organs and the semicircular canals (constant-acceleration tilt). The tracking responses of the subjective visual vertical showed characteristic differences depending on starting position and tilt direction relative to gravity. The error patterns in the constant-velocity and constant-acceleration tilt paradigms were reversed. Estimations during tracking, when otolith information was continuously changing, were more precise compared to estimations following fast tilts to fixed roll-tilt positions. We conclude that the central processing underlying these perceptual tracking responses requires, besides the otolith input, information from the vertical semicircular canals.
Experimental Brain Research 05/2004; 155(3):283-90. · 2.17 Impact Factor
ABSTRACT: Our understanding of how the brain controls eye movements has benefited enormously from the comparison of neuronal activity with eye movements and the quantification of these relationships with mathematical models. Although these early studies focused on horizontal and vertical eye movements, recent behavioural and modelling studies have illustrated the importance, but also the complexity, of extending previous conclusions to the problems of controlling eye and head orientation in three dimensions (3-D). An important facet in understanding 3-D eye orientation and movement has been the discovery of mobile, soft-tissue sheaths or 'pulleys' in the orbit which might influence the pulling direction of extraocular muscles. Appropriately placed pulleys could generate the eye-position-dependent tilt of the ocular rotation axes which are characteristic for eye movements which follow Listing's law. Based on such pulley models of the oculomotor plant it has recently been proposed that a simple two-dimensional (2-D) neural controller would be sufficient to generate correct 3-D eye orientation and movement. In contrast to this apparent simplification in oculomotor control, multiple behavioural observations suggest that the visuo-motor transformations, as well as the premotor circuitry for saccades, pursuit eye movements and the vestibulo-ocular reflexes, must include a neural controller which operates in 3-D, even when considering an eye plant with pulleys. This review summarizes the most recent work and ideas on this controversy. In addition, by proposing directly testable hypotheses, we point out that, in analogy to the previously successful steps towards elucidating the neural control of horizontal eye movements, we need a quantitative characterization first of motoneuron and next of premotor neuron properties in 3-D before we can succeed in gaining further insight into the neural control of 3-D motor behaviours.
European Journal of Neuroscience 02/2004; 19(1):1-10. · 3.67 Impact Factor
ABSTRACT: Rotational disturbances of the head about an off-vertical yaw axis induce a complex vestibuloocular reflex pattern that reflects the brain's estimate of head angular velocity as well as its estimate of instantaneous head orientation (at a reduced scale) in space coordinates. We show that semicircular canal and otolith inputs modulate torsional and, to a certain extent, also vertical ocular orientation of visually guided saccades and smooth-pursuit eye movements in a similar manner as during off-vertical axis rotations in complete darkness. It is suggested that this graviceptive control of eye orientation facilitates rapid visual spatial orientation during motion.
Annals of the New York Academy of Sciences 11/2003; 1004:132-41. · 4.31 Impact Factor