Kinematics and eye-head coordination of gaze shifts evoked from different sites in the superior colliculus of the cat.

UMR CNRS 6152 Mouvement et Perception, Faculté des Sciences du Sport, Université de la Méditerranée, CP 910, 163 avenue de Luminy, 13288 Marseille Cedex 09, France.
The Journal of Physiology (Impact Factor: 4.54). 12/2006; 577(Pt 3):779-94. DOI: 10.1113/jphysiol.2006.113720

ABSTRACT: Shifting gaze requires precise coordination of eye and head movements. It is clear that the superior colliculus (SC) is involved in saccadic gaze shifts. Here we investigate its role in controlling both eye and head movements during gaze shifts. Gaze shifts of the same amplitude can be evoked from different SC sites by controlled electrical microstimulation. To describe how the SC coordinates the eye and the head, we compare the characteristics of these amplitude-matched gaze shifts evoked from different SC sites. We show that amplitude-matched gaze shifts elicited from progressively more caudal sites are progressively slower and associated with a greater head contribution. Stimulation at more caudal SC sites decreased the peak velocity of the eye but not of the head, suggesting that the lower peak gaze velocity for the caudal sites is due to the increased contribution of the slower-moving head. Eye-head coordination across the SC motor map is also indicated by the relative latencies of the eye and head movements. For some amplitudes of gaze shift, rostral stimulation evoked eye movement before head movement, whereas this reversed with caudal stimulation, which caused the head to move before the eyes. These results show that gaze shifts of similar amplitude evoked from different SC sites are produced with different kinematics and coordination of eye and head movements. In other words, gaze shifts evoked from different SC sites follow different amplitude-velocity curves, with different eye-head contributions. These findings shed light on mechanisms used by the central nervous system to translate a high-level motor representation (a desired gaze displacement on the SC map) into motor commands appropriate for the involved body segments (the eye and the head).

    ABSTRACT: The mammalian superior colliculus (SC) and its nonmammalian homolog, the optic tectum, constitute a major node in processing sensory information, incorporating cognitive factors, and issuing motor commands. The resulting action, orienting toward or away from a stimulus, can be accomplished as an integrated movement across oculomotor, cephalomotor, and skeletomotor effectors. The SC also participates in preserving fixation during intersaccadic intervals. This review highlights the repertoire of movements attributed to SC function and analyzes the significance of results obtained from causality-based experiments (microstimulation and inactivation). The mechanisms potentially used to decode the population activity in the SC into an appropriate movement command are also discussed.
    Annual Review of Neuroscience 07/2010; 34:205-31. DOI:10.1146/annurev-neuro-061010-113728 · 22.66 Impact Factor
    ABSTRACT: Global motion detection is one of the most important abilities in the animal kingdom for navigating through a three-dimensional environment. In the visual system of teleost fish, direction-selective neurons in the pretectal area (APT) are most important for global motion detection. As in all other vertebrates, these neurons are involved in the control of slow-phase eye movements during gaze stabilization. Unlike in mammals, cortical pathways that might influence the motion detection abilities of the optokinetic system are absent in teleost fish. To test global motion detection in goldfish, we first measured the coherence threshold of random dot patterns for eliciting horizontal slow-phase eye movements. In addition, the coherence threshold of the optomotor response was determined with the same random dot patterns. In a second approach, the coherence threshold for eliciting a direction-selective response in neurons of the APT was assessed from a neurometric function. Behavioural thresholds and neuronal thresholds for eliciting slow-phase eye movements were very similar, ranging between 10% and 20% coherence. In contrast to these low thresholds for the optokinetic reaction and APT neurons, the optomotor response could only be elicited by random dot patterns with coherences above 40%. Our findings suggest a high sensitivity for global motion in the goldfish optokinetic system. Comparison of neuronal and behavioural thresholds implies a nearly one-to-one transformation of visual neuron performance into visuo-motor output. In addition, we propose that the optomotor response is not mediated by the optokinetic system, but instead by other motion detection systems with higher coherence thresholds.
    PLoS ONE 03/2010; 5(3):e9461. DOI:10.1371/journal.pone.0009461 · 3.53 Impact Factor
    ABSTRACT: Much research on modeling human performance associated with visual perception is formulated as schematic models based on neural mechanisms or as cognitive architectures. However, both modeling paradigms are limited in the domain of multiple-monitor environments. Although a schematic model based on neural mechanisms can represent human visual systems in multiple-monitor environments by providing a detailed account of eye and head movements, such models cannot easily be applied to complex cognitive interactions. Cognitive architectures, on the other hand, can model the interaction of multiple aspects of cognition, but they have not focused on modeling the visual orienting behavior of eye and head movements. In this study, a specific cognitive architecture, ACT-R, is therefore extended with an existing schematic model of human visual systems based on neural mechanisms in order to model human performance in multiple-monitor environments more accurately, and a method of modeling human performance using the extended ACT-R is proposed. The proposed method is validated by an experiment, confirming that it predicts human performance more accurately in multiple-monitor environments. Relevance to industry: Predicting human performance with a computational model can be used as an alternative to iterative user testing when developing a system interface. The computational model in this study can predict human performance in multiple-monitor environments, so the model can be applied early in the design phase to evaluate system interfaces in such environments.
    International Journal of Industrial Ergonomics 11/2014; DOI:10.1016/j.ergon.2014.09.004 · 1.21 Impact Factor
