Blinks slow memory-guided saccades

Stony Brook University.
Journal of Neurophysiology (Impact Factor: 2.89). 11/2012; 109(3). DOI: 10.1152/jn.00746.2012
Source: PubMed


Memory-guided saccades are slower than visually-guided saccades. The usual explanation for this slowing is that the absence of a visual drive reduces the discharge of neurons in the superior colliculus. We tested a related hypothesis: that memory-guided saccades are also slowed by gaze-evoked blinks, which occur more frequently with memory-guided than with visually-guided saccades. We recorded gaze-evoked blinks in three monkeys while they performed visually-guided and memory-guided saccades and compared the kinematics of the two saccade types with and without blinks. Gaze-evoked blinks were more common during memory-guided saccades than during visually-guided saccades, and the well-established relationship between peak and average velocity for saccades was disrupted by blinking. The occurrence of gaze-evoked blinks was associated with greater slowing of memory-guided saccades than of visually-guided saccades. Conversely, when blinks were absent, the peak velocity of visually-guided saccades was only slightly higher than that of memory-guided saccades. Our results reveal interactions between the circuits generating saccades and blink-evoked eye movements. This interaction increases the curvature of saccade trajectories and correspondingly decreases saccade velocity. Consistent with this interpretation, saccade curvature and slowing both increased with gaze-evoked blink amplitude. Thus, although the absence of vision somewhat decreases the velocity of memory-guided saccades relative to visually-guided saccades, the co-occurrence of gaze-evoked blinks produces the majority of the slowing of memory-guided saccades.
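The peak and average velocities whose relationship the abstract uses as a marker of blink interference can be measured directly from an eye-position trace. A minimal sketch, assuming a 2-D position trace in degrees sampled at a known rate; the helper name is illustrative, not the authors' analysis code:

```python
import numpy as np

def saccade_speed_stats(position_deg, fs_hz):
    """Peak and average radial eye speed (deg/s) over a saccade.

    position_deg: (N, 2) array of horizontal/vertical eye position in degrees.
    fs_hz: sampling rate in Hz.
    """
    vel = np.diff(position_deg, axis=0) * fs_hz   # per-sample velocity, deg/s
    speed = np.linalg.norm(vel, axis=1)           # combine H and V components
    return speed.max(), speed.mean()
```

For unperturbed saccades the ratio of peak to average velocity is roughly constant, so a blink-induced drop in peak velocity without a matching drop in average velocity shows up as a disrupted ratio of the kind reported above.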


    ABSTRACT: Eye tracking has become the de facto standard measure of visual attention in tasks that range from free viewing to complex daily activities. In particular, saliency models are often evaluated by their ability to predict human gaze patterns. However, fixations are influenced not only by bottom-up saliency (computed by the models) but also by many top-down factors. Comparing bottom-up saliency maps to eye fixations is therefore challenging and has required minimizing top-down influences, for example by focusing on early fixations on a stimulus. Here we propose two complementary procedures to evaluate visual saliency, asking whether humans have explicit, conscious access to the saliency computations believed to guide attention and eye movements. In the first experiment, 70 observers were asked to choose which object stands out the most based on its low-level features in 100 images, each containing only two objects. Using several state-of-the-art bottom-up visual saliency models that measure local and global spatial image outliers, we show that maximum saliency inside the selected object is significantly higher than inside the non-selected object and the background; spatial outliers are thus a predictor of human judgments. Performance of this predictor is boosted by including object size as an additional feature. In the second experiment, observers were asked to draw a polygon circumscribing the most salient object in cluttered scenes. For each of 120 images, a map built from the annotations of 70 observers explains the eye fixations of another 20 observers freely viewing the images, significantly above chance (dataset by Bruce & Tsotsos 2009; shuffled AUC score 0.62±0.07, chance 0.50, t-test p<0.05). We conclude that fixations agree with saliency judgments, and that classic bottom-up saliency models explain both. We further find that computational models specifically designed for fixation prediction slightly outperform models designed for salient object detection on both types of data (i.e., fixations and objects).
    Preview · Article · Aug 2013 · Vision Research
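The shuffled AUC score quoted in the abstract above scores a map's saliency values at an image's own fixations (positives) against its values at fixations pooled from other images (negatives), which discounts centre bias. A minimal sketch, assuming fixations are given as integer pixel coordinates; the function name is illustrative, not the authors' code:

```python
import numpy as np

def shuffled_auc(saliency, fixations, other_fixations):
    """Shuffled AUC for one image.

    saliency: 2-D saliency map (any real values).
    fixations: (N, 2) integer (row, col) fixations on this image (positives).
    other_fixations: (M, 2) fixations drawn from *other* images (negatives),
        which discounts a central fixation bias shared across images.
    """
    pos = saliency[fixations[:, 0], fixations[:, 1]]
    neg = saliency[other_fixations[:, 0], other_fixations[:, 1]]
    # AUC as the probability that a random positive outranks a random
    # negative (Mann-Whitney U formulation); ties count as 0.5.
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
```

Chance is 0.50 because a constant map ties every positive with every negative, which is why the reported 0.62±0.07 is compared against 0.50.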
    ABSTRACT: Video game play has become a common leisure activity around the world. To reveal possible effects of playing video games, we measured saccades elicited by video game players (VGPs) and non-players (NVGPs) in two oculomotor tasks. First, our subjects performed a double-step task. Second, we asked them to move their gaze opposite to the appearance of a visual target, i.e., to perform anti-saccades. As expected from previous studies, VGPs had significantly shorter saccadic reaction times (SRTs) than NVGPs for all saccade types. However, error rates in the anti-saccade task did not differ significantly; in fact, the error rates of VGPs were slightly lower than those of NVGPs (34% versus 40%, respectively). In addition, VGPs showed significantly higher saccadic peak velocities than NVGPs for every saccade type. Our results suggest that the faster SRTs in VGPs were associated with a more efficient motor drive for saccades. Taken together, our results are in excellent agreement with earlier reports of beneficial video game effects through a general reduction in SRTs. Our data provide additional experimental evidence for higher oculomotor efficiency in VGPs and refute the notion of reduced impulse control in VGPs.
    Full-text · Article · Sep 2014 · Vision Research