Saccades - Science topic
An abrupt voluntary shift in ocular fixation from one point to another, as occurs in reading.
Questions related to Saccades
Typically, dividing the saccade amplitude by the time difference between two fixation points gives the saccade velocity, provided the eye tracker's sampling frequency is higher than roughly 330 Hz. But for a Tobii 60 Hz eye tracker, how can saccade velocity be estimated, especially for short TOIs (times of interest)?
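At 60 Hz (one sample every ~16.7 ms) a saccade spans only one to three samples, so peak velocity cannot be resolved; at best one can estimate the average velocity from the two fixation centroids and the inter-fixation interval. A minimal sketch, assuming positions are already in degrees of visual angle (the function name and the fixation format are illustrative, not part of Tobii's API):

```python
import math

def average_saccade_velocity(fix1, fix2, t_end_fix1, t_start_fix2):
    """Estimate average saccade velocity (deg/s) between two fixations.

    fix1, fix2: (x, y) fixation centroids in degrees of visual angle.
    t_end_fix1: end time of the first fixation (seconds).
    t_start_fix2: start time of the second fixation (seconds).
    """
    amplitude = math.hypot(fix2[0] - fix1[0], fix2[1] - fix1[1])  # deg
    duration = t_start_fix2 - t_end_fix1                          # s
    if duration <= 0:
        raise ValueError("non-positive saccade duration")
    return amplitude / duration
```

Note this is an average, not a peak, velocity; if peak velocity is needed, the "main sequence" relationship between amplitude and peak velocity can be used to extrapolate, but that is a model-based estimate, not a measurement.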
A major goal of learning and consciousness is to automate behavior--i.e., to transition from ‘thinking slow’ to ‘thinking fast’ (Kahneman 2011)--so that when an organism encounters a specific context, an automatic response is executed with minimal participation from volitional circuits (i.e., in the neocortex). When one needs to enter a secure area, it is common to be confronted with a keypad upon which one must punch out a code to gain entry. At the beginning of learning the code, one is given a number, e.g., ‘3897’, which must be committed to declarative memory. After having entered the facility on numerous occasions, one no longer needs to remember the number, but just the spatial sequence of the finger presses. Thus, the code has been automated by the brain. In fact, often the number is no longer required, since the nervous system automatically punches out the number using implicit memory (much as one never needs to recall the rules of grammar to write correct sentences).
So, how does the brain automate behavior? The first clue to this question comes from studies on express saccadic eye movements (Schiller and Tehovnik 2015). Express saccades are eye movements generated briskly to single targets at latencies between 80 and 125 ms. In contrast, regular saccades are saccadic eye movements generated to a single or to multiple targets (as used in discrimination learning such as match-to-sample) whose latencies vary from 125 to 200 ms, or greater depending on task difficulty (see Figure 14). The behavioral context for the elicitation of express saccades is to have a gap between the termination of the fixation spot and the onset of a single punctate visual target (Fischer and Boch 1983). The distributions of express saccades and regular saccades are bimodal, suggesting that two very different neural processes are in play when these eye movements are being evoked. After carrying out lesions of different parts of the visual system (i.e., the lateral geniculate nucleus parvocellular, the lateral geniculate nucleus magnocellular, area V4, the middle temporal cortex, the frontal eye fields, the medial eye fields, or the superior colliculus) it was found that lesions of the superior colliculus abolished express saccades, whereas for all other lesion types the express saccades were spared. Thus, a posterior channel starting in V1 and passing through the superior colliculus mediates express saccades (Schiller and Tehovnik 2015). Furthermore, the minimal latency for express saccades (i.e., 80 ms) is accounted for by the summed signal latency between the retina and area V1 (i.e., 30 ms), the signal latency between area V1 and the superior colliculus (i.e., 25 ms), and the signal latency between the superior colliculus, the saccade generator, and the ocular muscles (i.e., 25 ms, Tehovnik et al. 2003)[1].
What this indicates is that express saccade behavior bypasses the frontal cortex and the posterior association areas of the neocortex (i.e., V4 and the middle temporal cortex), and is transmitted directly from V1 to the brain stem[2].
For oculomotor control, parallel pathways occur between (1) the posterior and the anterior regions of the neocortex (i.e., including, respectively, V1 and the frontal eye fields[3]) and (2) the brain stem ocular generator, which mediates ocular responses in mammals (Figure 15, Tehovnik et al. 2021). The idea that parallel pathways between the neocortex and brain stem mediate specific responses, such as the V1-collicular pathway subserving ocular automaticity, is not new. Ojemann (1983, 1991) has proposed that a multitude of parallel pathways subserves language, since once a language is mastered, it becomes a highly automated act, and electrical perturbation of a focal neocortical site affects a specific component of a language, but not an entire language string, as long as the remaining parallel pathways are intact. Global aphasia occurs when all the parallel pathways of Wernicke’s and Broca’s areas are damaged (Kimura 1993; Ojemann 1991; Penfield and Roberts 1966).
Why is it that express saccades and regular saccades alternate across trials in a quasi-random order (Schiller and Tehovnik 2015)? Lisberger (1984) has studied latency oscillations across trials for the vestibuloocular reflex by measuring the onset of an eye movement after the beginning of a head displacement. He found latency values as low as 12 ms and as high as 20 ms (Lisberger 1984; Miles and Lisberger 1981). At a 12-ms latency, the signal would need to bypass the cerebellar cortex and be transmitted from the vestibular nerve through the vestibular nucleus (which is a cerebellar nucleus) to the abducens (oculomotor) nucleus to contract the eye muscles within 12 ms (Lisberger 1984). At a 20-ms latency, the signal would pass from the vestibular nerve to the cerebellar cortex by way of the granular-Purkinje synapses and then to the vestibular and abducens nuclei to arrive at the muscles within 20 ms. The difference between the fast and slow pathway is 8 ms, and it is the additional 8 ms through the cerebellar cortex that allows for any corrections to be made to the efference-copy code[4].
In the case of regular versus express saccades, the minimal latency difference is 45 ms (i.e., 125 ms – 80 ms = 45 ms, Schiller and Tehovnik 2015). So, what could explain this difference? Regular saccades utilize both posterior and anterior channels in the neocortex, for paired lesions of the superior colliculus and the frontal eye fields are required to abolish all visually guided saccades (Schiller et al. 1980). Perhaps, the longer latency of regular saccades as compared to express saccades is due to transmission by way of the frontal eye fields for regular saccades, as well as having the signal sent through the cerebellar cortex via the pontine nuclei and inferior olive to update any changes to the efference-copy code. Express saccades, on the other hand, utilize a direct pathway between V1 and the saccade generator, with access to the cerebellar nuclei (i.e., the fastigial nuclei[5], Noda et al. 1991; Ohtsuka and Noda 1991) for completion of a response at a latency approaching 80 ms—a latency that is too short for frontal lobe/temporal lobe participation and the conscious evaluation of the stimulus (at least 125 ms is required for a frontal/temporal lobe signal to arrive in V1, Ito, Maldonado et al. 2023)[6]. Utilizing the fast pathway would not permit any changes to the efference-copy code and furthermore there would be no time for the conscious evaluation of the stimulus conditions. This general scheme for slow versus fast ‘thinking’ (Kahneman 2011) can be applied to any behavior, as the behavior changes from a state of learning and consciousness to a state of automaticity and unconsciousness[7].
While thinking slow, the human cerebellum can update as many as 50,000 independent efference-copy representations (Heck and Sultan 2002; Sultan and Heck 2003). And we know that during task execution the entire cerebellar cortex is engaged including circuits not necessary for task execution (Hasanbegović 2024). This global reach assures that all aspects of a behavior are perfected through continuous sensory feedback; hence, evolution left nothing to chance.
The number of neurons dedicated to a behavioral response decreases as a function of automaticity. This translates into a reduction in energy expenditure per response for the neurons as well as for the muscles[8]. The first evidence for this idea came from the work of Chen and Wise (1995ab) in their studies of neurons in the medial and frontal eye fields of primates (see Figure 15, monkey). Monkeys were trained on a trial-and-error association task, whereby an animal fixated a central spot on a TV monitor and arbitrarily associated a visual object with a specific saccade direction by making a saccadic eye movement to one of four potential targets (up, down, left, or right) to get a reward (see Figure 16, left-top panel, the inset). An association was learned to over 95% correctness within 20 trials; unit recordings were made of the neurons in the medial and frontal eye fields during this time. As the performance of an animal improved on a novel object-saccade association, the neurons exhibited either an increase in unit spike rate with the increase in the proportion of correct trials (Figure 16, novel, top panel), or an increase followed by a decrease in unit spike rate as the proportion of correct trials increased (Figure 16, novel, bottom panel, and Figure 17, novel, top panel). When the neurons were subjected to a familiar association, the discharge often assumed the same level of firing achieved following the asymptotic performance on novel associations: namely, high discharge and modulated (Figure 16, familiar, top panel) or low discharge and unmodulated (Figure 16, familiar, bottom panel; Figure 17, familiar, top panel). Accordingly, many neurons studied exhibited a decline in activity when subjected to familiar objects[9].
Although 33% of the neurons (33 of 101 classified as having learning-related activity) exhibited a decline and a de-modulation in activity during the presentation of a familiar object (e.g., Figure 17, familiar, top), this proportion is likely an underestimation, since many such neurons may have been missed given that unit recording is biased in favor of identifying responsive neurons. For example, a neuron that exhibited a burst of activity on just one trial could have been missed due to data averaging of adjacent trials, using a 3-point averaging method (Chen and Wise 1995ab).
For cells that had the properties shown in Figure 16 (novel, top panel) for novel objects—i.e., showing an increase in activity with an increase in task performance—there was no delay in trials between the change in neural firing and the change in performance, as indicated by the downward arrow in the figure representing ‘0’ trials between the curves; this suggests that these cells were tracking the performance. Also, there was a group of cells that exhibited an increase and a decrease in unit firing such that their response to novel and familiar objects declined with the number of trials as well (Figure 16, bottom panels, novel and familiar). This indicates that the decline in activity was being replayed when the object became familiar. Finally, for neurons that exhibited an increase and decrease in spike activity over trials, the declining portion of the neural response (at 50% decline) always followed the increase in task performance by more than half a dozen trials, as indicated by the gap between the downward arrows of Figure 16 (novel, bottom) and Figure 17 (novel, top), illustrating that these neurons anticipated peak performance. Some have suggested that the short-term modulation in the frontal lobes is channeled to the caudate nucleus for long-term storage (Hikosaka et al. 2014; Kim and Hikosaka 2013). More will be said about this in the next chapter.
Imaging experiments (using fMRI) have shown that as one learns a new task, the number of neurons modulated by the task declines. Human subjects were required to perform a novel association task (associate novel visual images with a particular finger response) and to perform a familiar association task (associate familiar visual images with a particular finger response) (Toni et al. 2001). It was found that as compared to the novel association task, the familiar association task activated less tissue in the following regions: the medial frontal cortex and anterior cingulate, the prefrontal cortex, the orbital cortex, the temporal cortex and hippocampal formation, and the caudate nucleus. Furthermore, the over-learning of a finger sequencing task by human subjects from training day 1 to training day 28 was associated with a decline in fMRI activity in the following subcortical areas: the substantia nigra, the caudate nucleus, and the cerebellar cortex and dentate nucleus (Lehericy et al. 2005). Also, there was a decrease in activity in the prefrontal and premotor cortices, as well as in the anterior cingulate.
Finally, it is well-known that a primary language as compared to a secondary language is more resistant to the effects of brain damage of the neocortex and cerebellum, and a primary language, unlike a secondary language, is more difficult to interrupt by focal electrical stimulation of the neocortex (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). Accordingly, the more consolidated a behavior, the fewer essential neurons dedicated to that behavior. Once a behavior is automated, there is no need to recall the details: e.g., punching out a code on a keypad no longer requires an explicit recollection of the numbers. This is why a good scientist is also a good record keeper, which further minimizes the amount of information stored in the brain (Clark 1998). By freeing up neural space, the brain is free to learn about and be conscious of new things (Hebb 1949, 1968).
Summary:
1. Automaticity is mediated by parallel channels originating from the neocortex and passing to the motor generators in the brain stem; behaviors triggered by this process are context dependent and established through learning and consciousness.
2. Express saccades are an example of an automated response that depends on a pathway passing through V1 and the superior colliculus to access the saccade generator in the brain stem. The context for triggering this behavior is a single visual target presented with a gap between the termination of the fixation spot and the presentation of the target.
3. The alternation between express and non-express behavior across trials is indicative of the express behavior bypassing the cerebellar cortex and the non-express behavior utilizing the cerebellar cortex to adjust the efference-copy code.
4. Express saccades or express fixations are too short in duration (< 125 ms) for a target to be consciously identified. It takes at least 125 ms for a signal to be transmitted between the frontal/temporal lobes and area V1 to facilitate identification.
5. Automaticity reduces the number of neurons participating in the execution of a behavioral response; this frees up central nervous system neurons for new learning and consciousness.
Footnotes:
[1] The long delay of 25 ms between V1 and the superior colliculus is partly due to the tonic inhibition of the colliculus by the substantia nigra reticulata, which originates from the frontal cortex (Schiller and Tehovnik 2015).
[2] Cooling area V1 of monkeys disables the deepest layers of the superior colliculus, thereby making it impossible for signals to be transmitted between V1 and the saccade generator in the brain stem (see figure 15-11 of Schiller and Tehovnik 2015).
[3] In rodents, the frontal eye field homologue is the anteromedial cortex, and the neurons in this region elicit ocular responses using eye and head movements (Tehovnik et al. 2021). In primates, the frontal eye fields control eye movements independently of head movements, hence the name ‘frontal eye field’ (Chen and Tehovnik 2007).
[4] These short latencies are for highly automated vestibular responses. Astronauts returning from space have severe vestibular (and other) problems, and it takes about a week for full adaptation to zero-G conditions (Carriot et al. 2021; Demontis et al. 2017; Lawson et al. 2016). It would be expected that the latencies would far surpass 20 ms, since now vestibular centers of the neocortex (to engage learning and consciousness) would be recruited in the adaptation process (Gogolla 2017; Guldin and Grüsser 1998; Kahane, Berthoz et al. 2003). Patients suffering from vestibular agnosia would be unaware of the adaptation process, as experienced by astronauts (Calzolari et al. 2020; Hadi et al. 2022).
[5] The discharge of monkey fastigial neurons begins to fire 7.7 ms before the execution of a saccadic eye movement (Fuchs and Straube 1993). This nucleus is two synapses away from the ocular muscles.
[6] Presenting an unfamiliar object during an express fixation of an object (i.e., a fixation of less than 125 ms; fixations between electrically-evoked staircase saccades evoked from the superior colliculus are about 90 ms, Schiller and Tehovnik 2015) should fail to be identified consciously by a primate; on the other hand, the identification of a familiar object will only occur using ‘subconscious’ pathways during an express fixation, which are pathways at and below the superior colliculus/pretectum and the cerebellum (see: De Haan et al. 2020; Tehovnik et al. 2021).
[7] The conscious and unconscious states can never be totally independent, since the neocortex constantly monitors the behavior of an animal looking for ways to optimize a response in terms of accuracy and latency (Schiller and Tehovnik 2015), and this interaction explains the variability of response latency across a succession of trials.
[8] Lots of aimless movements are generated when learning a new task (Skinner 1938), and when building knowledge, one must dissociate the nonsense from facts to better solve problems. This initially takes energy but in time automaticity saves energy.
[9] When we (Edward J. Tehovnik and Peter H. Schiller) first reviewed this result for publication, we were mystified by the decline of neural responsivity with object familiarity, even though we accepted the paper based on its behavioral sophistication and the challenges of recording from such a large number of neurons (i.e., 476) using a single electrode.
Figure 14. (A) The bimodal distribution of express saccades and regular saccades made to a single target by a rhesus monkey. (B) Before and after a unilateral lesion of the superior colliculus for saccades generated to a target located contralateral to the lesion. (C) Before and after a unilateral lesion of the frontal and medial eye fields for saccades generated to a target located contralateral to the lesion. Data from figure 15-12 of Schiller and Tehovnik (2015).
Figure 15. Parallel oculomotor pathways in the monkey and the mouse. Posterior regions of the neocortex innervate the brain stem oculomotor generator by way of the superior colliculus, and anterior regions of the neocortex innervate the brain stem oculomotor generator directly. For the monkey the following regions are defined: V1, V2, V3, V4, LIP (lateral intraparietal area), MT (medial temporal cortex), MST (medial superior temporal cortex), sts (superior temporal sulcus), IT (infratemporal cortex), Cs (central sulcus), M1, M2, FEF (frontal eye field), MEF (medial eye field), OF (olfactory bulb), SC (superior colliculus), and brain stem, which houses the ocular generator. For the mouse: V1, PM (area posteromedial), AM (area anteromedial), A (area anterior), RL (area rostrolateral), AL (area anterolateral), LM (area lateromedial), LI (area lateral intermediate), PR (area postrhinal), P (area posterior), M1, M2, AMC (anteromedial cortex), OB (olfactory bulb), SC (superior colliculus), and brain stem containing the ocular generator. The posterior neocortex mediates ‘what’ functions, and the superior colliculus mediates ‘where’ functions.
Figure 16. Performance (percent correct) is plotted (solid black curve) as a function of number of correct trials on a trial-and-error object-saccade-direction association task. A monkey was required to fixate a spot on a monitor for 0.6 seconds, which was followed by a 0.6 second presentation of an object at the fixation location. Afterwards, there was an imposed 2-3 second delay, followed by a trigger signal to generate a response to one of the four target locations to obtain a juice reward; the termination of the fixation spot was the trigger signal (see inset in top-right panel: OB represents object, and the four squares indicate the target locations of the task, and Figure 17, bottom summarizes the events of the task). Chance performance was 25% correctness, and the maximal performance was always greater than 95% correctness established within 20 correct trials. The performance shown is the aggregate performance. In each panel, the normalized (aggregate) unit response is represented by a dashed line. The representations are based on figures 10 and 11 of Chen and Wise (1995a) for the medial eye field, and the neurons were modulated by learning novel object-saccade associations (N = 101 of 476 neurons classified). Some cells modulated by learning were also found in the frontal eye fields (N = 14 of 221 neurons classified, Chen and Wise 1995b). In the lower right panel, the familiar objects induced a decline in the neural response over the 20 trials. The illustrations are based on data from figures 11 and 12 of Chen and Wise (1995a).
Figure 17. Performance (percent correct) is plotted (solid black curve) as a function of number of correct trials on a trial-and-error object-saccade-direction association task carried out by a monkey. The dashed curves represent normalized aggregate unit responses. The inset in the right panel shows the task. For other details see the caption of figure 16. The bottom panel summarizes the events of the task. The illustrations are based on data from figures 3C, 4C, 5C, and 10D of Chen and Wise (1995a).




Damage to the neocortex that disconnects this structure from subcortical networks creates a condition in which behavioral routines that depend on the neocortex can no longer be modified. For example, paired lesions of the anterior and posterior ocular pathways of the neocortex, by damage of the frontal eye fields and superior colliculi, eliminate all visually guided saccadic eye movements (Figure 1), while sparing the vestibuloocular reflex and optokinetic nystagmus, two reflexes mediated by subcortical networks (Schiller and Tehovnik 2015). Significantly, following such damage these reflexes can no longer be modified, even though saccadic eye movements can still be generated while the reflexes are being performed. This underscores how dependent subcortical mechanisms are on the neocortex for altering behavior (Hebb 1949), even though it has been found that reflexes based on eye blink conditioning that utilize robust but simplistic stimuli (electric shock, loud tones, or bright visual stimuli) can still be associated in the absence of the neocortex (Swain, Thompson et al. 2011), which could be referred to as ‘blind’ perception or sensation subthreshold to consciousness (Graziano et al. 2016; Tehovnik et al. 2021). Nevertheless, Pavlov (1929) observed that most classically conditioned reflexes in his dogs were abolished following neocortical removal. In short, any behavior that depends on the high-resolution computations of the neocortex—such as language or complex movement sequences—can never be modified following neocortical ablation (Kimura 1993; Vanderwolf 2006).
Hence, the neocortex is the command and control center of the brain by way of learning, and it makes sense that when the neocortex is disconnected from subcortical networks by damage of the pons and midbrain all consciousness is extinguished (Levy et al. 1987; Monti et al. 2010; Owen 2008; Owen et al. 2006; Plum and Posner 1980; Schiff, Llinas et al 2002; also see Arnts et al. 2020 on hydrocephalic patients).
Figure 1: Head-fixed rhesus monkeys were required to grasp food items positioned on a board spanning 60 by 60 degrees of visual angle (panel A), as their saccadic eye movements were measured. Normal subjects had no difficulty obtaining the food items and generating saccadic eye movements toward the targets (fixation locations specified by the distribution of the dots, panel B). Following bilateral lesions of either the frontal eye fields or superior colliculi, subjects still grasped the food items and made saccades toward the targets (panels C, D, E, and F). In the absence of both the frontal eye fields and superior colliculi, the subjects could still grasp the food items, but they failed to generate visually guided saccades, thereby fixing the eyes in central orbit (panels G and H). In primates, the frontal eye fields are located anterior to the forelimb representation of the motor cortex, and the superior colliculi receive projections from the entire neocortex but especially from the striate and extrastriate visual areas. The frontal eye fields and superior colliculi represent the two neocortical channels that interconnect the neocortex and brain stem for the mediation of visually guided saccades (Schiller and Tehovnik 2015). From figure 15-14 of Schiller and Tehovnik (2015).
Hello all,
Currently I am working with eye-movement data collected using the Tobii Pro Glasses 3. We have a clinical group and a healthy control group who performed a series of tasks while equipped with these eye-tracking glasses. Task durations were not controlled, as this was also a measure of interest, so participants took different amounts of time (seconds) to complete the tasks (in general, the clinical group took longer to complete the tasks than the healthy control group). We want to compare the number of saccades and fixations performed for each task between the two groups. Since the time taken to complete the task would naturally contribute to the number of eye movements performed, is there a way to perform this between-group comparison while also taking into account this variation in time? Any suggestions would be much appreciated!
Thanks,
Saee.
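One common approach is to normalize the counts by task duration and compare saccade rates (saccades per second) between groups; a more formal alternative is a Poisson or negative-binomial regression on the counts with log(duration) as an offset (e.g., via statsmodels' GLM). A minimal sketch of the rate normalization (function names are illustrative):

```python
def saccade_rate(n_saccades, task_duration_s):
    """Number of saccades normalized by task duration (saccades per second)."""
    if task_duration_s <= 0:
        raise ValueError("duration must be positive")
    return n_saccades / task_duration_s

def group_rates(counts, durations):
    """Per-participant saccade rates for one group."""
    return [saccade_rate(n, d) for n, d in zip(counts, durations)]
```

The per-participant rates can then be compared with whatever between-group test fits your data (e.g., a t-test or Mann-Whitney U); the regression-with-offset approach additionally lets you include covariates.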
I'm new to EOG, and would like to maximize the deflections for up- and downward saccades.
There are two types of eye movements: saccades and pursuit.
Are the rapid eye movements of REM sleep saccades or pursuits?
I am working on eye-tracking data from a Tobii eye tracker. I need to generate heatmaps and scanpaths for visualization. I have fixation coordinates for the left eye and right eye, and a timestamp for each fixation. I need to calculate saccades to generate the scanpath. Does anyone know how to create heatmaps and scanpaths
when we have only fixations and timestamps?
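For the heatmap, a common starting point is a 2-D histogram of the fixation positions, optionally smoothed with a Gaussian filter and displayed with matplotlib's imshow. A sketch using NumPy (the function name and bin settings are assumptions, not a Tobii API):

```python
import numpy as np

def fixation_heatmap(xs, ys, screen_w, screen_h, bins=50):
    """2-D histogram of fixation positions on the screen.

    xs, ys: fixation coordinates in pixels.
    Returns a (bins x bins) array; rows correspond to y so it can be
    displayed directly with matplotlib's imshow.
    """
    heat, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, screen_w], [0, screen_h]])
    return heat.T  # transpose: histogram2d puts x on the first axis
```

For display, `scipy.ndimage.gaussian_filter` is often applied to the returned array before plotting, so each fixation contributes a smooth blob rather than a single cell.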
I am implementing a couple of algorithms for saccades and fixations detection using a VR-headset integrating an eye-tracker. I have used a simple saccade-and-fixation task to have a feeling of how well the algorithms work with my data and I am leaning toward a specific algorithm. To finalize my decision I am looking for more 'ecological' visual tasks that are known to lead to different saccadic and fixation characteristics (e.g. saccades and fixations frequency, saccades amplitude, fixations duration, etc.), so that I can see if the characteristics calculated using a specific algorithm can discriminate among those tasks. Any direct suggestion or reference suggestion is more than welcome !
I have a MATLAB structure of eye data from an SR Research EyeLink. How do I combine the behavioral data with the eye data and extract the saccadic amplitude for each trial?
Is it safe to assume that
num(saccades) = num(fixations)?
If so, can I use the microsaccade algorithm with the 6-point running average used by Engbert et al. ( ) in order to calculate the other parameters of the saccade, like velocity and amplitude?
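Note that in a continuous recording the two counts typically differ by one (fixation-saccade-fixation-...), and blinks or tracking loss break the strict alternation. For reference, the Engbert-Kliegl velocity estimate at sample i averages positions over a 5-sample window, dividing by six sample intervals; a sketch in plain Python, assuming positions are already in degrees and evenly sampled (function name is mine):

```python
def ek_velocity(x, dt):
    """Engbert-Kliegl moving-average velocity estimate for one coordinate.

    x: list of positions (deg), evenly sampled.
    dt: sample interval (s).
    Returns a list of velocities (deg/s); the first and last two
    samples are left at zero because the window does not fit there.
    """
    v = [0.0] * len(x)
    for i in range(2, len(x) - 2):
        v[i] = (x[i + 2] + x[i + 1] - x[i - 1] - x[i - 2]) / (6 * dt)
    return v
```

Amplitude then follows from the start and end positions of each detected saccade, and peak velocity from the maximum of the combined horizontal/vertical velocity within it.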
The finding of high gain on the video-HIT is more and more frequent. On the one hand, we discuss a possible technical error in the execution of the test (slipping of the mask?); on the other hand, the finding is considered pathological, especially in Ménière's disease. In such cases, however, another difficult problem arises: even with very high gains I have never observed refixation saccades. I would like to know your experience and your hypotheses.
Hi, I was reading some papers that use machine-learning approaches for automated emotion classification, but they don't specify which eye-tracking variables are the most informative for inferring a person's affective state.
Could anyone recommend papers (articles, books, or chapters) that report associations between other eye-tracking measures (besides pupil size, e.g., fixation duration, saccades, blinks, etc.) and affective variables (valence, arousal, or specific emotions)?
I have not found any reference and I would be interested to know if cupulolithiasis could be accompanied by an increase in gain and backwards compensatory saccades. Thanks
My study uses only fixation and saccade data. But after removing the 'unclassified' and 'eye not found' samples, I find two back-to-back fixations (or saccades). What should I do if I am calculating the subsequent saccade (between fixations) using two fixations?
Should I treat them as one fixation by combining the two fixation points?
Please check the attached image.
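If the two back-to-back fixations are close in space and separated by only a brief gap, a common preprocessing step is to merge them into one fixation (duration-weighted centroid, extended end time) before computing inter-fixation saccades. A sketch with illustrative thresholds (the 75 ms gap and 1 deg distance defaults are assumptions, not Tobii's):

```python
import math

def merge_adjacent_fixations(fixations, max_gap=0.075, max_dist=1.0):
    """Merge consecutive fixations separated by a short gap and small displacement.

    fixations: time-ordered list of dicts with keys 'x', 'y' (deg)
               and 'start', 'end' (seconds).
    """
    merged = [dict(fixations[0])]
    for f in fixations[1:]:
        last = merged[-1]
        gap = f['start'] - last['end']
        dist = math.hypot(f['x'] - last['x'], f['y'] - last['y'])
        if gap <= max_gap and dist <= max_dist:
            # combine: duration-weighted centroid, extended end time
            d1 = last['end'] - last['start']
            d2 = f['end'] - f['start']
            w = d1 + d2
            last['x'] = (last['x'] * d1 + f['x'] * d2) / w
            last['y'] = (last['y'] * d1 + f['y'] * d2) / w
            last['end'] = f['end']
        else:
            merged.append(dict(f))
    return merged
```

If the two fixations are far apart, the intervening data loss probably hides a real saccade (or a blink), and it is safer to exclude that transition than to merge.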

How can I calculate saccade amplitude in degrees if I have x and y coordinates in pixels? Which formula, algorithm, or software can help? I used a Tobii tracker for data collection.
I have the fixation details in 2D (x-y coordinates in pixels), the distance between the participant and the screen, and the duration of each saccade.
So how can I calculate the saccade amplitude in degrees for each saccade? Which software, formula, or algorithm is needed?
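Given the screen's physical width and the viewing distance, the usual conversion is pixels to centimeters via the pixel pitch, then centimeters to degrees via amplitude = 2·atan(d / 2D). A sketch (function names are mine; for amplitudes under ~10 deg the small-angle approximation is also common):

```python
import math

def pixels_to_degrees(dist_px, screen_w_px, screen_w_cm, viewing_dist_cm):
    """Convert an on-screen distance in pixels to degrees of visual angle."""
    dist_cm = dist_px * (screen_w_cm / screen_w_px)   # pixel pitch
    return math.degrees(2 * math.atan(dist_cm / (2 * viewing_dist_cm)))

def saccade_amplitude_deg(x1, y1, x2, y2, screen_w_px, screen_w_cm, d_cm):
    """Amplitude in degrees between two fixation points given in pixels."""
    dist_px = math.hypot(x2 - x1, y2 - y1)
    return pixels_to_degrees(dist_px, screen_w_px, screen_w_cm, d_cm)
```

This assumes square pixels and a gaze direction roughly perpendicular to the screen center; for eccentric fixations a full 3-D gaze-vector calculation is more accurate.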
I have calculated fixations and saccades, but how do I plot them?
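One simple visualization is a scanpath: sort the fixations by time, draw a dot per fixation (often scaled by duration), and connect consecutive fixations with lines representing the saccades. A sketch that prepares the line segments; the actual drawing can then be done with, e.g., matplotlib's scatter and plot (function name and data format are illustrative):

```python
def scanpath_segments(fixations):
    """Order fixations by timestamp and return the connecting saccade segments.

    fixations: list of (timestamp, x, y) tuples.
    Returns a list of ((x1, y1), (x2, y2)) segments, one per saccade,
    ready to be drawn as lines between fixation dots.
    """
    ordered = sorted(fixations)  # sorts by timestamp first
    return [((x1, y1), (x2, y2))
            for (_, x1, y1), (_, x2, y2) in zip(ordered, ordered[1:])]
```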
I am looking for the cards for the King-Devick test to be used on patients with neurodegenerative diseases; I was hoping for a special kit to be sold but unfortunately I only found tests for Ipad or laptop at very expensive prices. Can anyone help me in understanding if there is a non-electronic kit to perform the K-D test?
Thank you.
I am currently writing my master's thesis on eye-tracking and I want to run a visual search task on a set of products on a digital display. I want to record eye movements and evaluate fixations and saccades through webcam-based eye-tracking. Then I need to analyze the data, possibly in SPSS. The sample size is 50 participants and I will give them a questionnaire. Do you know which software would be best for programming the experiment? Or a good company I should contact?
Thank you in advance.
I'm trying to analyse differences in saccadic eye movements (e.g., amplitude, velocity, and latency) between two groups of participants (A and B). My study is small: Group A has 12 participants and Group B has 15 participants. However, there were baseline differences in visual acuity (LogMAR values) based on the Wilcoxon test.
Does anyone have any idea how I can adjust or account for the baseline differences in visual acuity when comparing the two groups, given that my visual acuity data are positively correlated with my saccadic amplitude data? Is there a way to account for them in SPSS with non-parametric statistics? FYI: my data violated normality as well.
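One pragmatic option that stays close to non-parametric practice is to regress saccadic amplitude on LogMAR acuity across all participants, then compare the residuals between groups with a Mann-Whitney U test; this approximates a rank-based ANCOVA, and the same steps can be run in SPSS via its regression and non-parametric menus. A sketch of the residualization step (function name is mine, simple least squares):

```python
import statistics

def residualize(y, covariate):
    """Remove a linear covariate effect from y by simple least squares.

    Returns residuals of y after regressing out the covariate; these
    can then be compared between groups with a rank test.
    """
    mx = statistics.mean(covariate)
    my = statistics.mean(y)
    sxy = sum((cx - mx) * (cy - my) for cx, cy in zip(covariate, y))
    sxx = sum((cx - mx) ** 2 for cx in covariate)
    slope = sxy / sxx
    return [cy - my - slope * (cx - mx) for cx, cy in zip(covariate, y)]
```

With n = 12 and 15, any covariate adjustment will be noisy, so it is worth also reporting the unadjusted comparison and the amplitude-acuity correlation itself.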
Hi everybody !
Here's my first question on this great platform:
Which eye-tracking device should I choose?
Actually, I'm interested in pupillometry and fixation time/saccades. With my future team we will do IN-LAB tests, in front of a screen (so I think glasses are not necessary).
With my lab, we are thinking about buying a screen-based eye-tracking bar. With the software we use, we can have a
- Tobii Bar X2-60
or a
- GazePoint GP3 60 Hz
I wanted to know if anyone has any recommendations? :)
The main point in my mind is that it seems we can't access the Tobii raw data, only data transformed by Tobii's algorithm... And for publishing (because that's part of the point) this could be problematic, because the algorithm is not validated in any paper... am I wrong?
Thank you very much for your answer,
Cyril Forestier
Does anybody know of a software application that can extract eye movement data from video? I'd like to use the 240 fps video camera on the iPhone 6 to record video of eye movements. I'd like to then extract data sufficient to calculate maximum vertical eye saccade velocity. This is for research into Progressive Supranuclear Palsy (PSP), a degenerative neurological disease. Any help is appreciated. Thanks. John
Is any type of training used besides saccade training? What types of tests are used to measure the progress of training?
I am looking for any recent review paper on the effects of forward and backward pattern masking on saccades with suggested neural pathways and models explaining them. I am relatively new to vision science and it would be a big help if someone can point me to a good review paper to begin with.
Dear all
I am reading the literature about the 'centre of gravity' of saccades and 'averaging saccades'. The concepts of these two terms seem alike. The 'centre of gravity' effect is that when targets are surrounded by non-targets, saccades, instead of landing at the designated target, land in the midst of the whole configuration [Kowler, E. (2011). Eye movements: the past 25 years. Vision Research, 51(13), 1457–1483]. 'Saccade averaging' is that when two adjoining stimuli in the same hemifield evoke a short-latency saccade, the saccade tends to land at an intermediate location between the stimuli [Heeman, J., Theeuwes, J., & Van der Stigchel, S. (2014). The time course of top-down control on saccade averaging. Vision Research, 100, 29–37]. Besides, in some literature saccade averaging is known as the 'global effect'. Are the concepts of these three terms identical?
Thanks in advance.
I am searching for an affordable eye tracking system, which will provide good fixation and pupillometry data (for detecting saccades will be quite unrealistic with such a system) with a satisfying precision and accuracy. I thought the Gazepoint GP3 (HD) might be a good solution - can anyone share his/her first-hand experience using the system?
Hi all,
With my colleague, I am trying to implement a human-like saccadic movement in an artificial agent.
Our setup requires that the "pupil" of the robot moves across either 30°, 40° or 50° of visual angle as if it was attending two targets on a screen located at 60 cm from its head.
We are not looking for velocity profiles right now, since we just want to test a simplified behavior on the agent and see how it looks.
Can someone suggest some paper reporting the range of time/velocity for human saccades given a certain amount of degrees?
Thanks in advance for your help.
Davide Ghiglino
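For a first approximation you could use the linear "main sequence" relation often attributed to Carpenter (1988): saccade duration ≈ 2.2 ms per degree of amplitude, plus a ~21 ms intercept. A minimal sketch (the constants are textbook rules of thumb, not values fitted to any particular dataset):

```python
# Rough saccade timing from the linear "main sequence" approximation:
# duration_ms ~ 2.2 * amplitude_deg + 21 (textbook constants).

def saccade_duration_ms(amplitude_deg: float) -> float:
    """Approximate human saccade duration for a given amplitude."""
    return 2.2 * amplitude_deg + 21.0

def mean_velocity_deg_per_s(amplitude_deg: float) -> float:
    """Average (not peak) velocity implied by that duration."""
    return amplitude_deg / (saccade_duration_ms(amplitude_deg) / 1000.0)

for amp in (30, 40, 50):  # the amplitudes used in the robot setup
    print(amp, round(saccade_duration_ms(amp)), round(mean_velocity_deg_per_s(amp)))
```

Note that peak velocity is considerably higher than this average (it saturates at roughly 500 deg/s for large saccades), so if the robot's motion profile matters later, the peak/average distinction will too.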
Hello,
I have a question regarding synchronization of artifact rejection in eeglab. I use the version: "eeglab 14_1_1b" (MATLAB plugin)
I work with epoched data. To find artifacts like blinks and eye movements/saccades, I have used the built-in ICA function and followed the recommended guidelines from Chaumon et al. (2015) to pick out components. Furthermore, I have used the "simple voltage threshold" to sort out any voltage below -75 microvolts or above 75 microvolts.
My issue arises when I want to synchronize the artifact info in EEG and EVENTLIST. This is a must do step in order for me to compute the average ERPs. eeglab won't let me do this, and it brings me this exact message when i try to synchronize:
"It looks like you have deleted some epochs from your dataset. At the current version of ERPLAB, artifact info synchronization cannot be performed in this case. So, artifact info synchronization will be skipped, and the corresponding averaged ERP will rely on the information at EEG.event only. Do you want to continue anyway (yes, no)." I need to find a solution so that every rejected trial and every removed component is excluded from the dataset when averaging the ERPs.
I have not been able to find any version of eeglab which seems to solve the problem.
Any suggestions?
Best regards
Morten

Since saccade velocity is calculated from fixation points, does it depend on the duration of the stimulus movements and their location on the screen? If so, how? Also, saccade velocities are always reported as values like 100, 200, or 400 degrees/second, but I do not understand how such values are obtained. Can anyone demonstrate with an example?
Thank you
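A minimal sketch of where such numbers come from (the amplitudes and sampling rate below are made up for illustration): average velocity is simply amplitude divided by duration, and instantaneous velocities come from differencing consecutive gaze angles at the tracker's sampling rate.

```python
# Average velocity is amplitude / duration: e.g. a 10 deg saccade
# completed in 50 ms gives 10 / 0.05 = 200 deg/s -- which is where
# round numbers like 100, 200, 400 deg/s come from.
def avg_saccade_velocity(amplitude_deg, duration_s):
    return amplitude_deg / duration_s

print(avg_saccade_velocity(10.0, 0.050))  # 200.0

# Instantaneous velocity from gaze samples at a fixed sampling rate:
def sample_velocities(angles_deg, fs_hz):
    """Point-to-point angular velocity between consecutive samples."""
    return [(b - a) * fs_hz for a, b in zip(angles_deg, angles_deg[1:])]

# At 60 Hz samples are ~16.7 ms apart, so a short saccade may span
# only one or two intervals and the estimate becomes crude.
print(sample_velocities([0.0, 2.0, 6.0, 10.0, 10.0], 60))  # [120.0, 240.0, 240.0, 0.0]
```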
I have spatial and temporal data from my eye-tracking sessions. I have the x and y positions of the pupils, so I can calculate the distance between two pupil positions in pixels, but I don't know how to convert it into visual angles.
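One common conversion, sketched below: turn pixels into centimetres on the screen via the pixel pitch, then into degrees with the full-angle arctangent formula. The screen dimensions and viewing distance are placeholders for your own setup, and square pixels are assumed.

```python
import math

# Assumed setup (replace with your own numbers): a 53 cm wide screen
# at 1920 px horizontal resolution, viewed from 60 cm.
SCREEN_WIDTH_CM = 53.0
SCREEN_WIDTH_PX = 1920
VIEW_DIST_CM = 60.0

def pixels_to_degrees(d_px: float) -> float:
    """Convert an on-screen distance in pixels to degrees of visual angle."""
    d_cm = d_px * SCREEN_WIDTH_CM / SCREEN_WIDTH_PX  # pixels -> cm
    # Full-angle formula; for small angles it reduces to d_cm / distance (rad).
    return math.degrees(2 * math.atan2(d_cm, 2 * VIEW_DIST_CM))

def gaze_step_deg(x1, y1, x2, y2):
    """Angular distance between two gaze samples given in pixels."""
    return pixels_to_degrees(math.hypot(x2 - x1, y2 - y1))

print(round(pixels_to_degrees(100), 2))  # ~2.6 deg under these assumptions
```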
Hi guys, are there any general metrics for measuring attention via eye tracking? I mean something like: if you have long fixations (above 200 ms), fewer of them, and longer saccadic movements, does that mean your attention on the task is well developed? Is there any rule of this kind that I can use for different tasks in general?
Thanks a lot
I'm looking for any papers comparing drift diffusion model parameters across effectors during perceptual decision making tasks. Ideally I'm looking for within-subject studies where different muscles have been used. But also interested in general comparisons between effectors (e.g. saccade vs reaching). Any help much appreciated.
I am looking for suitable toolbox/package to analyze the eye-tracking data, using R, Matlab or Python. We have a Tobii T60 system and eye tracking data is simultaneously recorded with EEG data. We are using an E-prime extension, so the data is in an Excel (CSV) format. The presented stimuli are videos with fixed AOIs. First we want to do some simple analysis, such as overall looking time and fixations (to compare AOIs). The second step will be to determine the onset of saccade from one AOI to another AOI. Finally I want to visualize the results in heatmaps and scanpaths. Can anyone recommend a toolbox/package that is compatible with our data format and research questions? Thank you in advance!
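For the first step (overall looking time per AOI), the computation is small enough to sketch without a toolbox. The column names below are placeholders, not your actual E-Prime/Tobii header, so adapt them to your export:

```python
import csv
import io

# Toy export; the column names (timestamp_ms, aoi) are placeholders --
# adapt them to your actual E-Prime/Tobii CSV header.
SAMPLE_CSV = """timestamp_ms,aoi
0,face
17,face
33,object
50,object
67,face
"""

def looking_time_per_aoi(csv_text, sample_interval_ms=16.7):
    """Sum sample counts per AOI and convert to milliseconds
    (assumes a fixed sampling interval, ~16.7 ms for a Tobii T60)."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["aoi"]] = totals.get(row["aoi"], 0) + sample_interval_ms
    return totals

print(looking_time_per_aoi(SAMPLE_CSV))
```

For heatmaps, scanpaths, and saccade onsets a toolbox is worth it; options people commonly mention are PyGaze/PyGazeAnalyser in Python and eyetrackingR in R, but check compatibility with your export format first.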
I need to buy an eye tracking device to use in an experiment in which I will have to track saccades between a text segment and the part of the diagram referred by this text segment.
I have some saccade data from 3 SMI eye trackers and 3 groups of participants. I want to know the exact way to calculate saccade accuracy for each person, and also the mean for each group.
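One common operationalisation, offered as an assumption since "accuracy" is defined differently across labs, is saccadic gain: saccade amplitude divided by target eccentricity, with 1.0 meaning the eye lands exactly on target. A minimal sketch:

```python
# Hypothetical definition -- check that it matches what your lab means
# by "accuracy": gain = amplitude / target eccentricity, where 1.0 is
# a perfect landing (hypometria < 1, hypermetria > 1).

def saccade_gain(amplitude_deg, target_ecc_deg):
    return amplitude_deg / target_ecc_deg

def mean_gain(amplitudes, eccentricities):
    gains = [a / e for a, e in zip(amplitudes, eccentricities)]
    return sum(gains) / len(gains)

# Per-person gains averaged into a group mean (made-up numbers):
print(mean_gain([9.5, 10.2, 8.8], [10.0, 10.0, 10.0]))
```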
I was wondering what the typical scalp topography of a vertical saccade looks like in the MEG signal. Is it similar to a blink?
Attached to this question is a blink topography. Could the second topography be a vertical saccade?


I have heard repeatedly that paralyzing the extra-ocular muscles, and thereby stopping saccades, disrupts sight. However, I have been unable to locate a reference for this experiment. Have you seen or done this experiment? Do you have a reference for it? Thanks.
I understand that in a reading task this makes sense, but in a general visual search process, such as looking at different objects of interest, can we still claim that regressive saccades indicate confusion or difficulty in processing information from a particular AOI?
The user's behaviour while interacting with an interface designed for measuring the perception of emotion can give us an idea about the cognitive load of that interface under different conditions. Parameters like gaze fixations, saccade movements, and gaze counts are available, but which are the most used and reliable parameters?
I have x and y coordinates and time stamps of eye-gaze data from a reading task. Is there any algorithm/program that helps identify the regressive saccades using these coordinates?
Thanks.
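A minimal sketch of one such rule for left-to-right reading. The pixel thresholds are placeholders to tune for your font size and layout, and return sweeps to the next line are excluded by requiring a large leftward jump combined with a downward shift:

```python
# Thresholds (min_back_px, sweep_px) are placeholders to tune. A big
# leftward jump with a downward y shift is treated as a return sweep
# to the next line, not a regression.

def find_regressions(fixations, min_back_px=10, sweep_px=300):
    """fixations: (x, y) fixation centroids in chronological order,
    for left-to-right reading. Returns indices of regressive fixations."""
    regressions = []
    for i in range(1, len(fixations)):
        dx = fixations[i][0] - fixations[i - 1][0]
        dy = fixations[i][1] - fixations[i - 1][1]
        if dx < -min_back_px and not (dx < -sweep_px and dy > 0):
            regressions.append(i)
    return regressions

fixs = [(100, 50), (180, 50), (140, 50), (400, 50), (60, 90)]
print(find_regressions(fixs))  # [2] -- the re-reading saccade, not the line sweep
```

Note that this works on fixation centroids (e.g. from a dispersion-based fixation detector applied to your raw x/y samples), not on the raw samples themselves.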
I have eye gaze data collected with an eye-tracker that uses a dispersion-based algorithm to identify fixations (sampling rate "only" 50 Hz), i.e. the built-in detector first looks for fixations, and the other events (blinks and saccades) are derived from them. Thus, I know I should treat my saccades with caution! I was wondering whether there are criteria/guidelines to help distinguish plausible from unlikely saccades, e.g., in terms of duration, amplitude, or velocity. I read that saccades when looking at a screen/reading last between 20 and 200 ms. In my dataset I have some "saccades" of over 2000 ms.
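As a starting point, a rule-of-thumb plausibility filter along these lines could be used; every threshold below is an assumption to adjust to your setup:

```python
# Rule-of-thumb filter. The idea: screen-viewing saccades are short
# (roughly 20-100 ms, rarely above ~200 ms), and duration should scale
# with amplitude ("main sequence", roughly 2-3 ms/deg plus ~20 ms).
# A 2000 ms "saccade" is far more likely a blink, tracking loss, or
# smooth pursuit mislabelled by the dispersion-based detector.

def plausible_saccade(duration_ms, amplitude_deg,
                      max_duration_ms=200, max_amp_deg=30):
    if not (10 <= duration_ms <= max_duration_ms):
        return False
    if not (0.1 <= amplitude_deg <= max_amp_deg):
        return False
    return duration_ms <= 3.0 * amplitude_deg + 60  # loose main-sequence check

print(plausible_saccade(45, 5.0))    # True
print(plausible_saccade(2200, 5.0))  # False
```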
Saccade velocity is the velocity with which the eyes move from one fixation point to another.
Regarding eye movement, what is the difference between planning and programming of a saccade? Which one needs attention?
I'll be starting a visual fMRI paradigm soon which features eye-tracking (Resonance Technologies/ViewPoint), max 60 Hz. The eye-tracker is purely to ensure the participant's fixation. Can anyone suggest a reference for some smoothing values, saccade thresholds, etc.?
I need to buy an eye tracking device to use in an experiment in which I will have to track:
a) the gaze movements of someone tracking an object moving on the screen
b) the saccades between text segments
I'm wondering if multiple imputation may be the best way to go about it.
Thank you for all of you who joined the discussion. I need to clarify the question in a better way.
I should have used the word "detect" in the question. I am afraid I misunderstood the definition of the microsaccade; I thought it was a special part of the (macro) saccade from the fixation.
For example, in my own analysis I found this hard to achieve: the (average) velocity was M = 97 (SD = 50), but the maximum velocity was about 227, which is smaller than six times the SD (which would be 300).
Can anyone help me about this?
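One possible source of the problem, sketched below: in the Engbert & Kliegl (2003) approach the threshold is λ times a *robust*, median-based estimate of the velocity noise, not λ times the ordinary SD of the velocities, which the saccadic peaks themselves inflate. A simplified one-dimensional sketch:

```python
import statistics

def robust_sigma(velocities):
    """Median-based noise estimate: sqrt(median(v^2) - median(v)^2),
    as in Engbert & Kliegl (2003). Much less inflated by the saccadic
    peaks themselves than the ordinary standard deviation."""
    med = statistics.median(velocities)
    med_sq = statistics.median(v * v for v in velocities)
    return max(med_sq - med * med, 0.0) ** 0.5

def detect_events(velocities, lam=6.0):
    """Indices of samples whose speed exceeds lambda * robust sigma."""
    thresh = lam * robust_sigma(velocities)
    return [i for i, v in enumerate(velocities) if abs(v) > thresh]

# Noise around +-6 deg/s plus one (micro)saccadic burst:
v = [5, -3, 2, -6, 4, 120, 150, 90, -2, 3, -4, 1]
print(detect_events(v))  # [5, 6, 7]
```

The published algorithm works on smoothed two-dimensional velocities with an elliptic threshold, but the point carries over: with a median-based sigma the threshold typically lands well below a maximum like 227, whereas 6 times the ordinary SD (300) never triggers.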
I wonder if the deep layers of the superior colliculus fire when a saccade across a STATIC contrast is performed. Thanks
Are we simply dividing the number of characters in a text by the number of saccades? Should we divide saccade length into two categories, regressive saccade length and forward saccade length?
Hi everyone,
I need some advice about statistical tests for longitudinal studies. In my current research, I want to describe second language learners' reading skill development throughout an academic year. My eye-tracking sessions will include a reading task, and I will use eye-tracking metrics such as saccade length, fixation duration, and regressions. I am planning to correlate them with learners' grades in my reading course and describe the relationship between pedagogical development and reading behavior. Due to participant limitations, I assume that multilevel modeling, logistic regression, or linear regression is not suitable and may not meet the required assumptions. However, I maintain that second language reading behavior is quite similar to first language reading in many ways, and the eye-tracking metrics I mentioned above are strong predictors. I am planning to include 6 periodic eye-tracking tasks for each participant throughout an academic year, and 6 different reading exams as well. What do you think? Which statistical procedures are best for longitudinal research in education? Thanks in advance.
Basically, in EEG and MEG research, we reject some trials from our average if our participants blinked or made saccades, due to 1) the distortion caused in the MEG/EEG signal and 2) the visual input shifting across the retina, which creates a saccade-induced ERP response (Luck, 2005). Thus, our participants have to fixate the center of the screen.
It is well known that it is possible to correct the signal using independent component analysis (ICA) or other artifact-correction algorithms. However, what would be the best way to investigate early saccadic behavior toward complex pictures without altering the sensory ERPs?
EMDR, one of the new kids on the block of psychotherapy stands for Eye Movement Desensitization and Reprocessing. EMDR has made an impressive splash on the psychotherapy scene and media since its description by Francine Shapiro. Shapiro speculated that its beneficial role might be related to the fact that voluntary saccades of the eyes mimic the saccades of REM during sleep, and thus its functional state.
However, the neurobiological mechanism and functional circuits underlying EMDR and its alleged role in inhibiting symptoms of anxiety and stress still remain largely unknown.
I wonder if anyone working in this area of research has ever considered the Frontal Eye Fields (FEF) and their role in suppressing anxiety? The FEF (area 8) is located in frontal medial cortex, adjacent to areas like the anterior cingulate (AC) and ventromedial prefrontal cortex (VM). AC and VM have often been linked with processes such as attentional control and, more importantly, with suppression of negative emotions, possibly via inhibitory connections with the amygdala and hypothalamus.
With neuroimaging it could be easily tested to what extent voluntary saccades activate not only the FEF, but also these important neuroaffective structures.
We are not familiar with any group having used it.