Eye Movements - Science topic

Voluntary or reflex-controlled movements of the eye.
Questions related to Eye Movements
  • asked a question related to Eye Movements
Question
3 answers
Real-time eye movement tracking analysis
Relevant answer
Answer
Maybe these articles will help you.
1) Neuro-Inspired Eye Tracking With Eye Movement Dynamics (thecvf.com)
2) Best practices in eye tracking research - ScienceDirect
Question
4 answers
How can energy be harvested from eye movement in wearable devices?
I don't mean harvesting energy for an intelligent implant, but for wearable devices such as glasses for health monitoring.
Relevant answer
Answer
Energy harvesting is highly frequency dependent. Since eye movement is a very low-frequency excitation source, it would generate energy but may not be sufficient to drive any load.
Question
3 answers
Hi everyone, I'm looking for recommendations about Eye Tracking Devices, specifically for use in behavioral and physiological research. My plan is to register eye movement, pupil size and EEG recordings while participants complete computerized behavioral tasks.
I'd really appreciate any info about brands, companies, prices, and prior experiences using the products.
Relevant answer
Answer
We used Pupil Labs https://pupil-labs.com/products/ (~2000 euro).
We could synchronize everything to both the stimulation computer and thus the EEG (BrainVision) without a problem. The device itself had hardware glitches, and we had to send the glasses back to have them fixed, but otherwise it was a very usable product: the hardware, the GUIs, and scripting the software. I would say there's a reason it costs an order of magnitude less than more traditional eye-tracking hardware (e.g. Tobii), but it got the job done, so if you can't afford the next level up, it's what I'd recommend.
Question
2 answers
I am implementing a couple of algorithms for saccade and fixation detection using a VR headset with an integrated eye tracker. I have used a simple saccade-and-fixation task to get a feeling for how well the algorithms work with my data, and I am leaning toward a specific algorithm. To finalize my decision, I am looking for more 'ecological' visual tasks that are known to lead to different saccadic and fixation characteristics (e.g. saccade and fixation frequency, saccade amplitude, fixation duration, etc.), so that I can see whether the characteristics calculated using a specific algorithm can discriminate among those tasks. Any direct suggestion or reference is more than welcome!
Relevant answer
Answer
Hi Federico,
We have a working implementation of online fixation/saccade detection for any VR eye tracker supported in Unity. The detection is based on a velocity analysis (I-VT). If you are interested in the details, you can contact me.
Regarding the tasks, I would use a family of visual search tasks, which are natural and typically require fixations on each of the presented targets.
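The velocity-threshold idea behind I-VT can be sketched in a few lines; the 30 deg/s threshold and the function shape below are illustrative defaults, not details of the implementation mentioned above:

```python
import numpy as np

def ivt_classify(x_deg, y_deg, t_s, threshold=30.0):
    """Minimal I-VT: label each inter-sample interval as a saccade (True)
    or a fixation (False) by thresholding angular velocity in deg/s."""
    velocity = np.hypot(np.diff(x_deg), np.diff(y_deg)) / np.diff(t_s)
    return velocity > threshold
```

Adjacent saccade-labeled intervals are then merged into saccade events and the remaining runs into fixations, from which frequencies, amplitudes and durations can be derived per task.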
Question
4 answers
Actually, I'm using OptiSystem; after I run my circuit diagram I get the BER analyzer result.
Relevant answer
Answer
Q-Factor represents the quality of the SNR in the “eye” of a digital signal, the “eye” being the human eye shaped pattern on an oscilloscope that indicates transmission system performance. The best place for determining whether a given bit is a “1” or a “0” is the sampling phase with the largest “eye opening.” The larger the eye opening, the greater the difference between the mean values of the signal levels for a “1” and a “0”. The greater that difference is, the higher the Q-Factor and the better the BER performance.
Q-Factor is the difference between the mean values of the signal levels for a "1" and a "0" (μ1 and μ0), divided by the sum of the noise standard deviations (σ1 and σ0) at those two signal levels, assuming Gaussian noise and equal probability of transmitting a '1' and a '0' (P(1) = P(0) = ½).
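In formula form, Q = (μ1 − μ0)/(σ1 + σ0), and for Gaussian noise BER ≈ ½·erfc(Q/√2). A small sketch (the level statistics are made-up numbers, not OptiSystem output):

```python
import math

def q_factor(mu1, mu0, sigma1, sigma0):
    """Q-factor from the eye-diagram level means and noise std devs."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """BER for Gaussian noise and equiprobable bits: 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

q = q_factor(mu1=1.0, mu0=0.0, sigma1=0.08, sigma0=0.08)  # Q ≈ 6.25
```

A rule of thumb that follows: Q ≈ 6 corresponds to a BER around 1e-9, and Q ≈ 7 to around 1e-12.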
Question
9 answers
We are currently looking for a new mobile eye tracker. In the past we used the SMI ETG. Since this company was bought by Apple, it no longer provides support. Therefore we are looking for a new company that provides eye trackers as well as supporting software. What experiences do you have? What are the best options available at the moment?
Relevant answer
Answer
Tobii eye trackers are usually good as well as cheaper than SMI eye trackers. However, after being bought by Apple, SMI had started manufacturing good-quality eye-tracking glasses which are very convenient and portable for recording.
Question
2 answers
Hi everyone,
In eye-movement tracking studies with babies, it is sometimes difficult to get a perfect calibration. I wonder if there are well-established criteria, thresholds or recommendations for excluding calibrations.
Any input - tutorials, method reviews or advice drawn from researchers' own experience - would be very helpful.
Second, has anyone experienced slightly shifted calibrations (i.e. the experimenter perceives that the baby is looking at the right target, but the eye tracker maps the eye movement with a shift, e.g. to the right, probably due to an issue with the initial calibration)? Are there ways to correct those, or should the participants' data be discarded?
Many thanks in advance for experienced input.
Relevant answer
Answer
Hi Aude,
To my understanding, in many baby studies you need to track gaze positions on real objects, e.g. toys on a table. In contrast, many eye trackers calibrate and validate gaze positions on an orthogonal plane, such as a monitor surface. This can lead to intrinsic errors in tracking a baby's gaze. In our stimulus presentation software, EventIDE (www.okazolab.com), we address this by offering a 3D perspective correction for the default eye-tracker calibration. Apart from that, the software includes a quick, semi-automatic correction for slight shifts (e.g. as a result of head movement). If you would like to see a demo, please write to me here or at i.korjoukov@okazolab.com
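As a generic illustration of the second point (a common post-hoc fix, not EventIDE's actual method): a constant calibration shift can be estimated as the median gaze-minus-target error on validation trials and subtracted from all samples:

```python
import numpy as np

def correct_constant_shift(gaze, validation_gaze, validation_targets):
    """Estimate a constant calibration offset from validation trials
    (median of gaze minus known target positions) and subtract it from
    the gaze data. All arrays are (n, 2) screen coordinates."""
    offset = np.median(np.asarray(validation_gaze) - np.asarray(validation_targets), axis=0)
    return np.asarray(gaze) - offset, offset
```

This only handles a pure translation; rotation or scaling errors need a proper re-calibration, and heavily shifted participants may still have to be discarded.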
Question
3 answers
Hi! Are there any papers on the fine details of the eye movement protocol in EMDR?
I am particularly interested in:
-Distance between the eyes and the focus points of the movements
-Angular velocity
-Angle of the movements (or length of the movement trajectory)
-Number of repetitions
I've found only these two studies, but there are not enough details:
1) Physiological effects of eye movements of different speeds and eye fixation during engagement in negative autobiographical memories: Experimental research regarding EMDR. Francine Shapiro Library (n.d.).
2) van Veen, S. C., van Schie, K., Wijngaards-de Meij, L. D. N. V., Littel, M., Engelhard, I. M., & van den Hout, M. A. (2015). Speed Matters: Relationship between Speed of Eye Movements and Modification of Aversive Autobiographical Memories. Frontiers in Psychiatry, 6. https://doi.org/10.3389/fpsyt.2015.00045
Question
3 answers
Does anybody know of a software application that can extract eye movement data from video? I'd like to use the 240 fps video camera on the iPhone 6 to record video of eye movements. I'd like to then extract data sufficient to calculate maximum vertical eye saccade velocity. This is for research into Progressive Supranuclear Palsy (PSP), a degenerative neurological disease. Any help is appreciated. Thanks. John
Relevant answer
Answer
Hi John,
There is a lot of geometry behind eye tracking.
Any iPhone algorithm may be useful for tracking the iris or pupils, but NOT for giving the direction of gaze.
None of the current camera-based eye-tracking devices work properly without calibrating for angle kappa (e.g. see Fig 4.3: http://www.opt.indiana.edu/v665/CD/CD_Version/CH4/CH4.HTM).
Just try a basic portable eye tracker from one of the known manufacturers, e.g. Tobii.
Good luck with the project
Antonio
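Once per-frame pupil positions have been converted to degrees via such a calibration, the peak vertical saccade velocity John asks about is just the largest frame-to-frame step times the frame rate. A sketch under that assumption:

```python
import numpy as np

def peak_vertical_velocity(y_deg, fps=240.0):
    """Peak vertical eye velocity (deg/s) from per-frame vertical
    positions already calibrated to degrees of visual angle."""
    return float(np.max(np.abs(np.diff(y_deg))) * fps)
```

Note that at 240 fps even a saccade peaking at 400 deg/s moves under 2 degrees per frame, so sub-pixel pupil localization and some smoothing of the velocity trace matter in practice.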
Question
5 answers
Looking to potentially purchase new cameras for collecting footage for canine behaviour studies. Ideally we want the quality to be good enough to capture some facial expression/eye movement within a sensible distance. The best thing out there appears to be GoPro in terms of camera quality, footage quality, battery life, syncing with the app, etc., but they are very pricey, particularly for the number of cameras we're after.
Can anyone suggest alternatives, or are GoPros worth it as they're robust and will last?
Thanks!
Relevant answer
Answer
Hi!
Do you need high frame rates, super-wide (fisheye) angles or battery-powered mini devices? If not, I'd recommend checking out surveillance camera systems such as this one: https://www.lorextechnology.com/4k-ip-camera-system/4k-ultra-hd-ip-nvr-security-camera-system-with-10-ip-cameras/4KHDIP1612W-1-p
Lots of channels, lots of cameras and high (4K) resolution for a great price. This would obviously only work for experimental conditions, i.e., in a test arena where you can install the cameras. Also, you mentioned footage of eye movements - you might need higher frame rates for this purpose (with 4K resolution, you would usually not have more than 30-60 fps).
Question
9 answers
I apologize for asking the same question again, but I really haven't had a satisfactory response. In different articles (focused on eye movements in reading), I find more and more that an interaction is expressed with a "t" value and a "b" value. I ask myself how this is possible. I have tried to find an answer without success, even by asking the authors concerned directly. Perhaps my question is not really relevant, but for me it is.
I know that different solutions exist (such as an ANOVA function or chi2), but I would like to know whether it is possible with a "t" value.
Thank you in advance for any answer that could remove my doubts.
Relevant answer
Answer
Khelifi, if you want the t information for, say, the (group*freq*predict) interaction (I will call that term int):
1. Make sure that term is in your model: full <- lm(y ~ x1 + ... + int), where lm is the R function that you want to call.
2. Follow that with summary(full).
3. The t and F information should print out.
Here lm is the standard linear model function from base R, which you can replace by the desired function from the package you are using.
Hope this helps, David Booth
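For intuition, the b and t values that summary() prints can be reproduced by hand: each t is just an estimated coefficient divided by its standard error, which is why a single interaction term gets one b and one t. A self-contained sketch with simulated data (the variable names and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
freq = rng.normal(size=n)                 # continuous predictor
group = rng.integers(0, 2, size=n) * 1.0  # binary predictor
y = 1.0 + 0.5 * freq + 0.8 * group + 0.4 * freq * group + rng.normal(size=n)

# Design matrix: intercept, main effects, and the interaction column
X = np.column_stack([np.ones(n), freq, group, freq * group])
b, *_ = np.linalg.lstsq(X, y, rcond=None)    # the "b" values
resid = y - X @ b
sigma2 = resid @ resid / (n - X.shape[1])    # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t = b / se                                   # the "t" values
print("interaction: b = %.3f, t = %.2f" % (b[3], t[3]))
```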
Question
4 answers
Dear all
I am reading the literature about the ‘centre of gravity’ of saccade and ‘averaging saccade’. The concepts of these two terms are alike. The ‘centre of gravity’ is that targets are surrounded by non-targets, and the saccades, instead of landing at the designated target, land in the midst of the whole configuration [Kowler, E. (2011). Eye movements: the past 25 years. Vision Research, 51(13), 1457–1483]. The ‘saccade averaging’ is that two adjoining stimuli in the same hemifield evoke a short-latency saccade, the saccade tends to land on an intermediate location between these stimuli. [Heeman, J., Theeuwes, J., & Van Der Stigchel, S. (2014). The time course of top-down control on saccade averaging. Vision Research, 100, 29–37]. Besides, in some literature, the saccade averaging is known as the ‘global effect’. Are the concepts of these three terms identical?
Thanks in advance.
Relevant answer
Answer
Hi,
Basically, all three terms refer to the same phenomenon: the observation of systematic saccade endpoint deviations towards a central location within a stimulus configuration consisting of multiple, nearby visual targets/distractors.
The origin/use of the three different terms can be attributed to the different levels at which the phenomenon has been/is being described.
The term 'center of gravity' refers to the fact that respective saccades are executed towards the so-called center of gravity of the respective stimulus configuration.
The term 'averaging saccade' or 'saccade averaging' refers to the notion of oculomotor averaging. The concept of averaging is reflected both at the behavioral and neuronal level and further conveys a theoretical assumption concerning the mechanism underlying the computation of respective saccade vectors. At the behavioral level, respective saccades roughly land at an averaged, intermediate location in between the stimuli causing the oculomotor competition (roughly, because saccade endpoints vary as a function of specific features of the stimulus configuration, such as the relative size and saliency of the stimuli, etc.). At the neuronal level, the phenomenon has been linked to an averaging of the population activity associated with the different visual stimuli (i.e. the formation of an intermediate activity hub) within oculomotor maps such as the Superior Colliculi (however, at what processing stage such averaging ultimately occurs is not finally resolved).
The term 'global effect' refers to the notion that respective saccades are thought to be of reflexive nature and are influenced by the global configuration of the visual scene (rather than by a specific local target). Also, respective saccades have been described as arising from a coarse (or global) visual processing.
peace
Question
12 answers
Dear Esteemed Researchers,
For research purposes, I need eye-tracker glasses to monitor eye movement. I have found two such glasses, "Tobii Pro Glasses" and "Pupil Labs". They are multi-functional and expensive. I need to track eye movement only during driving and walking, and I am looking for low cost. Could you suggest an eye tracker at a reasonable cost?
Thanks in advance.
Relevant answer
Answer
Tobii Pro is the best one. I have used several eye trackers in my research. Let me know if you need more information.
Question
3 answers
Hi all,
With my colleague, I am trying to implement a human-like saccadic movement in an artificial agent.
Our setup requires that the "pupil" of the robot moves across either 30°, 40° or 50° of visual angle as if it was attending two targets on a screen located at 60 cm from its head.
We are not looking for velocity profiles right now, since we just want to test a simplified behavior on the agent and see how it looks.
Can someone suggest some paper reporting the range of time/velocity for human saccades given a certain amount of degrees?
Thanks in advance for your help.
Davide Ghiglino
Relevant answer
Answer
Hi Davide,
Here is a long list of papers which may help you.
[1]
Joseph H.S., Liversedge S.P., Blythe H.I., White S.J., Rayner K. Word length and landing position effects during reading in children and adults Vision Res 2009 ;  49 : 2078-2086 [cross-ref]
[2]
Jacobson J.Z., Dodwell P.C. Saccadic eye movements during reading Brain Lang 1979 ;  8 : 303-314
[3]
Blythe H.I., Liversedge S.P., Joseph H.S., White S.J., Rayner K. Visual information capture during fixations in reading for children and adults Vision Res 2009 ;  49 : 1583-1591 [cross-ref]
[4]
Chekaluk E., Llewellyn K.R. Visual stimulus input, saccadic suppression, and detection of information from the postsaccade scene Percept Psychophys 1990 ;  48 : 135-142
[5]
Rayner K., Duffy S.A. Lexical complexity and fixation times in reading: effects of word frequency, verb complexity, and lexical ambiguity Mem Cognit 1986 ;  14 : 191-201
[6]
Zee D.S., Fitzgibbon E.J., Optican L.M. Saccade-vergence interactions in humans J Neurophysiol 1992 ;  68 : 1624-1641
[7]
Steinman R., Collewijn Binocular retinal image motion during active head rotation Vision Res 1980 ;  20 : 415-429 [cross-ref]
[8]
Kapoula Z.A., Robinson D.A., Hain T.C. Motion of the eye immediately after a saccade Exp Brain Res 1986 ;  61 : 386-394
[9]
Collewijn H., Erkelens C.J., Steinman R.M. Binocular coordination of human horizontal saccadic eye movements J Physiol 1988 ;  404 : 157-182
[10]
Bains R.A., Crawford J.D., Cadera W., Vilis T. The conjugacy of human saccadic eye movements Vision Res 1992 ;  32 : 1677-1684 [cross-ref]
[11]
Kloke W.B., Jaschinski W. Individual differences in the asymmetry of binocular saccades, analysed with mixed-effects models Biol Psychol 2006 ;  73 : 220-226 [cross-ref]
[12]
Zee D.S., Fitzgibbon E.J., Optican L.M. Saccade-vergence interactions in humans J Neurophysiol. 1992 ;  68 : 1624-1641
[13]
Erkelens A.J., Sloot O.B. Initial directions and landing positions of binocular saccades Vision Res 1995 ;  35 : 3297-3303
[14]
Deubel H., Bridgeman B. Fourth Purkinje image signals reveal eye-lens deviations and retinal image distortions during saccades Vision Res 1995 ;  35 : 529-538 [cross-ref]
[15]
Deubel H., Bridgeman B. Perceptual consequences of ocular lens overshoot during saccadic eye movements Vision Res 1995 ;  35 : 2897-2902 [cross-ref]
[16]
King W.M., Zhou W. New ideas about binocular coordination of eye movements: is there a chameleon in the primate family tree? Anat Rec 2000 ;  261 : 153-161 [cross-ref]
[17]
Bucci M.P., Kapoula Z. Binocular coordination of saccades in 7-years-old children in single word reading and target fixation Vision Res 2006 ;  46 : 457-466 [cross-ref]
[18]
Vernet M., Yang Q., Daunys G., Orssaud C., Eggert T., Kapoula Z. How the brain obeys Hering’s law: a TMS study of the posterior parietal cortex Invest Ophthalmol Vis Sci 2008 ;  49 : 230-237 [cross-ref]
[19]
Kapoula Z., Yang Q., Coubard O., Daunys G., Orssaud C. Role of the posterior parietal cortex in the initiation of saccades and vergence: right/left functional asymmetry Ann N Y Acad Sci 2005 ;  1039 : 184-197 [cross-ref]
[20]
Yang Q., Kapoula Z. Binocular coordination of saccades at far and at near in children and in adults J Vis 2003 ;  3 : 554-561
[21]
Bucci M.P., Brémond-Gignac D., Kapoula Z. Speed and accuracy of saccades, vergence and combined eye movements in subjects with strabismus before and after eye surgery Vision Res 2009 ;  49 : 460-469 [cross-ref]
[22]
Jaschinski W., Svede A., Jainta S. Relation between fixation disparity and the asymmetry between convergent and divergent disparity step responses Vision Res 2008 ;  48 : 253-263 [cross-ref]
[23]
Fioravanti F., Inchingolo P., Pensiero S., Spanio M. Saccadic eye movement conjugation in children Vision Res 1995 ;  35 : 3217-3228 [cross-ref]
[24]
Patel S.S., Oğmen H., White J.M., Jiang B.C. Neural network model of short-term horizontal disparity vergence dynamics Vision Res 1997 ;  37 : 1383-1399 [cross-ref]
[25]
Collewijn H., Erkelens C.J., Steinman R.M. Trajectories of the human binocular fixation point during conjugate and non-conjugate gaze-shifts Vision Res 1997 ;  37 : 1049-1069 [cross-ref]
[26]
Collewijn H., Erkelens C.J., Steinman R.M. Voluntary binocular gaze-shifts in the plane of regard: dynamics of version and vergence Vision Res 1995 ;  35 : 3335-3358 [cross-ref]
[27]
Kirkby J.A., Webster L.A., Blythe H.I., Liversedge S.P. Binocular coordination during reading and non-reading tasks Psychol Bull 2008 ;  134 : 742-763 [cross-ref]
[28]
Sylvestre P.A., Galiana H.L., Cullen K.E. Conjugate and vergence oscillations during saccades and gaze shifts: implications for integrated control of binocular movement J Neurophysiol 2002 ;  87 : 257-272
[29]
Coubard O.A., Kapoula Z. Saccades during symmetrical vergence Graefes Arch Clin Exp Ophthalmol 2008 ;  246 : 521-536 [cross-ref]
[30]
Collewijn H., Erkelens C.J., Steinman R.M. Binocular coordination of human vertical saccadic eye movements J Physiol 1988 ;  404 : 183-197
[31]
Kirkby J.A., Blythe H.I., Benson V., Liversedge S.P. Binocular coordination during scanning of simple dot stimuli Vision Res 2010 ;  50 : 171-180 [cross-ref]
[32]
Yang Q., Kapoula Z. Aging does not affect the accuracy of vertical saccades nor the quality of their binocular coordination: a study of a special elderly group Neurobiol Aging 2008 ;  29 : 622-638 [cross-ref]
[33]
Luna B., Thulborn K.R., Munoz D.P., Merriam E.P., Garver K.E., Minshew N.J., and al. Maturation of widely distributed brain function subserves cognitive development Neuroimage 2001 ;  13 : 786-793 [cross-ref]
[34]
Turkeltaub P.E., Gareau L., Flowers D.L., Zeffiro T.A., Eden G.F. Development of neural mechanisms for reading Nat Neurosci 2003 ;  6 : 767-773 [cross-ref]
[35]
Vernet M., Kapoula Z. Binocular motor coordination during saccades and fixations while reading: a magnitude and time analysis J Vis 2009 ;  9 : 1-13
[36]
Nazir T.A., Jacobs A.M., O’Regan J.K. Letter legibility and visual word recognition Mem Cognit 1998 ;  26 : 810-821
[37]
Reichle E.D., Rayner K., Pollatsek A. Eye movement control in reading: accounting for initial fixation locations and refixations within the E-Z Reader model Vision Res 1999 ;  39 : 4403-4411 [cross-ref]
[38]
Blythe H.I., Liversedge S.P., Joseph H.S., White S.J., Findlay J.M., Rayner K. The binocular coordination of eye movements during reading in children and adults Vision Res 2006 ;  46 : 3898-3908 [cross-ref]
[39]
Liversedge S.P., Rayner K., White S.J., Findlay J.M., McSorley E. Binocular coordination of the eyes during reading Curr Biol 2006 ;  16 : 1726-1729 [cross-ref]
[40]
Nuthmann A., Kliegl R. An examination of binocular reading fixations based on sentence corpus data J Vis 2009 ;  9 : 1-28
[41]
Juhasz B.J., Liversedge S.P., White S.J., Rayner K. Binocular coordination of the eyes during reading: word frequency and case alternation affect fixation duration but not fixation disparity Q J Exp Psychol (Colchester) 2006 ;  59 : 1614-1625 [cross-ref]
[42]
Liversedge S.P., White S.J., Findlay J.M., Rayner K. Binocular coordination of eye movements during reading Vision Res 2006 ;  46 : 2363-2374 [cross-ref]
[43]
Ishida T., Ikeda M. Temporal properties of information extraction in reading studied by a text-mask replacement technique J Opt Soc Am 1989 ;  10 : 1624-1632 [cross-ref]
[44]
Heller D., Radach R. Eye movements in reading: are two eyes better than one? Current oculomotor research: physiological and psychological aspects  NY: Plenum press (1999). p. 341–48.
[45]
Simola J., Holmqvist K., Lindgren M. Right visual field advantage in parafoveal processing: evidence from eye-fixation-related potentials Brain Lang 2009 ;  111 : 101-113 [cross-ref]
[46]
Rayner K. Eye movements in reading and information processing: 20 years of research Psychol Bull 1998 ;  124 : 372-422 [cross-ref]
[47]
Hendriks A.W. Vergence eye movements during fixations in reading Acta Psychol (Amst) 1996 ;  92 : 131-151 [cross-ref]
[48]
Qin D., Takamatsu M., Nakashima Y. Disparity limit for binocular fusion in fovea Opt Rev 2006 ;  13 : 34-38 [cross-ref]
[49]
London R., Crelier R.S. Fixation disparity analysis: Sensory and motor approaches Optometry 2006 ;  77 : 590-608 [cross-ref]
[50]
Yang Q., Bucci M.P., Kapoula Z. The latency of saccades, vergence, and combined eye movements in children and in adults Invest Ophthalmol Vis Sci 2002 ;  43 : 2939-2949
[51]
Aghababian V., Nazir T.A. Developing normal reading skills: aspects of the visual processes underlying word recognition J Exp Child Psychol 2000 ;  76 : 123-150 [cross-ref]
[52]
Huestegge L., Radach R., Corbic D., Huestegge S.M. Oculomotor and linguistic determinants of reading development: a longitudinal study Vision Res 2009 ;  49 : 2948-2959 [cross-ref]
[53]
Rayner K. Eye movements and the perceptual span in beginning and skilled readers J Exp Child Psychol 1986 ;  41 : 211-236 [cross-ref]
[54]
Häikiö T., Bertram R., Hyönä J., Niemi P. Development of the letter identity span in reading: evidence from the eye movement moving window paradigm J Exp Child Psychol 2009 ;  102 : 167-181
[55]
Duffy F.H., McAnulty G. Neurophysiological heterogeneity and the definition of dyslexia: preliminary evidence for plasticity Neuropsychologia 1990 ;  28 : 555-571 [cross-ref]
[56]
Quercia P., Fourage R., Guillarme L., Marino A., Quercia M., Saltarelli S. Traitement proprioceptif et dyslexie  Beaune: AF3dys édition (2008). p. 622. AF3dys@neuf.fr.
[57]
Stein J., Walsh V. To see but not to read; the magnocellular theory of dyslexia Trends Neurosci 1997 ;  20 : 147-152 [cross-ref]
[58]
Eden G.F., Van Meter J.W., Maisog J.M., Woods R.P., Zeffiro T.A. Abnormal processing of visual motion in dyslexia revealed by functional brain imaging Nature 1996 ;  382 : 66-70
[59]
Cornelissen P.L., Richardson A.J., Mason A.J., Stein J.F. Contrast sensitivity and coherent motion detection measured at photopic luminance levels in dyslexics and controls Vision Res 1995 ;  35 : 1483-1494 [cross-ref]
[60]
Skottun B.C., Skoyles J.R. Is coherent motion an appropriate test for magnocellular sensitivity? Brain Cogn 2006 ;  61 : 172-180 [cross-ref]
[61]
Eden G.F., Stein J.F., Wood H.M., Wood F.B. Differences in eye movements and reading problems in dyslexic and normal children Vision Res 1994 ;  34 : 1345-1358 [cross-ref]
[62]
Biscaldi M., Fisher B., Aiple F. Saccadic eye movements of dyslexic and normal reading children Perception 1994 ;  23 : 45-64
[63]
Hutzler F., Wimmer H. Eye movements of dyslexic children when reading in a regular orthography Brain Lang 2004 ;  89 : 235-242 [cross-ref]
[64]
Latvala M.L., Korhonen T.T., Penttinen M., Laippala Ophthalmic findings in dyslexic schoolchildren Br J Ophthalmol 1994 ;  78 : 339-343 [cross-ref]
[65]
Bucci M.P., Brémond-Gignac D., Kapoula Z. Latency of saccades and vergence eye movements in dyslexic children Exp Brain Res 2008 ;  188 : 1-12 [cross-ref]
[66]
Ram-Tsur R., Faust M., Caspi A., Gordon C.R., Zivotofsky A.Z. Evidence for ocular motor deficits in developmental dyslexia: application of the double-step paradigm Invest Ophthalmol Vis Sci 2006 ;  47 : 4401-4409 [cross-ref]
[67]
Kapoula Z., Ganem R., Poncet S., Gintautas D., Eggert T., Brémond-Gignac D., and al. Free exploration of painting uncovers particularly loose yoking of saccades in dyslexics Dyslexia 2009 ;  15 : 243-245
[68]
Bucci M.P., Brémond-Gignac D., Kapoula Z. Poor binocular coordination of saccades in dyslexic children Graefes Arch Clin Exp Ophthalmol 2008 ;  246 : 417-428 [cross-ref]
[69]
Stein J.F., Riddell P.M., Fowler S. Disordered vergence control in dyslexic children Br J Ophthalmol 1988 ;  72 : 162-166 [cross-ref]
[70]
Thaler V., Urton K., Heine A., Hawelka S., Engl V., Jacobs A.M. Different behavioral and eye movement patterns of dyslexic readers with and without attentional deficits during single word reading Neuropsychologia 2009 ;  47 : 2436-2445 [cross-ref]
[71]
Motsch S., Mühlendyck H. Frequency of reading disability caused by ocular problems in 9- and 10-year-old children in a small town Strabismus 2000 ;  8 : 283-285 [cross-ref]
[72]
Kapoula Z., Bucci M.P., Jurion F., Ayoun J., Afkhami F., Brémond-Gignac D. Evidence for frequent divergence impairment in French dyslexic children: deficit of convergence relaxation or of divergence per se? Graefes Arch Clin Exp Ophthalmol 2007 ;  245 : 937-946
good luck :) !
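For the 30-50° amplitudes in the question, rough 'main sequence' approximations are a common starting point: duration grows roughly linearly with amplitude (about 2.2 ms per degree plus ~21 ms), and peak velocity saturates with amplitude. The constants below are ballpark values from the oculomotor literature, not taken from the references above:

```python
import math

def saccade_duration_ms(amplitude_deg):
    """Approximate saccade duration: ~2.2 ms/deg + 21 ms (rule of thumb)."""
    return 2.2 * amplitude_deg + 21.0

def saccade_peak_velocity(amplitude_deg, v_max=500.0, c=14.0):
    """Saturating peak velocity (deg/s); v_max and c are illustrative."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

for a in (30, 40, 50):
    print(a, saccade_duration_ms(a), round(saccade_peak_velocity(a)))
```

So a 40° saccade would last around 110 ms with a peak velocity in the 450-500 deg/s range, which should give adequate fidelity for judging "how it looks" on the agent, even without full velocity profiles.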
Question
3 answers
Emotion recognition is a broad area in sentiment analysis. In the current scenario, most research on emotion recognition is done using facial images, gestures, and signals like EEG, ECG and EMG.
What about the use of EOG for emotion recognition? Is eye-movement-based emotion recognition possible with EOG?
It is a fact that the human eye tends to show differences for each emotion by means of pupil dilation.
Is it possible to predict human emotion using EOG alone?
Relevant answer
Answer
You can also find additional information in the book Artificial Cognition Architectures (Springer). We discuss modeling human emotions, among other topics.
Question
4 answers
Dear all
I am a psychology PhD student in Malaysia, and I am searching for conferences held in China (local conferences instead of international ones, as I would like to present in Mandarin). Does anyone have any suggestions? My research areas are vision, eye movement, and EEG.
Thanks in advance
Chen
Relevant answer
Answer
Thanks for your reply. I will try to search for relevant information on Baidu. Meanwhile, I will try to contact professors in China or Taiwan and inquire about the details.
Question
4 answers
Hi all
I am just wondering whether it is necessary to monitor subjects' eye movements when they perform a visual search task. I notice that most studies on visual search use eye tracking to monitor eye movement traces. But some say that visual search can occur without eye movements.
Relevant answer
Answer
Dear Qing He,
The answer to your primary question would be driven by the objectives of your research. In particular, is your research attempting to specifically address attention issues? As pointed out by Talis Bachmann in the preceding post, information processing within the brain is influenced by both covert and overt attention. It is often problematic to directly analyse covert processing, even if fMRI or EEG data is also being collected. However, overt attention is normally relatively easy to capture through various saccade tracking systems. This collected data can provide extremely useful insights, by providing direct information on:
(1) bottom-up (e.g. salience driven activations), which then drive neural processing and cognition; and
(2) top-down attention drivers (e.g. fixation due to cognitive attention drivers).
Such insights are best assessed by overlaying the saccades and fixations over the real-time screen content. However, the underlying data can also be quite useful for specific trend analysis (e.g. saccade distances in differing visual conditions, etc.). The insights that are exposed by the saccade and fixation data (which expose overt attention) can therefore be particularly important in assessing broader or specific implications.
For example, in some of my research, I have been assessing the implications of differing forms of visualisation on viewer comprehension and impressions. I can utilise tests and instruments that facilitate outcome assessment for these aspects. However, the eye tracking data shows the start of the processing into the afferent systems, and therefore delivers some highly useful insights into causative factors. In other words, capturing the overt attention can show the beginning point for processing, and this data can then be matched against the outcomes.
If you want to learn more about the implications of overt and covert attention and visual processing, have a look at Appendix 1 in Volume 2 of the thesis at: http://researchrepository.murdoch.edu.au/id/eprint/30198/ I hope that will be of help.
Wishing you all the very best,
Bruce Hilliard
Question
3 answers
I'm wondering whether eye movement, facial expression or EEG signals can be used to calculate the attention level on the content when a subject is watching a film, an advertisement or any other kind of video?
Could anyone shed some light on this?
Relevant answer
Answer
I have seen eye movement and heart-beat measurement used, but maybe you should look into neuromarketing research. I am sure they have gone much further than this.
Question
15 answers
I'm analyzing eye-tracking data derived from flight simulator studies with pilots, and I'm coming to gaze pattern analysis right now. I'm interested in who is doing the same analyses with data from the aviation domain or other domains. What kind of approach are you applying? Analyzing transition probabilities with (Hidden) Markov models? Analyzing longer gaze patterns with regard to the structure of the patterns? Different approaches?
Relevant answer
Answer
This year I released an R package 'GrpString' for scanpath (or string in general) analysis: https://CRAN.R-project.org/package=GrpString
The package emphasizes quantifying the comparison of groups of scanpaths. It can export patterns, return transition matrices, compute entropy, calculate statistical difference, etc. More functions such as HMM will be added in the near future.
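GrpString is an R package; as an illustration of the underlying idea (not the package's implementation), here is a minimal Python sketch that builds a transition matrix and computes the Shannon entropy of the transitions in an AOI-labelled scanpath string:

```python
from collections import Counter
import math

def transition_matrix(scanpath):
    """Count transitions between consecutive AOI labels in a scanpath string."""
    states = sorted(set(scanpath))
    idx = {s: i for i, s in enumerate(states)}
    m = [[0] * len(states) for _ in states]
    for a, b in zip(scanpath, scanpath[1:]):
        m[idx[a]][idx[b]] += 1
    return states, m

def transition_entropy(scanpath):
    """Shannon entropy (bits) of the distribution of observed transitions."""
    pairs = Counter(zip(scanpath, scanpath[1:]))
    total = sum(pairs.values())
    return -sum((n / total) * math.log2(n / total) for n in pairs.values())

states, m = transition_matrix("ABAC")  # transitions: A->B, B->A, A->C
```

A higher transition entropy indicates a less predictable (more exploratory) scanning pattern; comparing these values between groups is the kind of analysis the package automates.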
  • asked a question related to Eye Movements
Question
6 answers
Does anyone know of a commercially available eye tracker that can record eye movements, is portable, and can be used for research purposes?
I have found one, "eyebrain", that seems to be OK, but the company seems not to exist anymore.
Relevant answer
Answer
Dear Maja,
first of all you should decide if you are interested in a mobile eye tracking system or a static one. That is an important start.
In my opinion - here is a nice and valid summary:
Good luck with your research!
Zuzana
  • asked a question related to Eye Movements
Question
1 answer
I was wondering what the typical scalp topography of a vertical saccade on the MEG signal is. Is it similar to a blink?
Attached to this question is a blink topography. Could the second topography be a vertical saccade?
Relevant answer
Answer
I do not know this topic. I'm sorry.
  • asked a question related to Eye Movements
Question
6 answers
A patient suffers from tinnitus. She reports a causal dependence between her tinnitus and specific movements of the homolateral eye!
I know that the eardrum is made of collagen (type II), as are some parts of the eye, but it is difficult to guess an explanation connecting these two observations...
Have you any idea about that ?
Relevant answer
Answer
Tinnitus volume may fluctuate intermittently with certain eye movements, and with jaw and neck movements, including teeth grinding and yawning.
  • asked a question related to Eye Movements
Question
9 answers
Based on recommendations, I am working on designing my experiment using Psychtoolbox on the Windows 10 platform.
I have been encountering countless synchronization failure messages because the VBL isn't working properly on my laptop. Since I want to design an accurate stimulus presentation, I have the following questions:
1. If you have used Psychtoolbox on Windows, how did you get around the synchronization problem? Or is it better to switch to a Mac with Retina display?
2. For recording eye movements during visual search, is there a general guideline on what photos to use, how many, and for how long? Is it better to display what the user needs to search for before the image, or can we tell them verbally (any significant difference)?
The same question goes for recording eye movements during scene memorization: do we ask the user to memorize a certain number of items in a photo?
3. Also for scene memorization, I have read in one paper that they performed a follow-up test to check how much the user memorized from the scene. Is that necessary, or can I skip that step?
P.S.: I aim at recording eye movements as a basic step for classifying visual tasks for later use in an AR setting.
Thank you in advance for any help
Relevant answer
Answer
To get back to your questions (I can only try to answer the first two):
Using PTB on a laptop is problematic if you do not have a dedicated graphics card. Most (!) laptops only have onboard GCs, which are low quality and will cause sync failures. This is more often the case for Windows systems. Here, especially Windows 7/10 are cumbersome, as you cannot turn off the window manager anymore. In addition, LCDs are generally less timing friendly than CRTs - to be very polite. The best setup would be to use a Linux system with a CRT monitor (for the full "oculomotor nerd setup" take a look at Carpenter's book Vision Research). Since this is again a switch-the-system answer, a solution for your setup would be to add a photodiode to the screen, for which you record the output as a synchronized stream alongside your eye data. This gives you precise timing information for your stimuli.
There are also very good resources from PTB on that topic:
Another useful resource for writing experiments in PTB is Peter Scarfe:
And finally the work by M. Kleiner is very helpful (see pub).
For the second question, I would just advise to be aware that using a visual example is easier (as it instantaneously gives you a template to look for) but will prime a location. As you will most likely start in the center of the screen anyway, that should not be a big problem - just keep it in mind.
Greetings, David
  • asked a question related to Eye Movements
Question
4 answers
I want to train a classifier to distinguish between three visual tasks based on specific features of recorded eye movements.
According to what I have read so far, the training set consists of instances with their appropriate labels.
My question is, how do I determine the appropriate label? Is it done manually during the experiment, where I ask the user to perform a certain visual task, record the measurements, and label the instances knowingly?
Relevant answer
Answer
For the training set, you have to provide each visual task's features along with its label. For testing, you provide only the features of the test visual task, without labels. This comes under supervised classification.
Record each visual task separately and save the files named according to the class; then you can easily discriminate between tasks and assign labels at training time.
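To make this labelling workflow concrete, here is a minimal, self-contained Python sketch; the feature values and task names are purely hypothetical, and a real study would use a proper classifier (e.g. an SVM) rather than this toy nearest-centroid rule:

```python
import math

# Hypothetical per-recording feature vectors (e.g. mean fixation duration in ms,
# mean saccade amplitude in deg). Labels are known because each visual task was
# recorded in a separate, named session.
train = [
    ((250.0, 2.1), "reading"),
    ((240.0, 2.3), "reading"),
    ((180.0, 6.0), "search"),
    ((190.0, 5.5), "search"),
]

def classify(features):
    """Nearest-centroid rule: label a test instance by the closest class mean."""
    classes = {}
    for x, label in train:
        classes.setdefault(label, []).append(x)
    centroids = {
        label: tuple(sum(v) / len(xs) for v in zip(*xs))
        for label, xs in classes.items()
    }
    return min(centroids, key=lambda lab: math.dist(features, centroids[lab]))

print(classify((245.0, 2.0)))  # closest to the "reading" centroid
```

The point is only that labels are assigned at recording time (one task per named session), while at test time the classifier sees features alone.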
  • asked a question related to Eye Movements
Question
3 answers
Hi, could anybody comment on the performance of the Dikablis eye tracker? Also, does anybody have experience of dealing with Argus Science since they took over from ASL?
Many thanks for your help and advice,
Sheree
Relevant answer
Answer
Hi Sheree,
I'm currently using the Dikablis Professional Glasses for my project. As you may already be aware Ergoneers has two models i.e. Professional (Binocular, 60Hz, Full HD standard lens field camera) and Essential Glasses (Monocular, 50Hz, 768x576 Fish-eye lens scene camera). I have tried both systems. I'm not sure what aspects of performance you are referring to. Each eye tracker has its own pros and cons; purchasing a system ultimately depends on your questions and the type of variables you're trying to extract.
That said, the pupil tracking technology is fairly impressive in my opinion. I have had bespectacled elderly participants with scratched lenses coming through. Additionally, I have even had polarised lenses attached to their spectacles. So far, I haven't had any major problems with detecting pupils, calibrating the system and obtaining data. I would say it's fairly easy and straightforward to use.
Regardless of the type of scientific research (with the assumption of extracting saccadic and fixational data), I probably won't recommend the Essential Glasses due to substantial lens distortion from the Fish-eye field camera. This has implications for calculating saccadic amplitude and velocity, which, ultimately has compounding effects on other variables including fixation durations etc. I would say the quality of the scene camera images from the Essential glasses is not comparable with the full HD images from the Professional glasses. 
I've had issues with eye camera 'shakes' when participants walk around, due to the design of the hardware, and this creates a lot of false positives (saccades). Some of the data are just not physiologically possible. Unfortunately, the Ergoneers D-Lab software adopts a very simplistic way of parsing data, i.e. solely based on velocity thresholds, and doesn't allow manual correction of those errors. Currently, I'm not able to use the software to analyse my data.
Hope this helps...
Stacy
  • asked a question related to Eye Movements
Question
4 answers
I know as an artist that the human eye is drawn to the focal point at first sight, and we have to put the main object at that point, or perhaps contrast it. But I have to find some research about this, and still couldn't find any.
Thank you,
Relevant answer
Answer
Thank you so much!
  • asked a question related to Eye Movements
Question
2 answers
May I know how the fixation detection algorithm is affected by horizontal and vertical FOV? Should I restrict these parameters in the eye tracker software to a certain limit to get more accurate results?
Relevant answer
Answer
What can I do exactly? Can you clarify?
  • asked a question related to Eye Movements
Question
1 answer
I am using Pupil Labs binocular eye tracker for recording different eye movements. According to the documentation, the gaze position and fixation position are given in normalized x, y  screen coordinates (if the calibrated area is the screen).
I have two questions:
1. Will eye movement measures that include fixation position, such as Area of Interest (AOI) based measurements, still be accurate?
2. How can I determine an AOI in this case, since the AOI will reflect a part of the visual stimulus displayed on the monitor (in pixels)? I read through their open source code and still didn't get how they normalized the coordinates.
Is there a standard method for normalization to follow?
Relevant answer
Answer
Jihad, I do not know this device, but what I read in the Pupil Labs Wiki seems fairly clear:
"We use a normalized coordinate system with the origin 0,0 at the bottom left and 1,1 at the top right" (https://github.com/pupil-labs/pupil/wiki/Data-Format)
Even if the data format within the eye tracking software were only single precision, the numerical accuracy would be 0.000001, i.e. one could distinguish one million view points in each direction. Note that a full HD display has roughly two thousand (sorry, I had written "million" before my edit) pixels in each row, so that even in the worst case of only single precision one would be able to describe the view point on the display to within two thousandths of a pixel.
Given the measurement accuracy of only 0.6 deg of the device (the display will subtend no more than 30 deg with a properly positioned subject), I think there is no need to worry anyway.
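As a concrete illustration of working with this coordinate convention, here is a small Python sketch that converts normalized gaze coordinates (origin bottom-left, per the wiki quote above) into pixel coordinates (origin top-left, as most image/AOI tooling expects) and tests them against a rectangular AOI; the screen size and AOI values are just examples:

```python
def norm_to_pixels(nx, ny, width, height):
    """Convert normalized gaze (origin bottom-left, per the Pupil Labs wiki
    quote above) to pixel coordinates with origin top-left."""
    return nx * width, (1.0 - ny) * height

def in_aoi(px, py, aoi):
    """Check whether a pixel coordinate falls inside an AOI rectangle,
    given as (left, top, right, bottom) in pixels."""
    left, top, right, bottom = aoi
    return left <= px <= right and top <= py <= bottom

x, y = norm_to_pixels(0.5, 0.5, 1920, 1080)  # screen centre -> (960.0, 540.0)
```

With this, AOIs can be defined directly in stimulus pixels, and each gaze or fixation sample converted once before AOI-based measures are computed.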
  • asked a question related to Eye Movements
Question
3 answers
I'm trying to find a different way of measuring depth of anesthesia. Is there a slim, non-invasive device I could use on anesthetized patients, under surgical drapes, to record eye movements?
Relevant answer
Answer
Hey Carine,
EOG is the only option to record eye position with closed eyes over a longer period of time (that is why it is used in many sleep labs). SSC (the scleral search coil) is problematic (in humans) since it is invasive and it also operates with magnetic fields, which might be prohibitive under clinical conditions (e.g. in the ER).
However there are several problems with EOG. As Andreas already mentioned there might be differences in the measured peak velocity w.r.t. other methods. More prominently, EOG has the highest noise level of the current recording techniques which limits the achievable accuracy to roughly 1deg. Also as Andreas said, EOG has the tendency for strong baseline drift which requires you to high-pass filter your signal - which is a bad thing if you want to record slow eye movements. Brown recommended using 0.1Hz as the lower frequency. However we use much less (around 0.01Hz) to avoid distortions in stable fixations. Relating to the baseline drift, EOG is very light sensitive, and sudden changes in luminance cause long lasting, slow oscillations in the signal (see North).
All that being said, EOG is easily applicable in a clinical setting (as you already have the electrodes and experts on their placement) and it will also provide you with blink parameters (which might be interesting in your context, as they are indicators of drowsiness; see Caffier - although their results were not recorded using EOG, most of their parameters can be extracted from the EOG signal as well). In addition, for your purpose, you probably won't need high accuracy, and baseline drift might not be much of a problem either (as you would only look at the relative signal, not the overall position).
Hope that helps, Greetings, David
EDIT: Somehow the publications have not been added by RG, so here is the list:
  • Brown et al, ISCEV standard for clinical electro-oculography (EOG) 2006. Documenta Ophthalmologica, 2006, 113(3), 205-212
There is an update from 2010, but the basics are in the 2006 version.
  • North, Accuracy and precision of electro-oculographic recording. Investigative Ophthalmology & Visual Science, 1965, 4(3), 343-348
  • Caffier et al, Experimental evaluation of eye-blink parameters as a drowsiness measure. European Journal of Applied Physiology, 2003, 89(3-4), 319-325
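As a rough illustration of the high-pass filtering mentioned above, here is a minimal Python sketch of a first-order high-pass filter with a 0.01 Hz cutoff for removing EOG baseline drift; a real pipeline would typically use a zero-phase, higher-order filter (e.g. via SciPy's `filtfilt`), so treat this purely as a sketch of the idea:

```python
import math

def highpass(signal, fs, fc=0.01):
    """First-order (RC-style) high-pass filter for baseline-drift removal.
    fs: sampling rate in Hz; fc: cutoff in Hz (0.01 Hz, as suggested above).
    Output starts at 0; a constant (DC) input is fully suppressed."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out = [0.0]
    for i in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out
```

The trade-off discussed above is visible in the cutoff choice: the lower fc is, the less slow eye movements are distorted, but the more slowly baseline drift is removed.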
  • asked a question related to Eye Movements
Question
9 answers
I am working with eye movement metrics to infer visual task based on observed eye movement patterns.
One of these metrics is total scanpath length. Based on what I have read, I need to find the length of the movement (saccade) between consecutive fixations. I assumed that this can easily be calculated using Euclidean distance in pixels, then converting it into degrees of visual angle. Is my assumption correct?
When I searched how a scanpath is computed, I encountered some programs where the scanpath is compressed and the similarity between scanpaths is calculated. Do I need to compress a scanpath (by removing repetitive sequences) in order to correctly calculate its length? For what purpose is the similarity between two scanpaths used?
Are there any additional criteria that I should also consider?
Relevant answer
Answer
Hey Jihad!
There is a myriad of different ways of comparing scanpaths, which result in different measures and require different preprocessing stages. A good overview of measures you can use and their specific dis-/advantages is given in Holmqvist's book (chapter III, sections 10/11).
If you have a fixed amount of viewing time (and negligible data loss) for each participant, Euclidean distance will serve you well. But this also depends on your research question (e.g. are you actually interested in the travel distance, or is it more important to measure the area covered by the scanpath?).
For comparing scanpaths between different tasks (or views) I would recommend some more elaborate metric (e.g. recurrence quantification, MultiMatch).
Hope that helps, Greetings, David
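To make the Euclidean-distance approach from the question concrete, here is a minimal Python sketch that sums fixation-to-fixation distances and converts them to degrees of visual angle; the screen geometry parameters (screen width in pixels and cm, viewing distance) are assumptions you would replace with your own setup values:

```python
import math

def pixels_to_degrees(d_px, screen_w_px, screen_w_cm, view_dist_cm):
    """Convert a distance in pixels to degrees of visual angle, using the
    screen width to get the cm-per-pixel scale and the viewing distance
    to compute the subtended angle."""
    d_cm = d_px * (screen_w_cm / screen_w_px)
    return math.degrees(2.0 * math.atan2(d_cm / 2.0, view_dist_cm))

def scanpath_length_deg(fixations, screen_w_px, screen_w_cm, view_dist_cm):
    """Total scanpath length: sum of Euclidean distances between
    consecutive fixation centres (in pixels), converted to visual angle."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        d_px = math.hypot(x1 - x0, y1 - y0)
        total += pixels_to_degrees(d_px, screen_w_px, screen_w_cm, view_dist_cm)
    return total
```

Note this converts each inter-fixation distance separately, which is slightly more accurate than converting the pixel sum once, since visual angle is not perfectly linear in screen distance.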
  • asked a question related to Eye Movements
Question
1 answer
I am wondering whether, in patients with severe retinal dystrophies, nystagmus (repetitive, jerky eye movements) functions similarly to stimming in autism. Since the brain deems the visual information perceived by the remaining photoreceptors to be useless, nystagmus develops as a way to drown out this aberrant information. Because nystagmus is repetitive and disrupts fixation, the information obtained by the eyes can be tuned out. The movement of the eyes is also attenuated.
Relevant answer
Answer
Stimming in autism is associated with other autistic symptoms, but there do not seem to be any psychological disorders associated with congenital nystagmus.
  • asked a question related to Eye Movements
Question
5 answers
I have eye gaze data collected with an eye-tracker that uses a dispersion-based algorithm to identify fixations (sampling rate "only" 50Hz), i.e. the built-in detector looks first for fixations and the other events (blinks and saccades) are derived. Thus, I know I should look at my saccades with caution! I was wondering whether there are some criteria/guidelines to help identify plausible vs. unlikely saccades, e.g. in terms of duration, amplitude, velocity? I read that saccades when looking at a screen/reading last between 20 and 200 ms. In my dataset I have some "saccades" of over 2000 ms.
Relevant answer
Answer
There are many methods to identify saccades and determine onset and offset. This is what I used in Smeets, J. B. J., & Hooge, I. T. C. (2003). Nature of variability in saccades. Journal of Neurophysiology, 90, 12-20.
Saccade detection was done by a Matlab program that marked saccades by a velocity threshold of 75°/s. After detection of all saccades, the program searched for onsets and offsets of the saccade. First it determined the averaged absolute velocity of a 100-ms period of fixation that started approximately 200 ms preceding the saccade. Onsets and offsets were determined by searching the time when absolute saccade velocity reached a value 3 SDs higher than the absolute velocity during the fixation.
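A minimal Python sketch of the detection scheme quoted above (peak-velocity threshold of 75°/s, then onset refinement against a fixation baseline of mean + 3 SD; offset refinement would be symmetric) might look as follows; it assumes a 1-D position trace in degrees, and the window parameters are illustrative:

```python
import statistics

def detect_saccades(positions, fs, peak_thresh=75.0, win_ms=100, pre_ms=200):
    """Velocity-threshold saccade detection in the spirit of Smeets & Hooge
    (2003). positions: 1-D eye position in degrees; fs: sampling rate in Hz.
    Samples whose absolute velocity exceeds peak_thresh (deg/s) mark a
    saccade; the onset is then refined by searching backwards for where
    velocity first exceeded the fixation baseline mean + 3 SD, computed over
    a win_ms window starting pre_ms before the threshold crossing."""
    dt = 1.0 / fs
    vel = [abs(positions[i + 1] - positions[i]) / dt
           for i in range(len(positions) - 1)]
    win = int(win_ms / 1000 * fs)
    pre = int(pre_ms / 1000 * fs)
    onsets = []
    i = 0
    while i < len(vel):
        if vel[i] > peak_thresh:
            lo = max(0, i - pre)
            base = vel[lo:lo + win] or vel[:win]
            thresh = statistics.mean(base) + 3 * statistics.pstdev(base)
            j = i
            while j > 0 and vel[j - 1] > thresh:
                j -= 1
            onsets.append(j)
            while i < len(vel) and vel[i] > peak_thresh:  # skip rest of saccade
                i += 1
        i += 1
    return onsets
```

Applied to the 50 Hz dispersion-based data in the question, such velocity criteria would also flag the implausible 2000 ms "saccades", since sustained above-threshold velocity for that long is not physiological.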
  • asked a question related to Eye Movements
Question
8 answers
I am a psychotherapist looking to learn more about the visual field and the brain as I am a trainer and clinician in Brainspotting Therapy. We use a single fixed gaze point resonant with traumatic distress and get amazing positive results with clients. Looking to find ways to learn more about the brain and vision to design a study for my PhD.
  • asked a question related to Eye Movements
Question
6 answers
Dear all, we want to use eye-tracking for night drives in automobiles. Which eye-tracking system would you recommend? And why? Thanks in advance for your answers. Ulrich
Relevant answer
Answer
Dear Andreas, Thank you very much, Fantastic advice. Have a nice day, Ulrich
  • asked a question related to Eye Movements
Question
5 answers
I am looking for a book or good material on the various statistical methods and visualization techniques that can be applied specifically to eye movement data. Can anyone please suggest one?
Is there any online course for the same?
Thanks in advance
Relevant answer
Answer
Hi Chandrika,
I would recommend the following book as a good basis:
Holmqvist, Kenneth (2011): Eye tracking. A comprehensive guide to methods and measures. Oxford, New York: Oxford University Press.
If you are interested in more specific ones, have a look here:
Antes, James R.; Kristjanson, Arlinda F. (1991): Discriminating artists from nonartists by their eye-fixation patterns. In: Perceptual and Motor Skills 73, pp. 893–894.
Beatty, Jackson (1982): Task-Evoked Pupillary Response, Processing Load, and the Structure of Processing Resources. In: Psychological Bulletin 91 (2), pp. 276–292. Available at: http://content.ebscohost.com/pdf29_30/pdf/ddd/pdh/1982/1982-11578-001.pdf?T=P&P=AN&K=1982-11578-001&S=L&D=pdh&EbscoContent=dGJyMNLe80SeqLI4yOvqOLCmr0yeprBSrqa4TbKWxWXS&ContentCustomer=dGJyMPGqsU61rrZPuePfgeyx44Dt6fIA
Boccignone, Giuseppe; Ferraro, Mario (2014): Ecological Sampling of Gaze Shifts. In: IEEE Transactions on Cybernetics 44 (2), pp. 266–279.
Castelhano, Monica S.; Mack, M. L.; Henderson, John M. (2009): Viewing task influences eye movement control during active scene perception. In: Journal of Vision 9 (3), p. 6. DOI: 10.1167/9.3.6.
Cole, Michael J.; Gwizdka, Jacek; Liu, Chang; Bierig, Ralf; Belkin, Nicholas J.; Zhang, Xiangmin (2011): Task and user effects on reading patterns in information search. In: Interacting with Computers 23 (4), pp. 346–362. DOI: 10.1016/j.intcom.2011.04.007.
Dalmaijer, Edwin S. (2014): Is the low-cost EyeTribe eye tracker any good for research? DOI: 10.7287/PEERJ.PREPRINTS.585V1.
Dewhurst, Richard; Nyström, Marcus; Jarodzka, Halszka; Foulsham, Tom; Johansson, Roger; Holmqvist, Kenneth (2012): It depends on how you look at it: scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. In: Behav Res Methods 44 (4), pp. 1079–1100. DOI: 10.3758/s13428-012-0212-2.
Engbert, Ralf; Trukenbrod, Hans A.; Barthelmé, Simon; Wichmann, Felix A. (2015): Spatial statistics and attentional dynamics in scene viewing. In: Journal of Vision 15 (1). DOI: 10.1167/15.1.14.
Gamito, Pedro Santos Pinto; Rosa, Pedro Joel (Eds.) (2014): I see me, you see me. Inferring cognitive and emotional processes from gazing behaviour. International Conference on Eye Tracking, Visual Cognition and Emotion. Cambridge: Cambridge Scholars Publishing.
Hayhoe, Mary M.; Ballard, Dana H. (2014): Modeling Task Control of Eye Movements. In: Curr. Biol. 24 (13), pp. R622-R628. DOI: 10.1016/j.cub.2014.05.020.
Henderson, John M.; Luke, Steven G. (2014): Stable individual differences in saccadic eye movements during reading, pseudoreading, scene viewing, and scene search. In: Journal of Experimental Psychology: Human Perception and Performance. DOI: 10.1037/a0036330.
Holmqvist, Kenneth; Andrà, Chiara; Lindström, Paulina; Arzarello, Ferdinando; Ferrara, Francesca; Robutti, Ornella; Sabena, Christina (2011): A method for quantifying focused versus overview behavior in AOI sequences. In: Behav Res Methods 43 (4), pp. 987–998. DOI: 10.3758/s13428-011-0104-x.
Holmqvist, Kenneth; Nyström, Marcus; Mulvey, Fiona (2012): Eye tracker data quality: what it is and how to measure it. ETRA 2012. Santa Barbara, CA, USA, 2012.
Hooge, Ignace; Camps, Guido (2013): Scan path entropy and arrow plots: capturing scanning behavior of multiple observers. In: Front. Psychol. 4, p. 996. DOI: 10.3389/fpsyg.2013.00996.
Hyönä, Jukka (2010): The use of eye movements in the study of multimedia learning. In: Learning and Instruction 20 (2), pp. 172–176. DOI: 10.1016/j.learninstruc.2009.02.013.
Hyönä, Jukka; Lorch, Robert F.; Rinck, Mike (2003): Eye Movement Measures to Study Global Text Processing. In: Jukka Hyönä, Ralph Radach and Heiner Deubel (Eds.): The mind's eye. Cognitive and applied aspects of eye movement research. Amsterdam, Boston: North-Holland, pp. 313–334.
Jacob, Robert J. K.; Karn, Keith S. (2003): Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises. In: Jukka Hyönä, Ralph Radach and Heiner Deubel (Eds.): The mind's eye. Cognitive and applied aspects of eye movement research. Amsterdam, Boston: North-Holland, pp. 573–605.
Lai, Meng-Lung; Tsai, Meng-Jung; Yang, Fang-Ying; Hsu, Chung-Yuan; Liu, Tzu-Chien; Lee, Silvia Wen-Yu et al. (2013): A review of using eye-tracking technology in exploring learning from 2000 to 2012. In: Educational Research Review 10, pp. 90–115. DOI: 10.1016/j.edurev.2013.10.001.
Larsson, Linnéa; Nyström, Marcus; Stridh, Martin (2013): Detection of saccades and postsaccadic oscillations in the presence of smooth pursuit. In: IEEE transactions on bio-medical engineering 60 (9), pp. 2484–2493. DOI: 10.1109/TBME.2013.2258918.
Liu, Chang; Liu, Jingjing; Belkin, Nicholas J.; Cole, Michael J.; Gwizdka, Jacek (2011): Using Dwell Time as an Implicit Measure of Usefulness in Different Task Types. ASIST. New Orleans, 2011.
Liu, Jingjing; Gwizdka, Jacek; Liu, Chang; Belkin, Nicholas J. (2010): Predicting task difficulty for different task types. In: Proc. Am. Soc. Info. Sci. Tech. 47 (1), pp. 1–10. DOI: 10.1002/meet.14504701173.
Marti, Sébastien; Bayet, Laurie; Dehaene, Stanislas (2014): Subjective report of eye fixations during serial search. In: Consciousness and cognition 33C, pp. 1–15. DOI: 10.1016/j.concog.2014.11.007.
Mayer, Richard E. (2010): Unique contributions of eye-tracking research to the study of learning with graphics. In: Learning and Instruction 20 (2), pp. 167–171. DOI: 10.1016/j.learninstruc.2009.02.012.
Miwa, Makiko; Egusa, Yuka; Saito, Hitomi; Takaku, Masao; Terai, Hitoshi; Kando, Noriko (2011): A method to capture information encountering embedded in exploratory Web searches. In: Information Research 16 (3), p. 487. Available at: http://www.informationr.net/ir/16-3/paper487.html
Morimoto, Carlos Hitoshi (Ed.) (2010): Proceedings of the 2010 Symposium on Eye-Tracking Research and Applications. New York: Association for Computing Machinery.
Mulvey, Fiona; Zemblys, Raimondas; Larsson, Linnéa; Holmqvist, Kenneth: Clarifying the validity of eye movement measures from various eye tracker types. Vision Sciences Society Annual Meeting. Vision Sciences Society. 2014.
Nyström, Marcus; Andersson, Richard; Holmqvist, Kenneth; van de Weijer, Joost (2013): The influence of calibration method and eye physiology on eyetracking data quality. In: Behav Res Methods 45 (1), pp. 272–288. DOI: 10.3758/s13428-012-0247-4.
Otero-Millan, Jorge; Macknik, Stephen L.; Langston, Rachel E.; Martinez-Conde, Susana (2013): An oculomotor continuum from exploration to fixation. In: Proceedings of the National Academy of Sciences of the United States of America 110 (15), pp. 6175–6180. DOI: 10.1073/pnas.1222715110.
Radach, Ralph; Kennedy, Alan (2004): Theoretical perspectives on eye movements in reading: Past controversies, current issues, and an agenda for future research. In: European Journal of Cognitive Psychology 16 (1-2), pp. 3–26. DOI: 10.1080/09541440340000295.
Radach, Ralph; Kennedy, Alan (2013): Eye movements in reading: some theoretical context. In: Q J Exp Psychol (Hove) 66 (3), pp. 429–452. DOI: 10.1080/17470218.2012.750676.
Raney, Gary E.; Campbell, Spencer J.; Bovee, Joanna C. (2014): Using eye movements to evaluate the cognitive processes involved in text comprehension. In: J Vis Exp (83), p. e50780. DOI: 10.3791/50780.
Richardson, Daniel C.; Matlock, Teenie; Randall Crosby, Jennifer; Dale, Rick (2006): Figurative, spontaneous, interactive and potentially offensive: Three projects with rich visual and linguistic stimuli (Cognitive Science 2006 Workshop: What have eye movements told us so far, and what is next?). Available at: http://cognaction.org/rdmaterials/pdfs/confpapers/richardson_et_al_2006.pdf.
San Agustin, Javier; Skovsgaard, Henrik; Mollenbach, Emilie; Barret, Maria; Tall, Martin; Witzner Hansen, Dan; Paulin Hansen, John (2010): Evaluation of a low-cost open-source gaze tracker. In: Carlos Hitoshi Morimoto (Ed.): Proceedings of the 2010 Symposium on Eye-Tracking Research and Applications. New York: Association for Computing Machinery, pp. 77–80.
Santos, Inês P.; Moniz-Pereira, Leonor (2014): Gaze Fixation Patterns in a Route with Obstacles: Comparison Between Young and Elderly. In: Gamito, Pedro Santos Pinto and Pedro Joel Rosa (Eds.): I see me, you see me. Inferring cognitive and emotional processes from gazing behaviour. Cambridge: Cambridge Scholars Publishing, pp. 86–103.
Thaler, L.; Schütz, A. C.; Goodale, M. A.; Gegenfurtner, K. R. (2013): What is the best fixation target? The effect of target shape on stability of fixational eye movements. In: Vision Res. 76, pp. 31–42. DOI: 10.1016/j.visres.2012.10.012.
van der Lans, Ralf; Wedel, Michel; Pieters, Rik (2011): Defining eye-fixation sequences across individuals and tasks: the Binocular-Individual Threshold (BIT) algorithm. In: Behav Res Methods 43 (1), pp. 239–257. DOI: 10.3758/s13428-010-0031-2.
van Gog, Tamara; Kester, Liesbeth; Nievelstein, Fleurie; Giesbers, Bas; Paas, Fred G. (2009): Uncovering cognitive processes: Different techniques that can contribute to cognitive load research and instruction. In: Computers in Human Behavior 25 (2), pp. 325–331. DOI: 10.1016/j.chb.2008.12.021.
Vandeberg, Lisa; Bouwmeester, Samantha; Bocanegra, Bruno R.; Zwaan, Rolf A. (2013): Detecting cognitive interactions through eye movement transitions. In: Journal of Memory and Language 69 (3), pp. 445–460. DOI: 10.1016/j.jml.2013.05.006.
Veneri, Giacomo; Rosini, Francesca; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra (2011): Evaluating gaze control on a multi-target sequencing task: the distribution of fixations is evidence of exploration optimization. In: Computers in biology and medicine 42, pp. 235–244.
Wang, Hsiao-shen; Chen, Yi-Ting; Lin, Chih-Hung (2014): The learning benefits of using eye trackers to enhance the geospatial abilities of elementary school students. In: Br J Educ Technol 45 (2), pp. 340–355. DOI: 10.1111/bjet.12011.
Wass, S. V.; Smith, T. J.; Johnson, M. H. (2013): Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults. In: Behav Res Methods 45 (1), pp. 229–250. DOI: 10.3758/s13428-012-0245-6.
Wengelin, Åsa; Torrance, Mark; Holmqvist, Kenneth; Simpson, Sol; Galbraith, David; Johansson, Victoria; Johansson, Roger (2009): Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production. In: Behav Res Methods 41 (2), pp. 337–351. DOI: 10.3758/BRM.41.2.337.
White, Sarah J.; Staub, Adrian (2012): The distribution of fixation durations during reading: Effects of stimulus quality. In: Journal of Experimental Psychology: Human Perception and Performance 38 (3), pp. 603–617. DOI: 10.1037/a0025338.
Hope, it will help.
All the best!
Matthias
  • asked a question related to Eye Movements
Question
4 answers
I would like to try out how fixation correction works using software. Please suggest some software tools.
Relevant answer
Answer
With my pleasure, all the best. Vladimir
  • asked a question related to Eye Movements
Question
3 answers
I know this is one type of chronobiological disorder. Can anyone tell me the actual cause of this type of disorder? I have been experiencing this kind of problem myself for the last few months.
Any suggestions would be much appreciated.
Relevant answer
Answer
Hello,
REM sleep is also known as paradoxical sleep (PS), and sometimes desynchronized sleep, because of physiological similarities to waking states, including rapid, low-voltage desynchronized brain waves. REM sleep is mainly due to strong activity of the mesopontine cholinergic system (ascending cholinergic systems), which inhibits the thalamic reticular nucleus and induces an activation of the thalamocortical loops, causing the strong cortical activity observed. The cholinergic neurons also induce the activation of glycinergic neurons (through glutamatergic activity on the reticular magnocellular nucleus), which causes inhibition of spinal motoneurons (paralysis). The rapid eye movements are due to superior colliculus activation by the cholinergic system, which stimulates the paramedian pontine reticular formation (PPRF), responsible for the direction and frequency of the eye movements. In REM sleep there is also a shutdown of the monoaminergic system.
In NREM sleep, the mesopontine cholinergic system is inactive. The thalamic reticular nucleus is active and exerts a strong inhibition on the thalamocortical loops, which causes low cortical activity and slow-frequency brain waves. The monoaminergic system is also active.
From a chronobiological point of view, there are many factors which could cause sleep disorders. The short days of winter, with the decrease in photoperiod, for example, can induce a specific kind of depression called seasonal affective disorder (SAD), which could be caused by modification of the monoaminergic system, notably serotonin (strongly implicated in the sleep/wake rhythm). An unsteady sleep/wake rhythm or chronic stress can also desynchronise your circadian rhythm and induce sleep disorders. More rarely, very specific mutations of clock genes (notably per) have been described and seem to be responsible for several complex insomnia or hypersomnia cases.
The sleep/wake rhythm is the result of several complex neural activities involving many brain structures, so there are many possible causes of dysfunction. Many of the mechanisms are still poorly understood today.
  • asked a question related to Eye Movements
Question
35 answers
Reading the introduction to:
Incidental memory for parts of scenes from eye movements
Jenn H. Olejarczyk · Steven G. Luke · John M. Henderson
I stumbled, as I always do, at a very standard phrasing which referred to the eyes 'taking in' 'visual information'. At some visceral level, I simply cannot accept this formulation. Do the eyes 'take in'? What is 'visual information'?
To be clear : my question is about axiomatic assumptions and paradigms that define the way we think.  For the last sixty years or so, the cognitivist- computationalist paradigm has been the dominant explanation of human cognition - at least in the academy. This paradigm, as we know, is based on analogies to reasoning machines. And while there is abounding evidence that the brain is not a computer and scant evidence that it is, we still use electro-industrial metaphors of input and output, and of thinking as internal reasoning on  mental representations.
I am not persuaded by this. I feel that enactivist and Gibsonian approaches get closer to a fair description of what is really going on. These descriptions are almost incomprehensible to cognitivists, as the fundamental ideas of these paradigms are incommensurable.
Could it be that we find the brain mysterious in part because we apply inappropriate structuring metaphors which confound our inquiry? 
Relevant answer
Answer
Simon, you're absolutely right to be critical and to question existing research paradigms. The problem with any new paradigm is that, at first, it may give a sense of deeper insight, but later starts to wear off, until it may indeed impede further progress. Yet, in the absence of a better paradigm, one often keeps on working with it. This is troublesome especially if its paradigmatic ideas are mistaken for the "truth" (whatever that may be). By the way, as I argued in my CogProc paper, I do not think that the cognitivist/computationalist paradigm has worn off, but I do think that it needs to connect to complementary concepts and ideas from, e.g., connectionism, dynamic systems theory, and neuroscience.
You're also absolutely right that the terminology in any paradigm carries a lot of questionable ontological baggage. This is nagging, but I think also inevitable in our continuing quest for appropriate analogies, metaphors, and models by which we can only approximate the "real thing" (whatever that may be). In this sense, I am open to what is called a metaphysical (or ontological) reading of pluralism (which assumes that a "grand unifying theory" is possible), but for the moment, I adopt an explanatory (or epistemological) reading of pluralism – which, more pragmatically and in the spirit of David Marr, focuses on differences and parallels between existing explanations at different levels of description to see if and how they might be combined. My hope is that, eventually, this will lead to new and better thinking structures.
– Peter
  • asked a question related to Eye Movements
Question
4 answers
I want to measure the spatial movement of the eye simultaneously with EEG recording. What are the available methods to do this? The eye tracker should be placed away from the brain cap, i.e. remotely.
Relevant answer
Answer
Hey Sanjaya,
so you want a remote eye tracker (presumably head-free, to avoid putting pressure on the electrodes through the headstrap).
There are a lot of eye trackers which can do that. The SMI RED series seems to be very robust, and they have good support.
I also recommend looking at the other questions on "which eye tracker to buy", where other options are elaborated.
Best wishes, David
P.S.: On a sidenote: I am sure you are aware of the saccade-related artifacts in EEG.
  • asked a question related to Eye Movements
Question
4 answers
Piloting an experiment, ALL participants show a difference between static vs. illusory motion conditions following the initial light reflex. 6/9 participants show greater dilation to illusory motion (as predicted), 3/9 show greater dilation to static images (strong reverse effect).
This is counter-intuitive to the literature on pupil dilation as an exploratory/arousal mechanism... Have any studies been conducted on such vast individual variability in the pupil's arousal response? In essence, is there anything to support testing the absolute difference in pupil response, given that every participant shows a difference between conditions?
Any insight appreciated!
Relevant answer
Answer
Dear Steve,
Following up on Abdul Aziz's comment, I think it would be worth having a look at the eye movements in your task. They could certainly affect your pupil size estimates. Was fixation enforced?
Another important question is how you test the significance of your effect subject-wise. Pupillary data is a bit dangerous because of strong autocorrelation. This could inflate your effects artificially...
Best,
Alex
  • asked a question related to Eye Movements
Question
9 answers
If it exists, which is a good behavioral parameter, even indirect, to do this?
Relevant answer
Answer
Hi Analisa,
yes, as Vittorio Porciatti wrote, there is a reliable way to do that. It's the Westheimer paradigm (note the spelling of Westheimer), and the field size that is measured by it is called the "perceptive field size". Oehler (1985) has even used it with monkeys, and the seminal paper is by Lothar Spillmann. There is a chapter on it in my review on peripheral vision:
(or go to my website, ww.hans.strasburger.de)
  • asked a question related to Eye Movements
Question
10 answers
Saccade velocity is the velocity with which the eyes move from one fixation point to another.
Relevant answer
Answer
Dear Riti!
Short answer: Use a two-point central difference (e.g. GRADIENT in Matlab).
Long answer:
Raymondas is partially right. The simple sample-to-sample difference is the easiest way. However, it will significantly increase your signal noise.
Using a Savitzky-Golay filter is less noisy and seems to be very good at keeping the peak velocity of the saccade undistorted (since the velocity trace is approximately a polynomial function; see e.g. the lecture notes by Schaefer and the paper by Bromba).
However, a simpler but also quite efficient method is the two-point central difference (basically just an extension of the sample-to-sample difference), since it simultaneously smooths the data. It has been shown to yield very good (if not the best) results for computing velocity traces (see the paper by Bahill).
In general, I can recommend reading the paper by Inchingolo, who summarizes the processing steps quite well.
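As an illustration, a minimal Python sketch of the two-point central difference (the function name and sample data are mine, not from any of the cited papers):

```python
def central_difference(x, dt):
    """Two-point central difference: v[i] = (x[i+1] - x[i-1]) / (2*dt).
    Averaging the forward and backward differences smooths noise;
    one-sided differences are used at the endpoints."""
    n = len(x)
    v = [0.0] * n
    for i in range(1, n - 1):
        v[i] = (x[i + 1] - x[i - 1]) / (2 * dt)
    v[0] = (x[1] - x[0]) / dt      # forward difference at the start
    v[-1] = (x[-1] - x[-2]) / dt   # backward difference at the end
    return v

# Gaze position in degrees, sampled at 1000 Hz (dt = 1 ms):
pos = [0.0, 0.1, 0.4, 0.9, 1.6]
vel = central_difference(pos, dt=0.001)  # velocity trace in deg/s
```

For uniformly spaced samples this is what MATLAB's GRADIENT computes.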
Greetings, David
  • asked a question related to Eye Movements
Question
4 answers
Has anyone had problems of synchrony between an eye-tracking system and the stimulus delivery software sending log messages to the eye-tracking system logs? And if so, what the possible sources of the problem could be, how to avoid the problem, and how to deal with it?
We have an MR-compatible eye tracker from MR Technologies hooked onto Arrington Research's ViewPoint EyeTracker software on one PC. On a different but connected PC, Neurobehavioral Systems Presentation software controls stimulus delivery. I have Presentation communicate to the eye tracking software's logs when my video stimulus starts and ends because I was told it was more reliable to manually start the eye tracking system rather than trying to control it through commands triggered from Presentation. As far as I understand, the eye-tracking system logs a line of data every 33ms even when there's tracking loss. I expect that I should have the same number of eye-tracking data lines between my video log markers -- so if my videos are 33 fps, I assume I should have the same number of eye data points as frames for a given video -- is that correct?
However, the eye-tracking data corresponding to a video is on the order of up to 3 seconds (1-80 data points) longer than the video. For example, for a random video and according to the Presentation log files for some random 2 subjects:
video X: 102 frames (25fps) = 4.1 sec length of video (according to Presentation log files and video)
sub1: 182 lines eye data @ 30fps = 6 sec of data marked as recorded during the  length of video
sub2: 166 lines eye data @ 30fps = 5.5 sec
I am very hesitant to assume that the "video start" log marker I had Presentation send to the eye-tracker system log files really corresponds to when the video started (and then to take only as much eye data as the video length, ignoring the "video end" log marker) -- or could this be a safe assumption?
Thanks in advance for any help and explanations!
Relevant answer
Answer
Dear Gina,
we used the Arrington ViewPoint software in our lab with the 220USB system.
Some thoughts why you might get different frame numbers:
1) It is not a constant-rate camera: it actually runs at around 220 Hz, but not at exactly 220 Hz. The same may be the case for your camera (although there might be less variability in the 30 Hz version).
2) I found the marker insertion in the ViewPoint software rather unreliable (we used the remote Ethernet connection, and the markers were sometimes dropped or clipped at the start/end, even with a safety margin of starting the eye tracker some hundred ms before the experiment).
In conclusion, if your video start/end markers do not coincide with the data acquisition start/end markers in the ViewPoint protocol, you are safe to discard the rest of the eye data. If they do coincide, then the data is probably corrupted.
Anyway, keep in mind that the variable rate of the eye tracker will not guarantee a one-to-one correspondence between video frames and eye tracker samples (the important property is the "delta time" in the ViewPoint protocol, which gives you the time interval between successive samples; if there has been a rate change, there will be variability in these numbers).
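To make the delta-time point concrete, a small Python sketch (the function names are mine; it assumes you have parsed the "delta time" column of the ViewPoint log into seconds):

```python
def recorded_duration(delta_times):
    """Total recorded time = sum of the inter-sample intervals."""
    return sum(delta_times)

def matches_video(delta_times, video_duration_s, tol_s=0.1):
    """True if the recorded span matches the video length within tol_s."""
    return abs(recorded_duration(delta_times) - video_duration_s) <= tol_s

# Sub1 from the question: 182 samples (181 intervals) at a nominal 30 Hz
# span about 6 s, while the video itself is only 4.1 s long.
deltas = [1 / 30.0] * 181
print(matches_video(deltas, 4.1))  # roughly 1.9 s of extra data
```

Summing the actual delta times, rather than assuming a fixed sample rate, is what protects you against the variable-rate issue described above.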
Hope that helps, Greetings, David
  • asked a question related to Eye Movements
Question
11 answers
As we know, scientists can now predict your dreams from the signals your brain emits when you are asleep. Now the question is: "Are there any connections between the eye movements, the signals they get from your brain, and the dream you have?"
Relevant answer
Answer
Yes. This really cool study measured single neuron activity in temporal cortex during dreaming and showed that eye movements during dreams seem to have a similar function as during normal perception. http://www.nature.com/ncomms/2015/150811/ncomms8884/full/ncomms8884.html
  • asked a question related to Eye Movements
Question
5 answers
For a long time now, I have been trying to figure out how marijuana enables me to gain some access to vision, even though I am ordinarily totally blind with light perception. This involves translating the subjective experience into language others can understand, and pinpointing the difference between my experience with and without cannabinoids. Recently, I have arrived at the insight that while under the influence of cannabinoids, I am able to maintain momentum in my eye movements. When I move my eyes, they seem to obey the laws of Newtonian physics, in that if I build up the velocity of my eye movements, they will maintain that velocity. When I "push" or "throw" my eyes forward, the effort needed for that push or throw gives me information about my surroundings (i.e. white or reflective objects require very little effort, and my eyes slide across them with very little friction). When I am not under the influence of cannabinoids, I don't feel the kickback or reverberation when I move my eyes, so they feel numb and heavy and impossible to control. Currently, I am developing an interactive eye training program that gives me auditory feedback about my gaze direction. However, this is just a start, and I very much think that some sort of wearable device is needed. I am starting to think that the function of this device would be to increase my ability to maintain this sense of momentum without the influence of marijuana, but what would that look like on a practical level? Any thoughts are welcome.
Relevant answer
Answer
A couple of issues.
There is always a problem trying to understand a first-hand experience (perceptual/cognitive/emotional) from a third-person perspective. So it's not clear to me what you mean by momentum. This is particularly important for nystagmus as the evidence is that how and what nystagmus sufferers feel, think, experience, can have a material effect on the nystagmus itself. So for example, if cannabinoids have the general effect of calming someone, this is likely to damp the nystagmus waveform (at least to some extent). But of course, in these circumstances, it may also alter the perception of what's going on, independent of any effect on the nystagmus.
For these reasons, trying to draw any conclusions without an actual recording of the nystagmus (ie an objective recording) is probably unwise.
Secondly, the balance of evidence is that we have little or no conscious perception of what our eyes are actually doing from moment to moment. Intramuscular signals from the extraocular muscles that move the eyes in the orbits are available to the central nervous system, but it remains unclear what these are used for - over the years there have been lots of suggestions (including suggestions about their role in nystagmus, as it happens). But what you feel you are doing with your eyes, and what's actually happening to them, may be two very different things. Again, a decent recording will help clarify things.
None of this is to say that your insights are unimportant, or that some form of biofeedback might not enable you to change the pattern of your nystagmus.
  • asked a question related to Eye Movements
Question
6 answers
I know about iMotions, but it is very expensive. Maybe someone wants to share research?
Relevant answer
Answer
Illia
I'm interested in your solution. Please contact me airton@popmindpesquisas.com.br
  • asked a question related to Eye Movements
Question
3 answers
Regarding eye movement, what is the difference between planning and programming of a saccade? Which one needs attention?
Relevant answer
Answer
These two papers may help you:
Hermens F, Walker R, 2010, "The influence of onsets and offsets on saccade programming" i-Perception 1(2) 83–94; doi:10.1068/i0392
Ptak, R., & Müri, R. M. (2013). The parietal cortex and saccade planning: lessons from human lesion studies. Frontiers in Human Neuroscience, 7, 254. doi:10.3389/fnhum.2013.00254
  • asked a question related to Eye Movements
Question
11 answers
This relates to either eye movement or gaze aversion in a face-to-face conversation. This applies to body language analysis.
Relevant answer
Answer
In pursuit of our scientific approach, I appeal to all to see whether someone would be interested in doing research on body language. There is a scientific threshold to overcome in order to maximize recognition, not only in the activity sectors that already use it but also in the scientific community in general. Interesting topics may be developed from this.
  • asked a question related to Eye Movements
Question
11 answers
Hi
I have some experience with the EyeTribe (ET), and I wonder whether anybody has or had similar problems to mine. So, my questions are:
How many times do you repeat the calibration until you have a reliable one?
How long are your sessions if you use it in user studies? or
How long can you collect data with the ET? Does it automatically shut down after a while?
What is the viewing distance?
Do you use any extra tool to stabilize user's head or to preserve the calibration and the viewing distance?
Do you use ET more for post analysis or for real-time interaction?
Thanks
Relevant answer
Answer
Hi Kenan,
I used the Eye Tribe tracker for three experiments with a total of about 120 participants. The combination with the OGAMA 5.0 software for post-analysis, which I used, worked well.
The mean session time for using this combination was about 20 minutes.
I used a 12-point calibration with no extra tool to stabilize the user's head, and had to exclude about 8% of my participants because of low data quality (e.g. mascara problems).
The viewing distance was 60 cm using a 1280x1024 Display.
Repeating the calibration:
In about 70% of cases I had to calibrate only once,
in about 25% I needed a second one,
and in about 5% I needed more than two calibrations.
I had some shutdown problems with my EyeTribe-OGAMA combination when the mouse cursor left the participant's screen.
Maybe the shutdowns depend on computing power. I have an i7 machine and no further problems, even with long session durations.
Best regards,
Matthias
  • asked a question related to Eye Movements
Question
9 answers
Dear all,
I am looking for a lightweight, small, wide-field-of-view (> 90 deg horizontally) video camera, which we can use as a scene camera on a head-mounted eye tracker. Ideally, the refresh rate should be 50+ Hz and the resolution 720p+. So far I'm using a GoPro, which is a little too bulky (as are other "action cams"). I looked at supercircuits' spy cameras, but these seem not to have the best resolutions.
Any suggestions or experiences would be very welcome!
Greetings, David
Relevant answer
Answer
I looked at the current offerings and did not see the model that I used.  I will need to double check, but I think the model number was pc50 or pc51.  I believe I also have a couple of the pc206xp, which are smaller, but never used them for a project.  The ones that I did use have a case roughly the size of a sugar cube, and accept interchangeable micro-lenses.  We used these cameras in a helicopter flight study that was conducted both in daylight and at night, and they performed fine.  I have a paper describing this project at http://scanpath.arc.nasa.gov/pub_files/spie05.pdf, it gives the model number of everything except the cameras!?  That was 10 years ago, so it is not completely surprising that the model is no longer available.  At the time they cost around $50.
Another supplier that you might look at is Marshall Electronics.
  • asked a question related to Eye Movements
Question
3 answers
Let's say we have given learners a pre-test including 10 words. After 5 minutes, they sat for the eye tracking session and read a paragraph also including those 10 words. If we can talk about a priming effect on these words, is it detrimental to the research design? To what extent can it inflate temporal measures (first fixation, gaze duration, total fixation, second pass, etc.)? Thanks.
Relevant answer
Answer
Hi Emrah,
Your question is very interesting.
In our research, we observe eye movements, accommodation, visual convergence and other visual skills. Perhaps it would be a good idea to organize a pretest along the lines of your research question.
Best regards!
  • asked a question related to Eye Movements
Question
1 answer
I have a myGaze device and the software myGaze SDK (from myGaze company - http://www.mygaze.com/) and I want to record graphically the eye movement trajectories. Has anyone used such a system? Could you help me?
Relevant answer
Answer
I do not know about the myGaze test. We're looking into the Tobii eye tracking instruments for our own work.
  • asked a question related to Eye Movements
Question
18 answers
I am planning to use the Eye Tribe eye tracker for my research. I am new to eye tracking and have some questions:
1) Can we get saccade parameters (saccade duration, inter-saccade interval, saccadic peak velocity, and saccade amplitude) from the output of the Eye Tribe?
2) What type of features do we get from the output data, and how do we analyse the output?
3) Can anyone provide me with sample eye tracking data or any other available material?
Thank You
Relevant answer
Answer
Hi!
1) No! Saccadometry is pretty much out of the question. In theory, by testing a LOT of saccades, you could potentially create a somewhat representative velocity profile, but this is a very roundabout way of going about it and cannot be recommended. For more info on what kind of analysis you can and cannot do, see the attached pre-print of mine (NOTE: a pre-print is not peer-reviewed; in this case the manuscript is under revision).
2) Again, see the attached pre-print. Brief summary: if you use an optimal setup (i.e. with a chinrest), the EyeTribe seems to be fine for fixation analysis as well as for pupillometry. So if you want to know where people have looked and for how long, or to test their pupillary light response (or general arousal, or whatever you want to conclude from pupil size changes), the EyeTribe could potentially be a good tool.
3) I have the data files for the attached pre-print. If you give me an email address, I could send you a few.
Good luck!
  • asked a question related to Eye Movements
Question
12 answers
I have an eye-tracker dataset that includes pupil size information. The data was recorded primarily for examining eye movements and fixations, but I am interested in looking into whether the pupil size data says anything interesting about cognitive effort during a peripheral detection task. However, I am relatively new to pupillometry and having read some of the literature around pupil dilation and cognitive effort/attention, I can't identify a standard approach to cleaning and analysing a pupil size dataset (e.g. how to smooth the data, deal with blinks or missing data, how to identify outlying datapoints etc.). Is there such a standard approach or method?
Relevant answer
Answer
Hi Jim,
There's a pretty straightforward way of pre-processing, which entails the following steps:
1) Interpolate blinks from the signal. These are characterised by rapid declines towards 0 at blink onset, and rapid rises from 0 back to a regular value at blink offset. For a blink removal algorithm, see Sebastiaan Mathot's approach (link attached), which is a good way of doing it.
2) Reject artifacts, e.g. by Hampel filtering. The link to a Matlab function is attached.
3) Optionally smooth the data (depending on your parameters, the Hampel filter might actually act as a smoothing function too). A popular approach is to use a moving window, e.g. a Hanning window (see attached Wikipedia link for clarification).
4) Divide the signal (e.g. timepoint 0 ms to timepoint 3000 ms) by the median pupil size during a baseline period (e.g. timepoint -200 ms to timepoint 0 ms). This is an important step, as most trackers tend to work with arbitrary numbers, whereas most papers report changes as a proportional change.
N.B. Please note that some trackers report the pupil AREA, whereas others report the pupil DIAMETER. The proportional increase over time of the AREA will be a different function than the proportional increase over time of the DIAMETER, because the two are not linearly related!
EDIT: Additionally, make sure that no eye movements happened in the intervals from which you collected pupil data. And be sure that your stimuli are equiluminant. This is important, as systematic changes in luminance between conditions will result in a systematic difference in pupil size between conditions because of the pupillary light response. For a recent paper using pupillometry, see our recent publication in Journal of Vision (attached). The design of our experiment is a good example of how to capitalise on the pupillary light response to test spatial attention. It's also a good example of an equiluminant experimental paradigm.
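A minimal Python sketch of steps 1 and 4 above (blink interpolation and baseline division). The function names, the zero-during-blinks convention, and the simple median are assumptions for illustration, not the cited Mathot implementation:

```python
def interpolate_blinks(pupil, threshold=0.0):
    """Linearly interpolate runs of samples at or below `threshold`
    (many trackers report 0 during blinks; this is an assumption)."""
    out = list(pupil)
    n = len(out)
    i = 0
    while i < n:
        if out[i] <= threshold:
            start = i - 1                     # last valid sample before the run
            while i < n and out[i] <= threshold:
                i += 1
            end = i                           # first valid sample after the run
            if 0 <= start and end < n:
                step = (out[end] - out[start]) / (end - start)
                for j in range(start + 1, end):
                    out[j] = out[start] + step * (j - start)
            else:                             # blink touches an edge: hold nearest value
                fill = out[end] if end < n else out[start]
                for j in range(max(start + 1, 0), min(end, n)):
                    out[j] = fill
        else:
            i += 1
    return out

def baseline_normalize(trial, baseline):
    """Step 4: divide the trial signal by the median pupil size of a
    baseline window (e.g. -200 ms to 0 ms), giving proportional change."""
    b = sorted(baseline)[len(baseline) // 2]  # simple median
    return [p / b for p in trial]
```

Artifact rejection and smoothing (steps 2 and 3) would sit between these two functions in a full pipeline.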
  • asked a question related to Eye Movements
Question
13 answers
I believe some metrics related to the eye, such as pupil dilation, may give an indication of the extent to which something being looked at is being actively processed. However, I am interested in ways to determine whether someone is paying attention to (cognitively processing) what they are looking at in natural, real-world conditions, where changing light levels may make it difficult to use pupil dilation as a measure. I am therefore wondering if there are any tell-tale signs from eye movements that can reveal whether something is being actively processed and has some cognitive importance to the observer.
For example, research on inattentional blindness shows that just because something in our environment is fixated does not mean it is perceived or processed. Also, research has been carried out about mind-wandering during reading which suggests eye movements may be qualitatively different during periods of mind-wandering compared with when what is being read is being processed. Are there any similar findings for natural situations such as just walking through an environment?
Relevant answer
Answer
Bear in mind that covert shifts of attention can happen in the absence of eye movements (which is why they're called covert).  See for example the introduction of the article linked below.
  • asked a question related to Eye Movements
Question
5 answers
I'm wondering if multiple imputation may be the best way to go about it.
Relevant answer
Answer
Even though I have never had the issue, and thus I have never tried this approach, it seems that non-linear mixed models can handle this kind of NMAR situation. 
Here is an example paper: http://www.wuss.org/proceedings12/32.pdf
I think the answer should depend also a lot on what you are trying to infer from your saccade data. 
  • asked a question related to Eye Movements
Question
16 answers
Thank you for all of you who joined the discussion. I need to clarify the question in a better way.
I should have used the word "detect" in the question. I am afraid I misunderstood the definition of the microsaccade; I thought it was a special part of the saccade (the macro one) from the fixation.
For example, in my own analysis I found this hard to achieve: the (average) velocity was M = 97 (SD = 50), but the maximum velocity was about 227, which is smaller than six times the SD (which would be 300).
Can anyone help me about this? 
Relevant answer
Answer
You could also use a Matlab implementation of the original method by Engbert & Kliegl (Engbert, R. and R. Kliegl (2003). "Microsaccades uncover the orientation of covert attention." Vision Res 43(9): 1035-45) by downloading it at http://kobi.nat.uni-magdeburg.de/edfImport or using the file I've attached. 
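For orientation, here is a minimal one-dimensional Python sketch of the thresholding idea in Engbert & Kliegl (2003). The real implementation uses a two-dimensional elliptic threshold over horizontal and vertical velocities; the names here are mine. Note that the threshold is lambda times a median-based SD estimate, not the arithmetic SD, which may explain the discrepancy in your numbers:

```python
import math

def robust_sd(v):
    """Median-based SD estimator from Engbert & Kliegl (2003):
    sqrt(median(v^2) - median(v)^2); robust against saccadic outliers."""
    med = sorted(v)[len(v) // 2]
    med_sq = sorted(x * x for x in v)[len(v) // 2]
    return math.sqrt(max(med_sq - med * med, 0.0))  # guard tiny negatives

def detect_saccade_samples(vel, lam=6.0):
    """Mark samples whose absolute velocity exceeds lam * robust SD."""
    thresh = lam * robust_sd(vel)
    return [abs(v) > thresh for v in vel]

vel = [0, 1, -1, 2, -2, 0, 10]
flags = detect_saccade_samples(vel)  # only the outlier sample exceeds 6 * SD
```

Because the median-based estimate is insensitive to the saccadic outliers themselves, the threshold stays low and the fast samples stand out, which is exactly what a mean/SD criterion fails to do.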
  • asked a question related to Eye Movements
Question
18 answers
Experiments have shown that eye movements load the spatial subsystem of visuospatial memory. Reading involves eye movements, so can one assert that reading loads spatial memory? In which article can I find this conclusion supported by an experiment?
Relevant answer
Answer
This is a non-elaborated suspicion: expert reading and one type of inner speech could share a resource.
The issue that I am currently working on is inner speech.
(Two types of inner speech: 1) the expressive type; 2) inner speech for self-regulation of attention. In 1), silence must be achieved by costly inhibition; in 2), silence is really an energy-saving resource.)
In inner speech (or rather, in my view, in the inner speech for self-regulation of attention), as Oppenheim & Dell (2008, p. 534) found, "slips exhibit lexical bias" (where the unity of the word is the key) "but not the phonemic similarity effect" (where the sequence of motor steps is the key). This particularity could have to do with a facilitation and enhancement of working memory. (This enhancement would explain why, even when we are on our own and do not therefore need to defend the privacy of our thoughts, we adults prefer thinking in inner speech and not out loud.)
It is that resource (i.e. the focus on the semantic unit and not on the articulatory-phonetic sequence) that might perhaps be shared by expert (silent, quick) reading.
But that suspicion is too vague and unfounded. Thus, I want to replace it with this humble, trivial recommendation: with regard to their memory demands, do not assume that expert and non-expert reading are one and the same thing.
  • asked a question related to Eye Movements
Question
6 answers
I wonder if Deep Layer of Superior Colliculus fires when a saccade on a STATIC contrast is performed. Thanks
Relevant answer
Answer
Uhm... very interesting! All the papers I have about the SC superficial layer state that it is responsive only to perceived movement, not to static stimuli. If you have some papers on the topic, please provide the reference; I need it a lot for my PhD thesis! About microsaccades, you are right, but they are quite a new topic for me, and I need to study them. Thanks again!
  • asked a question related to Eye Movements
Question
8 answers
The context is in a project examining how TV viewers multi-task and have their attention divided between tasks, then re-visit the TV screen for certain events while they have been visually attending to another task. 
Relevant answer
Answer
You might like to look at this latest PAC presentation. It's a bit long but it kind of tracks through OK.
Vision-Space: Self Reference Pt 4, painting phenomenal field, accessing the umwelt? http://youtu.be/g8rOhQhcl0A
  • asked a question related to Eye Movements
Question
4 answers
The antisaccade task has been a well-studied paradigm to assess executive functioning. People are asked to look in the opposite direction of a salient peripheral stimulus. I am wondering if there are similar tasks out there that use eye movements, other than the antisaccade task, to measure executive functioning or cognitive control. Inhibition of return (IOR) could be similar to the antisaccade task and there is also the gap effect. Any suggestions on particular paradigms or authors would be a huge help. Thanks!
Relevant answer
Answer
Voluntary saccades to imagined target features in complete darkness. There are some fMRI studies on saccades in the dark; they are worth reviewing, but the one I am bringing to your attention has not been explored, to the best of my knowledge.
  • asked a question related to Eye Movements
Question
13 answers
That is, data recorded in eye tracking experiments where the (ground truth or a good proxy for) cognitive load is known (even if it is at a very coarse-grained categorical level, like "cognitively engaged" vs "resting/disengaged").
It would be nice if it also has data gathered with varying lighting levels, but I don't count on it... :)
Thanks!
Relevant answer
Answer
Hi,
>You mean that cognitive load should be constant across the different tests/conditions in your experiment?
Yes, we didn't impose any cognitive load (no dual task, etc.), rather some mild psychological stress.
Have a look at the Methods section where everything is explained in detail.
Regards,
Marco
  • asked a question related to Eye Movements
Question
7 answers
We are considering to obtain a Jazz eye tracker to be used in conjunction with our existing BioSemi EEG amplifiers. I would be very interested in hearing what experiences other labs have made with such a setup. How convenient and reliable is it to use? How much calibration is necessary? Does the head-mounted eye tracker cause artifacts in the EEG? Is it comfortable for the participant to wear? Is there an easy way to feed back the eye movement data to the stimulation PC during the experiment (e.g. using the Psychophysics Toolbox) or is it suitable mainly for offline analysis together with the EEG data?
Relevant answer
Answer
I have used this system for some quick experiments in a clinical setting. It is reliable, as we also correlated the outcome with a search coil system. I agree that vertical eye movements may be an issue, but if you are interested in saccade velocity and the dynamic properties of saccades, it should work just fine. There are no issues with the horizontal system. The major drawback of this system is that it gives conjugate eye position (it averages both sides, I believe). In this case, it is not an appropriate system for measuring the eye movements of a subject who has dysconjugate eye movements.
thanks,
  • asked a question related to Eye Movements
Question
5 answers
I am searching for theories which explain eye movements while learning. I already know the theories about cognitive load and eye movements. Is there anything else?
Relevant answer
Answer
A very impressive piece of research about learning strategies and eye tracking:
Ponce, Hector R.; Mayer, Richard E.
Qualitatively different cognitive processing during online reading primed by different study activities
  • asked a question related to Eye Movements
Question
1 answer
For my research I want to track people's eye movements, and I want to measure oculomotor-based parameters, i.e. saccade parameters, including the distributions of saccade duration, inter-saccadic interval, saccadic peak velocity, and saccade amplitude. These parameters are then used in post-processing.
Relevant answer
Answer
  • asked a question related to Eye Movements
Question
4 answers
I want to know an approximate number of images for training and testing.
Relevant answer
Answer
I use the databases for research, but in all the articles they only mention the number of images and the number of users; they don't mention the number of images for testing and for training.
  • asked a question related to Eye Movements
Question
133 answers
I have tried to do some research, but the explanations are not very satisfying to me. On the other hand, if it really works, why not use it?
Relevant answer
Answer
There is a body of recent work from the group of Marcel van den Hout at Utrecht University that has looked into possible mechanisms of action of EMDR. Forget about the pseudo-neurobiological theory proposed by Francine Shapiro for why it works. In fact, the eye movements are likely not of major importance. Basic research suggests that the crucial aspect is that working memory is taxed during exposure. See for instance [Hout, M.A. van den, Bartelski, N., & Engelhard, I.M. (2012). On EMDR: eye movements during retrieval reduce subjective vividness and objective memory accessibility during future recall. Cognition and Emotion, 27, 177-184] and [Hout, M.A. van den & Engelhard, I.M. (2012). How does EMDR work? Journal of Experimental Psychopathology, 5, 724-738]. So yes, it probably works, but not for the reasons that many in the EMDR field proclaim.
  • asked a question related to Eye Movements
Question
15 answers
I want to record EEG and eye movements simultaneously. One problem I may face is the chin rest I have: a metallic chin rest may interfere with the EEG. Please share information on which portable EEG systems are widely used, particularly with eye trackers.
Relevant answer
Answer
The Lab Streaming Layer (LSL) software framework of Christian Kothe provides a means to synchronize concurrent recordings from any number of devices - drivers are already available for several EEG and eye tracking systems and are often not difficult to extend to new devices. See http://code.google.com/p/labstreaminglayer
We are using LSL to sync EEG, eye tracking, audio, video, experiment control, subject responses, etc. based on our proposed Mobile Brain/Body Imaging (MoBI) paradigm.
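To illustrate the principle LSL relies on: every sample in every stream is stamped on a shared clock, so merging heterogeneous streams offline reduces to a nearest-timestamp lookup. The sketch below uses made-up EEG and gaze streams in pure Python; in practice the timestamps would come from pylsl recordings.

```python
import bisect

def nearest_sample(timestamps, samples, t):
    """Return the sample whose (sorted) timestamp is closest to t."""
    i = bisect.bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda j: abs(timestamps[j] - t))
    return samples[best]

# EEG at 500 Hz and gaze at 60 Hz, both stamped on the same clock.
eeg_t = [k / 500.0 for k in range(1000)]
eeg = [float(k) for k in range(1000)]
gaze_t = [k / 60.0 for k in range(120)]

# For each gaze sample, find the EEG sample recorded closest in time.
aligned = [nearest_sample(eeg_t, eeg, t) for t in gaze_t]
print(aligned[:5])
```

LSL additionally estimates and corrects clock offsets between machines; the lookup itself is the easy part once that is done.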
  • asked a question related to Eye Movements
Question
7 answers
I want to start a trial when the subject is fixating on the fixation cross. I want to use the fixation signal provided by the eye tracker in my E-Prime program.
Relevant answer
Answer
I have some code I wrote that I can send to you.
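Whatever the tracker and stimulus software, the logic such code typically implements is a dwell-time check: the trial starts once gaze has stayed within a tolerance radius of the fixation cross for some minimum number of consecutive samples. The sketch below is hypothetical; the coordinates, radius, and dwell criterion are illustrative, not E-Prime or tracker API calls.

```python
import math

def fixation_detected(gaze, fix_xy=(512, 384), radius=40,
                      dwell_samples=30):
    """Scan a sequence of (x, y) gaze samples and return the index at
    which the dwell criterion is first satisfied, or None."""
    run = 0
    for i, (gx, gy) in enumerate(gaze):
        if math.hypot(gx - fix_xy[0], gy - fix_xy[1]) <= radius:
            run += 1
            if run >= dwell_samples:
                return i          # dwell criterion met: start the trial
        else:
            run = 0               # gaze left the window: restart count
    return None

# Gaze wanders for 10 samples, then settles near the cross.
samples = [(100, 100)] * 10 + [(515, 380)] * 40
print(fixation_detected(samples))
```

In a live experiment this loop would poll the tracker's real-time sample stream rather than a list, and the stimulus program would wait on the returned trigger.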
  • asked a question related to Eye Movements
Question
16 answers
I am currently using the Applied Science Laboratories (ASL) E6000 Eye-Head-Integrated System with a camera that should be able to collect eye movement data at 120, 240, and 360 frames per second. I am having a hard time, however, collecting data at these speeds because of issues with capturing both pupil and corneal reflections. Subjects experience discomfort when wearing the head gear for more than 20-30 minutes, and inadvertent movement of the hot mirror/monocle occurs. We are aware of several systems that claim to have the same accuracy as the ASL system (for example SmartEye, SMI's RED250, Tobii), but I would like to hear about your personal experience with such systems.
Relevant answer
Answer
Hello Ben,
In my personal experience, I had a very hard time with the ASL long-range optic L6-HS in an MEG setting. We then changed to the long-range EyeLink and it was a piece of cake.