Journal of Eye Movement Research

Online ISSN: 1995-8692
Article
The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spill-over differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye movement control in reading (Reichle, Warren, & McConnell, 2009).
 
Article
All vertebrates share a characteristic pattern of eye movements which consists of periods of stationary fixation, separated by fast gaze-relocating saccades. The underlying reason for this strategy is the need to keep the retinal image almost stationary, to avoid blur. Primates, and a few other vertebrates, have an additional system for tracking small targets. If vision has the same basic requirements in all sighted animals, then evolutionarily unrelated creatures should share this pattern. Cuttlefish, crabs and many insects all show this pattern of fixations and saccades, with reflex compensation for body rotation. In some flying insects the same eye movements occur, but, unencumbered by contact with the ground, it is now the whole body that makes the saccades and fixations, or in some cases tracks a target. There are, however, a few animals which employ a quite different strategy, taking in information when the eye is moving (scanning). Some sea-snails have a narrow retina which scans perpendicular to its long dimension in order to detect prey in the surrounding ocean. Mantis shrimps have a strip of retina across their compound eyes which contains their colour-vision system, and they move this so as to “colour in” the monochrome image in the rest of the eye. Jumping spiders have both conventional and scanning eyes viewing the same fields; the former detect motion and the latter elucidate pattern, distinguishing potential mates from prey. In all these cases the scanning movements are unlike saccades in being sufficiently slow for the receptors to generate fully modulated responses.
 
Article
Little is known about the interplay between deixis and eye movements in remote collaboration. This paper presents quantitative results from an experiment in which participant pairs had to collaborate at a distance using chat tools that differed in the way messages could be enriched with spatial information from the map in the shared workspace. We studied how the availability of what we defined as an Explicit Referencing mechanism (ER) affected the coordination of the participants' eye movements. The manipulation of the availability of ER did not produce any significant difference in gaze coupling. However, we found a primary relation between the pairs' recurrence of eye movements and their task performance. Implications for design are discussed.
 
Article
This document contains the schedule of the 13th European Conference on Eye Movements, August 14-18, 2005, in Bern, Switzerland.
 
Article
This issue contains the abstracts submitted for presentation at the Thirteenth European Conference on Eye Movements (ECEM13), Bern, August 14 – 18, 2005, and reviewed by the Scientific Board, consisting of W. Becker, Ulm; C.J. Erkelens, Utrecht; J.M. Findlay, Durham; A.G. Gale, Derby; C.W. Hess, Bern; J. Hyönä, Turku; A. Kennedy, Dundee; K. Koga, Nagoya; G. Lüer, Göttingen; M. Menozzi, Zürich; W. Perrig, Bern; G. d’Ydewalle, Leuven; D. Zambarbieri, Pavia.

A quarter of a century ago, in 1980, Rudolf Groner and Dieter Heller initiated a transdisciplinary network called the European Group of Scientists active in Eye Movement Research. This group included scientists who used eye movement registration as a research tool and developed models based on oculomotor data obtained from a wide spectrum of phenomena, ranging from the neurophysiological to the perceptual and the cognitive level. The group was intended to serve two purposes: (1) exchanging information about current research, equipment and software, and (2) organizing a conference (ECEM) at a different European location every other year. Over the years ECEM has grown. At the first conference in Bern the relatively small number of participants made it possible for the organisers to avoid conflicting parallel sessions, whereas with ECEM’s steady growth, the introduction of parallel sessions soon became necessary. Although we are very happy about this year’s new record of 273 scientific contributions, we regret at the same time that this large number of participants necessitated the introduction of no fewer than four parallel sessions for oral presentations.

Part of the ECEM culture are the books with a selection of edited contributions, which have traditionally been published after the conferences. Unfortunately, over the years the sale prices of books have become prohibitively expensive and book chapters have increasingly been given a low rating in comparison to publications in peer-reviewed journals. As a consequence of this trend, we are now considering launching an online journal, Eye Movement Research, which would publish scientific papers either on the basis of individual submissions by the authors or as a follow-up to workshops or thematic sessions at ECEM. In either case, a fair peer-review process should guarantee the high quality of the contributions.

Acknowledgements
Last but not least, we are happy to express our deep gratitude to the main sponsors of our conference and to all the people who helped to keep it going. The Max and Elsa Beer-Brawand Foundation generously funded the invited speakers. The Swiss Academy of Humanities and Social Sciences (SAGW) sponsored the organization of workshops and made it possible for us to reduce fees for students. Novartis Neuroscience sponsored the reception at the Zentrum Paul Klee Bern. The University of Bern hosted the conference in its magnificent historical building. A team of devoted young scientists acted as staff during the conference: Eva Siegenthaler, Liliane Braun, Miriam Lörtscher, Esther Schollerer, Daniel Stricker, Simon Raess, Philipp Sury, Bartholomäus Wissmath, Linda Bodmer, Martina Brunnthaler, Daniela Häberli, Nadine Messerli, Felicie Notter, Didier Plaschy, Svetlana Ognjanovi, David Weibel, Yves Steiner and Dominik Moser.

We dedicate this book to the memory of two important men in eye movement research: Dieter Heller, as one of the founders of the ECEM group, and Lawrence W. Stark, as a pioneer in cognitive modelling of oculomotor control. In an early planning stage of ECEM13 both had been invited as keynote speakers, but their untimely deaths made this plan impossible. In many sessions of ECEM13 the influence of their work will prevail.
 
Article
The European Conference on Eye Movements, ECEM2007, is the 14th in a series of international scientific conferences dedicated to transdisciplinary research on eye movements. The series was initiated in 1981 by Rudolf Groner in Bern and is organized every second year by a group of European scientists active in eye movement research. This meeting in Potsdam is the third one in Germany, after Göttingen in 1987 and Ulm in 1997. The broad range of topics of the ECEM conferences attracts scientists from psychology, cognitive and visual neuroscience, computer science and related disciplines with interests from basic research to medical and applied aspects. Some 400 scientists from 27 countries, literally from around the world, have registered as participants of ECEM2007 and submitted over 300 oral and poster presentations.
 
Article
Welcome to the 15th European Conference on Eye Movements. The conference will begin with an address by the Mayor of Southampton, followed by a brief welcome from Professor Rudolf Groner and then by a packed scientific program of almost 350 presentations. We hope you enjoy both the academic and intellectual aspects of the conference and the social events that we have organised.
 
Article
This document contains the author index of the 16th European Conference on Eye Movements, August 21-25, 2011, in Marseille, France.
 
Article
This document contains all abstracts of the 16th European Conference on Eye Movements, August 21-25, 2011, in Marseille, France. It was a real honour and a great pleasure to welcome more than 500 delegates to Marseille for the 16th edition of the European Conference on Eye Movements. The series of ECEM conferences started in 1981 under the auspices of Rudolf Groner in Bern. This year, we therefore celebrated the 30th anniversary of ECEM. For this special occasion we welcomed Rudolf Groner as a special guest, and honoured Alan Kennedy and George W. McConkie for their contributions to our field in two special symposia. We had the pleasure of listening to six keynote lectures given respectively by Patrick Cavanagh, Ralf Engbert, Edward L. Keller, Eileen Kowler, Rich Krauzlis and Gordon E. Legge. These exceptional scientific events were nicely complemented by all submissions, which made the ECEM 2011 program a very rich and interdisciplinary endeavor, comprising 19 symposia, 243 talks and 287 poster presentations, and a total of about 550 participants. The conference opened with an address given by Denis Bertin, vice-president of the scientific committee of the University of Provence, representing Jean-Paul Caverni, President of the University of Provence. It closed with Rudolf Groner’s address and the awarding of the best poster contributions by students and postdocs. This year, three posters were awarded; the first prize was offered by SR Research, the second prize was given by the Cognitive Science Society, and the third, the Rudolf Groner Prize, was offered by the ECEM organizing committee. The conference was held on the St Charles campus of the University of Provence, and to mark the return of ECEM to Southern Europe, many events including lunches, coffee breaks, aperitifs and poster sessions took place outside under the trees of our campus. Luckily, the sun was with us for the five days of the conference! Françoise, Stéphanie, Stéphane, Eric & Laurent
 
Article
This document contains all abstracts of the 17th European Conference on Eye Movements, August 11-16, 2013, in Lund, Sweden. ECEM 2013 was the 17th European Conference on Eye Movements, with the original aims ‘to exchange information on current research, equipment and software’ remaining at the forefront. ECEM is transdisciplinary, promoting new approaches, co-operation between research fields and communication between researchers. It has grown from its origins as a small, specialist conference to a large international event, covering all aspects of basic and applied research using eye movements (see information from previous conferences in the archive). Today, ECEM is the largest conference on eye movements in the world, based on number of submissions. In keeping with the tradition of supporting young researchers and promoting new research, ECEM 2013 included, in the days prior to the conference, methods courses for all interested delegates on several aspects of eye movement research and applications, led by top international experts (see method workshops). Panel discussions during the conference provided a forum for communication between researchers, manufacturers and interface designers on new and emerging themes in eye movement research and technology (see program). The exhibition included top eye tracker manufacturers (see exhibition). ECEM 2013 brought together neurophysiologists, psychologists, neuropsychologists, clinicians, linguists, computational and applied scientists, engineers and manufacturers interested in the movements of the eyes, with an emphasis on learning from each other and promoting development of the field. This ECEM was hosted by the Eye Tracking Group at Lund University, Sweden, and organised by the Eye Movement Researchers' Association (EMRA) and the COGAIN (communication through gaze interaction) association, to promote interdisciplinary, basic and applied research excellence.
 
Article
Video stream: https://vimeo.com/356859979 Production and publication of the video stream was sponsored by SCIANS Ltd http://www.scians.ch/ We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories on eye movement control, and how insights from VR might inform traditional eye-tracking studies. - Part 2: Observers looked at panoramic, 360 degree scenes using VR goggles while eye and head movements were tracked. Fixations were determined using IDT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed a) the spatial distribution of fixations and the distribution of saccade directions, b) the spatial distribution of head positions and the distribution of head movements, and c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head best fit the allocentric frame defined by the scene horizon, especially when taking head tilt (i.e., head rotation around the view axis) into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, both in virtual and real environments.
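For readers unfamiliar with the dispersion-threshold approach mentioned above, the following minimal sketch illustrates how I-DT (Salvucci & Goldberg, 2000) might be adapted to a spherical coordinate system by measuring dispersion as the angular spread of unit gaze-direction vectors. It is an illustration of the general idea only, not the authors' implementation; the function names and the dispersion and duration parameters are assumed placeholders.

```python
import numpy as np

def angular_dispersion(dirs):
    """Maximum angular deviation (deg) of unit gaze vectors from their mean direction."""
    mean = dirs.mean(axis=0)
    mean /= np.linalg.norm(mean)
    cosines = np.clip(dirs @ mean, -1.0, 1.0)
    return np.degrees(np.max(np.arccos(cosines)))

def idt_spherical(t, gaze_dirs, max_disp_deg=1.0, min_dur=0.1):
    """Dispersion-threshold fixation detection on unit gaze-direction vectors.

    t         : (N,) timestamps in seconds
    gaze_dirs : (N, 3) unit vectors (gaze direction in head or world coordinates)
    Returns a list of (start_index, end_index) fixation windows.
    """
    fixations, start, n = [], 0, len(t)
    while start < n:
        # Grow a window until it spans at least the minimum fixation duration.
        end = start
        while end < n and t[end] - t[start] < min_dur:
            end += 1
        if end >= n:
            break
        # If the angular dispersion is small enough, extend until it is exceeded.
        if angular_dispersion(gaze_dirs[start:end + 1]) <= max_disp_deg:
            while end + 1 < n and angular_dispersion(gaze_dirs[start:end + 2]) <= max_disp_deg:
                end += 1
            fixations.append((start, end))
            start = end + 1
        else:
            start += 1
    return fixations
```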
 
Article
This document contains all abstracts of the 19th European Conference on Eye Movements, August 20-24, 2017, in Wuppertal, Germany.
 
Article
Video stream: https://vimeo.com/357473408 Wearable mobile eye trackers have great potential as they allow the measurement of eye movements during daily activities such as driving, navigating the world and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g. Tobii, Pupil Labs, SMI, Ergoneers) and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data quality comparisons and field tests of the different mobile eye trackers, and to how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London and Rochester Institute of Technology) who address problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.'s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues that they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. - Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction.
 
Conference Paper
Video stream: https://vimeo.com/362645755 Eye-movement recording has made it possible to achieve a detailed understanding of oculomotor and cognitive behavior during reading and of changes in this behavior across the stages of reading development. Given that many students struggle to attain even basic reading skills, a logical extension of eye-movement research involves its applications in both the diagnostic and instructional areas of reading education. The focus of this symposium is on eye-movement research with potential implications for reading education. Christian Vorstius will review results from a large-scale longitudinal study that examined the development of spatial parameters in fixation patterns within three cohorts, ranging from elementary to early middle school, discussing an early development window and its potential influences on reading ability and orthography. Ronan Reilly and Xi Fan will present longitudinal data related to developmental changes in reading-related eye movements in Chinese. Their findings are indicative of increasing sensitivity to lexical predictability and sentence coherence. The authors suggest that delays in the emergence of these reading behaviors may provide an early signal of increased risk of reading difficulty. Jochen Laubrock’s presentation will focus on perceptual span development and explore dimensions of this phenomenon with potential educational implications, such as the modulation of perceptual span in relation to cognitive load, as well as preview effects during oral and silent reading, and while reading comic books.
 
Article
Video stream: https://vimeo.com/362367119 During visual fixation, the eyes make small and fast movements known as microsaccades (MSs). The effects of MSs on neural activity in the visual cortex are not well understood. Utilizing voltage-sensitive dye imaging, we imaged the spatiotemporal patterns of neuronal responses induced by MSs in early visual cortices of behaving monkeys. Our results reveal a continuous “visual instability” during fixation: while the visual stimulus moves over the retina with each MS, the neuronal activity in V1 ‘hops’ within the retinotopic map, as dictated by the MS parameters. Neuronal modulations induced by MSs are characterized by neural suppression followed by neural enhancement and increased synchronization. The suppressed activity may underlie the suppressed perception during MSs whereas the late enhancement may facilitate the processing of new incoming image information. Moreover, the instability induced by MSs applies also to neural correlates of visual perception processes such as figure-ground (FG) segregation, which appear to develop faster after fixational saccades.
 
Article
Complex stimuli and tasks elicit particular eye movement sequences. Previous research has focused on comparing between these scanpaths, particularly in memory and imagery research where it has been proposed that observers reproduce their eye movements when recognizing or imagining a stimulus. However, it is not clear whether scanpath similarity is related to memory performance and which particular aspects of the eye movements recur. We therefore compared eye movements in a picture memory task, using a recently proposed comparison method, MultiMatch, which quantifies scanpath similarity across multiple dimensions including shape and fixation duration. Scanpaths were more similar when the same participant's eye movements were compared from two viewings of the same image than between different images or different participants viewing the same image. In addition, fixation durations were similar within a participant and this similarity was associated with memory performance.
 
Article
Video stream: https://vimeo.com/365522806 The human ability for visualization extends far beyond the physical items that surround us. We are able to dismiss the constant influx of photons hitting our retinas, and instead picture the layout of our kindergarten classroom, envision the gently swaying palm trees of our dream vacation, or foresee the face of a yet-to-be-born child. As we inspect imaginary objects and people with our mind’s eye, our corporeal eyeballs latch onto the fantasy. Research has found that our eyes can move as if seeing, even when there is nothing to look at. Thus, gaze explorations in the absence of actual vision have been reported in many contexts, including in visualization and memory tasks, and perhaps even during REM sleep. This symposium will present the manifold aspects of gaze dynamics in conditions when the visual input is impoverished or altogether absent. Presentations will address the characteristics of large and small eye movements during imagined and remembered scenes, the impact of visual field deficits on oculomotor control, and the role of eye movements in the future development of neural prosthetics for the blind.
 
Article
Video stream: https://vimeo.com/358415199 Despite a wealth of studies using eye tracking to investigate mental processes during vision or reading, the investigation of oculomotor activity during natural reading of longer texts, be it newspaper articles, narratives or poetry, is still an exception in this field (as evidenced by the program of ECEM 2017 in Wuppertal). Following up on our symposium at ECEM 2017, here we bring together eye movement research on natural text reading to report recent progress in a coordinated way, sharing data, experiences and software skills in this highly complex subfield. More specifically, in this symposium we will address several challenges faced by an eye tracking perspective on the reading of longer texts, which involves a surplus of intervening variables and requires novel methods to analyze the data. In particular, the following issues will be addressed: (1) Which text-analytical and statistical methods are best suited to deal with the myriad of surface and affective semantic features potentially influencing eye movements during reading of ‘natural’ texts? (2) What are the pros and cons of using machine-learning-assisted predictive modeling as an alternative to the standard GLM/LMM frameworks? (3) Which kinds of theoretical models can deal with the level of complexity offered by reading longer natural texts?
 
Minimum and maximum values of peak velocity and distance as extracted from the reviewed literature. The blue dots indicate the reported maximum velocities and the red dots the reported minimum velocities; maximum and minimum points from the same publication are linked by a grey line. The red dotted line shows the unit diagonal, while the black dotted line shows a regression line fitted to the data. To ensure that distance and velocity samples correspond to the same saccade, only values read from graphs are included.
Maximum microsaccade size by year of publication. The area of each circle is scaled in relation to the number of participants. The circle colour represents the type of eye tracker. The dashed grey line indicates a 1° microsaccade.
A: Comparison of microsaccade detection methods and their mapping onto smoothed gaze data, speed and correlation profiles for P4. The red lines indicate data from the left eye and the blue lines indicate data from the right. The green solid line represents the moving correlation of the speeds between both eyes with a window size of 65 ms. The corresponding finely dashed green line represents a η value of 7 and the coarsely dashed green line indicates a ρ value of .45. A total of just over 2000 samples or 2 seconds is shown. Note: Speed in the left eye is represented as negative to facilitate readability of the graph. B: Graphical explanation of how the ground truth and detection method were compared. TP: true positive; FN: false negative; FP: false positive; ME: merged event; SE: split event. Further explanation is given in the text.
Article
We have developed a new method for detecting microsaccades in eye-movement data. The impetus was the review of the literature on microsaccades presented in this paper, which revealed (1) large changes in the size and speed of reported microsaccades over the last 70 years and (2) references to monocular microsaccades, which have recently been shown to be artefacts of analysis methods (Nyström et al., 2017; Fang et al., 2018). The changes in reported microsaccade characteristics, such as size and speed, must be due to experimental factors, such as methods of recording and analysis, and different levels of experience of the participants in the task: they cannot represent a change in the fundamental characteristics of microsaccades. In this paper we present a review of reported microsaccade properties between the 1940s and today and we determine the range within which certain physical parameters of microsaccades are thought to occur. These parameters drive our new microsaccade detection method. We have validated this method on two datasets of binocular eye movements recorded using video-based systems: one from within our lab, and one from Nyström et al. (2017). We have additionally applied our method to eye-movement data collected using an adaptive optics scanning laser ophthalmoscope (AOSLO), to show its adaptability to a fundamentally different method of data capture. This confirmed that the microsaccade detection method produces microsaccade detection rates within expected limits across very different methods of recording. Our new microsaccade detection method is easy to implement and intuitive to understand, and affords researchers flexibility in adjusting it to their experimental set-up.
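The figure caption above refers to a moving correlation of the speeds of the two eyes (65 ms window, with criterion values around η = 7 and ρ = .45). As a rough, assumption-laden sketch of how such a binocular-agreement signal could be computed (the function, the edge handling and the candidate-flagging criterion below are illustrative and not the published detection method):

```python
import numpy as np

def moving_speed_correlation(speed_left, speed_right, fs, window_ms=65):
    """Pearson correlation of left- and right-eye speeds in a sliding window.

    speed_left, speed_right : (N,) eye speeds (e.g. deg/s) sampled at fs Hz
    Returns an (N,) array; samples too close to the edges are left as NaN.
    """
    half = max(1, int(round(window_ms / 1000 * fs / 2)))
    corr = np.full(len(speed_left), np.nan)
    for i in range(half, len(speed_left) - half):
        l = speed_left[i - half:i + half + 1]
        r = speed_right[i - half:i + half + 1]
        if l.std() > 0 and r.std() > 0:
            corr[i] = np.corrcoef(l, r)[0, 1]
    return corr

# Samples where both eyes move fast and their speeds co-vary are candidate
# (micro)saccades; 0.45 is the rho criterion quoted in the figure caption,
# while speed_thresh is a placeholder.
# candidates = (corr > 0.45) & (speed_left > speed_thresh) & (speed_right > speed_thresh)
```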
 
Article
This study compared the time required to produce nine-directional ocular photographs using the conventional method to that using the newly devised 9Gaze application. In total, 20 healthy adults, 10 adult patients with strabismus, and 10 pediatric patients with amblyopia or strabismus had their ocular photographs taken using a digital camera with PowerPoint 2010, and with an iPad and an iPod touch running 9Gaze. Photographs of 10 of the healthy adults were taken by orthoptists with <1 year of experience, and those of the other participants by orthoptists with >1 year of experience. The required time was compared across the three devices for all participants, and between the two orthoptist groups (>1 year vs. <1 year of experience) for the 20 healthy adults. The required times were significantly different between the devices: 515.5 ± 187.0 sec with the digital camera, 117.4 ± 17.8 sec with the iPad, and 76.3 ± 14.1 sec with the iPod touch. The required time with the digital camera was also significantly different between the two orthoptist groups (404.7 ± 150.8 vs. 626.3 ± 154.2 sec, P=0.007). The use of the 9Gaze application shortened the recording time required. Furthermore, 9Gaze can be used regardless of the examiner's years of experience.
 
Frequency distribution plot of saccade latency in Experiment 1 showing two distinct peaks for express saccades (80-130 ms) and regular saccades (>130 ms)
Article
Subliminal cues have been shown to capture attention and modulate manual response behaviour but their impact on eye movement behaviour is not well-studied. In two experiments, we examined if subliminal cues influence constrained free-choice saccades and if this influence is under strategic control as a function of task-relevancy of the cues. On each trial, a display containing four filled circles at the centre of each quadrant was shown. A central coloured circle indicated the relevant visual field on each trial (Up or Down in Experiment 1; Left or Right in Experiment 2). Next, abrupt-onset cues were presented for 16 ms at one of the four locations. Participants were then asked to freely choose and make a saccade to one of the two target circles in the relevant visual field. The analysis of the frequency of saccades, saccade endpoint deviation and saccade latency revealed a significant influence of the relevant subliminal cues on saccadic decisions. Latency data showed reduced capture by spatially-irrelevant cues under some conditions. These results indicate that spatial attentional control settings as defined in our study could modulate the influence of subliminal abrupt-onset cues on eye movement behaviour. We situate the findings of this study in the attention-capture debate and discuss the implications for the subliminal cueing literature.
 
Article
Experiments with the Rashbass ‘step-ramp’ paradigm have revealed that the initial catch-up saccade that occurs near pursuit onset uses target velocity as well as position information in its programming. Information about both position and motion also influences smooth pursuit. To investigate the timing of velocity sampling near the initiation of saccades and smooth pursuit, we analyzed the eye movements made in nine ‘step-ramp’ conditions, produced by combining –2, 0 and +2 deg steps with –8, 0 and +8 deg/s ramps. Each trial had either no temporal gap or a 50-ms gap during which the laser target was extinguished, beginning 25, 50, 75 or 100 ms after the step. Six subjects repeated each of the resulting 45 conditions 25 times. With no temporal gap, saccades were larger in the ‘step-ramp-away’ than in the ‘step-only’ condition, confirming that saccade programming incorporates ramp velocity information. A temporal gap had no effect on the accuracy of saccades on ‘step-only’ trials, but often caused undershoots in ‘step-ramp’ trials. A 50-ms gap within the first 100 ms also increased the latency of the initial saccade. Although initial pursuit velocity was unaffected by a temporal gap, a gap that started at 25 ms reliably delayed pursuit onset for ramp motion of the target toward the fovea. Later gaps had a minimal effect on initial pursuit latency. The similar timing of the temporal gaps in target motion information that affect the initiation of saccades and pursuit provides further behavioral evidence that the two types of eye movements share pre-motor neural mechanisms.
 
Example where adaptive thresholding detects saccades in a data-driven manner, but can fail with non-robust statistics. (A) Schematic of the AT algorithm. The threshold for detection is initialized at qPT1 (dashed line). All points below this (thick line segments of the curve) are then used to calculate qPT2 (solid horizontal line), a new threshold used on the next iteration to determine the next threshold. This algorithm proceeds until it converges to a solution. (B) Velocity of a simulated scanpath with 20 low-amplitude saccades. Horizontal lines depict the final threshold qend as determined by the AT algorithm (red) and the AT-MAD algorithm (black). The AT-MAD algorithm finds a lower bound than AT, though still well above the background noise. (C-D) Example saccade that was not detected by the AT algorithm (C) but was detected by the AT-MAD algorithm (D), corresponding to the arrow in (B) (see Figure 5E-J for examples from human data). The initial threshold is depicted as a dashed line. Solid, red horizontal lines represent the threshold on successive iterations, with darker (lighter) lines showing earlier (later) iterations. Notice that the threshold increases beyond the initial threshold, but the AT-MAD algorithm successfully stops iterating, whereas AT does not.
AT-MAD outperforms AT on real-world data. (A-D) Performance of the AT (orange) and AT-MAD (blue) algorithms on two minutes of human gaze data (n=12), depicted as the (A) F1 score, (B) true positive rate, (C) false positive rate, and (D) false negative rate. AT-MAD outperforms AT for all levels of lambda greater than 6. (E-H) Same as (A-D) but for three individual subjects (i-iii). In all three subjects, AT-MAD outperforms AT for lambda > 6. In one subject (Ei), the F1 score is undefined because precision and recall are zero. (I,J) Example saccades in these subjects that AT could not detect (I) but AT-MAD could (J). The dashed line represents the initial threshold, and solid red lines are the thresholds on subsequent iterations, with darker (lighter) lines representing earlier (later) iterations.
Article
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to use a simple threshold for velocity or acceleration values, which can be estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues, and quantify saccade detection performance in both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection, with both more true positives and fewer false negatives, especially under higher noise levels. We conclude that robust estimators can be widely adopted in other common, automatic gaze classification algorithms due to their ease of implementation.
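To make the proposal concrete, here is a minimal sketch of an iteratively estimated velocity threshold in which the median and the MAD replace the mean and standard deviation, in the spirit of the modification described above (the 1.4826 factor makes the MAD comparable to an SD under normality). The function name, starting threshold and convergence criterion are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def adaptive_velocity_threshold(speed, lam=6.0, init_thresh=100.0, tol=1.0):
    """Iteratively estimate a saccade velocity threshold from sub-threshold samples.

    speed : (N,) gaze speed in deg/s
    lam   : multiplier (lambda) on the robust dispersion estimate
    Uses median + lambda * 1.4826 * MAD rather than mean + lambda * SD, so the
    estimate is not inflated by the saccades it is trying to detect.
    """
    thresh = init_thresh
    for _ in range(100):                       # convergence guard
        below = speed[speed < thresh]
        if below.size == 0:
            break
        med = np.median(below)
        mad = np.median(np.abs(below - med))
        new_thresh = med + lam * 1.4826 * mad
        if abs(new_thresh - thresh) < tol:
            return new_thresh
        thresh = new_thresh
    return thresh

# Samples exceeding the returned threshold are candidate saccade samples:
# is_saccade = speed > adaptive_velocity_threshold(speed)
```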
 
Article
Video eye trackers rely on the position of the pupil centre. However, the pupil centre can shift when the pupil size changes. This pupillary artefact is investigated for binocular vergence accuracy (i.e. fixation disparity) in near vision, where the pupil is smaller in the binocular test phase than in the monocular calibration. A regression between recordings of pupil size and fixation disparity allows the pupillary artefact to be corrected. This corrected fixation disparity appeared to be favourable with respect to reliability and validity, i.e. the correlation of fixation disparity with heterophoria. The findings provide a quantitative estimation of the pupillary artefact on measured eye position as a function of viewing distance and luminance, for measures of both monocular and binocular eye position.
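One way to picture the regression-based correction described above: regress the measured fixation disparity on pupil size across trials and remove the pupil-size-predicted component, re-referencing all values to a common pupil size. This is only a schematic sketch of the idea with assumed variable names and units, not the paper's exact procedure.

```python
import numpy as np

def correct_pupillary_artefact(fixation_disparity, pupil_size):
    """Remove the component of measured fixation disparity predicted by pupil size.

    fixation_disparity : (N,) measured vergence error per trial (e.g. min arc)
    pupil_size         : (N,) pupil size on the same trials (e.g. mm)
    Returns disparity values re-referenced to the mean pupil size, i.e. with the
    pupil-size-dependent shift of the pupil centre regressed out.
    """
    slope, intercept = np.polyfit(pupil_size, fixation_disparity, 1)
    predicted = slope * pupil_size + intercept
    reference = slope * pupil_size.mean() + intercept
    return fixation_disparity - predicted + reference
```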
 
Article
The present study investigated how eye movements were associated with performance accuracy during sight-reading. Participants performed a complex span task in which sequences of single quarter-note symbols that either enabled chunking or did not enable chunking were presented for subsequent serial recall. In between the presentation of each note, participants sight-read a notated melody on an electric piano at a tempo of 70 bpm. All melodies were unique but contained four types of note pairs: eighth-eighth, eighth-quarter, quarter-eighth, and quarter-quarter. Analyses revealed that reading with fewer fixations was associated with more accurate note onsets. Fewer fixations might be advantageous for sight-reading, as fewer saccades have to be planned and less information has to be integrated. Moreover, the quarter-quarter note pair was read with a larger number of fixations and the eighth-quarter note pair was read with a longer gaze duration. This suggests that when rhythm is processed, additional beats might trigger re-fixations and unconventional rhythmical patterns might trigger longer gazes. Neither recall accuracy nor chunking processes were found to explain additional variance in the eye movement data.
 
Article
For calibrating eye movement recordings, a regression between spatially defined calibration points and the corresponding measured raw data is performed. Based on this regression, a confidence interval (CI) of the actually measured eye position can be calculated in order to quantify the measurement error introduced by inaccurate calibration coefficients. For calculating this CI, a standard deviation (SD), depending on the calibration quality and the design of the calibration procedure, is needed. Examples of binocular recordings with separate monocular calibrations illustrate that the SD is almost independent of the number of, and spatial separation between, the calibration points, even though the latter was expected from theoretical simulation. Our simulations and recordings demonstrate that the SD depends critically on residuals at certain calibration points; thus robust regressions are suggested.
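As an illustration of the kind of calculation the abstract describes, the sketch below fits a simple one-dimensional linear calibration and returns a prediction interval for newly converted samples based on the residual SD of the calibration. It is a generic textbook construction with assumed names and a single-axis model, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def calibrate_with_ci(raw, target, raw_new, alpha=0.05):
    """Map raw eye-tracker output to gaze position with a prediction interval.

    raw     : (n,) raw signal recorded while fixating the calibration points
    target  : (n,) known positions of the calibration points (e.g. deg)
    raw_new : scalar or array of raw values to convert
    Returns (position, half_width): position +/- half_width is the (1 - alpha)
    prediction interval, driven by the residual SD of the calibration fit.
    """
    raw, target = np.asarray(raw, float), np.asarray(target, float)
    n = len(raw)
    slope, intercept = np.polyfit(raw, target, 1)      # target ~ intercept + slope * raw
    resid = target - (intercept + slope * raw)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))          # residual SD of the calibration
    sxx = np.sum((raw - raw.mean()) ** 2)
    tval = stats.t.ppf(1 - alpha / 2, df=n - 2)
    raw_new = np.asarray(raw_new, float)
    position = intercept + slope * raw_new
    half_width = tval * s * np.sqrt(1 + 1 / n + (raw_new - raw.mean()) ** 2 / sxx)
    return position, half_width
```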
 
Article
Accurate detection of the iris center and eye corners appears to be a promising approach for low-cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance- and feature-based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We demonstrate that a method based on a neural network presents the best performance, even in scenarios with changing lighting. In addition to this method, algorithms based on AAM and the Harris corner detector present better accuracies than recent high-performance face-point tracking methods such as Intraface.
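As a small illustration of one ingredient mentioned above, the Harris corner detector can be used to propose eye-corner candidates within an eye region; the sketch below uses OpenCV's cornerHarris for this step only. The function, parameter values and candidate selection are illustrative assumptions and do not reproduce the paper's AAM, neural-network or Intraface comparisons.

```python
import cv2
import numpy as np

def eye_corner_candidates(eye_roi_gray, max_candidates=10):
    """Return (x, y) locations of the strongest Harris corner responses in an eye patch.

    eye_roi_gray : 2-D uint8 grayscale image cropped around one eye
    The strongest responses typically include the inner and outer eye corners,
    which can then be disambiguated by geometric constraints (e.g. expected side).
    """
    response = cv2.cornerHarris(np.float32(eye_roi_gray), blockSize=2, ksize=3, k=0.04)
    # Indices of the highest corner responses, strongest first.
    flat = np.argsort(response.ravel())[::-1][:max_candidates]
    ys, xs = np.unravel_index(flat, response.shape)
    return list(zip(xs.tolist(), ys.tolist()))
```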
 
Article
Words that are rated as acquired earlier in life receive shorter fixation durations than later acquired words, even when word frequency is adequately controlled (Juhasz & Rayner, 2003; 2006). Some theories posit that age-of-acquisition (AoA) affects the semantic representation of words (e.g., Steyvers & Tenenbaum, 2005), while others suggest that AoA should have an influence at multiple levels in the mental lexicon (e.g., Ellis & Lambon Ralph, 2000). In past studies, early and late AoA words have differed from each other in orthography, phonology, and meaning, making it difficult to localize the influence of AoA. Two experiments are reported which examined the locus of AoA effects in reading. Both experiments used balanced ambiguous words which have two equally frequent meanings acquired at different times (e.g., pot, tick). In Experiment 1, sentence context supporting either the early- or late-acquired meaning was presented prior to the ambiguous word; in Experiment 2, disambiguating context was presented after the ambiguous word. When prior context disambiguated the ambiguous word, meaning AoA influenced the processing of the target word. However, when disambiguating sentence context followed the ambiguous word, meaning frequency was the more important variable and no effect of meaning AoA was observed. These results, when combined with the past results of Juhasz and Rayner (2003; 2006), suggest that AoA influences access to multiple levels of representation in the mental lexicon. The results also have implications for theories of lexical ambiguity resolution, as they suggest that variables other than meaning frequency and context can influence resolution of noun-noun ambiguities.
 
Article
The predictability of an upcoming word has been found to be a useful predictor in eye movement research, but is expensive to collect and subjective in nature. It would be desirable to have other predictors that are easier to collect and objective in nature if these predictors were capable of capturing the information stored in predictability. This paper contributes to this discussion by testing a possible predictor: conditional co-occurrence probability. This measure is a simple statistical representation of the relatedness of the current word to its context, based only on word co-occurrence patterns in data taken from the Internet. In the regression analyses, conditional co-occurrence probability acts like lexical frequency in predicting fixation durations, and its addition does not greatly improve the model fits. We conclude that readers do not seem to use the information contained within conditional co-occurrence probability during reading for meaning, and that similar simple measures of semantic relatedness are unlikely to be able to replace predictability as a predictor for fixation durations.
Keywords: co-occurrence probability, cloze predictability, frequency, eye movement, fixation duration
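To make the measure concrete, one plausible operationalization of conditional co-occurrence probability is the proportion of a context word's occurrences in which the target word appears nearby, estimated from raw co-occurrence counts. The sketch below uses assumed names and a simple preceding-window definition; the paper's exact definition and corpus may differ.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=5):
    """Count word frequencies and (context_word, word) co-occurrences within a window."""
    pair_counts, word_counts = Counter(), Counter()
    for tokens in sentences:
        word_counts.update(tokens)
        for i, w in enumerate(tokens):
            for c in tokens[max(0, i - window):i]:   # preceding context words
                pair_counts[(c, w)] += 1
    return pair_counts, word_counts

def conditional_cooccurrence(word, context_word, pair_counts, word_counts):
    """Estimated probability that `word` co-occurs given an occurrence of `context_word`."""
    if word_counts[context_word] == 0:
        return 0.0
    return pair_counts[(context_word, word)] / word_counts[context_word]

# Example: relatedness of "movement" to a preceding "eye" in a toy corpus.
# pc, wc = cooccurrence_counts([["the", "eye", "movement", "record"], ...])
# p = conditional_cooccurrence("movement", "eye", pc, wc)
```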
 
Article
Many patients with heterophoria report symptoms related to impaired vision. To investigate whether these symptoms are provoked by saccades, this study examined whether, in heterophoria, effects on intrasaccadic and postsaccadic vergence movements are linked to effects on visual performance. Visual acuity was measured in 35 healthy subjects during fixation and immediately after asymmetric diverging saccades. Binocular position traces were recorded by video-oculography. Subjects with exophoria showed larger intrasaccadic divergence amplitudes, which in turn led to smaller postsaccadic divergence amplitudes. Visual acuity did not depend on heterophoria or vergence amplitudes. The results suggest that compensating for exophoria requires increased convergence activity compared to orthophoria or compensated esophoria. Visual acuity seemed relatively robust with respect to postsaccadic vergence movements.
 
Article
This study evaluated the dynamic visual acuity of candidates by implementing a King–Devick (K-D) test chart in a virtual reality head-mounted display (VR HMD) and an augmented reality head-mounted display (AR HMD). Hard-copy K-D (HCKD), VR HMD K-D (VHKD), and AR HMD K-D (AHKD) tests were conducted in 30 male and female candidates in their 10s and 20s, and subjective symptom surveys were conducted. In the subjective symptom surveys, all but one of the VHKD questionnaire items showed subjective symptoms of less than 1 point. In the comparison between HCKD and VHKD, HCKD was completed more rapidly than VHKD in all tests. In the comparison between HCKD and AHKD, HCKD was completed more rapidly than AHKD in Tests 1, 2, and 3. In the comparison between VHKD and AHKD, AHKD was completed more rapidly than VHKD in Tests 1, 2, and 3. In the correlation analyses of the test platforms, all platforms were correlated with each other, except for the correlation between HCKD and VHKD in Tests 1 and 2. There was no significant difference in the frequency of errors among Tests 1, 2, and 3 across test platforms. VHKD and AHKD, which require the body to be moved to read the chart, required longer measurement times than HCKD. In the measurements of each platform, AHKD was closer to HCKD than VHKD was, which may be because the AHKD environment is closer to the actual environment than the VHKD environment. The effectiveness of the VHKD and AHKD proposed in this research was evaluated experimentally. The results suggest that treatment and training could be performed concurrently through clinical testing and content development for VHKD and AHKD.
 
Article
Microsaccades are involuntary, small, jerk-like eye movements with high velocity that are observed during fixation. Abnormal microsaccade rates and characteristics have been observed in a number of psychiatric and developmental disorders. In this study, we examine microsaccade differences in 43 non-clinical participants with high and low levels of ADHD-like traits, assessed with the Adult ADHD Self-Report Scale [28]. A simple sustained attention paradigm, which has previously been shown to elicit microsaccades, was employed. A positive correlation was found between ADHD-like traits and microsaccade rates. No other differences in microsaccade properties were observed. The relationship between ADHD traits and microsaccades suggests that oculomotor behaviour could potentially lead to the development of a biomarker for ADHD.
 
Summary of model choices related to the potential of interest for the two following subsections
Averaged bootstrap variances for the three estimations of the evoked potential at stimulus onset, on PZ and OZ electrodes: by the simple average, by the two-class GLM, and by the three-class GLM.
Article
The Eye Fixation Related Potential (EFRP) estimation is the average of EEG signals across epochs at ocular fixation onset. Its main limitation is the overlapping issue. Inter-Fixation Intervals (IFIs), typically around 300 ms in the case of unrestricted eye movements, depend on participants’ oculomotor patterns and can be shorter than the latency of the components of the evoked potential. If the duration of an epoch is longer than the IFI value, more than one fixation can occur, and some overlap between adjacent neural responses ensues. The classical average takes into account neither the presence of several fixations during an epoch nor the overlap. The Adjacent Response algorithm (ADJAR), which is popular for event-related potential estimation, was compared to the General Linear Model (GLM) on a real dataset from a conjoint EEG and eye-tracking experiment to address the overlapping issue. The results showed that the ADJAR algorithm was based on assumptions that were too restrictive for EFRP estimation. The General Linear Model appeared to be more robust and efficient. Different configurations of this model were compared to estimate the potential elicited at image onset, as well as the EFRP at the beginning of exploration. These configurations took into account the overlap between the event-related potential at stimulus presentation and the following EFRP, and the distinction between the potential elicited by the first fixation onset and subsequent ones. The choice of the General Linear Model configuration was a trade-off between assumptions about expected behavior and the quality of the EFRP estimation: the number of different potentials estimated by a given model must be controlled to avoid erroneous estimations with large variances.
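The contrast drawn above between simple averaging and a GLM can be illustrated with a finite-impulse-response design matrix: every event (stimulus onset, first fixation, later fixations) contributes a set of lagged regressors, and solving the joint least-squares problem separates overlapping responses instead of smearing them together as averaging does. This is a generic sketch with assumed names and no regularization, not the configuration used in the paper.

```python
import numpy as np

def glm_deconvolve(eeg, event_samples, n_lags):
    """Estimate event-related responses by least squares, accounting for overlap.

    eeg           : (T,) continuous EEG from one electrode
    event_samples : dict mapping event type -> array of onset sample indices
    n_lags        : number of post-event samples to estimate per event type
    Returns a dict mapping event type -> (n_lags,) estimated waveform.
    """
    T = len(eeg)
    names = list(event_samples)
    X = np.zeros((T, n_lags * len(names)))
    for k, name in enumerate(names):
        for onset in event_samples[name]:
            for lag in range(n_lags):
                if onset + lag < T:
                    X[onset + lag, k * n_lags + lag] = 1.0
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return {name: beta[k * n_lags:(k + 1) * n_lags] for k, name in enumerate(names)}

# e.g. glm_deconvolve(eeg, {"stimulus": stim_onsets,
#                           "first_fixation": first_fix,
#                           "later_fixation": later_fix}, n_lags=300)
```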
 
Article
Following a patent owned by Tobii, the framerate of a CMOS camera can be increased by reducing the size of the recording window so that it fits the eyes with minimum room to spare. The position of the recording window can be dynamically adjusted within the camera sensor area to follow the eyes as the participant moves the head. Since only a portion of the camera sensor data is communicated to the computer and processed, much higher framerates can be achieved with the same CPU and camera. Eye trackers can be expected to present data at a high speed, with good accuracy and precision, small latency and with minimal loss of data while allowing participants to behave as normally as possible. In this study, the effect of headbox adjustments in real-time is investigated with respect to the above-mentioned parameters. It was found that, for the specific camera model and tracking algorithm, one or two headbox adjustments per second, as would normally be the case during recording of human participants, could be tolerated in favour of a higher framerate. The effect of adjustment of the recording window can be reduced by using a larger recording window at the cost of the framerate.
 
Article
Purpose: Measuring near point of convergence (NPC) has recently emerged as a concussion assessment tool. Differences in administration of the test can be seen within the literature, which may affect results and normative values. There has been little investigation examining whether clinically accessible target types affect NPC, and no examination of NPC in a healthy, active young adult population. Methods: NPC was measured in 39 subjects using 5 different targets, two times each, with an accommodative ruler. Results: NPC ranged from 1.5-10 cm in this population, with an overall mean of 5.9±1.6 cm. There were significant differences between the middle-sized font and the line (p = .024) and pen (p = .047), and also between the largest-sized font and the line (p = .026). Conclusion: For physically active young adults, the measurement of NPC is affected by target type.
 
Article
Systematic tendencies such as the center and horizontal bias are known to have a large influence on how and where we move our eyes during static onscreen free scene viewing. However, it is unknown whether these tendencies are learned viewing strategies or more default tendencies in the way we move our eyes. To gain insight into the origin of these tendencies, we explored the systematic tendencies of infants (3- to 20-month-olds, N = 157) and adults (N = 88) in three different scene viewing data sets. We replicated common findings, such as longer fixation durations and shorter saccade amplitudes in infants compared to adults. The leftward bias had never been studied in infants, and our results indicate that it is not present, while we did replicate the leftward bias in adults. The general pattern of the results highlights the similarity between infant and adult eye movements. Similar to adults, infants’ fixation durations increase with viewing time, and the dependencies between successive fixations and saccades show very similar patterns. A straightforward conclusion to draw from this set of studies is that infant and adult eye movements are mainly driven by similar underlying basic processes.
 
Article
Recent technical developments and increased affordability of high-speed eye tracking devices have brought microsaccades to the forefront of research in many areas of sensory, perceptual, and cognitive processes. The present thematic issue on “Microsaccades: Empirical Research and Methodological Advances” invited authors to submit original research and reviews encompassing measurements and data analyses in fundamental, translational, and applied studies. We present the first volume of this special issue, comprising 14 articles by research teams around the world. Contributions include the characterization of fixational eye movements and saccadic intrusions in neurological impairments and in visual disease, methodological developments in microsaccade detection, the measurement of fixational eye movements in applied and ecological scenarios, and advances in the current understanding of the relationship between microsaccades and cognition. When fundamental research on microsaccades experienced a renaissance at the turn of the millennium (cf. Martinez-Conde, Macknik, & Hubel, 2004), one could hardly have been so bold as to predict the manifold applications of research on fixational eye movements in clinic and practice. Through this great variety of areas of focus, some main topics emerge.

One such theme is the applicability of microsaccade measures to neurological and visual disease. Whereas microsaccade quantifications have been largely limited to participants with intact visual and oculomotor systems, recent research has extended this interest into the realm of neural and ophthalmic impairment (see Alexander, Macknik, & Martinez-Conde, 2018, for a review). In this volume, Becker et al. analyze “Saccadic intrusions in amyotrophic lateral sclerosis (ALS)” and Kang et al. study “Fixational eye movement waveforms in amblyopia”, delving into the characteristics of fast and slow eye movements. Two other articles focus on how the degradation of visual information, which is relevant to many ophthalmic pathologies, affects microsaccadic features. Tang et al. investigate the “Effects of visual blur on microsaccades on visual exploration” and conclude that the precision of an image on the fovea plays an important role in the calibration of microsaccade amplitudes during visual scanning. Otero-Millan et al. use different kinds of visual stimuli and viewing tasks in the presence or absence of simulated scotomas to determine the contributions of foveal and peripheral visual information to microsaccade production. They conclude that “Microsaccade generation requires a foveal anchor”.

The link between microsaccadic characteristics and cognitive processes has been a mainstay of microsaccade research for almost two decades, since studies in the early 2000s connected microsaccade directions to the spatial location of covert attentional cues (Engbert & Kliegl, 2003; Hafed & Clark, 2002). In the present volume, Dalmaso et al. report that “Anticipation of cognitive conflict is reflected in microsaccades”, providing new insights about the top-down modulation of microsaccade dynamics. Ryan et al. further examine the relationship between “Microsaccades and covert attention” during the performance of a continuous, divided-attention task, and find preliminary evidence that microsaccades track the ongoing allocation of spatial attention. Krueger et al. discover that microsaccade rates modulate with visual attention demands and report that “Microsaccades distinguish looking from seeing”. Taking the ecological validity of microsaccade investigations one step further, Barnhart et al. evaluate microsaccades during the observation of magic tricks and conclude that “Microsaccades reflect the dynamics of misdirected attention in magic”.

Two articles examine the role of individual differences and intraindividual variability over time on microsaccadic features. In “Reliability and correlates of intra-individual variability in the oculomotor system”, Perquin and Bompas find evidence for intra-individual reliability over different time points, while cautioning that its use to classify self-reported individual differences remains unclear. Stafford et al. provide a counterpoint in “Can microsaccade rate predict drug response?” by supporting the use of microsaccade occurrence both as a trait measure of individual differences and as a state measure of response to caffeine administration.

Methodological and technical advances are the subjects of three papers in this volume. In “Motion tracking of iris features to detect small eye movements”, Chaudhary and Pelz describe a new video-based eye tracking methodology that relies on higher-order iris texture features, rather than on lower-order pupil center and corneal reflection features, to detect microsaccades with high confidence. Munz et al. present an open source visual analytics system called “VisME: Visual microsaccades explorer” that allows users to interactively vary microsaccade filter parameters and evaluate the resulting effects on microsaccade behavior, with the goal of promoting reproducibility in data analyses. In “What makes a microsaccade? A review of 70 years research prompts a new detection method”, Hauperich et al. review the microsaccade properties reported between the 1940s and today, and use the stated range of parameters to develop a novel method of microsaccade detection.

Lastly, Alexander et al. switch the focus from the past of microsaccade research to its future, by discussing the recent and upcoming applications of fixational eye movements to ecologically valid and real-world scenarios. Their review “Microsaccades in applied environments: real-world applications of fixational eye movement measurements” covers the possibilities and challenges of taking microsaccade measurements out of the lab and into the field.

Microsaccades have engaged the interest of scientists from different backgrounds and disciplines for many decades and will certainly continue to do so. One reason for this fascination might be microsaccades’ role as a link between basic sensory processes and high-level cognitive phenomena, making them an attractive focus of interdisciplinary research and transdisciplinary applications. Thus, research on microsaccades will not only endure, but keep evolving as the present knowledge base expands. Part 2 of the special issue on microsaccades is already in progress, with articles currently under review, and will be published in 2021.

References
Alexander, R. G., Macknik, S. L., & Martinez-Conde, S. (2018). Microsaccade characteristics in neurological and ophthalmic disease. Frontiers in Neurology, 9:144.
Engbert, R., & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43, 1035–1045.
Hafed, Z. M., & Clark, J. J. (2002). Microsaccades as an overt measure of covert attention shifts. Vision Research, 42, 2533–2545.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5(3), 229–240.
 
First-pass reading time (Panel A) and total reading time (Panel B) in both groups as a function of condition. Both first-pass and total reading times were larger in non-native speakers. Only native speakers showed an advantage in total reading time when reading formulaic sequences. 
Article
Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers, using corpus analysis to identify lexical bundles and eye tracking to measure reading times. Participants read sentences containing 4-grams and control phrases that were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We find no such processing advantage for non-native speakers, which suggests that native-like processing of lexical bundles develops only late in the acquisition process.
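To make the corpus step concrete, here is a toy sketch of how contiguous 4-grams can be extracted and counted; the miniature corpus and the frequency ranking are purely illustrative, and the study's actual procedure (including matching for sub-string frequency) is considerably more involved.

from collections import Counter
import re

# Toy corpus standing in for the large corpus used to identify bundles.
corpus = "as a result of the fact that as a result of this it is clear that"
tokens = re.findall(r"[a-z']+", corpus.lower())

# Count all contiguous 4-grams; highly frequent ones are lexical-bundle
# candidates (real studies also apply dispersion and frequency cut-offs).
fourgrams = Counter(zip(tokens, tokens[1:], tokens[2:], tokens[3:]))
for gram, freq in fourgrams.most_common(3):
    print(" ".join(gram), freq)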
 
Article
Reading with two eyes necessitates efficient processes of binocular vision, which provide a single percept of the text. These processes come with a binocular advantage: binocular reading shows shorter average fixation durations and sentence reading times than monocular reading. A couple of years ago, we showed for a small sample (N=13) that binocular advantages critically relate to the individual heterophoria (the resting state of vergence). In the present large-scale replication, we collected binocular eye movements (EyeLink II) for 94 participants who read 20 sentences monocularly and 20 sentences binocularly. Further, individual heterophorias were determined using three different optometric standards: objective eye tracking (EyeLink II at 60 cm), the Maddox wing test (at 30 cm), and measures following the “Guidelines for the application of the Measuring and Correcting Methodology after H.-J. Haase” (MCH; at 6 m). Binocular eye movements showed the typical pattern, and we replicated (1) binocular advantages of about 25 ms for average fixation durations and (2) a reduction in binocular advantages with increasing heterophoria – but only when heterophoria was identified by the EyeLink II or Maddox wing measures; MCH measures of heterophoria did not affect binocular advantages in reading. For large heterophorias, binocular reading even turned into a disadvantage. Implications for effect estimations and optometric testing will be discussed.
 
Descriptive statistics of advert saliency measures. 
The figure shows the saliency analysis of one single frame from an animated advert: original advert (top-left), edge pixels (top-right), motion pixels (bottom-left), and luminance pixels (bottom-right). The number of white pixels in each sub-image constitutes the respective values for the saliency measures on this frame. 
The figure shows the proportion of correct antisaccades by gender. 
Article
Twenty-six children in 3rd grade were observed while surfing freely on their favourite websites. Eye movement data were recorded, as well as synchronized screen recordings. Each online advert was analyzed to quantify low-level saliency features such as motion, luminance, and edge density. The eye movement data were used to register whether the children had attended to the online adverts. A mixed-effects multiple regression analysis was performed to test the relationship between visual attention on adverts and advert saliency features. The regression model also included individual level of gaze control and level of internet use as predictors. The results show that all measures of visual saliency had effects on children's visual attention, but these effects were modulated by children's individual level of gaze control.
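As a rough illustration of the kind of model described in this abstract, the sketch below fits a mixed-effects regression of advert viewing on saliency features with a by-child random intercept. Everything here is assumed for the example: the statsmodels call is simply one common way to fit such a model, and the column names, the simulated data, and the continuous dwell-time outcome are not the study's variables, measures, or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated stand-in data: one row per (child, advert) pair. Column names
# are illustrative, not the variables used in the study.
n_children, n_ads = 26, 20
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_ads),
    "motion": rng.uniform(0, 1, n_children * n_ads),
    "luminance": rng.uniform(0, 1, n_children * n_ads),
    "edge_density": rng.uniform(0, 1, n_children * n_ads),
    "gaze_control": np.repeat(rng.normal(size=n_children), n_ads),
    "internet_use": np.repeat(rng.normal(size=n_children), n_ads),
})
# Fake outcome: dwell time on the advert in seconds.
df["dwell"] = (0.5 + 0.8 * df.motion + 0.3 * df.edge_density
               + 0.2 * df.gaze_control + rng.normal(0, 0.3, len(df)))

# Mixed-effects regression with a random intercept per child.
model = smf.mixedlm(
    "dwell ~ motion + luminance + edge_density + gaze_control + internet_use",
    data=df, groups=df["child"])
print(model.fit().summary())

The random intercept absorbs stable between-child differences in how much the adverts are looked at overall, so the saliency coefficients mainly reflect variation across adverts.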
 
Article
This study investigated the correlations between form features and the legibility of Chinese characters by employing eye tracking in two experiments: Experiment 1 examined factors affecting Chinese character legibility with character modules and identified correlations between character form and the legibility of crossing strokes; Experiment 2 examined the effect of crossing strokes on perceived subjective complexity in both Chinese characters and English letters. The study found that enclosed Chinese characters affect perceived complexity and reduce saccadic amplitude. In addition, a greater number of stroke crossings produced higher perceived complexity for both Chinese characters and English letters. These results serve as a reference for predicting the legibility of Chinese characters and for evaluating type designs.
 
Illustration of the alternating cover test in esophoria. Solid/dashed: gaze lines during right-eye/left-eye viewing conditions. Gaze directions during right-eye viewing are more rightward than under left-eye viewing conditions. The right–left difference (positive in this case) equals the convergent (esophoric) vergence error, i.e. the difference between the actual and the required vergence angle.
A) ANOVA plot of the phoria angle dependent on the factors day (1-3) and method (automated alternating cover test (squares and solid lines), manual prism cover test (diamonds and dashed lines)). Lines and error bars: means across subjects and the 95% confidence interval of the means. The manual test yielded smaller exophoria measurements than the automated test. B) Scatter plot of the paired measurements. Dashed: line with slope one. Solid: linear regression (slope 0.53). The underestimate of exophoria by the manual prism cover test increased linearly with the phoria angle.
The symbols depict the phoria angles of the five subjects, and the lines the corresponding five linear regressions. The regression slopes (4.99 ± 1.90 deg/s) differed significantly from zero (p < 0.05).
Comparison of within-subject variability between studies.
Article
In within-subject and within-examiner repeated measures designs, measures of heterophoria with the manual prism cover test show standard deviations between 0.5 and 0.8 deg. We addressed the question of how this total noise is composed of variable errors related to the examiner (measurement noise), to the size of the heterophoria (heterophoria noise), and to the availability of sensory vergence cues (stimulus noise). We developed an automated alternating cover test (based on a combination of VOG and shutter glasses) which minimizes stimulus noise and has a defined measurement noise (sd=0.06 deg). In a within-subject design, 19 measures were taken within 1.5 min, and multiple such blocks were repeated either across days or across 45 min. Blocks were separated by periods of binocular viewing. The standard deviation of the heterophoria across blocks from different days or from the same day (sd=0.33 deg) was 6 times larger than expected based on the standard deviation within the block. The results show that about 42% of the inter-block variance with the manual prism cover test was related to variability of the heterophoria itself and not to measurement noise or stimulus noise. The heterophoria noise across blocks was predominantly induced during the intermediate binocular viewing periods.
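As a back-of-envelope check of the variance bookkeeping above, the short sketch below expresses the reported heterophoria variability as a share of the manual test's total variance. The 0.5 deg total standard deviation is simply the lower end of the range quoted in the first sentence, so the resulting percentage is illustrative and only approximately matches the ~42% reported.

# Illustrative variance shares based on the figures quoted in the abstract
# above; this is not a re-analysis of the study's data.
sd_heterophoria = 0.33   # deg, heterophoria variability across blocks
sd_manual_total = 0.50   # deg, lower end of the manual prism cover test range

# For independent error sources, variances (squared sds) add, so the share
# attributable to heterophoria variability is the ratio of variances.
share = sd_heterophoria**2 / sd_manual_total**2
print(f"Heterophoria share of total variance: {share:.0%}")  # about 44%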
 
The number of one-point measures needed to obtain a two-sample t-test significant at the 95% level, for three different effect magnitudes. Values below 5 data points are not shown, as t-tests are not reliable for such small samples.
The empirical distribution of a two-point dwell time sampling error.
Article
We use simulations to investigate the effect of sampling frequency on common dependent variables in eye-tracking. We identify two large groups of measures that behave differently, but consistently. The effect of sampling frequency on these two groups of measures is explored, and simulations are performed to estimate how much data are required to overcome the uncertainty introduced by a limited sampling frequency. Both simulated and real data are used to estimate the temporal uncertainty of data produced at low sampling frequencies. The aim is to provide easy-to-use heuristics for researchers using eye-tracking. For example, we show how to compensate for the uncertainty of a low sampling frequency with more data and with post-experiment adjustments of measures. These findings have implications primarily for researchers using naturalistic setups, where sampling frequencies are typically low.
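As an illustration of the kind of simulation such an analysis rests on, the sketch below estimates how much temporal uncertainty a finite sampling frequency adds to a dwell-time measure. The dwell-duration range, sampling rates, and trial counts are arbitrary assumptions chosen for the example, not values taken from the article.

import numpy as np

rng = np.random.default_rng(0)

def dwell_time_errors(fs, n_trials=10_000):
    """Simulate measuring dwells of random duration with a tracker sampling
    at fs Hz. Onset falls at a random phase of the sampling clock, so the
    measured dwell is quantised to whole inter-sample intervals."""
    dt = 1.0 / fs
    true_dwell = rng.uniform(0.200, 0.600, n_trials)   # s, arbitrary range
    onset = rng.uniform(0.0, dt, n_trials)             # random clock phase
    offset = onset + true_dwell
    # Count the sample instants that land inside each dwell.
    n_samples = np.floor(offset / dt) - np.ceil(onset / dt) + 1
    measured = np.maximum(n_samples, 0) * dt
    return measured - true_dwell                       # per-trial error, s

for fs in (30, 60, 250, 1000):
    err = dwell_time_errors(fs)
    print(f"{fs:>5d} Hz: sd of dwell-time error = {err.std() * 1000:5.1f} ms")

Because the error scale is set by the sampling interval, its standard deviation shrinks roughly in proportion to 1/fs, and averaging over more trials is what allows a low-frequency setup to recover comparable precision.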
 
Article
Previous cross-cultural studies have found that culture can shape eye movements during scene perception, but that research has been limited to the West. This study recruited Chinese and African students to document cultural effects on two phases of scene perception. In the free-viewing phase, Africans fixated more on the focal objects than Chinese did, while Chinese paid more attention to the backgrounds than Africans, especially on the fourth and fifth fixations. In the recognition phase, there was no cultural difference in perception, but Chinese participants recognized more objects than Africans. We conclude that cultural differences in scene perception exist when there is no explicit task, and more clearly in its later period, and that some differences may be hidden in deeper processes (e.g., memory) during an explicit task.
 
Scientific illustration on the flight of birds. Otto Lilienthal, Der Vogelflug als Grundlage der Fliegekunst, Berlin, 1889. 
Exemplary task from the multimedia condition. Adapted from Ögren et al. (2016). 
Eye movement modeling examples with a traditional dot display (left) and a spotlight display (right). Material used in Jarodzka et al. (2010). 
Article
Eye tracking is increasingly being used in Educational Science, and the interest of the eye tracking community in this topic has grown accordingly. In this paper we briefly introduce the discipline of Educational Science and explain why it might be interesting to couple it with eye tracking research. We then introduce three major research areas in Educational Science that have already successfully used eye tracking: First, eye tracking has been used to improve the instructional design of computer-based learning and testing environments, often using hyper- or multimedia. Second, eye tracking has shed light on expertise and its development in visual domains, such as chess or medicine. Third, eye tracking has recently also been used to promote visual expertise by means of eye movement modeling examples. We outline the main educational theories for these research areas and indicate where further eye tracking research is needed to expand them.
 
Article
We demonstrate the use of different visual aggregation techniques to obtain uncluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which can use a saccade's direction, onset timestamp, magnitude, or a combination of these for the edge compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, the cosine similarity between two flow direction maps provides a similarity map for comparing two scanpaths. Last, we provide examples involving basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights into scanpath exploration and informative illustrations of the eye movement data.
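For readers who want to experiment with the first and last steps of such a pipeline, the sketch below clusters synthetic fixation points with mean shift and compares two flow direction maps via per-pixel cosine similarity. It uses scikit-learn's MeanShift rather than the authors' implementation, and the bandwidth, the synthetic fixations, and the random flow maps are illustrative assumptions; the edge-bundling and OLIC stages are not reproduced here.

import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)

# Synthetic fixation points (x, y in pixels) standing in for real data.
fixations = np.vstack([
    rng.normal((200, 150), 15, size=(40, 2)),
    rng.normal((600, 400), 15, size=(40, 2)),
])

# Step 1: cluster fixations with mean shift (bandwidth chosen by eye here).
ms = MeanShift(bandwidth=50.0).fit(fixations)
print("cluster centres:\n", ms.cluster_centers_)

# Step 2 (simplified): compare two flow direction maps by cosine similarity.
# Each map holds a 2-D direction vector per pixel; a real pipeline would
# obtain these from the edge-bundling stage.
def cosine_similarity_map(map_a, map_b, eps=1e-9):
    dot = (map_a * map_b).sum(axis=-1)
    norm = np.linalg.norm(map_a, axis=-1) * np.linalg.norm(map_b, axis=-1)
    return dot / (norm + eps)

flow_a = rng.normal(size=(64, 64, 2))
flow_b = flow_a + rng.normal(scale=0.1, size=(64, 64, 2))
sim = cosine_similarity_map(flow_a, flow_b)
print("mean cosine similarity:", sim.mean())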
 
Top-cited authors
Kenneth Holmqvist
  • Nicolaus Copernicus University; Universität Regensburg; University of the Free State
Reinhold Kliegl
  • Universität Potsdam
Umesh Patil
  • University of Cologne
Benjamin W Tatler
  • University of Aberdeen
Peter König
  • Universität Osnabrück