Evaluating quality of dispersion based fixation
detection algorithm
Katarzyna Harężlak and Paweł Kasprowski
Abstract Information hidden in the eye movement signal can be a valuable source
of knowledge about the human mind. This information is commonly used in multiple
fields of interest like psychology, medicine, business, advertising or even software
development. Proper analysis of the eye movement signal requires its elements
to be extracted. The most important ones are fixations - moments when the eyes are
almost stable and the brain is acquiring information about the scene. Several
algorithms aimed at detecting fixations have been developed. The studies presented
in this paper focus on one of the most common dispersion based algorithms - the I-DT
one. Various ways of evaluating its results were analyzed and compared. Some
extensions of the algorithm were made as well.
1 Introduction
Human eyes play an important role in interpersonal communication and in gathering
knowledge about the surrounding world. The desire to understand this learning process
leads to many questions: What is a subject looking at? What does one
see when looking at a given point? Did one find the searched-for information? What kind of
information was gained when looking at a particular area? Is one looking at the expected
point of regard? Finding answers to these and other questions is an important task
in many fields of interest like psychology, medicine, business, advertising or software
development. This need is reflected in current research areas, among which the
Katarzyna Harężlak
Silesian University of Technology, Institute of Informatics, Gliwice, Poland e-mail:
Paweł Kasprowski
Silesian University of Technology, Institute of Informatics, Gliwice, Poland e-mail:
This is a pre-print. The original version was published by
Springer International Publishing Switzerland,
Information Sciences and Systems 2014, pp 97-104, DOI 10.1007/978-3-319-09465-6_11
study of the eye movement signal has a significant place, because the information
hidden in this signal can be a valuable source of knowledge.
Studies conducted in this field have resulted in distinguishing a few components of the
signal. Its fundamental unit is a fixation, when the point-of-gaze remains within a small
area for a given time. Fixations are interlaced with saccades - quick movements
made to reach another point-of-regard [1, 5, 6, 10, 15]. Example interpretations
of fixation and saccade features, in terms of their usability in the cognitive analysis of
human behavior, were presented in [9]. These guidelines show how important
a precise separation of these parts from an eye movement signal is.
However, deeper analysis of a fixation reveals within it other types of movement:
tremors, microsaccades and drifts [6]. The quality of measurements is also an
important issue. For these reasons a fixation cannot be treated as a single point, and some
additional measures have to be involved, e.g. the size of the spatial dispersion between
points in the fixation. Additionally, the characteristics of an eye movement signal
differ across subjects and tasks, which makes the identification of fixations a
complex task, still being solved by researchers.
Several algorithms have been developed for identifying fixations and saccades.
Among them, Dispersion-Based and Velocity-Based algorithms are the most popular
ones [11, 12]. The first group of methods identifies fixations by analyzing the distances
between consecutive points. A group of points satisfying the condition defined by a given
dispersion threshold is treated as a fixation. This threshold can refer to various measures
- the distance between any two points, the largest distance between points in a
fixation, or the largest distance from the center of a fixation to one of its constituting
points [11, 12]. The most often analyzed algorithms in this group are I-DT (Dispersion
Threshold Identification) and MST (Minimum Spanning Tree Identification)
[4, 7, 11, 12].
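The dispersion measures listed above can be written down compactly. The following Python sketch (illustrative only, not the authors' code) shows the two most common variants: the maximal pairwise distance within a window of points and the maximal distance from the window's centroid.

```python
import math

def dispersion_max_pairwise(points):
    """Largest distance between any two points in the window."""
    return max(
        (math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]),
        default=0.0,
    )

def dispersion_from_centroid(points):
    """Largest distance from the centroid of the window to any of its points."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return max(math.dist((cx, cy), p) for p in points)
```

For the same window of points the two measures generally differ, so a dispersion threshold is only meaningful together with the measure it refers to.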
Studies of point velocities led to the development of algorithms separating fixation
and saccade points based on their point-to-point velocities. Well-known
representatives of these methods are the I-VT (Velocity Threshold Identification)
and HMM (Hidden Markov Model Identification) methods [4, 7, 11, 12].
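As an illustration of the velocity-based family, a minimal I-VT-style classifier could look as follows; the 'F'/'S' labels, the handling of the first sample and the threshold units (position units per time unit) are assumptions made for the example.

```python
import math

def ivt_classify(points, timestamps, v_threshold):
    """Label each sample 'F' (fixation) or 'S' (saccade) by point-to-point velocity."""
    labels = ['F']  # the first sample has no predecessor; assume fixation
    for i in range(1, len(points)):
        dt = timestamps[i] - timestamps[i - 1]
        v = math.dist(points[i], points[i - 1]) / dt
        labels.append('F' if v < v_threshold else 'S')
    return labels
```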
The common goal of these algorithms is to extract sets of fixations and saccades
facilitating the interpretation of eye movement signals. Fixations are usually considered
to have a duration of at least 150 msec. [5, 11, 12]; nevertheless, discussions
concerning fixation duration in terms of performed tasks can be found [10, 14].
However, there are other factors influencing the analyzed measure. The outcome
depends on input parameters, which are different kinds of thresholds. Different ranges of
these parameter values can lead to a diversity of results, i.e. in the number and duration of
fixations and saccades. For this reason, several experiments were conducted to check
the impact of these parameters on the obtained results [1, 2, 4, 11].
Although the aforementioned studies allowed for drawing interesting conclusions,
they did not exhaust the topic. The aim of this research is to continue the analysis
of the dispersion based I-DT algorithm, which is claimed to be a robust one [1, 11],
taking fixation identification into account.
2 Algorithm description
The presented studies used as a basis the classic I-DT algorithm [11], which converts
the eye movement signal into a list of fixations in two steps. In the first step, each gaze
point is classified as a fixation (F) or a saccade (S). A point is considered to be
part of a fixation when a certain number of previous points (called the window) lie closer
than a predefined threshold. There are different possibilities how to measure the
distance [11, 12, 2], but the most common is the maximal distance between any two
points in the window.
The second step is a consolidation of consecutive F-points into fixations.
Fixations with a length lower than another predefined threshold (called later
minLength) are removed. The output of the algorithm is a set of fixations. Every
fixation has four parameters: start, length and position with two values: x, y.
Parameters x and y are typically calculated as the mean of all points in the fixation,
although there are other possibilities (e.g. the median).
The algorithm used in this research extends the classic I-DT algorithm by introducing
an additional step that makes it more flexible for low quality data (similar to
[4]). The classic I-DT algorithm builds the fixation list in step 2 based on the minLength
threshold - that is, it forms fixations only from courses of fixation points that are longer than
that threshold. In our modified algorithm, every course of F points is used to build
a fixation. In an additional step 3, we calculate spatial and temporal distances between
every two neighboring fixations. If both distances are below the respective thresholds, the two
fixations are merged into one. Only after this step is the minLength threshold applied to
every fixation.
Dispersion threshold algorithm
Input: list of (x, y) points
Step 1. Classify each point as a fixation (F) or a saccade (S) based on history:
    a point is a fixation when the max distance among the window previous points
    is less than threshold.
Step 2. Build fixations from groups of neighboring F points:
    for every course of at least two subsequent F points,
    build fixation(start, length, x, y), where x and y are the average values
    of x and y among all points in the fixation.
Step 3. Merge neighboring fixations:
    for every two subsequent fixations,
    if the temporal gap (saccade length) between them is less than tgapThres
    and the spatial gap (Euclidean distance between them) is less than sgapThres,
    then merge these two fixations into one.
Step 4. Remove too short fixations:
    for every fixation, if the fixation length is less than minLength,
    remove the fixation.
Output/result: list of fixations
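The four steps above can be sketched in Python. This is an illustrative reimplementation, not the authors' code: parameter names follow the paper (window, threshold, sgapThres, tgapThres, minLength), but lengths and gaps are expressed in samples rather than milliseconds, and the exact windowing in step 1 is one plausible reading of the description.

```python
import math
from dataclasses import dataclass

@dataclass
class Fixation:
    start: int   # index of the first sample
    length: int  # number of samples
    x: float
    y: float

def max_dispersion(points):
    """Maximal distance between any two points (step 1 dispersion measure)."""
    return max((math.dist(p, q) for i, p in enumerate(points)
                for q in points[i + 1:]), default=0.0)

def idt(points, window=5, threshold=1.0, sgap_thres=0.0, tgap_thres=0, min_length=5):
    n = len(points)
    # Step 1: a point is F when the dispersion over the preceding window is small
    is_f = [False] * n
    for i in range(window, n):
        if max_dispersion(points[i - window:i + 1]) < threshold:
            is_f[i] = True
    # Step 2: build fixations from courses of at least two subsequent F points
    fixations, i = [], 0
    while i < n:
        if is_f[i]:
            j = i
            while j < n and is_f[j]:
                j += 1
            if j - i >= 2:
                xs = [points[k][0] for k in range(i, j)]
                ys = [points[k][1] for k in range(i, j)]
                fixations.append(Fixation(i, j - i, sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    # Step 3: merge neighbors whose temporal AND spatial gaps are below thresholds
    merged = []
    for f in fixations:
        if merged:
            prev = merged[-1]
            t_gap = f.start - (prev.start + prev.length)
            s_gap = math.dist((prev.x, prev.y), (f.x, f.y))
            if t_gap < tgap_thres and s_gap < sgap_thres:
                total = prev.length + f.length
                merged[-1] = Fixation(prev.start, f.start + f.length - prev.start,
                                      (prev.x * prev.length + f.x * f.length) / total,
                                      (prev.y * prev.length + f.y * f.length) / total)
                continue
        merged.append(f)
    # Step 4: drop fixations shorter than min_length
    return [f for f in merged if f.length >= min_length]
```

With sgap_thres=0 and tgap_thres=0 step 3 never merges anything, which corresponds to the classic I-DT variant used as the baseline in the experiments.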
The additional fixation merging step should make the algorithm more robust to
artifacts - sudden and unexpected changes of the measured eye position due to
imperfections of eye tracker algorithms and noise. Artifact removal in an eye movement
signal is not simple, especially when one does not want to affect parts of a correct
signal. A similar setup, but for the I-VT algorithm and restricted only to trackloss
situations (when there is no data available), was presented in [7].
3 Quality evaluation
The main problem for every fixation detection algorithm is how to check whether
it works correctly. The ground truth is in most cases impossible to determine, as
we do not know the correct positions of fixations and saccades in an eye movement
signal. There are multiple ways to approach the problem. [7] and [13] compared
fixations detected by their algorithms to fixations detected by manual inspection
performed by experienced users. [2] used a somewhat arguable similarity criterion: if different
algorithms give similar data, the data is considered reliable. Another possible
criterion, proposed in [16], is the calculation of the so-called nearest neighbour index (NNI)
for every fixation.
In this study we used a specially designed stimulus with a point jumping over
a screen. The task of the person being tested was to follow the point with their eyes
as it changed its location at specific moments in time. The fact that fixation
placements were known gave us the opportunity to create an estimated correct sequence
of fixations (later called the template sequence) and compare the results of the algorithm
to this sequence.
Experiment. Altogether 24 participants took part in the experiments,
for which 40 recordings were registered using the stimulus described earlier.
To obtain meaningful results, only samples with an accuracy error, estimated
during the calibration step, lower than 1.5 deg were chosen. Every stimulus presentation
consisted of 21 points evenly distributed over the whole screen. The point was displayed
in every location for about 3 seconds. Eye movements were recorded with an eye
tracker using a single web camera with a USB 2.0 interface; the sampling frequency
was 20 Hz.
Methodology. The algorithm presented in section 2 was used to produce sequences
of fixations for every sample. The algorithm was run with different values
of threshold, spatial gap (sgapThres) and temporal gap (tgapThres). Parameters
window and minLength were set to 5, as this seems to be a reasonable choice according
to the literature. Assessment of the obtained results - sequences of fixations generated
by the algorithm with various values of the three parameters threshold, sgapThres
and tgapThres - was done using several metrics described in Table 1.
To verify the correctness of the algorithm, the metrics described in Table 1 were
calculated for every set of parameters and compared to the metrics calculated for the
template fixation sequence. For that sequence the values were: AFN=21, AFD=71.29,
ASN=20 and ASA=19.2.
AFN - Average Fixation Number: the number of fixations divided by the number of
elements in the analyzed set of values
AFD - Average Fixation Duration: the summed fixation length, measured in
milliseconds, divided by the number of elements in the analyzed set of values
ASN - Average Saccades Number: the number of saccades divided by the number of
elements in the analyzed set of values
ASA - Average Saccades Amplitude: the sum of distances between every two
consecutive fixations divided by the number of elements in the analyzed set of values
Table 1 Metrics used to describe a sequence of fixations
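Assuming a fixation is represented as a (start_ms, length_ms, x, y) tuple and a saccade as the gap between two consecutive fixations, per-recording versions of the Table 1 metrics could be computed as follows (a sketch; the exact averaging sets used in the paper may differ).

```python
import math

def sequence_metrics(fixations):
    """AFN/AFD/ASN/ASA for one recording's list of (start_ms, length_ms, x, y)."""
    n = len(fixations)
    afn = n
    afd = sum(f[1] for f in fixations) / n if n else 0.0
    asn = max(n - 1, 0)  # one saccade between each pair of consecutive fixations
    amps = [math.dist(a[2:], b[2:]) for a, b in zip(fixations, fixations[1:])]
    asa = sum(amps) / asn if asn else 0.0
    return {"AFN": afn, "AFD": afd, "ASN": asn, "ASA": asa}
```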
Additionally, several metrics were calculated that directly measure the difference
between a given sequence and the template sequence. These metrics are
presented in Table 2. More information about the metrics used may be found in [3] and
FQnS - Fixation Quantitative Score: the percentage of points included in a fixation
for which the distance from the stimulus position is less than one third of
the last saccade amplitude
FQlS - Fixation Qualitative Score: the sum of distances between calculated
fixations and stimuli positions divided by the number of stimuli positions presented
SQnS - Saccade Quantitative Score: the sum of saccade amplitudes divided by the
sum of distances between the stimuli positions presented
LevDist - Levenshtein Distance: the Levenshtein distance between a calculated
sequence of fixations and the template sequence
Table 2 Metrics used to calculate the similarity to the template
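The LevDist metric compares two sequences of symbols; assuming each fixation is first mapped to a symbol (e.g. the index of the nearest stimulus point - the paper does not detail this mapping), a standard dynamic-programming implementation is:

```python
def levenshtein(a, b):
    """Classic Levenshtein edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]
```

A distance of 0 would mean the detected fixation sequence matches the template exactly; splitting or merging fixations adds insertions or deletions and raises the score.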
4 Results
At the beginning of the studies, the influence of one parameter - threshold - on the
results provided by the algorithm was checked. Its value, initially set to 0.5 degree, was
incremented by 0.05 up to the value of 10 degrees. The sgapThres and tgapThres
were held constant at 0. As the main metric assessing the quality of the result, the
Levenshtein distance (LevDist) was chosen. The analysis of its values revealed
two threshold ranges having a meaningful influence on them. Low threshold
values caused splitting of a fixation when the amplitude of eye trembling during
the fixation was higher than the given threshold. In the described studies this was the case
when the threshold was lower than 2.5 deg, which can be observed in Fig 1 (left
side). Above this value, LevDist stabilized until the threshold reached the second
range, values higher than 8 degrees. Defining this parameter
at such a high level resulted in merging fixations with points of neighboring saccades,
which led to increasing LevDist values.
To check the correctness of these findings, they were compared to the values obtained
for the FQnS metric, which are presented in Fig 1 (right side). The symmetric shape
of the charts suggested a strong correlation between the elements of both sets. This
correlation turned out to be very strong, with a coefficient equal to -0.967. A similar relationship
was found between the LevDist and AFN metrics; in this case the correlation coefficient
was -0.981.
Fig. 1 Charts presenting the average values of LevDist (left side) and of FQnS (right side) metrics
for various thresholds
The analysis of the obtained results regarded the duration of a fixation
as well. As mentioned earlier, the template fixation length was defined
as the time period when the stimulus was displayed in one position (3565 msec.). It is
a well-known problem that this value cannot be reached because, when
a point on the screen changes its position, it takes some time for the human brain
to react and to initiate an eye movement. For this reason the duration of a measured
fixation will never be ideal.
In Fig 2 it can be noticed that, in the case of low threshold values, the fixations found
by the algorithm feature a short duration. This is another confirmation that setting
the threshold in this range can result in splitting one fixation into a few small ones. It can
also be observed that extending the threshold to 10 degrees makes the values of the AFD
metric closer to the ideal value. However, it cannot be assumed that better results are
obtained for high thresholds. As discussed above, this occurrence is
the effect of attaching to a fixation points which in fact do not belong to it. This
proves that the correctness of the found fixations should be ensured by more than
one metric.
Fig. 2 Chart presenting the average values of AFD metric for various thresholds
The last metrics taken into account during the studies were ASA and SQnS. Comparing
these two sets of values, it turned out that their correlation is significant (with a
coefficient equal to -0.542), but not at such a high level as in the earlier described cases.
Both metrics involve the amplitude of saccades, but use it in slightly different
ways. The first of them is strongly dependent on the number of fixations determined by the
algorithm, which in turn determines the number of saccades (ASN). The second metric is
the ratio of the sum of the detected saccade amplitudes to the sum of the saccade
amplitudes existing in the template.
These different approaches account for the differences in the results. For small
threshold values, the signal is divided into small fixations, between which saccades
with amplitudes smaller than expected are defined. The sum of these amplitudes
divided by the number of saccades yields a low ASA value (Fig 3, left side). In the case
of the second metric (SQnS), the sum of such determined amplitudes can in fact be
almost equal to the sum of the bigger amplitudes of the smaller set of saccades. This is
why the SQnS values are almost stable up to the threshold value of 8 degrees, when
the problem of merging a fixation with some points of neighboring saccades occurs
(Fig 3, right side).
Fig. 3 Charts presenting the average values of ASA (left side) and SQnS (right side) metrics for
various thresholds
5 Conclusion
The main goal of the research presented in this paper was to check how various
parameters of the well-known I-DT algorithm, used for extracting the set of fixations
from the eye movement signal, can influence the obtained results. The experiments
were based on a specially designed stimulus with a point jumping over a screen. The
known positions of the point were used as the reference template. Owing to that,
it was possible to determine which ranges of parameter values provide reliable
outcomes. This was especially visible for the main threshold parameter. The assessment
was supported by the usage of various metrics. Convergent results of independently
calculated metric values confirmed the correctness of the algorithm. The results for the
other parameters were not so unambiguous and need further studies. The average
best LevDist for the I-DT algorithm without the additional merging step (i.e. tgapThres and
sgapThres equal to 0) was 3.175. The average LevDist for the algorithm with the
additional step was 2.85. The merging step improved results in 28% of cases. However,
these results were achieved for different values of sgapThres and tgapThres
optimized separately for every file. It was impossible to find one universal set of
sgapThres and tgapThres values that, on average, gave better results for every file
than the algorithm without the merging step (sgapThres=0, tgapThres=0). Nevertheless,
it was observed that for a given threshold parameter value, higher tgapThres values
improved LevDist by decreasing its value.
References
1. Blignaut, P., Beelders, T.: The effect of fixational eye movements on fixation identification
with a dispersion-based fixation detection algorithm. Journal of Eye Movement Research,
2(5):4, 1–14 (2009)
2. Blignaut, P.: Fixation identification: The optimum threshold for a dispersion algorithm.
Attention, Perception, & Psychophysics, 71(4), 881–895 (2009)
3. Kasprowski, P., Komogortsev, O. V., Karpov, A.: First eye movement verification and identifi-
cation competition at BTAS 2012. In The IEEE Fifth International Conference on Biometrics:
Theory, Applications and Systems (BTAS 2012), 195–202 (2012)
4. Komogortsev, O. V., Gobert, D. V., Jayarathna, S., Koh, D. H., Gowda, S. M.: Standardization
of automated analyses of oculomotor fixation and saccadic behaviors. IEEE Transactions on
Biomedical Engineering, 57(11), 2635–2645 (2010)
5. Manor B. R., Gordon E.: Defining the temporal threshold for ocular fixation in free-viewing
visuocognitive tasks. Journal of Neuroscience Methods, 128(1–2), 85-93 (2003)
6. Martinez-Conde, S., Macknik, S.L., Hubel, D.H.: The role of fixational eye movements in
visual perception. Nature Reviews Neuroscience, 5, 229–240 (2004)
7. Munn, S. M., Stefano, L., Pelz, J. B.: Fixation-identification in dynamic scenes: Comparing
an automated algorithm to manual coding. In Proceedings of the 5th symposium on Applied
perception in graphics and visualization, 33–42, ACM Press, (2008)
8. Nystrom, M., Holmqvist, K.: An adaptive algorithm for fixation, saccade, and glissade detec-
tion in eye-tracking data. Behavior Research Methods, 42(1), 188-204 (2010)
9. Poole, A., Ball, L. J.: Eye Tracking in Human-Computer Interaction and Usability Research:
Current Status and Future Prospects. In Encyclopedia of Human Computer Interaction,
IGI Global (2005)
10. Rayner, K: Eye movements in reading and information processing: 20 years of research. Psy-
chol Bull, 124(3), 372-422 (1998)
11. Salvucci, D. D., Goldberg, J. H.: Identifying fixations and saccades in eye-tracking protocols.
In Proceedings of the 2000 Symposium on Eye Tracking Research and Applications, 71–78,
NY: ACM Press (2000)
12. Shic, F., Chawarska, K., Scassellati, B.: The incomplete fixation measure. In Proceedings of
the 2008 Symposium on Eye Tracking Research and Applications, 111–114, NY: ACM Press
(2008)
13. Tafaj, E., Kasneci, G., Rosenstiel, W., and Bogdan, M.: Bayesian online clustering of eye
movement data. In Proceedings of the Symposium on Eye Tracking Research and Applica-
tions, ETRA 12, 285-288, NY: ACM Press (2012)
14. van der Lans, R., Wedel, M., Pieters, R.: Defining Eye-Fixation Sequences Across Individuals
and Tasks: The Binocular-Individual Threshold (BIT) Algorithm. Behavior Research Meth-
ods, 43(1) 239–257 (2011)
15. Veneri, G., Piu, P., Federighi, P., Rosini, F., Federico, A., Rufa, A.: Eye Fixations Identification
based on Statistical Analysis - Case study. In Cognitive Information Processing (CIP),
2010 2nd International Workshop on, 446–451 (2010)
16. Camilli, M., et al.: ASTEF: A simple tool for examining fixations. Behavior Research Methods,
40(2), 373–382 (2008)
... The recordings were divided into two types of events: fixations and saccades by means of the Dispersion Threshold Identification (IDT) algorithm [Salvucci and Goldberg 2000], [Hareżlak and Kasprowski 2014]. To classify a point as a fixation we used the dispersion window of 50 ms and the dispersion of 1 degree of visual angle in size. ...
Conference Paper
Full-text available
The aim of this research was to compare visual patterns while examining radiographs in groups of people with different levels and different types of expertise. Introducing the latter comparative base is the original contribution of these studies. The residents and specialists were trained in medical diagnosing of X-Rays and for these two groups it was possible to compare visual patterns between observers with different level of the same expertise type. On the other hand, the radiographers who took part in the examination - due to specific of their daily work - had experience in reading and evaluating X-Rays quality and were not trained in diagnosing. Involving this group created in our research the new opportunity to explore eye movements obtained when examining X-Ray for both medical diagnosing and quality assessment purposes, which may be treated as different types of expertise. We found that, despite the low diagnosing performance, the radiographers eye movement characteristics were more similar to the specialists than eye movement characteristics of the residents. It may be inferred that people with different type of expertise, yet after gaining a certain level of experience (or practise), may develop similar visual patterns which is the original conclusion of the research.
... Eye movement analysis is usually based on a fixationssaccades sequence extracted from a registered signal. It has been shown that such a sequence structure is sensitive to the fixation detection algorithm settings ( (Shic, Scassellati, & Chawarska, 2008), (Hareżlak & Kasprowski, 2014)), and it is difficult to visually check, if the settings used are adequate. It became possible to present the detailed characteristics of fixations and saccades in 2D space on a single plot by means of the GSSP. ...
Full-text available
Eye tracking has become a valuable way for extending knowledge of human behavior based on visual patterns. One of the most important elements of such an analysis is the presentation of obtained results, which proves to be a challenging task. Traditional visualization techniques such as scan-paths or heat maps may reveal interesting information, nonetheless many useful features are still not visible, especially when temporal characteristics of eye movement is taken into account. This paper introduces a technique called gaze self-similarity plot (GSSP) that may be applied to visualize both spatial and temporal eye movement features on the single two-dimensional plot. The technique is an extension of the idea of recurrence plots, commonly used in time series analysis. The paper presents the basic concepts of the proposed approach (two types of GSSP) complemented with some examples of what kind of information may be disclosed and finally showing areas of the GSSP possible applications.
... The first one (I-DT) identifies fixations as groups of consecutive points within a dispersion defined by a chosen threshold. The second algorithm (I-VT) classifies each point as a fixation or saccade based on a velocity threshold: if the point-to-point velocity is below the defined threshold, it becomes a fixation point, otherwise it is classified as a saccade [7,17]. ...
Conference Paper
There is much research indicating that eye tracking methods are a promising approach which can be used in revealing experts’ visual patterns and acquiring information regarding their subconscious behaviour while making decisions in professional tasks. The studies presented in this paper extend the aforementioned investigations and were aimed at checking the possibility of differentiating experts and laymen based on their eye movement characteristics. For this purpose, an experiment in the radiology field was chosen. The studies revealed not only significant differences between visual patterns of the analysed groups but also demonstrated that distinguishing experts from novices based on their eye movements is feasible. The classification performance was high and, dependent on the method applied for defining the test set, amounted to 85% or 93% correctly-classified subjects. The investigation concerning the possibility of recognizing who was performing the experiment task—an expert or layman—showed that dependent on the radiology image explored—the performance in the majority of cases was between 79% and 93%.
... Some of them are devoted to the analysis of the eye movement signal in terms of its features extraction and their quantification [19][20][21][22]. In others works, methods for the selection of eye movement components and mapping them to points of regard, may be found [11,25]. ...
Conference Paper
The eye movement analysis undertaken in many research is conducted to better understand the biology of the brain and oculomotor system functioning. The studies presented in this paper considered eye movement signal as an output of a nonlinear dynamic system and are concentrated on determining the chaotic behaviour existence. The system nature was examined during a fixation, one of key components of eye movement signal, taking its vertical velocity into account. The results were compared with those obtained in the case of the horizontal direction. This comparison showed that both variables provide the similar representation of the underlying dynamics. In both cases, the analysis revealed the chaotic nature of eye movement for the first 200 ms, just after a stimulus position change. Subsequently, the signal characteristic tended to be the convergent one, however, in some cases, depending on a part of the fixation duration the chaotic behaviour was still observable.
... A group of points for which these distances are smaller than a predefined dispersion threshold is treated as a fixation. The most often analyzed algorithms in this group are I-DT (Dispersion Threshold Identification) (Salvucci and Goldberg, 2000;Shic et al., 2008;Hareżlak and Kasprowski, 2014). ...
The performance and quality of medical procedures and treatments are inextricably linked to technological development. The application of more advanced techniques provides the opportunity to gain wider knowledge and deeper understanding of the human body and mind functioning. The eye tracking methods used to register eye movement to find the direction and targets of a person's gaze are well in line with the nature of the topic. By providing methods for capturing and processing images of the eye it has become possible not only to reveal abnormalities in eye functioning but also to conduct cognitive studies focused on learning about peoples’ emotions and intentions. The usefulness of the application of eye tracking technology in medicine was proved in many research studies. The aim of this paper is to give an insight into those studies and the way they utilize eye imaging in medical applications. These studies were differentiated taking their purpose and experimental paradigms into account. Additionally, methods for eye movement visualization and metrics for its quantifying were presented. Apart from presenting the state of the art, the aim of the paper was also to point out possible applications of eye tracking in medicine that have not been exhaustively investigated yet, and are going to be a perspective long-term direction of research.
... [10],[6]), and it is difficult to visually check if the settings used are relevant. Using the GSSP plot it is possible to see the detailed characteristics of fixations and saccades in 2D space on a single plot. ...
Conference Paper
Full-text available
Eye tracking becomes more and more important way to analyze human behavior. However, a proper analysis of data obtained from an eye tracker occurs to be a challenging task. Traditional visualiza-tion techniques like scanpaths or heat maps may reveal interesting information, however much of useful information is still not visible , especially when the temporal characteristics of eye movement is taken into account. This paper introduces a technique called gaze self-similarity plot (GSSP) that may be applied to visualize both spatial and temporal eye movement features on one two dimensional plot. The technique is an extension of the idea of recurrence plots, commonly used in time series analysis. The paper introduces the basic concepts of the proposed approach complemented with some examples explaining what kind of information may be revealed and areas of the GSSP applications.
... The subsequent steps convert this preliminary list of fixations into the final list using different techniques for fixation merging and removing. All details of the algorithm are presented in [5]. The value of threshold parameter (Th) started from 0.2 deg. ...
Conference Paper
Full-text available
Eye movement may be regarded as a new promising modality for human computer interfaces. With the growing popularity of cheap and easy to use eye trackers, gaze data may become a popular way to enter information and to control computer interfaces. However, properly working gaze contingent interface requires intelligent methods for processing data obtained from an eye tracker. They should reflect users' intentions regardless of a quality of the signal obtained from an eye tracker. The paper presents the results of an experiment during which algorithms processing eye movement data while 4-digits PIN was entered with eyes were checked for both calibrated and non-calibrated users.
Full-text available
Most naturally-occurring physical phenomena are examples of nonlinear dynamic systems, the functioning of which attracts many researchers seeking to unveil their nature. The research presented in this paper is aimed at exploring eye movement dynamic features in terms of the existence of chaotic nature. Nonlinear time series analysis methods were used for this purpose. Two time series features were studied, fractal dimension and entropy, by utilising embedding theory. The methods were applied to the data collected during the experiment with a "jumping point" stimulus. Eye movements were registered by means of the Jazz-novo eye tracker. One thousand three hundred and ninety two (1392) time series were defined, based on the horizontal velocity of eye movements registered during imposed, prolonged fixations. In order to conduct detailed analysis of the signal and identify differences contributing to the observed patterns of behaviour in time scale, fractal dimension and entropy were evaluated in various time series intervals. The influence of the noise contained in the data and the impact of the utilized filter on the obtained results were also studied. The low-pass filter was used for the purpose of noise reduction with a 50 Hz cut-off frequency, estimated by means of the Fourier transform, and all of the methods concerned were applied to time series before and after noise reduction. These studies provided some premises which allow eye movements to be perceived as chaotic data: the characteristics of the space-time separation plot, low and non-integer time series dimension, and time series entropy characteristic of chaotic systems.
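The embedding theory mentioned above reconstructs a phase-space trajectory from a scalar signal via time-delay embedding before quantities such as fractal dimension or entropy are estimated. A minimal, generic sketch (the function name and parameter choices are illustrative, not the authors' implementation):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Takens-style delay embedding: each row of the result is the
    delay vector (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau           # number of delay vectors
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Embed a toy scalar series in 3 dimensions with a delay of 2 samples:
emb = delay_embed(list(range(10)), dim=3, tau=2)
```

In practice `dim` and `tau` would be chosen with standard heuristics (e.g. false nearest neighbours and the first minimum of mutual information); the abstract does not state which settings the authors used.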
Eye movement is one of the key biological signals whose further analysis may reveal substantial information, enabling a greater understanding of the biology of the brain and its mechanisms. Several methods for such signal processing have been developed; however, new solutions are being continuously sought. This paper presents analysis of one of the main eye movement components, fixation, using nonlinear time series methods. This analysis, aimed at determining the existence of chaotic behaviour exhibited by many biological systems, was based on an experiment utilising a 'jumping point' stimulus. 29 stimuli were used, presented for 3 seconds in different screen positions. 24 subjects participated in the experiment consisting of two sessions conducted with a two-month interval; thus the experimental dataset included 48 recordings. The first derivative of the horizontal positions of eye movement coordinates registered during a fixation served as the time series used for reconstruction of eye movement dynamics. The analysis was performed by means of the Largest Lyapunov Exponent. Its values were studied in various time scopes of a fixation duration. A positive averaged value of this exponent, indicating chaotic behaviour, was observed for the first 200 points in the case of all studied time series. In the remaining analysed scopes, negative average exponent values were shown; however, for a number of users the eye movement signal behaviour was changing from convergent to chaotic and back.
Full-text available
Gaze data of 31 participants of a memory recall experiment was analyzed and the I-DT dispersion based algorithm of Salvucci and Goldberg (2000) was used to identify fixations. It was found that individuals differ considerably with regard to the stability of eye gaze and that fixational eye movements affect the accuracy of fixation identification and the optimum dispersion threshold. It was also found that fixation radius and the distance between the points in a fixation that are the furthest apart are the most reliable metrics for a dispersion-based fixation identification algorithm. Finally, it is argued that the correct setting of dispersion threshold is of utmost importance, especially if the participants are not homogeneous with regard to gaze stability.
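The I-DT algorithm of Salvucci and Goldberg (2000) referenced above grows a window of samples while its dispersion stays below a threshold. A simplified sketch, assuming the common (max-min in x) + (max-min in y) dispersion metric and a minimum window length in samples (parameter names are illustrative; published variants differ in details such as the dispersion metric used):

```python
def idt(points, dispersion_threshold, min_samples):
    """Dispersion-threshold fixation identification: grow a window while
    its dispersion stays under the threshold; windows of at least
    `min_samples` points become fixations (start, end, centroid)."""
    def dispersion(win):
        xs = [p[0] for p in win]
        ys = [p[1] for p in win]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i < len(points):
        j = i + min_samples
        # Skip ahead if even the minimal window is too dispersed.
        if j > len(points) or dispersion(points[i:j]) > dispersion_threshold:
            i += 1
            continue
        # Extend the window until dispersion exceeds the threshold.
        while j < len(points) and dispersion(points[i:j + 1]) <= dispersion_threshold:
            j += 1
        win = points[i:j]
        cx = sum(p[0] for p in win) / len(win)
        cy = sum(p[1] for p in win) / len(win)
        fixations.append((i, j - 1, (cx, cy)))
        i = j
    return fixations

# Two stable gaze clusters separated by a large jump:
pts = [(0.0, 0.0)] * 5 + [(50.0, 50.0)] * 5
fixes = idt(pts, dispersion_threshold=1.0, min_samples=3)
```

The abstract's point about threshold choice maps directly onto `dispersion_threshold`: too small a value splits genuine fixations of unstable gazers, too large a value merges distinct ones.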
Conference Paper
Full-text available
This paper presents the results of the first eye movement verification and identification competition. The work provides background, discusses previous research, and describes the datasets and methods used in the competition. The results highlight the importance of very careful eye positional data capture to ensure meaningfulness of identification outcomes. The discussion about the metrics and scores that can assist in evaluation of the captured data quality is provided. Best identification results varied in the range from 58.6% to 97.7% depending on the dataset and methods employed for the identification. Additionally, this work discusses possible future directions of research in the eye movement-based biometrics domain.
Full-text available
Eye-movement tracking is a method that is increasingly being employed to study usability issues in HCI contexts. The objectives of the present chapter are threefold. First, we introduce the reader to the basics of eye-movement technology, and also present key aspects of practical guidance to those who might be interested in using eye tracking in HCI research, whether in usability-evaluation studies, or for capturing people's eye movements as an input mechanism to drive system interaction. Second, we examine various ways in which eye movements can be systematically measured to examine interface usability. We illustrate the advantages of a range of different eye-movement metrics with reference to state-of-the-art usability research. Third, we discuss the various opportunities for eye-movement studies in future HCI research, and detail some of the challenges that need to be overcome to enable effective application of the technique in studying the complexities of advanced interactive-system use.
Conference Paper
Full-text available
Eye movement is the simplest and most repetitive movement that enables humans to interact with the environment. Common daily activities, such as watching television or reading a book, involve this natural activity, which consists of rapidly shifting our gaze from one region to another. The identification of the main components of eye movement during visual exploration, such as fixations and saccades, is the objective of the analysis of eye movements in various contexts ranging from basic neuroscience and visual science to virtual reality interaction and robotics. However, many of the algorithms that detect fixations present a number of problems. In this article, we present a new fixation identification algorithm based on the analysis of variance and the F-test. We present the new algorithm and compare it with the common dispersion-based fixation algorithm. To demonstrate the performance of our approach we tested the algorithm on a group of healthy subjects.
Full-text available
In an effort toward standardization, this paper evaluates the performance of five eye-movement classification algorithms in terms of their assessment of oculomotor fixation and saccadic behavior. The results indicate that performance of these five commonly used algorithms vary dramatically, even in the case of a simple stimulus-evoked task using a single, common threshold value. The important contributions of this paper are: evaluation and comparison of performance of five algorithms to classify specific oculomotor behavior; introduction and comparison of new standardized scores to provide more reliable classification performance; logic for a reasonable threshold-value selection for any eye-movement classification algorithm based on the standardized scores; and logic for establishing a criterion-based baseline for performance comparison between any eye-movement classification algorithms. Proposed techniques enable efficient and objective clinical applications providing means to assure meaningful automated eye-movement classification.
Conference Paper
Full-text available
Video-based eye trackers produce an output video showing where a subject is looking, the subject's point-of-regard (POR), for each frame of a video of the scene. Fixation-identification algorithms simplify the long list of POR data into a more manageable set of data, especially for further analysis, by grouping PORs into fixations. Most current fixation-identification algorithms assume that the POR data are defined in static two-dimensional scene images and only use these raw POR data to identify fixations. The applicability of these algorithms to gaze data in dynamic scene videos is largely unexplored. We implemented a simple velocity-based, duration-sensitive fixation-identification algorithm and compared its performance to results obtained by three experienced users manually coding the eye tracking data displayed within the scene video such that these manual coders had knowledge of the scene motion. We performed this comparison for eye tracking data collected during two different tasks involving different types of scene motion. These two tasks included a subject walking around a building for about 100 seconds (Task 1) and a seated subject viewing a computer animation (approximately 90 seconds long, Task 2). It took our manual coders on average 75 minutes (stdev = 28) and 80 minutes (17) to code results from the first and second tasks, respectively. The automatic fixation-identification algorithm, implemented in MATLAB and run on an Apple 2.16 GHz MacBook, produced results in 0.26 seconds for Task 1 and 0.21 seconds for Task 2. For the first task (walking), the average percent difference among the three human manual coders was 9% (3.5) and the average percent difference between the automatically generated results and the three coders was 11% (2.0). For the second task (animation), the average percent difference among the three human coders was 4% (0.75) and the average percent difference between the automatically generated results and the three coders was 5% (0.9).
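A velocity-based, duration-sensitive fixation identifier of the kind described can be sketched as follows; the function and parameter names are illustrative and this is not the authors' MATLAB implementation:

```python
def ivt(points, timestamps, velocity_threshold, min_duration):
    """I-VT-style labelling: consecutive samples whose point-to-point
    velocity stays below `velocity_threshold` are grouped into a
    candidate fixation; groups shorter than `min_duration` are dropped.
    Returns (start_index, end_index) pairs."""
    fixations, start = [], None
    for k in range(1, len(points)):
        dt = timestamps[k] - timestamps[k - 1]
        dx = points[k][0] - points[k - 1][0]
        dy = points[k][1] - points[k - 1][1]
        velocity = (dx * dx + dy * dy) ** 0.5 / dt
        if velocity < velocity_threshold:
            if start is None:
                start = k - 1           # fixation candidate begins
        else:
            # Saccade sample: close the candidate if it is long enough.
            if start is not None and timestamps[k - 1] - timestamps[start] >= min_duration:
                fixations.append((start, k - 1))
            start = None
    if start is not None and timestamps[-1] - timestamps[start] >= min_duration:
        fixations.append((start, len(points) - 1))
    return fixations

# Two stable clusters sampled at 100 Hz, separated by one fast jump:
pts = [(0, 0), (0, 0), (0, 0), (100, 0), (100, 0), (100, 0)]
ts = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05]
fixes = ivt(pts, ts, velocity_threshold=1000.0, min_duration=0.015)
```

The duration check is what makes such an algorithm "duration-sensitive": without it, any brief dip below the velocity threshold mid-saccade would be reported as a fixation.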
Conference Paper
Full-text available
The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
Conference Paper
Full-text available
In this paper we evaluate several of the most popular algorithms for segmenting fixations from saccades by testing these algorithms on the scanning patterns of toddlers. We show that by changing the parameters of these algorithms we change the reported fixation durations in a systematic fashion. However, we also show how choices in analysis can lead to very different interpretations of the same eye-tracking data. Methods for reconciling the disparate results of different algorithms, as well as suggestions for the use of fixation identification algorithms in analysis, are presented. CR Categories: J.4 (Computer Applications): Social and Behavioral Sciences—Psychology
Full-text available
In human factors and ergonomics research, the analysis of eye movements has gained popularity as a method for obtaining information concerning the operator's cognitive strategies and for drawing inferences about the cognitive state of an individual. For example, recent studies have shown that the distribution of eye fixations is sensitive to variations in mental workload: dispersed when workload is high, and clustered when workload is low. Spatial statistics algorithms can be used to obtain information about the type of distribution and can be applied over fixations recorded during small epochs of time to assess online changes in the level of mental load experienced by the individuals. In order to ease the computation of the statistical index and to encourage research on the spatial properties of visual scanning, A Simple Tool for Examining Fixations has been developed. The software application implements functions for fixation visualization, management, and analysis, and includes a tool for fixation identification from raw gaze point data. Updated information can be obtained online, where the installation package is freely downloadable.
The task of automatically tracking the visual attention in dynamic visual scenes is highly challenging. To approach it, we propose a Bayesian online learning algorithm. As the visual scene changes and new objects appear, the algorithm, based on a mixture model, can identify and distinguish visual saccades (transitions) from visual fixation clusters (regions of interest). The approach is evaluated on real-world data, collected from eye-tracking experiments in driving sessions.