This is a pre-print. The original version is available at the ACM Digital Library: http://dl.acm.org/citation.cfm?id=2666646
DOI: 10.1145/2666642.2666646
Study on Participant-controlled Eye Tracker Calibration
Procedure
Pawel Kasprowski
Silesian University of Technology
Akademicka 16, 44-100 Gliwice
pawel.kasprowski@polsl.pl
Katarzyna Harężlak
Silesian University of Technology
Akademicka 16, 44-100 Gliwice
katarzyna.harezlak@polsl.pl
ABSTRACT
The analysis of an eye movement signal, which can reveal a lot of
information about the way the human brain works, has recently
attracted the attention of many researchers. The basis for such
studies is data returned by specialized devices called eye trackers.
The first step of their usage is a calibration process, which allows
an eye position to be mapped to a point of regard. The main research
problem analyzed in this paper is to check whether and how the
chosen calibration scenario influences the calibration result
(calibration errors). Based on an analysis of possible scenarios, a
new user-controlled calibration procedure was developed. It was
checked and compared with a classic approach during pilot
studies using the Eye Tribe system as the eye-tracking device. The
results obtained for both methods were examined in terms of the
provided accuracy.
Categories and Subject Descriptors
H.1.2 [User/Machine Systems]: Human Factors.
General Terms
Algorithms, Measurement, Design, Human Factors.
Keywords
Eye movements, calibration.
1. INTRODUCTION
Eye movements have been intensively studied for over 100 years
because the way that people move their eyes may reveal a lot of
information about their emotions, intentions and experience [1].
To obtain information about eye movements, a device commonly
called an 'eye tracker' is used. Eye gaze data obtained
from the eye tracker may be used for different purposes [2].
However, to be useful, the interpretation of the eye tracker output
requires some data processing steps. The first and one of
the most important is a calibration that aims at providing the
ability to determine where exactly a person is looking (a so-called
gaze point) based on information about the eye position. A
well-performed calibration process plays a significant role in many
applications, making it possible to learn whether a subject is looking
at a particular point of regard (PoR). It is especially important for
gaze-contingent interfaces and for areas of interest (AOI) analysis.
The main research problem analyzed in this paper is to check
whether and how a novel participant-controlled calibration
scenario influences the calibration results (calibration errors). The
novel scenario is compared with the classic one in a conducted
experiment.
The paper is organized as follows: section 2 describes possible
calibration procedures, section 3 presents the participant-controlled
calibration procedure developed by the authors, and section 4 explains
how the procedure was tested and compared to the classic one.
Section 5 is a summary with suggestions for further work.
2. CALIBRATION PROCEDURES
There are various types of eye trackers available for use in
eye movement analysis; however, the most commonly used
devices are based on image processing techniques involving infrared
light. In eye trackers of that type, the estimated center of the pupil in
conjunction with a corneal reflection (a glint) of an infrared light
source is used to determine the position of the eye [3].
Because this position is defined with respect to the eye
tracker's coordinate system, it has to be related to the observed scene.
This requires some preliminary steps, which can be considered a
training phase, during which the eye tracking system is taught how to
interpret the results provided by the device. This phase is the
calibration process, which consists of registering a user's eye movement
signal while he or she is looking at stimuli with known screen
coordinates. Based on the eye center-corneal reflection vector and a
mapping function built during calibration, it is possible to estimate
coordinates within an area of interest. Differences in the construction
of people's eyes, as well as in various features of their movement,
entail the necessity of defining such a function for each subject
independently.
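As an illustration only (the paper does not specify the form of its mapping function), the sketch below assumes a choice that is common for remote eye trackers: a second-order polynomial per screen axis fitted with least squares to the calibration samples. All names and data structures here are hypothetical, not the authors' implementation.

```python
# Hedged sketch: fit a per-axis second-order polynomial mapping from the
# pupil-glint vector to screen coordinates (an assumption, not the authors'
# documented method).
import numpy as np

def _features(eye_xy):
    ex, ey = eye_xy[:, 0], eye_xy[:, 1]
    # Feature expansion: [1, x, y, x*y, x^2, y^2]
    return np.column_stack([np.ones_like(ex), ex, ey, ex * ey, ex**2, ey**2])

def fit_polynomial_mapping(eye_xy, screen_xy):
    """eye_xy: (n, 2) eye-tracker output during calibration;
    screen_xy: (n, 2) known screen coordinates of the calibration targets."""
    coeffs, *_ = np.linalg.lstsq(_features(eye_xy), screen_xy, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def apply_mapping(coeffs, eye_xy):
    # Estimated (x, y) gaze points on the screen for new eye-tracker samples
    return _features(eye_xy) @ coeffs
```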
A calibration procedure is determined by a few elements. The first
issue that has to be taken into consideration is the type and
layout of the stimuli presented on a screen during calibration. The
most commonly used stimulus is a point jumping around the screen.
Sets of calibration point positions can differ in number and
order [4]. Every calibration scenario with a point jumping on the
screen starts with instructing a participant to follow the point with
the eyes. In the simplest scenario the point is displayed in
every chosen location for a previously defined duration. After
presenting the point in a new place and waiting a few hundred
milliseconds for an eye reaction (due to saccadic latency), a set of
eye position samples is registered. A subsequent calibration
algorithm processes this data under the assumption that the person was
looking at the specific point on the screen when the point was
displayed.
There are two possible sources of errors: those originating from the
eye tracker and those originating from the oculomotor system. The first
may occur when the eye tracker's image recognition algorithm falsely
identifies the position of the eye center or the corneal reflection. To
reduce that error it is necessary to collect eye positions for a
considerable number
of subsequent eye images, to be able to exclude outliers (wrong
data). The second source of errors may be caused by the fact that the
human oculomotor system is not accustomed to long fixations
(average fixation lengths are 100-300 msec [5]). An eye forced
to stare at the same point for a longer time may feel uncomfortable.
That effect may cause involuntary eye movements to more
interesting parts of the scene, which may spoil the whole
process.
Because of the problems described above, calibration scenarios have
also been proposed that take into account direct feedback about eye
movements to decide whether enough data is available for the displayed
point. In [6] calibration procedures were differentiated in terms of the
way in which verification of this eye stability is performed. Three
groups were distinguished: (1) system-controlled calibration,
(2) operator-controlled calibration and (3) participant-controlled
calibration.
The first of them involves automatic signal processing to assess
whether a gaze point matches a calibration point. An alternative to
calibration performed in such a manner is the second of the
possibilities mentioned above, operator involvement, where the
operator's task is to analyse images of a participant's eye and
decide whether it is properly directed at a target point. The last of
the aforementioned types of calibration procedure, called
participant-controlled, assumes that the decision whether a
calibration point was reached by a participant's gaze is made by the
participants themselves. The participant's responsibility is to confirm
this moment by clicking the mouse, which triggers the appearance of the
subsequent stimulus. It seems to be a promising method, as the
person involved in the experiment knows best when and where he or
she is looking.
In each of the described calibration procedures, in order to assess
the quality of the results, the error Edeg is used as the main
measure. The error represents the distance between the accurate
positions of the displayed points and their locations obtained from a
specific model. This factor, expressed in degrees, is calculated
according to equation 1:

E_{deg} = \frac{1}{n} \sum_{i} \sqrt{(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2}    (1)

where x_i, y_i represent the observed values, \hat{x}_i, \hat{y}_i represent
the values calculated by the model, and n is the number of points.
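A minimal sketch of Eq. 1 is shown below: the mean Euclidean distance between the displayed calibration points and the gaze positions estimated by the model. Converting on-screen pixels to visual degrees requires the screen geometry and viewing distance, which are assumptions added here for illustration.

```python
# Sketch of Eq. 1 plus a pixel-to-degree conversion (the conversion
# parameters px_per_cm and distance_cm are illustrative assumptions).
import numpy as np

def calibration_error_deg(observed_px, estimated_px, px_per_cm, distance_cm):
    observed_px = np.asarray(observed_px, dtype=float)
    estimated_px = np.asarray(estimated_px, dtype=float)
    # Per-point Euclidean distance between target and estimated gaze position
    dist_px = np.linalg.norm(observed_px - estimated_px, axis=1)
    mean_px = dist_px.mean()  # Eq. 1, in pixels
    # Express the mean on-screen distance as a visual angle in degrees
    return np.degrees(np.arctan2(mean_px / px_per_cm, distance_cm))
```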
The existence of various calibration scenarios proves that obtaining a
satisfactory solution is not an easy task. This fact justifies the
search for new solutions, sometimes combining various
already existing ones. The next section describes such a new
scenario, which is expected to improve the way the calibration is
carried out as well as its results.
3. DEVELOPED METHOD DESCRIPTION
The best way to obtain high quality data is to use a calibration
scenario that is able to attract the participant's gaze to a specific
point for some time. The point on the screen should
be "interesting" to the person during the time when the gaze point is
measured. In such a case no special supervision is necessary, as it
can be expected that the person will look at the point even without
any special instructions. This fact was taken into account when
designing the experiment.
The inspiration for these studies were the tests performed in [6].
However, the experiments developed in the presented research
differed from those described in that paper. During the tests
presented in [6], participants were supposed to accept a calibration
point and then move their eyes to the next target. Thus, the
certainty that the user looked at a given point was obtained only at the
end of the point presentation, and the number of samples obtained for
every point was very low. Such a scenario may work well for high-end
tower-mounted eye trackers (as was the case in [6]) because the
eye tracker's errors are very low. For cheap remote
eye trackers, measurement errors are higher, so more samples are
necessary to obtain reliable eye position information.
The procedure examined in the presented studies worked as
follows. Nine points were displayed on a screen. At first,
participants had to click the middle point with the mouse. When they
clicked it, the point changed into a spinning arrow (Fig. 1). After
1200 msec the arrow stopped with its arrowhead pointing towards the
next point that should be clicked. When participants clicked the
next point, it in turn changed into a spinning arrow for 1200 msec,
which then indicated the subsequent point.
The spinning arrow was supposed to attract the attention of participants,
because they had to look at it to obtain information about the
position of the subsequent point. Information about the gaze point
position was recorded only while the arrow was spinning. After the arrow
had stopped and pointed to the next target, no eye position was
registered until the participant clicked the newly
indicated point. Because of that, users could freely regulate the
pace of their actions and click the next point when they were
ready to do so.
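The following sketch restates the procedure described above as a simple control loop. The drawing, input-handling and eye-tracker calls (wait_for_click, draw_spinning_arrow, point_arrow_towards, read_eye_sample) are hypothetical placeholders introduced for illustration; they are not part of the authors' implementation.

```python
# Illustrative loop for the participant-controlled (spinning-arrow) procedure.
import time

SPIN_DURATION = 1.2  # seconds of recording while the arrow spins

def run_arrow_calibration(points, tracker, ui):
    """points: list of nine (x, y) screen targets, with the middle point first."""
    samples = {}
    for i, target in enumerate(points):
        ui.wait_for_click(target)            # participant clicks when ready
        next_target = points[i + 1] if i + 1 < len(points) else None
        recorded = []
        start = time.time()
        while time.time() - start < SPIN_DURATION:
            ui.draw_spinning_arrow(target)            # keeps the gaze on the target
            recorded.append(tracker.read_eye_sample())  # record only while spinning
        if next_target is not None:
            ui.point_arrow_towards(target, next_target)  # arrow stops, shows next point
        samples[target] = recorded           # data later used to fit the mapping
    return samples
```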
The commonly known fact that people look at the points they click
with a mouse [7] served as motivation to develop the experiments in
this way. Therefore, the only problem to solve was how to
keep their focus in this place, and the spinning arrow seemed to be a
good solution.
Figure 1. Layout of points in the participant-controlled
calibration procedure using spinning arrows
The experiments conducted in this research used The Eye Tribe,
an eye tracking system with a sampling frequency of 60 Hz. Each experiment
performed in this environment consisted of one of two
calibration scenarios. The first one was the most common, classic
calibration procedure with a constant stimulus display time.
The second one was the modified participant-controlled procedure
described above. In both cases nine points, evenly distributed over
the screen, were used (Fig. 1). The presentation time of each point in
the classic solution was set to 1500 msec, yet the first 300 msec
were ignored as the period necessary to stabilize the eye position at a
new location (due to saccadic latency). Hence, similarly to the
arrow-based scenario, only data from the remaining 1200 msec
was taken into account for further analysis. There were 40
participants involved in the experiments, although some of them
took part in more than one session, so overall 142 calibrations
were performed. Of these, 73 used the classic procedure
and 69 the arrow-based one.
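For orientation, the numbers above imply a comparable amount of usable data per calibration point in both scenarios: at 60 Hz, a 1500 msec presentation yields about 90 samples, and discarding the first 300 msec leaves about 72 samples, matching the 1200 msec recording window of the arrow-based scenario. The sketch below only restates this arithmetic; the variable names are illustrative.

```python
# Sample-window selection for the classic scenario, based on the stated
# 60 Hz sampling rate, 1500 ms presentation and 300 ms saccadic-latency skip.
SAMPLING_RATE_HZ = 60
PRESENTATION_MS = 1500
SKIP_MS = 300

def usable_samples(point_samples):
    """point_samples: chronologically ordered samples recorded for one point."""
    skip = int(SKIP_MS / 1000 * SAMPLING_RATE_HZ)  # ~18 samples dropped
    return point_samples[skip:]                     # ~72 samples kept per point
```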
4. RESULTS
The first step in the results analysis was to check whether there is any
difference in the outcomes obtained for the two scenarios. Values of the
mean errors (Eq. 1) were used for this purpose. The average
values and standard deviations were calculated for each scenario.
The results presented in Table 1 show that the proposed method
obtained better accuracy than the classic one.
Table 1. Errors calculated for both calibration scenarios

Procedure         Number   Avg    StdDev
Classic             73     0.32    0.26
Arrow spinning      69     0.26    0.23
Figure 2. Distributions of calibration errors for classic and
arrow-based procedures
The distributions of error values for both calibration types were
not normal and were characterized by long right tails (see Fig. 2).
Therefore, these values were log-transformed, which yielded
normally distributed values for both procedures
(p > 0.05 in the Shapiro-Wilk normality test). As a
result it was possible to check the significance of the obtained
results using a Student's t-test. The outcome of that
comparison (p > 0.05) showed that the differences in calibration errors
cannot be treated as significant. However, it may be observed that
the usage of the spinning-arrow participant-controlled calibration
tends to improve the calibration quality.
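A minimal sketch of this analysis workflow is given below using SciPy. The error arrays are placeholders, and since the two groups differ in size (73 vs. 69 calibrations), the sketch shows an independent-samples t-test on the log-transformed errors purely as an illustration; it does not reproduce the authors' exact test configuration.

```python
# Illustrative workflow: log-transform, normality check, two-sample t-test.
import numpy as np
from scipy import stats

def compare_calibration_errors(errors_classic, errors_arrow):
    log_classic = np.log(errors_classic)  # removes the long right tail
    log_arrow = np.log(errors_arrow)
    # Shapiro-Wilk: p > 0.05 means normality cannot be rejected
    _, p_norm_classic = stats.shapiro(log_classic)
    _, p_norm_arrow = stats.shapiro(log_arrow)
    # Two-sample t-test on the log-transformed errors
    _, p_value = stats.ttest_ind(log_classic, log_arrow)
    return p_norm_classic, p_norm_arrow, p_value
```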
The errors obtained for both calibrations featured high
deviations, as there were participants whose results were
significantly worse than those of others. Therefore, it was decided to
compare the results of both calibrations collected for the same
person. The fact that there were 59 trials during which participants
were calibrated twice, using both methods, was utilized in this
analysis.
It turned out that in 36 cases out of 59 (61%) the arrow-based
calibration gave lower errors. The use of a one-sample Z-test
(p < 0.05) made it possible to reject the null hypothesis that this
result was obtained by coincidence and that the classic procedure is in
reality better than the arrow-based one.
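The check above can be reproduced with a standard one-sided one-sample Z-test of the observed proportion (36 of 59 pairs) against a null proportion of 0.5; the paper does not state the exact test configuration, so the sketch below is only one plausible reading.

```python
# One-sample Z-test for a proportion against the null value p0 = 0.5.
import math

def proportion_z_test(successes, n, p0=0.5):
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    # One-sided p-value from the standard normal survival function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

z, p = proportion_z_test(36, 59)  # z ≈ 1.69, one-sided p ≈ 0.045 < 0.05
```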
5. SUMMARY
The paper presents a pilot study checking whether a specially
designed participant-controlled calibration procedure may give
better results than a traditional one, which is used in many
practical applications. The new approach proved to give better
results (i.e. lower calibration errors) for a considerable number of
samples. However, the statistical significance of these results was not
fully proven and the subject requires further research.
Figure 3. The number of pairs for which the arrow-based
procedure gave lower errors and the number of pairs for which
the classic procedure gave lower errors
The arrow-based procedure presented in this paper seems to be
more challenging for users, as it requires some feedback in the form of
using a mouse and clicking specific points on the screen, compared to
the classic procedure in which a user just follows a point with his or her
eyes. However, it is the authors' belief that this kind of procedure is
more natural for users, as they are used to communicating with a
computer using a mouse. Moreover, it is less stressful, because
users may choose the moment when they click the next point by
themselves and do not need to find a special "rhythm" of glimpses,
as is the case with the classic jumping-point procedure.
The experiments with the spinning-arrow calibration are
planned to be continued. An analysis of the extent to which familiarity
with this procedure, gained by repeating the
experiment, can influence the time and accuracy of the calibration
will be conducted. Additionally, other types of participant-controlled
scenarios are planned to be designed and evaluated.
6. REFERENCES
[1] Duchowski, A. Eye Tracking Methodology: Theory and Practice.
Springer, 2007.
[2] Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R.,
Jarodzka, H., & van de Weijer, J. Eye Tracking: A
Comprehensive Guide to Methods and Measures. Oxford:
Oxford University Press, 2011.
[3] Hansen, D. W. and Ji, Q. In the eye of the beholder: A
survey of models for eyes and gaze. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 32(3), 478-500, 2010.
[4] Kasprowski, P., Harężlak, K., Stasch, M. Guidelines for the
eye tracker calibration using points of regard. Information
Technologies in Biomedicine, Volume 4. Advances in
Intelligent Systems and Computing, Vol. 284, pp. 225-236.
Springer International Publishing, 2014.
[5] Salvucci, D. D., & Goldberg, J. H. Identifying
fixations and saccades in eye-tracking protocols. In
Proceedings of the 2000 Symposium on Eye Tracking
Research and Applications, pp. 71-78, 2000.
[6] Nyström, M., Andersson, R., Holmqvist, K., van de Weijer, J.
The influence of calibration method and eye physiology on
eyetracking data quality. Behavior Research Methods, 45(1),
272-288, 2013. doi: 10.3758/s13428-012-0247-4.
[7] Hornof, A. J., & Halverson, T. Cleaning up systematic error
in eye-tracking data by using required fixation locations.
Behavior Research Methods, Instruments, & Computers, 34(4), 592-604, 2002.