This is a pre-print.
The final version of this paper was published in Proceedings of SPIE, Volume 5779
Enhancing eye movement based biometric identification method by using
voting classifiers
Paweł Kasprowski¹, Józef Ober¹,²
¹ Institute of Computer Science, Silesian University of Technology, 44-100 Gliwice, Poland
² Institute of Theoretical and Applied Informatics, Polish Academy of Science, Gliwice, Poland
kasprowski@polsl.pl
Abstract. Eye movements carry a lot of information about a human being. The way the eyes move is
very complicated, and eye movement patterns have been the subject of studies for over 100 years.
Surprisingly, however, eye movement based identification is a quite new idea, presented for the first time
during the Biometrics'2003 Conference in London [1]. The method has several significant advantages: it
combines behavioral and physiological properties of the human body, it is difficult to forge, and it is
affordable, with a number of ready-to-use eye registering devices (so-called eye trackers) available.
The paper introduces the methodology and presents test results.
Introduction
Using eyes to perform biometric human identification has a long tradition, including well-
established iris pattern recognition algorithms [2] and retina scanning. However, there have been
only a few studies concerning identification based on eye movement characteristics [3][4]. This
is a bit surprising, because the method has several important advantages.
Firstly, it combines physiological (muscles) and behavioral (brain) aspects. The most
popular biometric methods, like fingerprint verification or iris recognition, are based mostly on
physiological properties of the human body. Therefore, all that is needed for identification
is the "body" of the person who is to be identified. This makes it possible to identify an
unconscious or, with some methods, even a dead person.
Moreover, physiological properties may be forged. Preparing models of a finger or even a
retina (using special holograms) is technically possible. As eye movement based identification
uses information which is produced mostly by the brain (so far impossible to imitate), forging
this kind of information seems to be much more difficult.
Although it has not been studied in this paper, it seems possible to perform covert
identification, i.e. identification of a person unaware of that process (for instance, using hidden
cameras).
Last but not least, there are many easy-to-use eye tracking devices available nowadays, so
performing identification by means of this technique is not very expensive. For instance, the
very fast and accurate OBER2 [5] eye tracking system was used in the present work. It
measures eye movements with very high precision using infrared reflection, and its
production costs are comparable to those of fingerprint scanners.
Experiment
To prove that eye movements may be used for human identification, an experiment had to
be conducted. The experiment was divided into two stages:
1) Gathering samples of human eye movements from different persons.
2) Processing the samples obtained in the previous step to extract individual features.
The data gathering process consists of a series of tests on different subjects (persons).
Each test is a registration of the subject's eye movements for a specified period of time
with the OBER2 system. The result of a single test is a sample, which is then used in the
second stage of the experiment.
There are three ways to perform a single test:
- Registering eye movements only, without information about the observed image.
- Registering both eye movements and the observed scene.
- Generating the scene and registering eye movements as the response to it.
The first option is very simple to implement, even without any cooperation from the person
being identified. Eye movements may be measured during the normal activity of that person,
without any information about the observed image. However, as eye movements are strongly
correlated with the scene, analysis of the data may be difficult.
The second option gives a lot more data to analyze, yet it also has several serious
drawbacks. First of all, the testing system is more complicated: an additional camera is
needed to record the scene the examined person is looking at. Furthermore, special
algorithms must be implemented to synchronize the visual data with the eye movement signal.
A lot more capacity is also needed for data storage.
In the third option, the testing system consists of the OBER2 eye tracker and a PC, used
both for data storage and for controlling the monitor, which produces the visual signal (Fig. 1).
The OBER2 system registers the subject's eye movements in response to that signal. However,
we should be aware that the monitor screen is only a part of the image the eyes see, so
not the whole input is measured. Furthermore, the input may include non-visual signals;
sudden loud sounds may, for instance, cause rapid eye movements.
Fig. 1 Schema of the system generating stimulation on a computer display and registering eye
movements as the response to that stimulation.
As the last methodology gives control over the "input" to the examined subject, it seems to be
the most interesting from the researcher's point of view. Therefore, all tests described in this work
were performed using a stimulation displayed on the monitor, with the system architecture
presented in Fig. 1.
The chosen stimulation was a "jumping point" stimulation with the same order of points in
every experiment. Nine different point placements were defined on the screen, one in the
middle and eight on the edges, forming a 3 x 3 matrix. At any given moment, the point flashed
in one placement. The stimulation began and ended with the point in the middle of the screen.
During the stimulation, the point's placement changed at specified intervals.
The main problem in developing the stimulation is making it both short and informative.
These two properties pull in opposite directions, so a "golden mean" must be found. It was
assumed that gathering one sample should not take longer than 10 seconds; longer stimulations
would be impractical for real-world use. To be informative, the experiment should consist of
as many point position changes as possible. However, moving the point too quickly makes it
impossible for the eyes to follow it. Experiments and the literature [6] confirm that the reaction
time to a change of stimulation is about 100-200 ms. After that time the eyes start a saccade,
which moves the fovea to the new gaze point. The saccade is very fast and lasts no longer
than 10-20 ms. After a saccade, the brain analyses the new position of the eyes and, if
necessary, tries to correct it, so very often about 50 ms after the first saccade the next one
occurs. It can be called a "calibration" saccade. Adding up the worst cases (about 200 ms of
latency, a 20 ms saccade, a 50 ms pause and a 20 ms calibration saccade) gives roughly 300 ms
in total, so to register the whole reaction to a point change it was necessary to make the
interval between point location changes longer than 300 ms.
The stimulation developed and used in all tests consists of eleven point position changes,
giving twelve consecutive point positions. The first point appears in the middle of the screen,
and the person should look at it with the eyes directed straight ahead. After 1600 ms the point
in the middle disappears, and for 20 ms the screen is blank. During that time the eyes are in an
unstable state, waiting for another point of interest; that moment is uncomfortable for the eyes
because there is no point to look at. Then the point appears in the upper right corner. A point
flashing on a blank screen attracts the eyes even against the person's will. The "jumps" of the
point continue until the last point position, in the middle of the screen, is reached.
Fig. 2 Visual description of stimulation steps (a-l). Display durations: a. 1600 ms; b.-k. 550 ms each; l. 1100 ms.
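To make the schedule concrete, the following Python sketch encodes the twelve stimulation steps with the durations from Fig. 2. Only the 3 x 3 grid, the start and end in the middle, the jump to the upper right corner and the timings come from the text; the order of the remaining positions, and the assumption that a 20 ms blank separates every pair of points (the text mentions it explicitly only before the first jump), are illustrative.

# Sketch of the jumping-point stimulation schedule. Positions are
# (column, row) indices of the 3x3 grid; the intermediate order is assumed.
CENTER = (1, 1)
UPPER_RIGHT = (2, 0)

SCHEDULE = [
    (CENTER, 1600),      # a: start in the middle, eyes straight ahead
    (UPPER_RIGHT, 550),  # b: first jump, to the upper right corner
    ((0, 0), 550), ((2, 2), 550), ((0, 2), 550), ((1, 0), 550),  # c-f (assumed order)
    ((1, 2), 550), ((0, 1), 550), ((2, 1), 550), ((2, 0), 550),  # g-j (assumed order)
    ((0, 0), 550),       # k (assumed order)
    (CENTER, 1100),      # l: finish back in the middle
]
BLANK_MS = 20  # blank screen between consecutive points (assumed for all jumps)

def play(schedule):
    """Yield ('show', position, ms) and ('blank', None, ms) steps in order."""
    for i, (pos, ms) in enumerate(schedule):
        yield ("show", pos, ms)
        if i < len(schedule) - 1:
            yield ("blank", None, BLANK_MS)

total = sum(ms for _, _, ms in play(SCHEDULE))
print(total)  # 8420 ms, comfortably under the 10 second limit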
Testing
The data obtained in the sample gathering phase has to be transformed into a vector of
attributes, which is then used in the identification process. Each feature should give some
information about the person who was the subject of the experiment. That information may be
directly interpretable, for instance "his dominant eye is the left one" or "his eyes flicker with a
frequency of 10 Hz", but the meaning of a feature may also be hidden, giving only a value.
The main problem is how to extract a set of features whose values are as similar as possible
for different samples of the same person (intra-class samples) and as different as possible for
samples of different persons.
As mentioned earlier, identification based on eye movement analysis is a brand new
technique. The main disadvantage of this is that one cannot take already published algorithms
and simply try to improve them. Therefore, we could only try methods which have been
successfully used for similar problems, namely:
- methods used for analyzing eye movement data;
- general methods used in signal processing and classification.
The first stage was preprocessing of the samples with a normalization algorithm. The
preprocessed samples were included in a dataset containing over 1000 samples obtained
from 47 persons. The next stage was the calculation of different sample characteristics, such
as the Fourier transform or average velocity directions, which were stored in separate datasets.
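A minimal sketch of this kind of feature extraction is shown below; the sampling rate, the normalization formula and the exact feature definitions are assumptions for illustration, as the paper does not specify them.

import numpy as np

def extract_features(sample, fs=250.0):
    """Turn one eye movement sample of shape (n, 2) (horizontal and
    vertical position) into an attribute vector: a magnitude spectrum
    plus an average speed. fs is an assumed sampling rate in Hz."""
    # Normalize each channel to zero mean and unit variance.
    x = (sample - sample.mean(axis=0)) / (sample.std(axis=0) + 1e-12)
    # Fourier transform characteristic: magnitude spectrum of the
    # horizontal channel (rfft suffices for a real-valued signal).
    spectrum = np.abs(np.fft.rfft(x[:, 0]))
    # Velocity characteristic: mean speed from first differences.
    velocity = np.diff(x, axis=0) * fs
    avg_speed = np.linalg.norm(velocity, axis=1).mean()
    return np.concatenate([spectrum, [avg_speed]])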
Each dataset consisted of vectors of attributes calculated from the samples. Most of these
attributes were completely useless for classification. Such irrelevant attributes not only make
classification a more complex and time consuming task, but can also disturb it and increase
identification errors. Therefore, the next task was the extraction of relevant attributes from the
attribute vectors. The Principal Component Analysis (PCA) technique was used for a linear
conversion of the attributes, creating new attributes that explain most of the dataset
variance.
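The PCA step can be sketched as follows; fitting on the training set only is stated in the text below, while the number of kept components here is a placeholder.

import numpy as np
from sklearn.decomposition import PCA

def reduce_attributes(train_X, test_X, n_components=50):
    """Project attribute vectors onto the principal components fitted on
    the training set; columns come out ordered by explained variance,
    so the first k columns are the k most significant attributes."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(train_X), pca.transform(test_X)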
The data were tested only with authorization tests; that is, all samples were always divided
into positive (belonging to the specified person) and negative (belonging to different persons).
The testing process randomly created a training set by taking 20 positive and 80 negative
samples from the whole dataset. After preparing PCA for the training set, classification
models for different numbers of the most significant attributes were created using the SVM [7]
and C4.5 decision tree [8] techniques. All samples not chosen for the training set were then
used for testing. Separate tests were performed for each combination of conversion, number
of the most significant PCA attributes and classification algorithm, giving 75 results for each
testing phase. These results were then combined into the final result by a simple voting
classifiers algorithm.
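The following sketch shows the shape of such a voting scheme. DecisionTreeClassifier stands in for C4.5, which scikit-learn does not provide, and the attribute counts are placeholders; the paper's 75 component results also span different sample conversions, which are omitted here.

import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def voting_predict(train_X, train_y, test_X, attribute_counts=(5, 10, 20)):
    """Train one model per (attribute count, algorithm) pair and accept a
    test sample when a majority of the models accept it (labels: 1 =
    positive/genuine, 0 = negative/impostor)."""
    votes = np.zeros(len(test_X))
    n_models = 0
    for k in attribute_counts:
        for model in (SVC(), DecisionTreeClassifier()):
            model.fit(train_X[:, :k], train_y)    # k most significant attributes
            votes += model.predict(test_X[:, :k])  # each model votes accept/reject
            n_models += 1
    return votes / n_models > 0.5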
Results
Two kinds of experiments were performed. In the first, the test was performed only
once, giving a positive or negative result. The average, best and worst error rates are presented
in the table below.
Table 1. Average error rates for the "one trial" experiment

          FAR (%)   FRR (%)
worst       3.19     26.94
average     2.31     14.53
best        0.56      3.93
In the second experiment, two consecutive trials of the same user were treated as one test.
If both trials failed, the user was rejected; if one or both succeeded, the user was accepted.
This scenario obviously increased the FAR, because it was easier for an impostor to mislead
the system, but the FRR was significantly lower.
Table 2. Average error rates for the "one of two trials" experiment

          FAR (%)   FRR (%)
worst       7.13     21.45
average     4.73     10.49
best        1.82      3.44
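The combination rule itself is simple, as the sketch below shows. The accompanying arithmetic is our addition, not from the paper: under an idealized assumption that the two trials are independent, the combined rates would be FAR2 = 1 - (1 - FAR)^2 and FRR2 = FRR^2. The predicted FAR matches Table 2 closely, while the observed FRR stays well above FRR^2, as expected, since two trials of the same person are correlated.

def one_of_two(trial1_accepted, trial2_accepted):
    """Accept the user when at least one of two consecutive trials succeeds."""
    return trial1_accepted or trial2_accepted

far, frr = 0.0231, 0.1453  # average single-trial rates from Table 1
print(1 - (1 - far) ** 2)  # ~0.0457, close to the 4.73% average FAR in Table 2
print(frr ** 2)            # ~0.0211, far below the observed 10.49% average FRR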
Summary
The idea presented here seems interesting, but the results obtained are far from perfect.
However, the results show that eye movements do indeed carry some identity-related
information about a human being, and they are encouraging for future work.
Literature
[1] Kasprowski, P., Ober, J.: Eye movement tracking for human identification. 6th World
Conference BIOMETRICS’2003, London (2003)
[2] Daugman, J.G.: High Confidence Visual Recognition of Persons by a Test of Statistical
Independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
15, no. 11 (1993)
[3] Kasprowski, P., Ober, J.: With the flick of an eye. Biometrics Technology Today, ISSN
0969-4765, Volume 12, Issue 3, Elsevier Science (March 2004)
[4] Kasprowski, P., Ober, J.: Eye Movement in Biometrics. Proceedings of Biometric
Authentication Workshop, European Conference on Computer Vision in Prague 2004,
LNCS 3087, Springer-Verlag, Berlin (2004)
[5] Ober, J., Hajda, J., Loska, J., Jamnicki, M.: Application of Eye Movement Measuring
System OBER2 to Medicine and Technology. Proceedings of SPIE, Infrared
Technology and Applications, Orlando, USA, 3061(1) (1997)
[6] Hung, G. K.: Models of Oculomotor Control, World Scientific Publishing Co. (2001)
[7] Vapnik, V.: Statistical Learning Theory. John Wiley and Sons, Inc. New York (1998)
[8] Quinlan, J. R.: C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann
(1993)