Measuring Vigilance Decrement using Computer Vision Assisted Eye Tracking in Dynamic Naturalistic Environments
Indu P. Bodala1, Student Member, IEEE, Nida I. Abbasi2, Student Member, IEEE, Yu Sun3, Member, IEEE, Anastasios Bezerianos4, Senior Member, IEEE, Hasan Al-Nashash5, Senior Member, IEEE, and Nitish V. Thakor6, Fellow, IEEE
Abstract— Eye tracking offers a practical solution for monitoring cognitive performance in real world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and requires further, thorough investigation. In this paper, we study the feasibility of a novel computer vision assisted eye tracking analysis based on fixations. Eye movement data were obtained from a long duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to capture the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time data (r = −0.2253, p < 0.05). This implies that, with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
I. INTRODUCTION
Measuring vigilance decrement with time has always been
a challenge [1], [2]. The physiological or behavioural vari-
ables that are used for this purpose must be sensitive enough
to measure changes in individual subject behaviour over time and robust enough to apply across various tasks and applications [3]. Eye tracking provides a practical solution, applicable in natural environments, for measuring changes in vigilance. Eye movement data can provide rich information about the dynamics of perception and attention in both the temporal and spatial domains. For example,
*This work was supported by the National University of Singapore for
Cognitive Engineering Group at Singapore Institute for Neurotechnology
(SINAPSE) under Grant R-719-001-102-232.
1Indu P. Bodala is a Ph.D. student in the faculty of NUS Graduate
School of Integrative Science and Engineering (NGS), National University
of Singapore, Singapore. indu11@u.nus.edu
2Nida I. Abbasi is a Master's student in the Department of Biomedical Engineering, National University of Singapore, Singapore.
nida.itratabbasi@u.nus.edu
3Yu Sun is a senior research fellow in the Cognitive Engineering
group at the Singapore Institute for Neurotechnology (SINAPSE), National
University of Singapore, Singapore. kissalladin@gmail.com
4Anastasios Bezerianos is the head of Cognitive Engineering group at the
Singapore Institute for Neurotechnology (SINAPSE), National University of
Singapore (NUS), Singapore. tassos.bezerianos@nus.edu.sg
5Hasan Al-Nashash is a professor in the department of Electrical Engi-
neering, American University of Sharjah, UAE. hnashash@aus.edu
6Nitish V. Thakor is the director of the Singapore Institute for Neurotechnology (SINAPSE), National University of Singapore (NUS), Singapore.
eletnv@nus.edu.sg
duration of fixations or saccades can reflect the amount of
attention towards target stimuli [4], [5], [6]. Similarly, the frequency of eye blinks has been used as a measure of the workload experienced in a task [7], [8]. Other, more complex measures, such as gaze paths, fixation heat maps and eye movement cluster analysis, have been developed to provide intuitive insights into the viewing patterns of participants [9], [10].
However, analysis of eye movement data collected in a naturalistic environment over a long duration is a difficult problem. Tasks conducted in non-conventional, real world scenarios such as driving or surveillance always include many necessary and unnecessary stimuli [8]. Real world tasks comprise dynamic stimuli whose areas of interest constantly change, unlike the static stimuli usually used in laboratory settings to study cognitive performance. Hence, it is very important to develop methods for analyzing data collected in such environments, to enable interventions that can measure and enhance performance in everyday tasks.
In this study, we designed a driving experiment where the
subjects performed a specific braking task in an immersive
virtual environment. The task was designed to be as close as possible to naturalistic driving conditions. In the braking task,
subjects were asked to follow a ‘lead car’ on a country road.
In addition to the lead car which acted as the target stimulus,
there were several non-target stimuli such as oncoming traffic
in the next lane, curvy roads and rich scenery with moun-
tains, grass and houses to the side of the road. Some of these
non-target stimuli may serve as distractors (for example,
scenery) while some may still have significance (for example,
oncoming traffic) with respect to the task objective. Eye
movement data of the participants were collected throughout the task, which lasted about 25 minutes.
To analyse the eye movement data, we adopted an area-of-interest (AOI) analysis approach. AOI analysis is very useful for understanding the dynamics of visual attention in environments where stimuli are spatially distributed and varying. Metrics such as dwell time, AOI hits and first return have been used by researchers to understand visual perception, attention shifts and their relation to performance [11], [12]. Since the stimulus in our experiment was a dynamic scene, the spatial locations of the AOIs varied. Hence, we used a computer vision algorithm to obtain object locations in each frame of the video stimulus. These AOIs were defined based on the semantic characteristics of the objects with respect to the task objectives.
Figure 1. (a) An arbitrary frame with the corresponding fixation overlaid. (b) Binary mask for the lead car with mask value m1. (c) Binary mask for the oncoming traffic with mask value m2. (d) Binary mask for the control deck and the mirror with mask value m3, with the corresponding fixation overlaid. The masks were obtained from each frame. In this example, m1 is 1 and m2 and m3 are 0.
Fixations were then analyzed based on their spatial locations and the AOIs to which they belonged.
In the following section, the details of the experimental task, data collection, identification of dynamic AOIs and analysis of fixations are discussed. The results of the data analysis are presented in Section III. The implications of the findings and future directions are discussed in Section IV, followed by concluding remarks in Section V.
II. METHODS
A. Participants
Six healthy subjects, aged 20-35 years, with normal or
corrected-to-normal vision and with no previous history of
nervous or psychiatric disorders were recruited from the
National University of Singapore (NUS) to participate in
the study. All the subjects possessed a valid driving license.
The study was approved by the Institutional Review Board
(IRB) of NUS. Written informed consent was obtained from
each participant before the beginning of the experiment. All
participants received monetary compensation for their time
upon the completion of the experiment.
B. Experimental task
The driving experiment was designed using an immersive, virtual-environment-based driving simulator in which the subjects performed a specific braking task. All the experiments were conducted at the same time of day, in the afternoon. Subjects were instructed to drive continuously for 25 minutes on a country road in a sunny-day setting, where they followed a lead car without overtaking it (Figure 1(a)). When the lead car braked at infrequent intervals, the subjects responded by braking to avoid a crash. A trial was defined as the interval from the start of braking by the lead car to the start of braking by the subject, over which the reaction time (RT) was calculated. The inter-trial interval varied randomly between 45 and 75 seconds. Speed was controlled automatically by the car in order to minimize variability due to the subjects' driving habits. However, the subjects controlled the
steering to make the car stay in the given lane. The subjects
were strictly instructed to avoid any crashes with the lead
car and the oncoming traffic.
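As an illustration, the per-trial RT computation reduces to a difference of marker timestamps. The following is a minimal MATLAB sketch, assuming the brake-onset times of the lead car and of the subject have already been extracted from the recording markers; the variable names and values are illustrative only, not the study's data.

```matlab
% Minimal sketch: per-trial reaction time from brake-onset markers.
% The timestamp vectors below are hypothetical placeholders.
leadBrakeOnsets    = [52.1 118.7 176.3];   % lead car starts braking (s)
subjectBrakeOnsets = [53.0 119.9 177.5];   % subject starts braking (s)

rt = subjectBrakeOnsets - leadBrakeOnsets; % reaction time per trial (s)
fprintf('Mean RT: %.3f s (SD %.3f s)\n', mean(rt), std(rt));
```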
C. Eye tracker data acquisition and preprocessing
While the subjects performed the driving task, eye movement data were collected using a Tobii TX300 eye tracker in a standalone external-video setup at a sampling rate of 300
Hz. The subjects’ eyes were calibrated to the setup before
starting the experiment. Tobii Studio software was used to
simultaneously capture stimulus video and eye movements
for further analysis. Raw data obtained from the experiment were classified into fixations and saccades using the I-VT fixation filter available in Tobii Studio; the classification algorithm is explained in [13]. The I-VT fixation filter classifies eye movement data based on the velocity of the directional shifts of the eye: if the velocity of a particular sample is above a certain threshold, the sample is classified as belonging to a saccade; otherwise, it is classified as belonging to a fixation. The velocity threshold was set to 30°/s [14]. In addition to classifying the samples, the I-VT filter applies a gap fill-in function to interpolate missing data samples and performs noise reduction before classification. After classification, it also merges broken fixations and/or discards short fixations.
to indicate the start of the trial (braking of lead car) and the
response (braking by the subject) were introduced into the
recordings. Gaze data and video segments corresponding to
each trial were extracted using these markers.
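To make the classification step concrete, the core velocity-threshold logic of an I-VT filter can be sketched as follows in MATLAB. This is an illustrative simplification, not the Tobii implementation: it assumes gaze positions are already expressed in visual degrees, and it omits the gap fill-in, noise reduction and fixation-merging stages described above.

```matlab
% Illustrative I-VT core: classify samples by angular velocity.
fs = 300;                                  % sampling rate (Hz)
gazeDeg = cumsum(0.01 * randn(3000, 2));   % placeholder gaze trace (deg)

% Sample-to-sample angular speed in deg/s (first sample padded with 0).
v = [0; sqrt(sum(diff(gazeDeg).^2, 2)) * fs];

isSaccade  = v > 30;                       % 30 deg/s velocity threshold
isFixation = ~isSaccade;                   % below threshold -> fixation
fprintf('%.1f%% of samples classified as fixation\n', ...
        100 * mean(isFixation));
```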
D. Obtaining AOIs and fixation scores
1) Object detection using SIFT algorithm: The video seg-
ments corresponding to each trial were divided into frames
at the rate of 5 Hz. Each frame was then processed to
identify AOIs that fall into predefined semantic categories.
The objects present in the experiment were broadly divided
into four semantic categories, namely the lead car, traffic, scenery, and the mirror and control deck. We used the scale-invariant feature transform (SIFT) algorithm from the VLFeat toolbox for MATLAB to identify the lead car and the oncoming traffic in each frame [15]. The SIFT algorithm comprises a feature detector and a feature descriptor. The SIFT detector extracts a number of frames (attributed regions) from an image in a way that is consistent with variations in the illumination, viewpoint and other viewing conditions of the given template (corresponding to the object to be identified). The SIFT descriptor associates a signature with each region, which is used to identify its appearance in a compact and robust manner. We manually checked each frame for correct object identification and manually annotated those with wrongly identified objects; the algorithm identified the objects accurately in 94% of the frames.
Binary masks, with 1s assigned to the identified object and 0s to the background, were generated to denote the identified objects (Figure 1(b), (c)). Since the coordinates of the car's mirror and the control deck were constant across frames, a single binary mask was used to represent them in all frames (Figure 1(d)). The rest of the scene was marked as scenery. The mask value of each mask (m1, m2, m3, as explained in Figure 1) is defined as the pixel value of that mask (0 or 1) at the fixation position.
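A sketch of this per-frame pipeline is given below, assuming the VLFeat toolbox is on the MATLAB path. The file names, the fixation coordinates and the bounding-box approximation of the detected object are illustrative assumptions rather than the exact procedure used in the study.

```matlab
% Sketch: locate the lead car in a frame via SIFT matching (VLFeat),
% build its binary mask, and read the mask value at a fixation.
template = im2single(rgb2gray(imread('leadcar_template.png'))); % hypothetical file
frame    = im2single(rgb2gray(imread('frame_0001.png')));       % hypothetical file

[ft, dt] = vl_sift(template);       % keypoint frames + descriptors
[ff, df] = vl_sift(frame);
matches  = vl_ubcmatch(dt, df);     % template-to-frame matches

% Approximate the object AOI by the bounding box of matched keypoints.
pts = ff(1:2, matches(2, :));
x1 = max(1, floor(min(pts(1,:))));  x2 = min(size(frame,2), ceil(max(pts(1,:))));
y1 = max(1, floor(min(pts(2,:))));  y2 = min(size(frame,1), ceil(max(pts(2,:))));

mask1 = false(size(frame));         % binary mask for the lead car
mask1(y1:y2, x1:x2) = true;

fx = 640; fy = 360;                 % hypothetical fixation position (px)
m1 = mask1(fy, fx);                 % mask value: 1 if fixation hits the AOI
```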
2) Fixation scores: Among the semantic categories defined above, the highest weight was given to the lead car (s1 = 1), since it is the target, followed by the traffic (s2 = 0.5) and the mirror and control deck (s3 = 0.25). Each fixation position obtained from the preprocessed gaze data was compared against the binary masks to identify the AOIs that contained it. The fix score of a fixation is defined as the average of the semantic weights of the AOIs that contain it, as given in (e1). Note that if a fixation belongs to the scenery, its fix score is 0, since all the mask values are 0. The average of the fix scores of all the fixations in a particular trial is taken as the 'trial fixation score', as given in (e2).
Fix score = Σ_i (s_i × m_i) / Σ_i (m_i)    (e1)

where i = 1, 2, 3.

Trial Fixation Score = Σ_j (Fix score_j) / n    (e2)

where j = 1, ..., n and n is the number of fixations in that trial.
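Equations (e1) and (e2) translate directly into a few lines of MATLAB. In the sketch below, each row of M holds the three mask values of one fixation; the example data and variable names are illustrative.

```matlab
% Sketch of (e1) and (e2): fix scores and the trial fixation score.
s = [1 0.5 0.25];            % weights: lead car, traffic, mirror/deck
M = [1 0 0;                  % fixation 1: on the lead car
     0 0 0;                  % fixation 2: on the scenery
     0 1 0];                 % fixation 3: on the traffic

fixScore = zeros(size(M, 1), 1);
for j = 1:size(M, 1)
    if any(M(j, :))          % fixation inside at least one AOI
        fixScore(j) = sum(s .* M(j, :)) / sum(M(j, :));   % (e1)
    end                      % scenery fixations keep a score of 0
end
trialFixationScore = mean(fixScore);                      % (e2)
```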
III. RESULTS
In this section, we present the results of the analysis of the reaction time and eye tracking data. The 25-minute reaction time data were divided into two parts: the first 10 minutes were defined as the 'baseline', where vigilance decrement is assumed to be minimal, and the last 15 minutes were assumed to reflect higher vigilance decrement. An ANOVA was performed on the average reaction time of all subjects for the baseline vs. vigilance-decrement data. Figure 2 shows a statistically significant (p < 0.05) increase in reaction time from the baseline (mean = 0.994 s, SD = 0.122 s) to the last 15 minutes (mean = 1.325 s, SD = 0.24 s), indicating that the driving task successfully induced vigilance decrement in the subjects.
Figure 2. ANOVA of the average reaction time of all subjects for the first 10 minutes (baseline) and the last 15 minutes (vigilance decrement); p-value < 0.05.
Fixation scores are higher when the subjects focus more on the lead car (target) than on the non-targets. A correlation analysis between the fixation score and the reaction time across all trials of all subjects was conducted. Figure 3 shows a statistically significant negative correlation between fixation score and reaction time (r = −0.2253, p < 0.05). This agrees with our assumption that the increase in reaction time is due to the subjects gazing at AOIs that are not relevant to the task objectives, implying that, with vigilance decrement, visual attention wanders from the target objects to non-target objects.
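For completeness, both statistical tests reduce to standard MATLAB calls. The sketch below uses synthetic placeholder data with roughly the reported means and spreads; only the anova1 and corrcoef usage reflects the analysis, not the actual experimental data.

```matlab
% Sketch of the statistical analysis on synthetic placeholder data.
rtBaseline  = 0.994 + 0.122 * randn(30, 1);   % first 10 min (s)
rtDecrement = 1.325 + 0.240 * randn(45, 1);   % last 15 min (s)
pAnova = anova1([rtBaseline; rtDecrement], ...
                [zeros(30, 1); ones(45, 1)], 'off');  % one-way ANOVA

trialScores = rand(75, 1);                        % trial fixation scores
trialRTs    = 1.4 - 0.3 * trialScores + 0.2 * randn(75, 1);
[R, P] = corrcoef(trialScores, trialRTs);         % r = R(1,2), p = P(1,2)
fprintf('ANOVA p = %.3f; r = %.3f (p = %.3f)\n', pAnova, R(1,2), P(1,2));
```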
IV. DISCUSSION
A. Computer vision assisted eye tracking
In this paper, we discussed the importance of developing eye tracking metrics that can accommodate dynamically varying visual environments for assessing the performance of subjects during naturalistic tasks.
Figure 3. Correlation between trial fixation score and reaction time; correlation coefficient r = −0.2253, p-value < 0.05.
Eye trackers can be easily integrated into natural environments to assess the cognitive performance of individuals. However, very few studies have investigated dynamic AOIs in natural stimuli [16]. Although such techniques are incorporated into modern commercial software, their accuracy is limited for stimuli such as those in this study, due to various confounding factors: low contrast between object and background, multiple objects of interest, and sudden shifts in the positions of the cars due to sharp curves in the road. Hence, it is necessary to identify sophisticated and robust computer vision algorithms that can identify objects in dynamic environments with higher accuracy and can learn semantic importance depending on the task objectives.
B. Measuring vigilance decrement in dynamic environments
With decrement in vigilance, subjects' focus wanders away from the task objectives [17]. This results in gaze shifting from target objects to distractors and a decrease in the efficiency of task-relevant visual trajectories. Hence, the decrease in fixation score is accompanied by an increase in reaction time. Although the correlation is significant, the low value of the correlation coefficient may be due to the small sample size or possible outliers. This approach has enabled us to understand the dynamics of attention across multiple dynamic AOIs, a common scenario in naturalistic tasks. Other AOI-related features, such as dwell time, transitions between AOIs or time to return to target AOIs, can be further explored to develop a real time monitoring system. The sample size will also be increased in future studies.
V. CONCLUSION
In this paper, we developed a novel approach to study eye movements in naturalistic environments. Computer vision algorithms were used to detect dynamically changing AOIs. A fixation score metric was developed to account for gaze shifts between target and non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time, indicating that the fixation score decreases with vigilance decrement. This research is useful for developing computer vision assisted eye tracking systems that can monitor cognitive performance in dynamic naturalistic environments.
ACKNOWLEDGMENT
We would like to express our sincere gratitude to Dr. Quin,
Asst. Professor, Singapore University of Technology and
Design (SUTD) for allowing us to borrow the eye tracker and
data analysis software. We also thank Rohan Ghosh, graduate student at SINAPSE, NUS, for his suggestions regarding the SIFT algorithm, and Mohammed Sharif, FYP student, ECE, NUS, for helping with the data analysis.
REFERENCES
[1] Martel, A., Dähne, S., & Blankertz, B. (2014). EEG predictors of covert vigilant attention. Journal of neural engineering, 11(3), 035009.
[2] Dong, Y., Hu, Z., Uchimura, K., & Murayama, N. (2011). Driver
inattention monitoring system for intelligent vehicles: A review. IEEE
transactions on intelligent transportation systems, 12(2), 596-614.
[3] Berka, C., Levendowski, D. J., Lumicao, M. N., Yau, A., Davis,
G., Zivkovic, V. T., Olmstead, R.E., Tremoulet, P.D. & Craven, P.L.
(2007). EEG correlates of task engagement and mental workload in
vigilance, learning, and memory tasks. Aviation, space, and environ-
mental medicine, 78(5), B231-B244.
[4] Di Stasi, L. L., Catena, A., Canas, J. J., Macknik, S. L., & Martinez-
Conde, S. (2013). Saccadic velocity as an arousal index in naturalistic
tasks. Neuroscience and biobehavioral reviews, 37(5), 968-975.
[5] Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The
role of fixational eye movements in visual perception. Nature reviews
neuroscience, 5(3), 229-240.
[6] Bodala, I. P., Ke, Y., Mir, H., Thakor, N. V., & Al-Nashash, H. (2014).
Cognitive workload estimation due to vague visual stimuli using
saccadic eye movements. In Engineering in medicine and biology
society (EMBC), 2014 36th Annual international conference of the
IEEE, pp. 2993-2996. IEEE.
[7] McIntire, L. K., McKinley, R. A., Goodyear, C., & McIntire, J. P.
(2014). Detection of vigilance performance using eye blinks. Applied
ergonomics, 45(2), 354-362.
[8] Bodala, I. P., Li, J., Thakor, N. V., & Al-Nashash, H. (2016). EEG
and eye tracking demonstrate vigilance enhancement with challenge
integration. Frontiers in human neuroscience, 10.
[9] Caldara, R., & Miellet, S. (2011). iMap: a novel method for statistical
fixation mapping of eye movement data. Behavior research methods,
43(3), 864-878.
[10] Istance, H., & Hyrskykari, A. (2011). Gaze-aware systems and atten-
tive applications. Gaze interaction and applications of eye tracking:
Advances in assistive technologies, 175.
[11] Goldberg, J. H., Stimson, M. J., Lewenstein, M., Scott, N., &
Wichansky, A. M. (2002). Eye tracking in web search tasks: Design
implications. In proceedings of the 2002 symposium on Eye tracking
research and applications, pp. 51-58. ACM.
[12] Riby, D., & Hancock, P. J. (2009). Looking at movies and cartoons:
eye tracking evidence from Williams syndrome and autism. Journal of
intellectual disability research, 53(2), 169-181.
[13] Olsen, A. (2012). The tobii i-vt fixation filter. Tobii Technology.
[14] Olsen, A., & Matos, R. (2012). Identifying parameter values for an
I-VT fixation filter suitable for handling data sampled with various
sampling frequencies. In proceedings of the symposium on Eye
tracking research and applications, pp. 317-320. ACM.
[15] Vedaldi, A., & Fulkerson, B. (2010). VLFeat: An open and portable
library of computer vision algorithms. In proceedings of the 18th ACM
international conference on multimedia, pp. 1469-1472. ACM.
[16] Papenmeier, F., & Huff, M. (2010). DynAOI: A tool for matching
eye-movement data with dynamic areas of interest in animations and
movies. Behavior research methods, 42(1), 179-187.
[17] Thomson, D. R., Besner, D., & Smilek, D. (2015). A resource-control
account of sustained attention: Evidence from mind-wandering and
vigilance paradigms. Perspectives on psychological science, 10(1), 82-
96.