Towards A Framework of Detecting Mode Confusion in
Automated Driving: Examples of Data from Older Drivers
Shabnam Haghzare
Institute of Biomaterials and
Biomedical Engineering, University of
Toronto, Toronto, ON, Canada
Shabnam.Haghzare@mail.utoronto.ca
Jennifer Campos, Ph.D.
The KITE Research Institute –
University Health Network, Toronto,
ON, Canada
Jennifer.Campos@uhn.ca
Alex Mihailidis, Ph.D., P.Eng.
Department of Occupational Science
and Occupational Therapy, University
of Toronto, Toronto, ON, Canada
Alex.Mihailidis@utoronto.ca
ABSTRACT
A driver's confusion about the dynamic operating modes of an Automated Vehicle (AV), and thereby their confusion about their driving responsibilities, can compromise safety. To be able to detect drivers' mode confusion in AVs, we expand on a previous theoretical model of mode confusion and operationalize it by first defining the possible operating modes within an AV. Consequently, using these AV modes as different classes, we then propose a classification framework that can potentially detect a driver's mode confusion by classifying the driver's perceived AV mode using measures of their gaze behavior. The potential applicability of this novel framework is demonstrated by a classification algorithm that can distinguish between drivers' gaze behavior measures during two AV modes of fully-automated and non-automated driving with 93% average accuracy. The dataset was collected from older drivers (65+), who, due to changes in sensory and/or cognitive abilities, can be more susceptible to mode confusion.
CCS CONCEPTS
• Human-centered computing → Collaborative and social computing; Collaborative and social computing theory, concepts and paradigms; Computer supported cooperative work; Human computer interaction (HCI); Interaction paradigms; Collaborative interaction; Human computer interaction (HCI); HCI theory, concepts and models.
KEYWORDS
Automated Vehicles, Mode Confusion, Gaze Behavior, Classification, Driver Monitoring
ACM Reference Format:
Shabnam Haghzare, Jennifer Campos, Ph.D., and Alex Mihailidis, Ph.D., P.Eng. 2020. Towards A Framework of Detecting Mode Confusion in Automated Driving: Examples of Data from Older Drivers. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '20 Adjunct), September 21–22, 2020, Virtual Event, DC, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3409251.3411709
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
AutomotiveUI '20 Adjunct, September 21–22, 2020, Virtual Event, DC, USA
© 2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8066-9/20/09.
https://doi.org/10.1145/3409251.3411709
1 INTRODUCTION: MODE CONFUSION IN
AUTOMATED DRIVING
The safety of Automated Vehicles (AVs) in which the driving responsibilities are shared between the driver and the AV depends heavily on the effective cooperation between the two [9]. In non-automated driving, the driver is responsible for tasks that are temporally and hierarchically dependent on each other: lower-level operational tasks (steering and speed control), mid-level tactical tasks (object and event detection and vehicle maneuvering), and higher-level strategic tasks (navigation) [16, 17], all of which require the driver's constant monitoring. Given the interdependencies of the driving tasks, effective cooperation between the driver and the AV is contingent on the driver's accurate understanding of their new responsibilities in automated driving.
However, the distribution of responsibilities between the driver and the AV is not necessarily a zero-sum allocation of the non-automated driving tasks. This is because vehicle automation does not necessarily lessen the responsibilities of the driver; rather, it changes the nature of the driver's responsibilities [3, 7]. The nature of such new responsibilities depends heavily on (a) the tasks that the AV is able to execute (automation scope), (b) the degree to which the AV is automating the driving task in its scope (automation degree) [19], and (c) the driving conditions during which the AV is able to execute the tasks in its scope to its specified degree (automation operational limit) [7]. The Levels of Automation (LoA) taxonomy by the Society of Automotive Engineers (SAE) or the US National Highway Traffic Safety Administration (NHTSA) provides a general guideline around different AV functionalities [2, 6]. However, this taxonomy does not capture the variable and more nuanced driving responsibilities in AVs of the same LoA [20]. In addition, the terms used for branding commercially-available AVs and the public's limited understanding of their specific functionalities can contribute to an unsafe miscalibration between drivers' perceived responsibilities and their actual responsibilities [1, 18, 21].
Furthermore, most AVs still have an operational limit. When this limit is reached (e.g., in response to varying environmental conditions), the vehicle automation may transition to a non-automated mode. Alternatively, the automation system may "gracefully degrade", i.e., gradually narrow its scope or lower its degree of control [13]. Therefore, even an AV that is designed to operate with a maximum scope and degree can have multiple and varying modes of operation in response to changing driving conditions. The AV's dynamic operating mode has, in practice, led to drivers' confusion or lack of awareness about the AV's current operating mode [8]. The
first AV-related fatal crash reported by NHTSA is speculated to have been caused by the driver's confusion about the AV's operating mode [10, 14]. Thus, to detect drivers' mode confusion in AVs, it becomes necessary to view AV functionalities as dynamic modes that are each characterized by a distinct set of scope, degree, and operational limits. This is because, for safe AV-driver cooperation, drivers should have an accurate understanding of their responsibilities in each of the different AV modes. In this paper, we propose a framework that views AV functionalities as dynamic modes (Section 2) and propose a classification approach (Section 3) that can potentially detect instances of mode confusion. In Section 3.1, we present preliminary results of applying this framework to a dataset collected from older drivers, who, extrapolating from the literature on non-automated driving [4, 5], can be more susceptible to a lack of situational awareness, and therefore to mode confusion during automated driving, due to potential age-related declines in cognition.
2 MODELING MODE CONFUSION IN
AUTOMATED DRIVING
A recent theoretical model of driver's mode confusion [14] defines it as discrepancies between the driver's perception of the current AV mode and the true AV mode. This framework presents a Hidden Markov Model (HMM) where the observed states are the true AV modes and the hidden states are the driver's perceived AV modes.
To operationalize this theoretical model [14] in a way that practically detects mode confusion in AVs, and to generalize the model to AVs of all LoAs, we present a framework of AV operation as a Finite-State Markov Chain (F-SMC). Each possible state $S_i$, $i \in \{0, \ldots, M\}$, corresponds to an AV operating mode with a distinct combination of scopes, degrees, and operational limits (Equations 1-3), where $\mathit{Scope}$ is the combination of the driving tasks that the AV can perform and $\mathit{Degree}$ specifies whether the AV is merely aiding the driver with the tasks in its $\mathit{Scope}$ or fully automating these tasks. Each of the states can have an $\mathit{OperationalLimit}$, defined as the set of environmental/road conditions in which the AV can safely perform the tasks in its $\mathit{Scope}_i \times \mathit{Degree}_i$. However, due to the uncertainties around the conditions that give rise to AV failures, the set of such driving conditions is often not well-defined. This uncertainty around state $\mathit{OperationalLimits}$ lends itself to the probabilistic transitions in the model: if the $\mathit{OperationalLimits}$ were well-defined, the transitions resulting from reaching them would be deterministic, and the model could consequently be reduced to a Finite-State Machine.

$$S_i = \mathit{Scope}_i \times \mathit{Degree}_i \times \mathit{OperationalLimit}_i \tag{1}$$

$$\mathit{Scope}_i \in \mathcal{P}(\{\text{Longitudinal Control}, \text{Lateral Control}, \text{Monitoring}\}) \tag{2}$$

$$\mathit{Degree}_i \in \{\text{None}, \text{Decision Aid}, \text{Action Implementation: Assistance}, \text{Action Implementation: Full}\} \tag{3}$$
Corresponding to the finite number of AV modes, the model has a finite number of states, and $S_M$ indicates the ideal operating state with the widest scope and highest degree. The number of states/modes of an AV will therefore depend both on $M$ and on how gradual the planned transitions to the non-automated state ($S_0$) are. For instance, an AV that, in response to reaching a state's operational limit, degrades its state gradually by one will have $M+1$ states (Figure 1a), whereas an AV can also be designed to transition abruptly from an ideal state, in which all possible tasks in the $\mathit{Scope}$ are fully automated, to a state where none of the tasks are automated (Figure 1b).
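To make the state definition concrete, the following is a minimal sketch (ours, not part of the original model) of how the state space of Equations 1-3 could be enumerated; all Python names here are hypothetical, and operational limits are deliberately left as free text since they are often not well-defined:

```python
from dataclasses import dataclass
from enum import Enum
from itertools import chain, combinations

# Hypothetical encoding of Equations 1-3; names are ours, not the paper's.
class Task(Enum):
    LONGITUDINAL_CONTROL = "longitudinal control"
    LATERAL_CONTROL = "lateral control"
    MONITORING = "monitoring"

class Degree(Enum):
    NONE = "none"
    DECISION_AID = "decision aid"
    ACTION_ASSISTANCE = "action implementation: assistance"
    ACTION_FULL = "action implementation: full"

@dataclass(frozen=True)
class AVState:
    scope: frozenset      # element of the powerset of tasks (Eq. 2)
    degree: Degree        # Eq. 3
    # Operational limits are often underspecified (hence the probabilistic
    # transitions in the F-SMC); here we only record a free-text description.
    operational_limit: str = "underspecified"

def powerset(tasks):
    """All subsets of the task set, i.e. the candidate scopes of Eq. 2."""
    s = list(tasks)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Enumerate every candidate state S_i = Scope_i x Degree_i (Eq. 1),
# ignoring operational limits for simplicity.
states = [AVState(frozenset(scope), degree)
          for scope in powerset(Task) for degree in Degree]

# The two states of Figure 1b: fully automated (S_M) and non-automated (S_0).
S_M = AVState(frozenset(Task), Degree.ACTION_FULL)
S_0 = AVState(frozenset(), Degree.NONE)
```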
This framework captures the dynamic states of an AV in which, due to the underspecified operational limits of the states, the transitions between states are probabilistic. However, once the AV has transitioned to an arbitrary state, that state is deterministic and known. Therefore, to detect a driver's mode confusion, only the driver's perceived AV state needs to be inferred. As such, an instance of mode confusion can be detected if the inferred state is incongruent with the AV's true and deterministic state. With the hypothesis that drivers' perceived AV states are associated with their monitoring behavior, we propose using gaze behavior measures to infer the driver's perceived AV state. This morphs the theoretical HMM model [14] into a practical problem in which the observations are features of the drivers' monitoring behavior and the hidden state is the one among all possible AV states that corresponds to the driver's perceived AV state. Depending on the number of states in an AV (e.g., $M$ in Figure 1a), this problem can be framed as an $M$-class classification problem, in which the objective is to classify gaze behavior measures into one of the $M$ possible classes. In this paper, we consider a 2-class classification problem with the two classes corresponding to the AV states described in Figure 1b.
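Given such a classifier, detecting mode confusion reduces to a congruence check between the inferred perceived state and the AV's known true state. Continuing the sketch above (again with hypothetical names; the classifier itself is treated as a black box here):

```python
from typing import Callable, Sequence

def detect_mode_confusion(
    gaze_features: Sequence[float],
    true_state: AVState,
    classify_perceived_state: Callable[[Sequence[float]], AVState],
) -> bool:
    """Flag mode confusion when the state inferred from gaze behavior
    disagrees with the AV's true, deterministic operating state."""
    perceived_state = classify_perceived_state(gaze_features)
    return perceived_state != true_state
```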
3 USING GAZE BEHAVIOR TO CLASSIFY
DRIVER’S PERCEIVED AV STATE
Gaze behavior measures such as blinking, fixations, and saccades have long been successfully applied to indirectly measure drivers' mental workload and monitoring behavior [12, 15]. In this study, we investigated the use of fixation and saccade measures to distinguish between drivers' monitoring behavior in fully-automated versus non-automated driving, where the drivers were aware of the current state of the AV and were explicitly assured that the AV would operate with no risk of failure.
3.1 Data Description
Gaze behavior data were collected from 33 older adults (65+) while driving in an immersive, full-field-of-view driving simulator (DriverLab) using SmartEye Pro, a remote eye-tracking system. Each driver completed six ~8-min driving scenarios: three fully-automated ($S_{fully\text{-}auto}$) and three non-automated ($S_{non\text{-}auto}$) [11]. Participants were aware of the AV mode in each scenario, hence the assumption that their perceived AV state corresponds to the AV's true state. After excluding the data from 16 unreliable scenarios, the average duration and the number of saccades and fixations were calculated for the remaining scenarios, resulting in 182 samples $\{F_d, S_d\}_{d=1}^{182}$ with the scaled feature vector $F_d = [F_{1d}, F_{2d}, F_{3d}, F_{4d}]$ and the associated state $S_d \in \{S_{non\text{-}auto}, S_{fully\text{-}auto}\}$ as the class label for each sample/scenario $d$.
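For illustration, the per-scenario feature vector described above could be assembled as in the following sketch; the segmentation into saccade and fixation events is assumed to be provided upstream by the eye tracker, and z-scoring is one plausible choice of scaling (the paper does not specify which scaling was used):

```python
import numpy as np

def scenario_features(saccade_durations, fixation_durations):
    """Per-scenario feature vector F_d = [F1, F2, F3, F4].
    Event segmentation is assumed to come from the eye-tracking system."""
    return np.array([
        np.mean(saccade_durations),   # F1: average saccade duration
        np.mean(fixation_durations),  # F2: average fixation duration
        len(saccade_durations),       # F3: number of saccades
        len(fixation_durations),      # F4: number of fixations
    ])

def scale(X):
    """Column-wise z-scoring of the (182, 4) feature matrix (an assumption)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```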
Figure 1: Trellis diagram of the mode/state sequence in the F-SMC model of two AVs. (a): An AV designed to ideally operate in $S_M$. (b): An AV with two states of fully-automated and non-automated.
Table 1: Results of the Gaussian Process Classifier on different sets of features.

| Number of Features | Features | Accuracy (Mean ± SD) | AUC* (Mean ± SD) | F1-Score* (Mean ± SD) |
|---|---|---|---|---|
| 4 | F1 × F2 × F3 × F4 | 0.92 ± 0.05 | 0.95 ± 0.05 | 0.92 ± 0.04 |
| 3 | F1 × F2 × F3 | 0.93 ± 0.03 | 0.95 ± 0.05 | 0.93 ± 0.02 |
| 3 | F1 × F2 × F4 | 0.93 ± 0.03 | 0.95 ± 0.05 | 0.93 ± 0.02 |
| 3 | F1 × F3 × F4 | 0.85 ± 0.04 | 0.91 ± 0.03 | 0.85 ± 0.04 |
| 3 | F2 × F3 × F4 | 0.69 ± 0.06 | 0.78 ± 0.07 | 0.69 ± 0.07 |
| 2 | F1 × F2 | 0.80 ± 0.04 | 0.88 ± 0.04 | 0.79 ± 0.04 |
| 2 | F1 × F3 | 0.86 ± 0.03 | 0.91 ± 0.04 | 0.86 ± 0.03 |
| 2 | F1 × F4 | 0.86 ± 0.03 | 0.91 ± 0.04 | 0.86 ± 0.03 |
| 2 | F2 × F3 | 0.68 ± 0.05 | 0.77 ± 0.01 | 0.69 ± 0.06 |
| 2 | F2 × F4 | 0.68 ± 0.05 | 0.77 ± 0.07 | 0.69 ± 0.06 |
| 2 | F3 × F4 | 0.53 ± 0.05 | 0.60 ± 0.10 | 0.34 ± 0.24 |
| 1 | F1 (Average Saccade Duration) | 0.81 ± 0.07 | 0.89 ± 0.05 | 0.79 ± 0.07 |
| 1 | F2 (Average Fixation Duration) | 0.58 ± 0.03 | 0.58 ± 0.06 | 0.59 ± 0.05 |
| 1 | F3 (Number of Saccades) | 0.53 ± 0.04 | 0.57 ± 0.08 | 0.35 ± 0.20 |
| 1 | F4 (Number of Fixations) | 0.53 ± 0.04 | 0.57 ± 0.08 | 0.35 ± 0.20 |

*AUC = Area Under the receiver operating characteristic Curve; F1-Score = 2 × (Precision × Recall) / (Precision + Recall).
3.2 Preliminary Results
A Gaussian Process classifier with a Radial Basis Function kernel [22] yielded the best accuracy for classifying the gaze behavior features into the two classes of non-automated and fully-automated driving. Table 1 summarizes the results of a 5-fold cross-validation on different feature sets ($\subset F_d$), with all samples of a single individual kept within one of the folds. As per Table 1, the average saccade duration (F1) alone successfully distinguished the two classes, with the addition of other features increasing the classification performance.
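A minimal sketch of this evaluation setup, assuming scikit-learn; this is our reconstruction of the described procedure rather than the authors' code. GroupKFold keeps all samples of a single participant in the same fold, as described above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

def grouped_cv_scores(X, y, participant_ids, n_splits=5):
    """5-fold CV with all samples of one participant kept in a single fold.
    X: (182, k) scaled gaze features; y: 0 = non-automated, 1 = fully automated."""
    accs, aucs, f1s = [], [], []
    for train_idx, test_idx in GroupKFold(n_splits).split(X, y, groups=participant_ids):
        # Gaussian Process classifier with an RBF kernel, as in the paper;
        # the kernel hyperparameters here are scikit-learn defaults (an assumption).
        clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        accs.append(accuracy_score(y[test_idx], pred))
        aucs.append(roc_auc_score(y[test_idx], prob))
        f1s.append(f1_score(y[test_idx], pred))
    # Mean and SD per metric, matching the format of Table 1.
    return ((np.mean(accs), np.std(accs)),
            (np.mean(aucs), np.std(aucs)),
            (np.mean(f1s), np.std(f1s)))
```

Grouping the folds by participant prevents samples from the same individual from appearing in both training and test sets, so the reported scores reflect generalization to unseen drivers.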
4 CONCLUSION AND FUTURE WORK
In this paper, we have presented a framework that can potentially detect drivers' mode confusion in an AV with two operating modes. In this framework, gaze behavior measures were used to successfully classify drivers' perception of the current AV mode into one of the two possible modes. Consequently, mode confusion can be detected if the classified drivers' perceived AV mode is incongruent with the AV's true and deterministic operating mode. The current
work has two major limitations. The first is its scalability to multiple AV modes, where drivers' monitoring behavior may not be as distinct as in the two extreme modes of the dataset used here. Second, the reported classifications are based on the entire ~8-min driving scenario, whereas, to avoid the unsafe consequences of mode confusion, drivers' mode confusion should be detected within a shorter timeframe. Future work will utilize the current dataset to investigate the use of other gaze behavior features that can classify shorter instances of gaze behavior data into different AV states.
ACKNOWLEDGMENTS
We thank Katherine Bak (Toronto Rehabilitation Institute, University of Toronto) for her contributions to collecting the data used in this work. This work was supported by the Canadian Institutes of Health Research (CIHR), an AGE-WELL Graduate Award in Technology and Aging, and a Vector Institute Postgraduate Affiliate Award.
REFERENCES
[1] Hillary Abraham, Bobbie Seppelt, Bruce Mehler, and Bryan Reimer. 2017. What's in a Name: Vehicle Technology Branding & Consumer Expectations for Automation. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '17). ACM Press, New York, NY, USA, 226–234. DOI: https://doi.org/10.1145/3122986.3123018
[2] National Highway Traffic Safety Administration. 2013. Preliminary statement of policy concerning automated vehicles. Washington, DC, 1–14.
[3] Lisanne Bainbridge. 1983. Ironies of automation. In Analysis, design and evaluation of man–machine systems, 129–135.
[4] Cheryl Actor Bolstad. 2001. Situation awareness: Does it change with age? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 45, no. 4, 272–276. Sage CA, Los Angeles, CA. DOI: https://doi.org/10.1177/154193120104500401
[5] Ryan J. Caserta and Lise Abrams. 2007. The relevance of situation awareness in older adults' cognitive functioning: A review. European Review of Aging and Physical Activity 4, no. 1, 3–13. DOI: https://doi.org/10.1007/s11556-007-0018-x
[6] SAE On-Road Automated Vehicle Standards Committee. 2018. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International: Warrendale, PA, USA. DOI: https://doi.org/10.4271/j3016_201401
[7] Sidney W. A. Dekker and David D. Woods. 2002. MABA-MABA or abracadabra? Progress on human–automation co-ordination. Cognition, Technology & Work 4, no. 4, 240–244. DOI: https://doi.org/10.1007/s101110200022
[8] Mica R. Endsley. 2017. Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making 11, no. 3, 225–238. DOI: https://doi.org/10.1177/1555343417695197
[9] Frank Ole Flemisch, Klaus Bengler, Heiner Bubb, Hermann Winner, and Ralph Bruder. 2014. Towards cooperative guidance and control of highly automated vehicles: H-Mode and Conduct-by-Wire. Ergonomics 57, no. 3, 343–360. DOI: https://doi.org/10.1080/00140139.2013.869355
[10] Kareem Habib and S. Ridella. 2017. Automatic vehicle control systems. National Highway Traffic Safety Administration, 13.
[11] Shabnam Haghzare, Katherine Bak, Jennifer Campos, and Alex Mihailidis. 2019. Factors influencing older adults' acceptance of fully automated vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings (AutomotiveUI '19). ACM Press, New York, NY, USA, 135–139. DOI: https://doi.org/10.1145/3349263.3351520
[12] Tobias Hecht, Anna Feldhütter, Jonas Radlmayr, Yasuhiko Nakano, Yoshikuni Miki, Corbinian Henle, and Klaus Bengler. 2018. A review of driver state monitoring systems in the context of automated driving. In Congress of the International Ergonomics Association, 398–408. Springer, Cham. DOI: https://doi.org/10.1007/978-3-319-96074-6_43
[13] Tasuku Ishigooka, Satoshi Otsuka, Kazuyoshi Serizawa, Ryo Tsuchiya, and Fumio Narisawa. 2019. Graceful Degradation Design Process for Autonomous Driving System. In International Conference on Computer Safety, Reliability, and Security, 19–34. Springer, Cham. DOI: https://doi.org/10.1007/978-3-030-26601-1_2
[14] Christian P. Janssen, Linda Ng Boyle, Andrew L. Kun, Wendy Ju, and Lewis L. Chuang. 2019. A hidden Markov framework to capture human–machine interaction in automated vehicles. International Journal of Human–Computer Interaction 35, no. 11, 947–955. DOI: https://doi.org/10.1080/10447318.2018.1561789
[15] Gerhard Marquart, Christopher Cabrall, and Joost de Winter. 2015. Review of eye-related measures of drivers' mental workload. Procedia Manufacturing 3, 2854–2861. DOI: https://doi.org/10.1016/j.promfg.2015.07.783
[16] Natasha Merat, Bobbie Seppelt, Tyron Louw, Johan Engström, John D. Lee, Emma Johansson, Charles A. Green et al. 2019. The "out-of-the-loop" concept in automated driving: Proposed definition, measures and implications. Cognition, Technology & Work 21, no. 1, 87–98. DOI: https://doi.org/10.1007/s10111-018-0525-8
[17] John A. Michon. 1985. A critical view of driver behavior models: what do we know, what should we do? In Human behavior and traffic safety, 485–524. Springer, Boston, MA. DOI: https://doi.org/10.1007/978-1-4613-2173-6_19
[18] Michael A. Nees. 2018. Drivers' perceptions of functionality implied by terms used to describe automation in vehicles. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 62, no. 1, 1893–1897. Sage CA, Los Angeles, CA. DOI: https://doi.org/10.1177/1541931218621430
[19] Raja Parasuraman, Thomas B. Sheridan, and Christopher D. Wickens. 2000. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 30, no. 3, 286–297. DOI: https://doi.org/10.1109/3468.844354
[20] Bobbie Seppelt, Bryan Reimer, Luca Russo, Bruce Mehler, Jake Fisher, and David Friedman. 2018. Towards a Human-Centric Taxonomy of Automation Types. White Paper. DOI: https://doi.org/10.17077/drivingassessment.1723
[21] Eric R. Teoh. 2020. What's in a name? Drivers' perceptions of the use of five SAE Level 2 driving automation systems. Journal of Safety Research 72, 145–151. DOI: https://doi.org/10.1016/j.jsr.2019.11.005
[22] Carl E. Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. The MIT Press. DOI: https://doi.org/10.7551/mitpress/3206.001.0001