Measuring and Modeling Driver Steering Behavior:
From Compensatory Tracking to Curve Driving
Kasper van der El1, Daan M. Pool1, and Max Mulder1
(1) Delft University of Technology, Faculty of Aerospace Engineering, Control and Simulation section, Kluyverweg 1, 2629HS Delft, The Netherlands, e-mail: {k.vanderel, d.m.pool, m.mulder}@tudelft.nl
Abstract - Drivers rely on a variety of cues from different modalities while steering, but which exact cues are
most important and how these different cues are used is still mostly unclear. The goal of our research project is
to increase understanding of driver steering behavior; through a measuring and modeling approach we aim to
extend the validity of McRuer et al.’s crossover model for compensatory tracking to curve driving tasks. As part of
this larger research project, this paper first analyzes the four main differences between compensatory tracking and
curve driving: 1) pursuit and preview, 2) viewing perspective, 3) multiple feedback cues, and 4) boundary-avoidance
strategies due to available lane width. Second, this paper introduces multiloop system identification as a method
for explicitly disentangling the driver’s simultaneous responses to various cues, which is subsequently applied to
two sets of human-in-the-loop experimental data from a preview tracking and a curve driving experiment. The
results suggest that recent human modeling advances for preview tracking can be extended to curve driving, by
including the human’s adaptation to viewing perspective, multiple feedback cues, and lane width. Such a model’s
physically interpretable parameters promise to provide unmatched insights into between-driver steering variations,
and facilitate the systematic design of novel individualized driver support systems.
Keywords: curve driving, compensatory tracking, driver modeling, preview, system identification
Introduction
Today, driving is still a manual control task that requires continuous attention and control from the human driver. Drivers manipulate the gas pedal, brakes, and gears to change the vehicle's forward velocity (longitudinal control), and they use the steering wheel to negotiate curves, change lanes, and suppress disturbances like wind gusts (lateral control). To effectively design individualized systems for autonomous driving or driver assistance, as currently pursued [Abb11, Sal13, Gor15], it is essential to understand driver control behavior. However, humans exhibit an extremely versatile set of control skills, and it is safe to say that, today, many aspects of driver control behavior are still poorly understood. Even for lateral steering control in isolation (i.e., at constant forward velocity), a wide variety of plausible theories exist about drivers' use of preview, motion feedback, and path prediction. This is reflected by the fundamental differences in available control-theoretic models of driver steering behavior [McR77, Mac81, Hes90, Odh06, Sen09, Boe16].
Ideally, it would be desirable to have a universal model for driver steering behavior, similar to McRuer et al.'s crossover model for compensatory tracking tasks [McR67]. The crossover model has inputs and control dynamics that resemble those of the actual human. Its physically interpretable parameters can be intuitively adapted, or explicitly estimated from experimental data, to predict human behavior in new situations, to design human-machine interfaces, to quantify human skill, and to explain observed behavior. Unfortunately, the crossover model is only applicable to the extremely limited single-axis, visual compensatory tracking task (error minimization). Drivers likely adopt a complex internal control organization, integrating a variety of cues from different modalities. Moreover, as opposed to continuous error minimization, driving is a boundary-avoidance task where, in principle, any lateral position in between the lane markings can be considered acceptable [McR77].
A fundamental issue in understanding and modeling driver behavior is to determine which combination of cues, or even sensory modalities (e.g., visual, vestibular, proprioceptive), guides steering. Four fruitful approaches are: 1) eye-tracking to determine the driver's visual focus of attention [Lan94, Kan09]; 2) removal of cues (e.g., visual occlusion) in a simulator environment to measure driver use of the remaining cues in isolation [Don78, Lan95]; 3) theoretical assessment to rank the usefulness of available cues using control theory [Wei70] and visual field geometry [Wan00]; and 4) directly measuring the driver's control dynamics (i.e., input-output relation) using system identification [McR75, Ste11]. All these methods have their own strengths, but only multiloop system identification allows for unambiguously disentangling the driver's simultaneous, lumped response to various cues, while also most directly providing an experimentally validated mathematical model. To date, multiloop system identification has never been applied to study driver steering.
The goal of our research project is to obtain the much-needed fundamental insight into driver steering behavior, using a combination of all four of the mentioned approaches. We aim to quantify these new insights in a structurally-isomorphic model that extends the validity of McRuer et al.'s crossover model to curve driving tasks. As part of this larger research project, in this paper we will explain the differences between compensatory tracking and curve driving, and demonstrate the strength of multiloop system identification for studying driver steering behavior.
First, we review McRuer et al.'s crossover model, together with the system identification techniques that were used to obtain that model. Second, we explain how we plan to move from compensatory tracking to curve driving tasks, by stepwise introducing preview, perspective viewing, visual rotational cues, optic flow, vestibular motion, and two lane boundaries (as opposed to line tracking). Next, we introduce a multiloop system identification technique, which is required to separately measure the multiple, simultaneously present human responses in these more elaborate tasks. Finally, we present experimental data from two tasks with various preview times, to demonstrate the new, fundamental insight that our approach can provide about driving. The first task involved preview tracking, and the second task involved full field-of-view visual curve driving.
Measuring and Modeling Compensatory Tracking Behavior
The Crossover Model
In compensatory tracking tasks, only a single task-specific, instantaneous error is available to the human, for example representing the difference between a vehicle's desired and actual lateral position. When the desired trajectory is unpredictable, humans can only adopt a single-loop control organization, known as compensatory tracking behavior [McR67], see Fig. 1 and Fig. 2. In compensatory tasks, the human's control dynamics can be approximated with a simple linear time-invariant model; nonlinear and time-varying contributions are relatively small, and are accounted for by a remnant "signal" (n in Fig. 1).
The crossover model is given by [McR67]:

$$H_{o_e}(j\omega)\,H_{c_e}(j\omega) = \frac{\omega_c}{j\omega}\,e^{-j\omega\tau_e}, \qquad (1)$$

and states that the human and vehicle dynamics (H_oe and H_ce, respectively) combined resemble an integrator with a time delay τ_e around the crossover frequency ω_c. A set of "Verbal Adjustment Rules" quantifies the adaptation of the crossover model's variables, τ_e and ω_c, to task variables like the vehicle dynamics and the forcing functions' bandwidth [McR67]. From Eq. 1 it follows that the human's control dynamics in the crossover region are:

$$H_{o_e}(j\omega) = K_e\,\frac{1 + T_{L,e}\,j\omega}{1 + T_{l,e}\,j\omega}\,e^{-j\omega\tau_e}, \qquad (2)$$

with K_e the human's control gain, and T_L,e and T_l,e their lead and lag equalization time constants, respectively, which are adapted to achieve the crossover model's integrator dynamics around the crossover frequency. Extensions of the crossover model to lower and higher frequency ranges typically include a separate model for the neuromuscular system dynamics [McR68, Hes80]. The model parameters are physically interpretable, which facilitates their intuitive adaptation to predict behavior in new situations. Moreover, the crossover model provides explicit quantitative insights into human adaptation and skill development. Since its development in the 1960s, the crossover model has become an essential tool in the research, design, and evaluation of human-machine systems (e.g., see [McR69, Hes90, Poo16]).
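To make Eqs. 1 and 2 concrete, the short Python sketch below evaluates the crossover-model open-loop response and the corresponding human describing function on a logarithmic frequency grid. The parameter values (ω_c, τ_e, K_e, T_L,e, T_l,e) are illustrative placeholders only, not values identified in any of the cited experiments.

```python
import numpy as np

def crossover_open_loop(w, wc=2.5, tau_e=0.25):
    """Open-loop response of the crossover model, Eq. (1):
    H_oe(jw) H_ce(jw) = (wc / jw) exp(-jw tau_e).
    wc and tau_e are illustrative values, not identified parameters."""
    jw = 1j * w
    return (wc / jw) * np.exp(-jw * tau_e)

def human_describing_function(w, Ke=2.0, TLe=0.4, Tle=0.05, tau_e=0.25):
    """Human control dynamics of Eq. (2): gain, lead/lag equalization, delay."""
    jw = 1j * w
    return Ke * (1 + TLe * jw) / (1 + Tle * jw) * np.exp(-jw * tau_e)

w = np.logspace(-1, 1, 200)                  # frequency grid [rad/s]
print(np.abs(crossover_open_loop(w))[:3])    # magnitude rolls off as 1/w
print(np.angle(human_describing_function(w), deg=True)[:3])
```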
System Identification
Measuring the human's control dynamics in compensatory tracking tasks is relatively straightforward, because the human is organized as a single-input (the visual error), single-output (the steering wheel rotations) controller. McRuer et al. [McR67] used an instrumental variable, frequency-domain system identification method to estimate the linear part of the human's control dynamics. This method relies on a multisine external input signal (or forcing function), the instrumental variable, which consists of a limited number N (typically around 10) of sine waves:

$$f_t(t) = \sum_{i=1}^{N} A_i \sin(\omega_i t + \phi_i), \qquad (3)$$

with A_i the amplitude, ω_i the frequency, and φ_i the phase of the ith sinusoid. f_t corresponds to the desired trajectory forcing function in Fig. 1, which can be thought of as the road's trajectory to be followed in driving tasks. Alternatively, it is also possible to use a multisine disturbance signal f_d, which may resemble wind gusts. At the input frequencies ω_i, remnant is negligibly small compared to the human's response to the forcing function, and the human's linear control dynamics can be approximated with:

$$\hat{H}_{o_e}(j\omega_i) = \frac{S_{f_t u}(j\omega_i)}{S_{f_t e}(j\omega_i)}, \qquad (4)$$

with S the cross-power spectral density estimate of the respective subscripted signals. The N estimated Fourier coefficients $\hat{H}_{o_e}(j\omega_i)$ allowed for an explicit look into the H_oe block in Fig. 1, and enabled McRuer et al. [McR67] to propose the crossover model.
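As a minimal illustration of Eqs. 3 and 4, the sketch below generates a multisine forcing function and computes the instrumental-variable estimate of H_oe from the measured error and control signals. The sample rate, measurement time, frequency lines, and amplitude roll-off are assumed, illustrative choices; only the structure, a sum of sines on distinct frequency lines and a cross-spectral ratio evaluated at those lines, follows the method described above.

```python
import numpy as np

fs, T = 100.0, 81.92                 # sample rate [Hz] and run length [s]; illustrative
t = np.arange(0.0, T, 1.0 / fs)
w_base = 2.0 * np.pi / T             # frequency resolution; multisine lines are multiples of it

# Multisine target forcing function, Eq. (3): N sines on distinct integer frequency lines.
k_t = np.array([3, 5, 8, 13, 22, 34, 53, 86, 139, 220])   # example line numbers
w_t = k_t * w_base                                         # input frequencies [rad/s]
A_t = 1.0 / (1.0 + (w_t / 1.5) ** 2)                       # amplitude roll-off (assumed)
phi_t = 2.0 * np.pi * np.random.rand(len(w_t))             # random phases
f_t = np.sum(A_t[:, None] * np.sin(np.outer(w_t, t) + phi_t[:, None]), axis=0)

def iv_estimate(f_t, e, u, w_i, fs):
    """Instrumental-variable estimate of Eq. (4): H_oe at the input frequencies is the
    ratio of the cross-spectra S_ft,u and S_ft,e; e and u are the measured error and
    control signals from the human-in-the-loop run."""
    F, E, U = (np.fft.rfft(x) for x in (f_t, e, u))
    w_fft = 2.0 * np.pi * np.fft.rfftfreq(len(f_t), 1.0 / fs)
    idx = [int(np.argmin(np.abs(w_fft - w))) for w in w_i]
    return (np.conj(F[idx]) * U[idx]) / (np.conj(F[idx]) * E[idx])
```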
From Compensatory Tracking to Curve Driving
McRuer et al.'s [McR67] single-axis, visual compensatory tracking task is equivalent to a driving task from which only the current lateral position error with respect to the road's center-line is perceived by the driver. Clearly, drivers may additionally respond to many other cues while negotiating curves. In our research project we will stepwise introduce elements from a curve driving task into the compensatory tracking task, which is schematically shown in Fig. 2. The four main differences between compensatory tracking and curve driving will be discussed in detail in this section: 1) pursuit and preview, 2) perspective viewing, 3) multiple feedback cues, and 4) boundary avoidance.
Step 1: Pursuit and Preview
Figure 1: Driver-vehicle control diagram that illustrates the possible driver responses, based on [McR67]. The single-loop compensatory control organization is shown in black, while additional pursuit pathways are shown in gray. The driver's possible (proprioceptive) pursuit response on the steering wheel state u is not shown, because it is not considered in this paper.

Figure 2: Stepwise introduction of elements from a curve driving task (far right) into a compensatory tracking task (far left); the panels show compensatory tracking (state-of-the-art), Step 1 (pursuit and preview), Step 2 (linear perspective), Step 3 (multiple feedback cues), and Step 4 (boundary avoidance).

In contrast with compensatory tracking tasks, drivers that negotiate curves perceive cues that contain information about the desired trajectory f_t and the vehicle states x. Drivers can directly respond to these signals, which is reflected by the H_ot and H_ox blocks in Fig. 1, and which is known as pursuit tracking [All79, Hes81]. Moreover, drivers can typically preview the road for some part ahead, yielding information about the future desired trajectory f_t([t, t + τ_p]), up to a certain preview time τ_p. The additional information allows for an extremely wide variety of acceptable steering behaviors, which is an important reason why driver behavior is still poorly understood.
First, with preview, drivers can anticipate the desired trajectory, which allows them to compensate for both their own response delays and other lags, like those of the vehicle dynamics [Ito75, El17]. In fact, with sufficient preview, drivers follow a desired trajectory nearly perfectly [McL73, Mil76]. However, how drivers exactly use preview has long remained unclear, which is reflected by the many fundamentally different ways in which driver models incorporate preview. Well-known driver models use either one [McR77, Don78, Mac81], two [Sal04, Sal13], or many [Mac81, Odh06] points from the previewed trajectory ahead as input, together with any function (e.g., lateral position, heading, or curvature) of that desired trajectory.
Second, in pursuit tasks, drivers can also predict their vehicle's trajectory, because they have knowledge of both the vehicle's states and their own control inputs [Mac81, Odh06]. As with driver use of preview, it is yet unclear if, and how, drivers predict their vehicle's trajectory, which is again reflected by the many different prediction mechanisms incorporated in current driver models. Proposed driver prediction mechanisms range from simple linear extrapolation [Kon68, Wei70, Hes90] to elaborate optimization of the driver's own control inputs over a certain future time span, using a model of the vehicle's dynamics [Mac81, Odh06]; the sketch below contrasts these two extremes.
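The following sketch is ours and purely illustrative: it contrasts linear extrapolation of the current lateral state with prediction through an assumed internal vehicle model (here a double integrator), propagated over the driver's own planned inputs; it does not reproduce the specific mechanisms of [Kon68, Wei70, Hes90, Mac81, Odh06].

```python
import numpy as np

def predict_linear(y, y_dot, t_ahead):
    """Simplest proposed mechanism: linear extrapolation of the lateral position."""
    return y + y_dot * t_ahead

def predict_with_model(x0, u_planned, A, B, dt):
    """Model-based prediction: propagate an assumed internal vehicle model
    x_dot = A x + B u over the driver's own planned control inputs."""
    x = np.array(x0, dtype=float)
    for u in u_planned:
        x = x + dt * (A @ x + B * u)     # forward-Euler integration
    return x

# Illustrative double-integrator dynamics from steering input to lateral position;
# states are [lateral position, lateral velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([0.0, 1.0])
print(predict_linear(0.2, 0.5, 1.0))                              # 1 s ahead by extrapolation
print(predict_with_model([0.2, 0.5], np.zeros(100), A, B, 0.01))  # 1 s ahead, zero planned input
```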
In the first step of our research project, we investigate pursuit and preview control behavior in laboratory tracking tasks that closely resemble compensatory tracking (see Fig. 2, Step 1). A plan-view of the previewed trajectory is shown together with the vehicle's lateral position. Using multiloop system identification (explained in the next section), we estimate the human's H_ot and H_ox blocks; H_ot shows how humans use preview, while H_ox reveals if and how humans predict the vehicle's trajectory. Experimental results of this task were recently published in [El16b, El17], and will be reviewed in the final section of this paper.
Step 2: Perspective Viewing
The viewing perspective in normal driving tasks differs markedly from the plan-view preview tracking task (Step 1). In driving, linear perspective introduces a nonlinear mapping between the visual cues on the one hand, and the vehicle states and the desired trajectory on the other hand; a plan-view display (orthographic projection) only involves a linear scaling, or "gain". This has two important consequences. First, due to linear perspective the previewed trajectory in driving tasks appears smaller with increasing distance ahead (see Fig. 2, Step 2), such that tracking errors close ahead are visually emphasized. It has never been explicitly investigated if and how linear perspective evokes adaptations in human preview control behavior, because this first requires a better understanding of human preview control (Step 1).
Figure 3: Illustration of steering with perspective viewing.

Second, while the vehicle state (lateral position) is explicitly visible on the display in the plan-view tracking tasks, a driver's perspective view only shows this information implicitly, through the scenery ahead (like in Step 3 in Fig. 2). Drivers must cognitively reconstruct the vehicle's lateral position relative to the road using the perspective visual cues from the scenery ahead, or, alternatively, directly use certain perspective visual cues to control their vehicle. For example, a straight road's perspective splay angle is directly related to the vehicle's lateral deviation from the center-line [Don78, Mul05]. For small deviations this relation is approximately proportional, so the splay angle simply replaces the explicit lateral position cue that is shown on the pursuit display in Step 1. For large deviations, or on curved roads, the relation between visual cues and the vehicle's states is strongly nonlinear [Mul04].
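As a worked example of the splay-angle cue, the sketch below assumes a simple ground-plane pinhole geometry in which a road line parallel to the direction of travel, at lateral distance d from an eye at height h, projects to an image line with splay angle arctan(d/h); this geometric simplification is ours, not taken from [Don78, Mul05]. For small lateral deviations the splay angle then varies approximately linearly with the deviation, consistent with the proportionality noted above.

```python
import numpy as np

def splay_angle(lateral_offset, eye_height=1.2):
    """Splay angle (rad) of a straight road line parallel to the direction of travel,
    under a simple ground-plane pinhole geometry: tan(splay) = lateral offset / eye height.
    The 1.2 m eye height is an assumed, illustrative value."""
    return np.arctan2(lateral_offset, eye_height)

y_vehicle = np.array([0.0, 0.1, 0.3, 1.0])    # lateral deviations from the centerline [m]
d_line = 1.8 - y_vehicle                      # offset to a lane line assumed 1.8 m away
print(np.degrees(splay_angle(d_line)))        # shrinks roughly linearly for small deviations
```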
In perspective tasks, the assumption in Fig. 1 that drivers respond directly to the vehicle states and the desired trajectory is thus not necessarily valid. Instead, cues from the perspective visual scene are the input to the human, and these are related to the vehicle states by a (nonlinear) perspective transformation [Mul04, Mul05], see Fig. 3. Multiloop system identification can still be applied to estimate the H_ot and H_ox blocks in Fig. 1, but yields the lumped dynamics of the human and the perspective transformation together. The estimated lumped dynamics may reveal which perspective cues are used by the human, as was shown in piloting tasks [Swe99]. Additional measurements (e.g., eye-tracking) can provide supporting evidence for the actual inputs and control organization adopted by the driver.

In the second step of our research project, we will only investigate the effects of linear perspective on human use of preview information. To do so, we will perform the same preview tracking task as in Step 1, but with a perspectively transformed previewed trajectory (see Fig. 2, Step 2). In our research project we will thus not pinpoint which perspective cues (like splay angle) are actually used by the human; instead, we will consider the lumped human and perspective transformation dynamics together, essentially assuming that humans have direct knowledge of the vehicle states.
Step 3: Multiple Feedback Cues
The tasks discussed in Steps 1 and 2 involved only visual lateral position feedback. Indeed, lateral position in the lane is a likely cue that guides steering during curve driving [Wei70, Lan95]. However, most road vehicles have dynamics – from steering input to lateral position – that consist of more than two integrators [Raj11], such that continuous stabilizing control is required from the human, through lead equalization [McR67]. Weir and McRuer [Wei70] showed that the human can, and will, close an additional inner loop to ease the (lead equalization) requirements on the lateral-position outer loop (see Fig. 4 for an illustration). Any cue that includes information about the vehicle's lateral velocity (i.e., lead on the lateral position) or acceleration can be used as inner loop. Vestibular, proprioceptive, auditory and rotational visual cues (e.g., path/heading angle and rate) all contain such lead information. Note that none of these cues are present in the tasks in Steps 1 and 2.

Figure 4: Illustration of a multiloop control organization.
With multiloop system identification (see the next section), the driver's inner- and outer-loop control dynamics can in theory be explicitly measured and disentangled, but this has never been done to date. As such, in curve driving tasks, the exact roles of visual cues like path and heading angle, and of non-visual cues like motion feedback, are still poorly understood. In the third step of our research project, we investigate three situations that may prompt drivers to close additional inner loops: 1) the presence of physical motion feedback in the lateral-position, plan-view preview tracking task from Step 1; 2) the introduction of "camera" rotations that correspond to the vehicle's heading changes, yielding visual heading and path cues (see Fig. 2, Step 3); and 3) an increase in the strength of the path cues by increasing the visual flow (i.e., the texture density) to a level similar to that in real driving tasks. A minimal simulation sketch of the nested inner/outer-loop organization of Fig. 4 is given below.
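The following minimal simulation sketches the nested organization of Fig. 4 under strong simplifying assumptions (a kinematic vehicle, hand-picked gains, no delays or neuromuscular dynamics); it only illustrates how an inner heading loop relieves the outer lateral-position loop of lead equalization, and is not a fitted driver model.

```python
import numpy as np

# Minimal simulation of the nested organization sketched in Fig. 4: an outer loop on
# lateral position commands a desired heading, an inner loop on heading supplies the
# lead (rate) information. Gains and the kinematic vehicle are illustrative, not fitted.
dt, T = 0.01, 20.0
U = 13.9                        # forward speed [m/s], roughly 50 km/h
K_outer, K_inner = 0.02, 2.0    # assumed outer/inner loop gains
y, psi = 0.0, 0.0               # lateral position [m], heading [rad]
y_des = 1.0                     # step change in desired lateral position

for _ in range(int(T / dt)):
    psi_des = K_outer * (y_des - y)         # outer loop: position error -> desired heading
    yaw_rate = K_inner * (psi_des - psi)    # inner loop: heading error -> yaw-rate command
    psi += yaw_rate * dt
    y += U * np.sin(psi) * dt               # kinematic lateral motion

print(round(y, 2))   # settles near y_des without explicit lead equalization on y itself
```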
Step 4: Boundary Avoidance
The tasks up to Step 3 all required the human to follow a well-defined signal, which is called tracking. Drivers do not typically aim to continuously keep their vehicle on the lane's center-line, but instead steer only when the vehicle laterally approaches the road's edges [God86, Boe16]. This is called boundary avoidance, and is known to evoke less aggressive and intermittent (or "satisficing") driver steering behavior [McR77, Boe16]. In the final step of our research project we will extend the multiloop, perspective preview tracking task from Step 3 to a boundary-avoidance, curve driving task (see Fig. 2, Step 4). Due to drivers' possibly intermittent steering behavior, multiloop system identification (which assumes time-invariant behavior) alone may not suffice to reveal all the subtle differences between tracking and boundary-avoidance behavior. We intend to perform additional time-domain analyses, and to take advantage of recent advances in the modeling of intermittent human steering behavior [Mar17]. One possible way to formalize such satisficing behavior is sketched below.
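Purely as an illustration of the boundary-avoidance idea, the sketch below implements a simple dead-zone rule: no steering action as long as the extrapolated lateral position stays well inside the lane, and a corrective command otherwise. It is our own hypothetical formalization, not the satisficing or intermittent-control models of [God86, Boe16, Mar17].

```python
import numpy as np

def boundary_avoidance_command(y, y_dot, lane_half_width=1.75, margin=0.5,
                               t_predict=1.0, gain=0.3):
    """One possible formalization of satisficing, boundary-avoidance steering:
    extrapolate the lateral position t_predict seconds ahead and steer only when the
    prediction comes within `margin` of a lane edge. All numbers are assumed,
    illustrative values; this is not the model of [Boe16] or [Mar17]."""
    y_pred = y + y_dot * t_predict
    threshold = lane_half_width - margin
    if abs(y_pred) <= threshold:
        return 0.0                                       # inside the accepted band: no action
    return -gain * np.sign(y_pred) * (abs(y_pred) - threshold)

print(boundary_avoidance_command(0.2, 0.1))   # 0.0, predicted position stays in the band
print(boundary_avoidance_command(0.8, 0.6))   # small correction towards the lane center
```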
Multiloop System Identification
The introduction of elements from curve driving tasks allows humans to respond to multiple cues, or signals, instead of the single error signal in compensatory tracking tasks. To separately estimate the dynamics of multiple, simultaneously active human response blocks, the single-loop system identification technique used by McRuer et al. [McR67] to derive the crossover model has been extended to multiloop applications [Sta67, Paa98]. The maximum number of human response blocks that can be estimated is equal to the number of uncorrelated external forcing functions. For example, to estimate both the human's H_ot and H_ox pursuit blocks, two forcing functions are needed. Realistic forcing functions can be a desired trajectory f_t (e.g., a winding road) and disturbances f_d (e.g., wind gusts). The correlation between the driver's steering output and each uncorrelated forcing function then allows for disentangling the two driver response blocks. Two forcing functions can be constructed to be uncorrelated by using multisines (see Eq. 3) with mutually exclusive frequency components ω_i [Sta67, Paa98].
Consider the scheme in Fig. 1, but without the driver's possibly active H_oe response (at the end of this section we explain why this simplification poses no assumption on the actual driver's behavior). The resulting control diagram is given in Fig. 5. Neglecting the human remnant at the multisine forcing function input frequencies, we can write:

$$U(j\omega_i) = H_{o_t}(j\omega_i)\,F_t(j\omega_i) - H_{o_x}(j\omega_i)\,X(j\omega_i), \qquad (5)$$

with capitals indicating the Fourier transform of the respective signals. A second equation is needed to solve Eq. 5 for the two unknown dynamics H_ot(jω_i) and H_ox(jω_i). First, evaluate Eq. 5 only at the desired trajectory's input frequencies, ω_t. Then, interpolate the signals U(jω_d), F_t(jω_d), and X(jω_d) in the frequency domain from the neighboring disturbance signal input frequencies ω_d to ω_t, yielding $\tilde{U}(j\omega_t)$, $\tilde{F}_t(j\omega_t)$, and $\tilde{X}(j\omega_t)$, to obtain the following set of equations:

$$\begin{bmatrix} U(j\omega_t) \\ \tilde{U}(j\omega_t) \end{bmatrix} = \begin{bmatrix} F_t(j\omega_t) & -X(j\omega_t) \\ \tilde{F}_t(j\omega_t) & -\tilde{X}(j\omega_t) \end{bmatrix} \begin{bmatrix} H_{o_t}(j\omega_t) \\ H_{o_x}(j\omega_t) \end{bmatrix}, \qquad (6)$$

which can be solved for H_ot(jω_t) and H_ox(jω_t). Similarly, after interpolating all signals from ω_t to ω_d, Eq. 6 can also be evaluated at the disturbance signal input frequencies to obtain H_ot(jω_d) and H_ox(jω_d). Example multiloop system identification results are shown in Fig. 5 and will be discussed in the next section.
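A direct implementation of Eqs. 5 and 6 could look as follows; the Fourier coefficients of u, f_t, and x at the two sets of input frequencies are assumed to be available from the FFT of the measured signals, and the real and imaginary parts are interpolated separately for brevity (other interpolation schemes are possible).

```python
import numpy as np

def interp_complex(w_from, Z_from, w_to):
    """Interpolate Fourier coefficients to a new set of frequencies
    (real and imaginary parts separately, for brevity; w_from must be ascending)."""
    return (np.interp(w_to, w_from, Z_from.real)
            + 1j * np.interp(w_to, w_from, Z_from.imag))

def multiloop_estimate(w_t, U_t, Ft_t, X_t, w_d, U_d, Ft_d, X_d):
    """Solve Eq. (6) at the target input frequencies w_t: the coefficients measured at
    w_t and those interpolated from the disturbance frequencies w_d give two equations
    per frequency in the two unknowns H_ot and H_ox."""
    U_i = interp_complex(w_d, U_d, w_t)
    Ft_i = interp_complex(w_d, Ft_d, w_t)
    X_i = interp_complex(w_d, X_d, w_t)
    H_ot = np.empty(len(w_t), dtype=complex)
    H_ox = np.empty(len(w_t), dtype=complex)
    for k in range(len(w_t)):
        A = np.array([[Ft_t[k], -X_t[k]],
                      [Ft_i[k], -X_i[k]]])
        b = np.array([U_t[k], U_i[k]])
        H_ot[k], H_ox[k] = np.linalg.solve(A, b)
    return H_ot, H_ox
```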
There are three situations in which not all driver response pathways can be disentangled with multiloop system identification. First, because the number of meaningful forcing functions that can be defined is limited, the number of driver response blocks that can be separated is also limited. Second, blocks that have the same input can never be disentangled; for example, a simultaneous visual and vestibular response to (derivatives of) the vehicle's lateral position can only be estimated together, as a lumped response. Finally, due to the interdependency between e, f_t and x (e = f_t − x), it is never possible to simultaneously estimate all three response blocks, H_ot, H_ox, and H_oe. In any of these situations, more driver pathways are active than can be disentangled, and the estimated driver dynamics will be lumped combinations of all the actually active driver response blocks. The active pathways that are not present in the identified model structure are not assumed to be absent, but instead appear as "contamination" in the estimated control dynamics. As we will see in the next section, this limitation is not always problematic, because the lumped estimate of the driver's response dynamics may reveal which modality, or pathway, was active or dominant. Moreover, by our stepwise introduction of driving-task elements into a compensatory task, additional driver responses occur only gradually, which facilitates the study of many separate driver responses in isolation.

Figure 5: Illustration of estimated multiloop human controller dynamics (magnitude and phase of H_ot and H_ox versus frequency ω) for a single subject in pursuit and preview tracking tasks (gray/black markers), together with the fitted preview model (solid lines), adapted from [El16b].
Results
In this section, we demonstrate the usefulness of multiloop system identification for studying driver steering behavior. First, we review results of a pursuit and preview tracking experiment (Step 1), which were recently published in [El16b]. Second, we present our first data from a simulator-based curve driving experiment (Step 4).
Preview Tracking (Step 1)
Only recently, multiloop system identification was applied for the very first time to measure the human's H_ot(jω) and H_ox(jω) control dynamics in pursuit and preview tracking tasks [El16b]. Subjects were presented with the display in Fig. 2, Step 1 (10 cm outer radius), on a screen directly in front of them, while control inputs were given with a side stick. Tasks involved 0 s (pursuit) and 1 s of preview, both of which were repeated with gain, single-integrator, and double-integrator vehicle dynamics. The desired trajectory and disturbance signals had a bandwidth of 1.5 rad/s and a highest frequency component of 16 rad/s. Multiloop identification results for a single subject are reproduced in Fig. 5. The observed dynamics in each response block were first modeled separately [El16b], after which common elements were regrouped and the block diagram was rearranged to obtain a novel model that reflects human controllers' most likely control organization (see Fig. 6).

Figure 6: Control diagram for preview tracking tasks, derived using multiloop system identification in [El16b]; the model comprises a far-viewpoint response (gain K_f, lag time constant T_l,f, look-ahead time τ_f), a near-viewpoint response (gain K_n, lag time constant T_l,n, look-ahead time τ_n), and a "compensatory" response H_oe to an internal error, followed by the human's delay and neuromuscular dynamics and the vehicle dynamics.
Table 1: Experimental preview times τ_p and the human's estimated far-viewpoint look-ahead times τ_f.

              preview tracking [El16a]    curve driving [Ste11]
              τ_p, s      τ_f, s          τ_p, s      τ_f, s
              0.00        0.05            0.36        0.03
              0.25        0.18            0.72        0.82
              0.50        0.38            1.08        1.14
              1.00        1.01            7.20        1.50

This new model for preview tracking tasks extends McRuer et al.'s model for compensatory tracking tasks with two responses to the previewed trajectory ahead. A far viewpoint, located τ_f s ahead (typically 0.6-2 s), provides a preshaped, smoothed trajectory input to a "compensatory" error response. The "error" e responded to by the human is thus not the true error, but a time-advanced, cognitively determined internal error signal. Humans use the far-viewpoint response mechanism only to track the low frequencies (i.e., slow changes) in the desired trajectory, so the model includes a low-pass smoothing filter, characterized by time constant T_l,f (typically 0-1 s). Gain K_f (typically 0.5-1.2) reflects the human's priority to track the previewed trajectory; when K_f = 0 the human completely ignores the desired trajectory and focuses only on stabilizing the vehicle, while high values of K_f indicate a high priority for trajectory tracking. The near viewpoint, located τ_n s ahead (typically 0.1-0.9 s), is the input to an open-loop feedforward response. Humans can use this near-viewpoint response to better track the higher frequencies (quick changes) in the desired trajectory [El17], which are not followed well with the far-viewpoint response mechanism. However, not all subjects were found to apply a near-viewpoint response, and the near-viewpoint response is less pronounced when less preview is available, or when the order of the vehicle dynamics increases [El17]. A simplified frequency-domain sketch of this model is given below.
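The sketch below implements a simplified frequency-domain version of this model: only the far-viewpoint path and the compensatory internal-error response are included, the near-viewpoint feedforward and the neuromuscular dynamics are omitted, and all parameter values are illustrative placeholders within the typical ranges quoted above. The full block structure and identified parameters are given in [El16b, El17].

```python
import numpy as np

def H_oe(w, Ke=1.0, TLe=0.3, Tle=0.1, tau_v=0.25):
    """Compensatory (internal-error) response: gain, lead/lag equalization and delay;
    neuromuscular dynamics are omitted for brevity, parameter values are illustrative."""
    jw = 1j * w
    return Ke * (1 + TLe * jw) / (1 + Tle * jw) * np.exp(-jw * tau_v)

def preview_model(w, Kf=0.9, Tlf=0.4, tau_f=1.0, **oe_params):
    """Simplified reading of the Fig. 6 model (near-viewpoint feedforward omitted):
    the far viewpoint, tau_f seconds ahead, is low-pass filtered and fed to the
    compensatory response, so that
        H_ot(jw) = exp(jw tau_f) * Kf / (1 + Tlf jw) * H_oe(jw)
        H_ox(jw) = H_oe(jw)."""
    jw = 1j * w
    Hoe = H_oe(w, **oe_params)
    return np.exp(jw * tau_f) * Kf / (1 + Tlf * jw) * Hoe, Hoe

w = np.logspace(-1, 1, 50)        # frequency grid [rad/s]
H_ot, H_ox = preview_model(w)
```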
Following the development of this new preview model, we performed a second preview tracking experiment to investigate how humans adapt their control behavior to the preview time τ_p [El16a]. This experiment was performed only with integrator vehicle dynamics, and with six preview times between 0 and 1 s (of which four are reproduced here, see Tab. 1). Fig. 7 shows the multiloop system identification results for H_ot(jω) and H_ox(jω), together with the least-squares fit of the model to the measurement data. Higher preview times clearly evoke more phase lead in the human's response to the desired trajectory, which is captured in the model mainly by the far-viewpoint look-ahead time τ_f. Tab. 1 shows that the estimated value of τ_f indeed increases when more preview becomes available. The human subject kept the far viewpoint approximately at the end-point of the previewed trajectory, regardless of the amount of preview available. Note that the estimated far-viewpoint position is occasionally slightly beyond the available preview limit, because the estimated values are affected by the noise in the system (i.e., human remnant).
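A least-squares fit of the kind referred to above can be set up as follows; this sketch reuses the simplified preview_model function from the previous code block and stacks real and imaginary parts of the frequency-response residuals. The initial values and bounds are illustrative, and the actual fitting procedure of [El16a, El16b] may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_preview_model(w, H_ot_meas, H_ox_meas):
    """Least-squares fit of the simplified preview_model sketch (previous code block) to
    nonparametric H_ot and H_ox estimates; the residual stacks real and imaginary parts.
    Initial values and bounds are illustrative, not the settings used in [El16a, El16b]."""
    def residual(p):
        Kf, Tlf, tau_f, Ke, TLe, Tle, tau_v = p
        H_ot, H_ox = preview_model(w, Kf, Tlf, tau_f,
                                   Ke=Ke, TLe=TLe, Tle=Tle, tau_v=tau_v)
        err = np.concatenate([H_ot - H_ot_meas, H_ox - H_ox_meas])
        return np.concatenate([err.real, err.imag])

    p0 = np.array([0.9, 0.4, 1.0, 1.0, 0.3, 0.1, 0.25])
    fit = least_squares(residual, p0, bounds=(1e-3, [5, 5, 5, 10, 5, 5, 1]))
    return fit.x    # fitted [Kf, Tlf, tau_f, Ke, TLe, Tle, tau_v]
```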
Curve Driving (Step 4)
As a start to Step 4, we recently performed a first curve driving experiment, also with various preview times [Ste11]. The driving task was performed at a constant forward velocity of 50 km/h, in a fixed-base simulator with a 180 deg field-of-view visual screen. Moreover, as opposed to the preview tracking task from Step 1, the task involved perspective viewing, visual yaw rotational cues (i.e., path and heading), "bicycle model" vehicle dynamics, and two lane edges (boundary avoidance); control inputs were given with a steering wheel, and the highest frequency component in the desired trajectory and disturbance signals was 6.5 rad/s. Fig. 2, Step 4 shows the presented visuals.
Because, at this point, we lack understanding of human adaptation to the discussed differences between our curve driving and preview tracking task, we fit exactly the same preview tracking model to the curve driving data. Note that the bicycle model vehicle dynamics used in [Ste11], which approximate a double integrator from steering wheel inputs to lateral position, required substantial lead equalization in the human's internal error response H_oe(jω) to obtain integrator open-loop dynamics around crossover [McR67, El16b]. The near-viewpoint response was excluded from the model, as the desired-trajectory forcing function did not contain the high-frequency components at which the near-viewpoint response is active in preview tracking tasks [El16b].

Figure 7: Estimated multiloop human control dynamics (magnitude and phase of H_ot and H_ox versus frequency ω, for the four preview times in Tab. 1) for a single subject, together with fits of the preview model, for a preview tracking (PT) [El16b] and a curve driving (CD) task [Ste11].
The estimated H_ot(jω) and H_ox(jω) dynamics in the driving task are shown in Fig. 7, together with the model fits. Longer preview times evoke a highly similar adaptation of the H_ot(jω) response dynamics as seen in preview tracking tasks; namely, more phase lead and a lower response magnitude at the higher input frequencies. More phase lead shows that the subject better anticipates the desired trajectory, while a lower response magnitude indicates that more of the trajectory's high frequencies are ignored (i.e., trajectory smoothing or corner cutting). Tab. 1 shows that the estimated value of τ_f increases with increasing preview time (similar to preview tracking), and stabilizes around 1.5 s when abundant preview is available. This suggests that drivers do not use preview information beyond 1.5 s ahead (about 20 m at 50 km/h), which is consistent with the control-theoretical optimum [Mil76], and with empirical findings from occlusion [McL73, Lan95] and eye-tracking studies [Kon68, Lan94].
Fig. 7 also shows that the preview model does not perfectly capture the shape of the estimated driver dynamics. The estimated H_ot(jω) and H_ox(jω) dynamics in the driving task are likely a lumped combination of multiple driver responses. While the multiloop system identification results do show exactly how curve driving behavior differs from preview tracking behavior, separate experiments are needed to attribute these adaptations to the viewing perspective (Step 2), additional feedback cues (Step 3), the lane width (Step 4), or even other, more subtle differences between the two tasks. Nonetheless, the effect of preview time on driver behavior is already captured quite well by the preview tracking model. The model's τ_f parameter, which reflects the human's look-ahead time, allows for unique quantitative insight into driver adaptation, as well as a direct comparison to tracking data. We expect that extending the preview model to curve driving tasks will further add to this insight.
Conclusions
In this paper, we presented an approach to bring the applicability of the crossover model for human compensatory tracking behavior to curve driving tasks. Differences between compensatory tracking and curve driving were divided into four main categories: 1) pursuit and preview, 2) viewing perspective, 3) multiple feedback cues, and 4) boundary avoidance. Multiloop system identification was shown to be a valid method to separately measure multiple, simultaneously present human responses, which recently led to the extension of the crossover model to pursuit and preview tracking tasks. The preview tracking model provides new insight into driver adaptation to the preview time in curve driving tasks, but, in its current form, does not fully capture driver steering dynamics. We aim to extend the preview model to curve driving in future work, by studying human adaptation to the viewing perspective, multiple feedback cues, and boundary avoidance. This new model's physically interpretable parameters can yield unmatched insights into between-driver steering variations, and facilitate the systematic design of novel individualized driver support systems.
References

[Abb11] D. A. Abbink, M. Mulder, F. C. T. van der Helm, M. Mulder and E. R. Boer, Measuring Neuromuscular Control Dynamics During Car Following With Continuous Haptic Feedback, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41(5): 1239–1249, 2011.
[All79] R. W. Allen and D. T. McRuer, The Man/Machine Control Interface – Pursuit Control, Automatica, vol. 15(6): 683–686, 1979.
[Boe16] E. R. Boer, Satisficing Curve Negotiation: Explaining Drivers' Situated Lateral Position Variability, in Proceedings of the 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Kyoto, Japan, 2016.
[Don78] E. Donges, A Two-Level Model of Driver Steering Behavior, Human Factors, vol. 20(6): 691–707, 1978.
[El16a] K. van der El, S. Barendswaard, D. M. Pool and M. Mulder, Effects of Preview Time on Human Control Behavior in Rate Tracking Tasks, in Proceedings of the 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Kyoto, Japan, 2016.
[El16b] K. van der El, D. M. Pool, H. J. Damveld, M. M. van Paassen and M. Mulder, An Empirical Human Controller Model for Preview Tracking Tasks, IEEE Trans. on Cybernetics, vol. 46(11): 2609–2621, 2016.
[El17] K. van der El, D. M. Pool, M. M. van Paassen and M. Mulder, Effects of Preview on Human Control Behavior in Tracking Tasks with Various Controlled Elements, IEEE Trans. on Cybernetics, 2017, online preprint available.
[God86] H. Godthelp, Vehicle Control During Curve Driving, Human Factors, vol. 28(2): 211–221, 1986.
[Gor15] T. Gordon and M. Lidberg, Automated Driving and Autonomous Functions on Road Vehicles, Vehicle System Dynamics, vol. 53(7): 958–994, 2015.
[Hes80] R. A. Hess, Structural Model of the Adaptive Human Pilot, Journal of Guidance, Control, and Dynamics, vol. 3(5): 416–423, 1980.
[Hes81] R. A. Hess, Pursuit Tracking and Higher Levels of Skill Development in the Human Pilot, IEEE Trans. Systems, Man, and Cybernetics, vol. 11(4): 262–273, 1981.
[Hes90] R. A. Hess and A. Modjtahedzadeh, A Control Theoretic Model of Driver Steering Behavior, IEEE Control Systems Magazine, vol. 10(5): 3–8, 1990.
[Ito75] K. Ito and M. Ito, Tracking Behavior of Human Operators in Preview Control Systems, Electrical Eng. in Japan, vol. 95(1): 120–127, 1975 (transl. of D.K. Ronbunshi, Vol. 95C, No. 2, Feb. 1975, pp. 30–36).
[Kan09] F. I. Kandil, A. Rotter and M. Lappe, Driving is Smoother and More Stable When Using the Tangent Point, Journal of Vision, vol. 9(1): 1–11, 2009.
[Kon68] M. Kondo and A. Ajimine, Driver's Sight Point and Dynamics of the Driver-Vehicle-System Related to It, in Proc. SAE Automotive Eng. Congr., Detroit, MI, 1968.
[Lan94] M. F. Land and D. N. Lee, Where we Look When we Steer, Nature, vol. 369: 742–744, 1994.
[Lan95] M. F. Land and J. Horwood, Which Parts of the Road Guide Steering?, Nature, vol. 377: 339–340, 1995.
[Mac81] C. C. MacAdam, Application of an Optimal Preview Control for Simulation of Closed-Loop Automobile Driving, IEEE Trans. Systems, Man, and Cybernetics, vol. 11(6): 393–399, 1981.
[Mar17] G. Markkula, E. R. Boer, R. Romano and N. Merat, Sustained Sensorimotor Control as Intermittent Decisions about Prediction Errors – Computational Framework and Application to Ground Vehicle Steering, CoRR, 2017.
[McL73] J. R. McLean and E. R. Hoffmann, The Effects of Restricted Preview on Driver Steering Control and Performance, Human Factors, vol. 15(4): 421–430, 1973.
[McR67] D. T. McRuer and H. R. Jex, A Review of Quasi-Linear Pilot Models, IEEE Trans. Human Factors in Electronics, vol. 8(3): 231–249, 1967.
[McR68] D. T. McRuer, R. E. Magdaleno and G. P. Moore, A Neuromuscular Actuation System Model, IEEE Trans. Man-Machine Systems, vol. 9(3): 61–71, 1968.
[McR69] D. T. McRuer and D. H. Weir, Theory of Manual Vehicular Control, Ergonomics, vol. 12(4): 599–633, 1969.
[McR75] D. T. McRuer, D. H. Weir, H. R. Jex, R. E. Magdaleno and R. W. Allen, Measurement of Driver-Vehicle Multiloop Response Properties with a Single Disturbance Input, IEEE Transactions on Systems, Man, and Cybernetics, vol. 5(5): 490–497, 1975.
[McR77] D. T. McRuer, R. W. Allen, D. H. Weir and R. H. Klein, New Results in Driver Steering Control Models, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 19(4): 381–397, 1977.
[Mil76] R. A. Miller, On the Finite Preview Problem in Manual Control, International Journal of Systems Science, vol. 7(6): 667–672, 1976.
[Mul04] M. Mulder, M. M. van Paassen and E. R. Boer, Exploring the Roles of Information in the Control of Vehicular Locomotion: From Kinematics and Dynamics to Cybernetics, Presence: Teleoperators and Virtual Environments, vol. 13(5): 535–548, 2004.
[Mul05] M. Mulder and J. A. Mulder, Cybernetic Analysis of Perspective Flight-Path Display Dimensions, Journal of Guidance, Control, and Dynamics, vol. 28(3): 398–411, 2005.
[Odh06] A. M. C. Odhams, Identification of Driver Steering and Speed Control, Ph.D. thesis, University of Cambridge, 2006.
[Paa98] M. M. van Paassen and M. Mulder, Identification of Human Operator Control Behaviour in Multiple-Loop Tracking Tasks, in Proc. 7th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Man-Machine Systems, 515–520, Kyoto, Japan, 1998.
[Poo16] D. M. Pool, G. A. Harder and M. M. van Paassen, Effects of Simulator Motion Feedback on Training of Skill-Based Control Behavior, Journal of Guidance, Control, and Dynamics, vol. 39(4): 889–902, 2016.
[Raj11] R. Rajamani, Vehicle Dynamics and Control, Mechanical Engineering Series, Springer Science & Business Media, 2011.
[Sal04] D. D. Salvucci and R. Gray, A Two-Point Visual Control Model of Steering, Perception, vol. 33(10): 1233–1248, 2004.
[Sal13] L. Saleh, P. Chevrel, F. Claveau, J. F. Lafay and F. Mars, Shared Steering Control Between a Driver and an Automation: Stability in the Presence of Driver Behavior Uncertainty, IEEE Transactions on Intelligent Transportation Systems, vol. 14(2): 974–983, 2013.
[Sen09] C. Sentouh, P. Chevrel, F. Mars and F. Claveau, A Sensorimotor Driver Model for Steering Control, in Proc. 2009 IEEE Int. Conf. Systems, Man, and Cybernetics, 2462–2467, San Antonio, TX, 2009.
[Sta67] R. L. Stapleford, D. T. McRuer and R. E. Magdaleno, Pilot Describing Function Measurements in a Multiloop Task, IEEE Trans. Human Factors in Electronics, vol. 8(2): 113–125, 1967.
[Ste11] J. Steen, Investigating the Effect of Preview Distance on Driver Steering Behavior using System Identification, Master's thesis, TU Delft, 2011.
[Swe99] B. T. Sweet, The Identification and Modeling of Visual Cue Usage in Manual Control Task Experiments, Ph.D. thesis, Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, 1999.
[Wan00] J. P. Wann and M. F. Land, Steering With or Without the Flow: Is the Retrieval of Heading Necessary?, Trends in Cognitive Sciences, vol. 4(8): 319–324, 2000.
[Wei70] D. H. Weir and D. T. McRuer, Dynamics of Driver Vehicle Steering Control, Automatica, vol. 6(1): 87–98, 1970.
-8-
... Many models focus on the visual receptors of the human, selecting environmental cues from the complex three-dimensional visual scene, with both a perception of road path geometry and optic-flow [62]. For example, most driver steering models are based on the hypothesis of parallel high-and low-frequency compensation [48,75,174], often coupled to dedicated "far" and "near" preview/tangent points [116,145,155], respectively. Driver steering models currently implemented in DAS are often, for practical reasons, simple -e.g., two-parameter (single preview point) -driver models [129,144]. ...
... The descriptiveness area is given as a grey block illustrated in Fig. 2.4. The descriptiveness area starts 1 second before the curvature begins and ends 1 second after the curvature ends because, for curve driving, preview/prediction time is usually around 1s [174] [31]. ...
... A VAF of 100% means that measured and modelled signals are identical, and the quality of fit (as given in Fig.2.1) is high. In principle, other metrics that measure quality of fit can be used; however, VAF is preferred as it is widely used in identification literature [19,174]. The Variance Accounted For (VAF) between the signals D Θ , D exp and the D Θ is used to obtain V AF Θ Θ and V AF exp Θ respectively. ...
Book
Full-text available
Road safety is still a challenging issue. In 2020, 1.35 million people have died as a result of traffic accidents, where the number one cause of death for young adults between the age of 5 and 29 is car accidents. In an attempt to improve road safety, the automotive industry has developed numerous types of Advanced Driver Assistance Systems (ADAS). These systems are in general effective in improving safety. However, these systems will only be used if and only if drivers perceive the assistance as intuitive and cooperative. It is recently found that 61% of drivers sometimes switch off the assistance, 23% feel that current assistance are annoying and bothersome, whereas only 21% find them helpful. A safe system that is not used has no safety benefits. A promising way to improve driver acceptance and to increase safety is to employ haptic shared control (HSC), which is an effective way of keeping drivers in the active control loop. Support in the form of HSC benefits situation awareness and ensures effective monitoring of the environment and automation. However, torque conflict resulting from opposing intentions of driver and automation is reported to be a bottleneck for drivers' acceptance of HSC. Particularly, such conflicts are found to be most debilitating in curves. With each driver having an individual driving style, with different preferences and skill levels, the current standard 'one-size-fits-all' assistance approach to HSC, and driver support in general, is not satisfactory for every individual. An effective approach to increase acceptance in ADAS, and a reliable way to align the automation to the driver's preferences, is through personalisation. Here, personalisation is generally defined as 'making something suitable for the needs and preferences of a particular person'. For HSC, personalisation can be effectively realised by adapting the system's adopted trajectory to that of the driver. Therefore, the personalisation of HSC requires a driver modelling approach that predicts an individual driver's behaviour. Before this thesis, the personalisation of HSC was attempted by adjusting the gains of a corrective feedback HSC, as though it were a driver steering model itself. What was missing was 1) a HSC that allows for personalisation, i.e., a framework where a personalisable reference trajectory is independent of the haptic controller and, 2) a computational driver model or a data-driven driver classification approach that is able to describe individual drivers. When this thesis was started, a theoretical HSC concept, the 'Four-Design-Choice-Architecture' (FDCA) was introduced within our group. This promising concept was, however, not realised or implemented yet. As for modelling individual drivers, it was not known what type of driver steering and trajectory model(s) are suitable to generate personalised trajectories, if any, due to the lack of a standardised way to compare and evaluate the output performance of driver behaviour models with different structures and complexities. It was not known exactly how to achieve successful personalisation in curves, nor was the needed level of personalisation understood, i.e., adapting to the intricacies of each individual or adapting to a more general style. Moreover, whether personalisation in itself improves the acceptance of HSC systems, was still to be verified. 
These challenges are addressed in the four parts of this thesis: 1) Driver model assessment: The development of an assessment method and application on prominent control-theoretic driver models in the literature. This was done to gain an in-depth understanding of what is needed to model and describe individual drivers. 2) Driver trajectory classification: Understanding and categorising the types of individual driver trajectories present in the driving population. 3) Driver prepositioning: Understanding and modelling driver prepositioning behaviour, a behaviour found to be an essential, yet mostly overlooked aspect of curve-driving behaviour. 4) Application to Haptic Shared Control: Apply and evaluate personalised haptic shared control. This thesis has achieved it's highest level goal, which is to improve the acceptance of the haptic shared control driver support. This thesis provides an improved understanding and new insights into 1) how the novel FDC HSC has solved much of the acceptance issue put forward, and 2) an understanding of how to personalise with the FDC HSC. In terms of modelling tools and methods, this thesis has contributed with: 1) a model assessment procedure that can highlight the strengths and weaknesses of any control theoretic model, 2) a trajectory classifier, which can categorise different types of drivers, 3) a prepositioning path model, which, when combined with the Van Paassen control-theoretic driver model results in the first individual control-theoretic driver model, i.e., a model that can capture all main styles of individual driver behaviour and 4) the first personalisable HSC, where the developed modelling methods are applied to evaluate personalised haptic shared control. The findings and insights from this thesis have contributed to design guidelines and, can accelerate future research. Some examples include 1) using the individualised driver steering model, personalisation of ADAS can now be done in real-time, 2) using the developed trajectory classifier, explicit personalisation can be achieved, i.e., the driver can select the type of trajectory guidance he may want, and, 3) the driver trajectory modelling methods developed in this thesis can be used for the personalisation of path-planning in fully autonomous-vehicles.
... Users wish to be able to engage in activities that do not necessitate road observation. However, as shown in a multitude of previous studies (Turner and Griffin, 1999;Kuiper et al., 2018;Salter et al., 2019), motion sickness becomes a major constraint when taking the eyes off the road. Fortunately, there are conceivable ways of reducing sickness incidence. ...
... Within this recovery period, humans display "hypersensitivity" to new motion stimuli (Oman, 1990a). The modelling of individual dynamics is used widely in cybernetic research, one example being driver modelling (Barendswaard et al., 2017;Mars et al., 2011;Van Der El et al., 2017). This study aims to use a similar approach to motion sickness. ...
... These models facilitate improved predictions of human control behavior in pursuit and preview tracking tasks, beyond the fully linear, timeinvariant framework. Moreover, the presented data and models are a new step towards understanding nonlinear and time-varying control behavior in practical pursuit and preview control tasks, such as driving steering behavior (Steen et al., 2011;Van der El et al., 2019a). ...
... These models facilitate improved predictions of human control behavior in pursuit and preview tracking tasks, beyond the fully linear, timeinvariant framework. Moreover, the presented data and models are a new step towards understanding nonlinear and time-varying control behavior in practical pursuit and preview control tasks, such as driving steering behavior (Steen et al., 2011;Van der El et al., 2019a). Fig. 1a shows the preview tracking display that was studied in the experiments discussed in Van der El et al. (2018a,b, 2019b. ...
Conference Paper
Full-text available
In manual pursuit and preview tracking tasks, humans apply feedforward control to exploit available information of the target trajectory to follow. While the human’s linear, time-invariant dynamics in such tasks are well-understood and have been modeled in the quasi-linear framework, the remaining nonlinear and time-invariant control behavior, the human remnant, is typically ignored. This paper extends the current state-of-the-art theories of human remnant, which are applicable to compensatory tracking tasks only, to the more common and relevant pursuit and preview tracking tasks. Data are presented from three human-in-the-loop tracking experiments. The ratio of the remnant relative to the linear control output is quantified in the frequency domain, and remnant spectra are computed and modeled. The results show that the injected remnant is identical in compensatory, pursuit, and preview tasks, regardless of the task’s controlled element dynamics, preview time, and target trajectory bandwidth. The presented remnant data and models can be used together with already available linear, time-invariant models, to better predict characteristics of human control behavior in pursuit and preview tracking tasks, enabling the design of human assistance systems.
... Within this recovery period, humans display "hypersensitivity" to new motion stimuli (Oman 1990). The modeling of individual dynamics is used widely in cybernetic research, one example being driver modeling (Barendswaard et al. 2017;Mars et al. 2011;Van Der El et al. 2017). This study aims to use a similar approach to motion sickness. ...
Article
Full-text available
We investigated and modeled the temporal evolution of motion sickness in a highly dynamic sickening drive. Slalom maneuvers were performed in a passenger vehicle, resulting in lateral accelerations of 0.4 g at 0.2 Hz, to which participants were subjected as passengers for up to 30 min. Subjective motion sickness was recorded throughout the sickening drive using the MISC scale. In addition, physiological and postural responses were evaluated by recording head roll, galvanic skin response (GSR) and electrocardiography (ECG). Experiment 1 compared external vision (normal view through front and side car windows) to internal vision (obscured view through front and side windows). Experiment 2 tested hypersensitivity with a second exposure a few minutes after the first drive and tested repeatability of individuals’ sickness responses by measuring these two exposures three times in three successive sessions. An adapted form of Oman’s model of nausea was used to quantify sickness development, repeatability, and motion sickness hypersensitivity at an individual level. Internal vision was more sickening compared to external vision with a higher mean MISC (4.2 vs. 2.3), a higher MISC rate (0.59 vs. 0.10 min⁻¹) and more dropouts (66% vs. 33%) for whom the experiment was terminated due to reaching a MISC level of 7 (moderate nausea). The adapted Oman model successfully captured the development of sickness, with a mean model error, including the decay during rest and hypersensitivity upon further exposure, of 11.3%. Importantly, we note that knowledge of an individuals’ previous motion sickness response to sickening stimuli increases individual modeling accuracy by a factor of 2 when compared to group-based modeling, indicating individual repeatability. Head roll did not vary significantly with motion sickness. ECG varied slightly with motion sickness and time. GSR clearly varied with motion sickness, where the tonic and phasic GSR increased 42.5% and 90%, respectively, above baseline at high MISC levels, but GSR also increased in time independent of motion sickness, accompanied with substantial scatter.
Article
Objective: A human steering model for teleoperated driving is extended to capture the human steering behavior in haptic shared control of autonomy-enabled Unmanned Ground Vehicles (UGVs). Background: Prior studies presented human steering models for teleoperation of a passenger-sized Unmanned Ground Vehicle, where a human is fully in charge of driving. However, these models are not applicable when a human needs to interact with autonomy in haptic shared control of autonomy-enabled UGVs. How a human operator reacts to the presence of autonomy needs to be studied and mathematically encapsulated in a module to capture the collaboration between human and autonomy. Method: Human subject tests are conducted to collect data in haptic shared control for model development and validation. The ACT-R architecture and two-point steering model used in the previous literature are adopted to predict the operator’s desired steering angle. A torque conversion module is developed to convert the steering command from the ACT-R model to human torque input, thus enabling haptic shared control with autonomy. A parameterization strategy is described to find the set of model parameters that optimize the haptic shared control performance in terms of minimum average lane keeping error (ALKE). Results: The model predicts the minimum ALKE human subjects achieve in shared control. Conclusions: The extended model can successfully predict the best haptic shared control performance as measured by ALKE. Application: This model can be used in place of human operators, enabling fully simulation-based engineering, in the development and evaluation of haptic shared control technologies for autonomy-enabled UGVs, including control negotiation strategies and autonomy capabilities.
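For readers unfamiliar with the underlying control law, the sketch below shows a generic two-point steering update in the spirit of Salvucci and Gray's model, together with a hypothetical torque-conversion step; the gains, the spring-damper torque module, and all numerical values are assumptions, not the parameterized model from this study:

```python
import numpy as np

def two_point_steering_update(d_theta_near, d_theta_far, theta_near, dt,
                              k_far=16.0, k_near=4.0, k_i=3.0):
    """Two-point visual control law in the spirit of Salvucci & Gray (2004):
    the steering-angle change follows from the rotation rates of a near and a
    far point and the near-point angle itself. Gains are illustrative, not the
    parameterized values from the study."""
    return k_far * d_theta_far * dt + k_near * d_theta_near * dt \
        + k_i * theta_near * dt

def desired_torque(delta_desired, delta_current, k_torque=2.5, damping=0.3,
                   delta_rate=0.0):
    """Hypothetical torque-conversion module: a spring-damper that pushes the
    hand wheel toward the model's desired steering angle, so the command can
    blend with autonomy's haptic torque on a shared-control wheel."""
    return k_torque * (delta_desired - delta_current) - damping * delta_rate

# Example: one 10 ms step with small visual angles (radians)
d_delta = two_point_steering_update(0.002, 0.001, 0.01, dt=0.01)
tau = desired_torque(delta_desired=0.05 + d_delta, delta_current=0.05)
print(f"steering increment {d_delta:.5f} rad, commanded torque {tau:.3f} Nm")
```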
Article
Vehicle control by humans is possible because the central nervous system is capable of using visual information to produce complex sensorimotor actions. Drivers must monitor errors and initiate steering corrections of appropriate magnitude and timing to maintain a safe lane position. The perceptual mechanisms determining how a driver processes visual information and initiates steering corrections remain unclear. Previous research suggests 2 potential alternative mechanisms for responding to errors: (a) perceptual evidence (error) satisficing fixed constant thresholds (Threshold), or (b) the integration of perceptual evidence over time (Accumulator). To distinguish between these mechanisms, an experiment was conducted using a computer-generated steering correction paradigm. Drivers (N = 20) steered toward an intermittently appearing "road-line" that varied in position and orientation with respect to the driver's position and trajectory. One key prediction from a Threshold framework is a fixed absolute error response across conditions regardless of the rate of error development, whereas the Accumulator framework predicts that drivers would respond to larger absolute errors when the error signal develops at a faster rate. Results were consistent with an Accumulator framework; thus we propose that models of steering should integrate perceived control error over time in order to accurately capture human perceptual performance. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
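The distinction between the two hypothesized mechanisms can be made concrete with a small simulation (illustrative threshold, leak rate, and error ramps; not the study's stimuli or fitted parameters):

```python
import numpy as np

def correction_onset(error, dt, threshold=0.5, mode="accumulator", leak=0.2):
    """Return when a steering correction is triggered under two hypothesized
    mechanisms (illustrative parameter values):
      - "threshold":   respond as soon as the instantaneous error exceeds a
                       fixed constant threshold;
      - "accumulator": respond when leaky-integrated error evidence exceeds
                       the threshold."""
    evidence = 0.0
    for i, e in enumerate(error):
        if mode == "threshold":
            evidence = abs(e)
        else:
            evidence += dt * (abs(e) - leak * evidence)
        if evidence >= threshold:
            return i * dt, e
    return None, None

dt = 0.01
t = np.arange(0, 5, dt)
slow_error = 0.3 * t          # error develops slowly
fast_error = 1.2 * t          # error develops quickly
for name, err in [("slow", slow_error), ("fast", fast_error)]:
    t_on, e_on = correction_onset(err, dt, mode="accumulator")
    print(f"{name} ramp: respond at t={t_on:.2f} s, absolute error {e_on:.2f}")
```

With the accumulator, the faster-developing error triggers a response sooner in time but at a larger absolute error, which is the signature reported in the results above; a pure threshold mechanism would trigger at the same absolute error for both ramps.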
Article
Objective: This paper extends a prior human operator model to capture human steering performance in the teleoperation of unmanned ground vehicles (UGVs) in path-following scenarios with varying speed. Background: A prior study presented a human operator model to predict human steering performance in the teleoperation of a passenger-sized UGV at constant speeds. To enable applications to varying speed scenarios, the model needs to be extended to incorporate speed control and be able to predict human performance under the effect of accelerations/decelerations and various time delays induced by the teleoperation setting. A strategy is also needed to parameterize the model without human subject data for a truly predictive capability. Method: This paper adopts the ACT-R cognitive architecture and two-point steering model used in the previous work, and extends the model by incorporating a far-point speed control model to allow for varying speed. A parameterization strategy is proposed to find a robust set of parameters for each time delay to maximize steering performance. Human subject experiments are conducted to validate the model. Results: Results show that the parameterized model can predict both the trend of average lane keeping error and its lowest value for human subjects under different time delays. Conclusions: The proposed model successfully extends the prior computational model to predict human steering behavior in a teleoperated UGV with varying speed. Application: This computational model can be used to substitute for human operators in the process of development and testing of teleoperated UGV technologies and allows fully simulation-based development and studies.
Conference Paper
Full-text available
The understanding of human responses to visual information in car driving tasks requires the use of system identification tools that put constraints on the design of data collection experiments. Most importantly, multisine perturbation signals are required, including a multisine road geometry, to separately identify the different driver steering responses in the frequency domain. It is as yet unclear, however, to what extent drivers steer differently along such multisine roads than they do along real roads. This paper presents a method for approximating real-world road geometries with multisine signals, and applies it to a stretch of road used in an earlier investigation into driver steering. In addition, a human-in-the-loop experiment is performed to collect driver steering data for both the realistic real-world road and its multisine approximation. Overall, the analysis of driver performance metrics and driver identification data shows that drivers adopt equivalent control behaviour when steering along both roads. Hence, the use of such multisine approximations allows for the realization of realistic roads and driver behaviour in car driving experiments, in addition to supporting the application of quantitative driver identification techniques for data analysis.
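A minimal sketch of the approximation step, assuming the road is summarized by a lateral-position profile and fitted by least squares on a hand-picked set of spatial harmonics (both assumptions; the paper's actual signal choice and fitting procedure may differ):

```python
import numpy as np

# A "realistic" road centerline y(x): here just an illustrative smooth profile
L = 2000.0                         # road length [m]
x = np.linspace(0.0, L, 4000)
y_real = 3.0 * np.tanh((x - 600) / 150) - 2.0 * np.tanh((x - 1400) / 200)

# Multisine approximation: least-squares fit of sines/cosines at a small set
# of spatial harmonics (the harmonic set is an assumption, not the paper's)
harmonics = np.array([1, 2, 3, 5, 8, 13, 21])
omega = 2 * np.pi * harmonics / L
A = np.hstack([np.sin(np.outer(x, omega)), np.cos(np.outer(x, omega))])
coef, *_ = np.linalg.lstsq(A, y_real, rcond=None)
y_multisine = A @ coef

rms_err = np.sqrt(np.mean((y_real - y_multisine) ** 2))
print(f"RMS approximation error: {rms_err:.2f} m over {harmonics.size} harmonics")
```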
Conference Paper
When taking a curve, drivers follow their own unique trajectory. Most driver style classifiers in the literature are based on inertial inputs, denoting whether a given driver is aggressive or calm. However, this does not give any indication of a driver’s trajectory style, i.e., whether a driver cuts curves. To fill this void, this paper introduces a novel rule-based classifier that categorises seven different trajectory styles. The classifier is applied to data from a fixed-base driving simulator study in which 45 subjects drove on three roads, comprising three different velocities: 25, 50 and 80 km/h, with three corresponding radii: 20, 80 and 204 m. The results show that some classes are more prevalent than others, with biased outer curve negotiation performed by a majority of the subjects and with no drivers classified as centerline drivers. The proposed trajectory classifier is shown to exhibit high levels of consistency: 93% of drivers exhibit consistent trajectory classes for at least 66% of the right curves driven, and 84% exhibit consistent trajectory classes for at least 66% of the left curves driven. This consistency indicates a potential for generalising the classification results to other curves. Additionally, this classifier can be used to adapt trajectory-driven advanced driver assistance systems, thereby serving as an alternative to driver modelling.
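The seven classes and their exact rules are not given in the abstract; the reduced sketch below only illustrates the flavor of a rule-based trajectory classifier, with hypothetical class names, features, and a 0.3 m tolerance band:

```python
def classify_curve_trajectory(apex_offset, entry_offset, band=0.3):
    """Hypothetical, reduced version of a rule-based trajectory classifier.
    Signed lateral offsets [m] are taken positive toward the inside of the
    curve. The real classifier distinguishes seven styles; the class names,
    the apex/entry features and the 0.3 m band here are illustrative only."""
    if abs(apex_offset) <= band and abs(entry_offset) <= band:
        return "centerline"
    if apex_offset > band:
        return "curve cutting"
    if apex_offset < -band:
        return "biased outer curve negotiation"
    return "other"

# Example: a driver who drifts toward the inside lane edge near the apex
print(classify_curve_trajectory(apex_offset=0.8, entry_offset=-0.1))
```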
Article
Full-text available
This paper investigates how humans use a previewed target trajectory for control in tracking tasks with various controlled element dynamics. The human's hypothesized "near" and "far" control mechanisms are first analyzed offline in simulations with a quasi-linear model. Second, human control behavior is quantified by fitting the same model to measurements from a human-in-the-loop experiment, where subjects tracked identical target trajectories with a pursuit and a preview display, each with gain, single-, and double-integrator controlled element dynamics. Results show that target-tracking performance improves with preview, primarily due to the far-viewpoint response, which allows humans to cancel their own and the controlled element's lags, without additional control activity. The near-viewpoint response yields better target tracking at higher frequencies, but requires substantially more control activity. The control-theoretic approach adopted in this paper provides unique quantitative insights into human use of preview, which can help to explain human behavior observed in other preview control tasks, like driving.
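A schematic impression of the near- and far-viewpoint mechanisms is sketched below; the filter structure, look-ahead times, gains, and the omission of operator lags and neuromuscular dynamics are all simplifying assumptions relative to the published quasi-linear model:

```python
import numpy as np

# Schematic two-viewpoint preview response (structure and parameter values are
# illustrative; the published quasi-linear model includes additional operator
# and neuromuscular dynamics that are omitted here)
dt, T = 0.01, 60.0
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(1)
freqs = np.array([0.05, 0.11, 0.23, 0.41, 0.73])              # target multisine [Hz]
ft = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)

tau_far, tau_near = 1.0, 0.25        # look-ahead times [s] (assumed)
K_far, T_far, K_near = 1.0, 0.8, 0.3 # gains / smoothing time constant (assumed)

n_far, n_near = int(tau_far / dt), int(tau_near / dt)
x, far_filt = 0.0, 0.0
out = np.zeros_like(t)
for i in range(len(t) - max(n_far, n_near)):
    # far viewpoint: smoothed (low-pass filtered) previewed target, setting the
    # slowly varying reference the operator steers toward
    far_filt += dt * (ft[i + n_far] - far_filt) / T_far
    # near viewpoint: error between a nearby previewed point and current output
    near_err = ft[i + n_near] - x
    u = K_far * (far_filt - x) + K_near * near_err
    x += dt * u                      # single-integrator controlled element
    out[i] = x

rms_err = np.sqrt(np.mean((ft[:len(t) - n_far] - out[:len(t) - n_far]) ** 2))
print(f"RMS target-tracking error of the schematic model: {rms_err:.2f}")
```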
Article
Full-text available
A conceptual and computational framework is proposed for modelling of human sensorimotor control, and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency, and extends existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical observations from a driving simulator provide support for some of the framework assumptions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise steering adjustments than as continuous control. Furthermore, the amplitudes of individual steering adjustments are well predicted by a compound visual cue signalling steering error, and even better so when also adjusting for predictions of how the same cue is affected by previous control. Finally, evidence accumulation is shown to explain the observed covariability between inter-adjustment durations and adjustment amplitudes, seemingly better than the type of threshold mechanisms that are typically assumed in existing models of intermittent control.
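The superposition-of-primitives idea can be illustrated with a minimal sketch in which steering is built up from discrete, bell-shaped rate adjustments; the primitive shape, adjustment times, and amplitudes are assumptions chosen purely for illustration:

```python
import numpy as np

def primitive_rate_profile(duration, dt):
    """Bell-shaped steering-rate primitive (a Gaussian-like pulse normalized to
    unit area); the exact primitive shape in the framework is assumed here."""
    n = int(duration / dt)
    s = np.linspace(-3, 3, n)
    pulse = np.exp(-0.5 * s ** 2)
    return pulse / (pulse.sum() * dt)          # integrates to 1

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
steer_rate = np.zeros_like(t)

# Hypothetical sequence of discrete adjustments: (onset time [s], amplitude [rad])
adjustments = [(2.0, 0.04), (4.5, -0.02), (7.3, 0.05), (11.0, -0.06), (15.2, 0.03)]
pulse = primitive_rate_profile(duration=0.4, dt=dt)
for onset, amp in adjustments:
    i0 = int(onset / dt)
    steer_rate[i0:i0 + pulse.size] += amp * pulse   # superposition of primitives

steer_angle = np.cumsum(steer_rate) * dt
print(f"final steering angle: {steer_angle[-1]:.3f} rad "
      f"(= sum of adjustment amplitudes {sum(a for _, a in adjustments):.3f} rad)")
```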
Conference Paper
Full-text available
In many practical control tasks, human controllers (HC) can preview the trajectory they must follow in the near future. This paper investigates the effects of the length of the previewed target trajectory, or preview time, on HC behavior in rate tracking tasks. To do so, a human-in-the-loop experiment was performed, consisting of a combined target-tracking and disturbance-rejection task. Between conditions, the preview time was varied between 0, 0.1, 0.25, 0.5, 0.75, or 1 s, capturing the complete human control-behavioral adaptation from zero-preview tasks up to full-preview tasks, where performance remains constant. The measurements were analyzed by fitting an HC model for preview tracking tasks to the data. Results show that optimal performance is attained when the displayed preview time is longer than 0.5 s. When the preview time increases, subjects exhibit more phase lead in their target response dynamics. They respond to a single point on the target ahead when the preview time is below 0.5 s, and generally to two different points when more preview is displayed. As the model fits the measurement data tightly, its validity is extended to different preview times.
Article
Full-text available
This paper presents the results of a quasi-transfer-of-training experiment performed in the SIMONA Research Simulator at Delft University of Technology. The goal of the experiment was to quantify the effects of simulator motion feedback on the training of skill-based human operator control behavior using multi-modal human operator modeling techniques. In the experiment 24 task-naive participants, divided over two groups, were trained in performing a skill-based compensatory pitch tracking task. The first group was trained in a fixed-base setting and transferred to a moving-base condition; the second group trained with motion feedback and then transferred to the fixed-base condition. The group that received initial moving-base training showed quick adaptation of their control behavior upon transfer to the fixed-base setting and limited further learning. The group that trained in the fixed-base condition showed only limited transfer of their learned control strategy to the moving-base setting. After transfer this group initially continued to rely exclusively on visual feedback, as indicated by very low identified motion response gains, and required an amount of moving-base training identical to the other group to develop multi-modal control behavior. These results suggest that motion feedback is required for effective initial simulator-based training of skill-based manual control.
Article
Full-text available
Real-life tracking tasks often provide the human controller with preview information about the future track to follow. The effect of preview on manual control behavior is still relatively poorly understood. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and the frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.
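The Fourier coefficient idea can be sketched as follows: with a multisine target, the operator's describing function is estimated from the ratio of output and error Fourier coefficients at the excited frequencies. The simulated "operator", its gain and delay, and the noise level below are assumptions used only to demonstrate the estimator, not the identified model:

```python
import numpy as np

fs, T = 100.0, 81.92
N = int(fs * T)
t = np.arange(N) / fs
harmonics = np.array([5, 11, 23, 47, 97])            # excited multisine bins
rng = np.random.default_rng(2)
ft = sum(np.sin(2 * np.pi * (k / T) * t + rng.uniform(0, 2 * np.pi))
         for k in harmonics)

# Toy closed loop: error -> operator (gain + delay) -> single-integrator output
K_op, delay_s = 2.0, 0.25
n_d = int(delay_s * fs)
e_buf = np.zeros(n_d + 1)
x = 0.0
e, u = np.zeros(N), np.zeros(N)
for i in range(N):
    e[i] = ft[i] - x
    e_buf = np.roll(e_buf, 1); e_buf[0] = e[i]
    u[i] = K_op * e_buf[-1] + 0.05 * rng.standard_normal()   # remnant noise
    x += u[i] / fs

# Describing-function estimate from Fourier coefficients at the excited bins
E, U = np.fft.rfft(e) / N, np.fft.rfft(u) / N
H_est = U[harmonics] / E[harmonics]
for k, H in zip(harmonics, H_est):
    print(f"f = {k / T:.3f} Hz: |H| = {abs(H):.2f}, "
          f"phase = {np.angle(H, deg=True):.1f} deg")
```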
Conference Paper
Methods for identifying human control behaviour in compensatory and pursuit tracking tasks have been used extensively in the past. These methods are still very valuable, for example to study the effect of experimental conditions on control behaviour. In studies at the Faculty of Aerospace Engineering, these techniques were used to study control behaviour in multiple-loop tasks, in which multiple transfer functions for the operator were to be estimated. A novel element of these studies is that analytical expressions were derived for the bias and variance of the estimates in these multi-loop tasks. This paper revisits the technique, with an emphasis on the experimental set-up with modern equipment, the choice of test signals, and the calculation of the bias and variance of the estimates.
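A compact illustration of the multi-loop idea, assuming a pursuit task with independent target and disturbance multisines at disjoint harmonics (the simulated operator, its static gains, and the interpolation step are illustrative assumptions, not the cited technique's full treatment of bias and variance):

```python
import numpy as np

# Two independent multisines at interleaved, non-overlapping harmonics let two
# operator responses be separated: an error response (visible at disturbance
# frequencies) and a target feedforward response (visible at target frequencies).
fs, T = 100.0, 81.92
N = int(fs * T)
t = np.arange(N) / fs
k_t = np.array([6, 14, 30, 62])          # target harmonics
k_d = np.array([9, 19, 39, 79])          # disturbance harmonics (disjoint)
rng = np.random.default_rng(3)
ft = sum(np.sin(2 * np.pi * (k / T) * t + rng.uniform(0, 2 * np.pi)) for k in k_t)
fd = sum(0.5 * np.sin(2 * np.pi * (k / T) * t + rng.uniform(0, 2 * np.pi)) for k in k_d)

K_t_true, K_e_true = 0.6, 2.0            # "true" feedforward and error gains (assumed)
x = 0.0
e, u = np.zeros(N), np.zeros(N)
for i in range(N):
    e[i] = ft[i] - x
    u[i] = K_t_true * ft[i] + K_e_true * e[i] + 0.05 * rng.standard_normal()
    x += (u[i] + fd[i]) / fs             # single-integrator vehicle + disturbance

E, U, Ft = (np.fft.rfft(s) / N for s in (e, u, ft))
H_e = U[k_d] / E[k_d]                                  # error response (fd bins)
H_e_interp = np.interp(k_t, k_d, H_e.real) + 1j * np.interp(k_t, k_d, H_e.imag)
H_t = (U[k_t] - H_e_interp * E[k_t]) / Ft[k_t]         # target response (ft bins)
print("estimated |H_e| at fd harmonics:", np.round(np.abs(H_e), 2))
print("estimated |H_t| at ft harmonics:", np.round(np.abs(H_t), 2))
```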
Article
In recent years, road vehicle automation has become an important and popular topic for research and development in both academic and industrial spheres. New developments have received extensive coverage in the popular press, and it may be said that the topic has captured the public imagination. Indeed, the topic has generated interest across a wide range of academic, industry and governmental communities, well beyond vehicle engineering; these include computer science, transportation, urban planning, law, social science and psychology. While this follows a similar surge of interest in, and subsequent hiatus of, Automated Highway Systems in the 1990s, the current level of interest is substantially greater, and current expectations are high. It is common to frame the new technologies under the banner of ‘self-driving cars’: robotic systems potentially taking over the entire role of the human driver, a capability that does not fully exist at present. However, this single vision leads one to ignore the existing range of automated systems that are both feasible and useful. Recent developments are underpinned by substantial and long-term trends in the ‘computerisation’ of the automobile, with developments in sensors, actuators and control technologies spurring new advances in both industry and academia. In this paper, we review the evolution of the intelligent vehicle and the supporting technologies, with a focus on the progress and key challenges for vehicle system dynamics. A number of relevant themes around driving automation are explored in this article, with special focus on those most relevant to the underlying vehicle system dynamics. One conclusion is that increased precision is needed in sensing and controlling vehicle motions, a trend that mirrors that of the aerospace industry and that can similarly benefit from increased use of redundant by-wire actuators.