
Looming Auditory Collision Warnings for Semi-Automated Driving: An ERP Study


Marie Lahmer1, Christiane Glatz2, Verena C. Seibold3, Lewis L. Chuang2,4
1Dept. Cognitive Science, University of Tübingen, Tübingen, Germany,
2Max Planck Institute for Biological Cybernetics, Tübingen, Germany,
3Dept. Psychology, University of Tübingen, Tübingen, Germany,
4LMU Munich, Munich, Germany,
Looming sounds can be an ideal warning notification for emer-
gency braking. This agrees with studies that have consistently
demonstrated preferential brain processing for looming stim-
uli. This study investigates and demonstrates that looming
sounds can similarly benefit emergency braking in managing
a vehicle with adaptive cruise control (ACC). Specifically,
looming auditory notifications induced faster emergency
braking times relative to a static auditory notification. Next,
we compare the event-related potential (ERP) evoked by a
looming notification, relative to its static equivalent. Looming
notifications evoke a smaller fronto-central N2 amplitude than
their static equivalents. Thus, we infer that looming sounds
are consistent with the visual experience of an approaching
collision and, hence, induced a corresponding performance
benefit. Subjective ratings indicate no significant differences
in the perceived workload across the notification conditions.
Overall, this work suggests that auditory warnings should have
congruent physical properties with the visual events that they
warn for.
ACM Classification Keywords
Human-centered computing → Human computer interaction (HCI);
Human-centered computing → User studies
Author Keywords
auditory notifications; looming; braking; adaptive-cruise
control; EEG; event-related potential; N2
Recent years have witnessed astounding progress in driver
assistance systems. Nonetheless, they can occasionally fail.
This is inevitable. In recent news, an autonomous vehicle
failed to recognize a crossing pedestrian, resulting in a fatal
accident that might have been arguably prevented by driver
intervention [24]. This and other examples of automation
failure readily demonstrate the continuing relevance of an at-
tentive driver to compensate for deficits in technology through
timely intervention [40]. Given our proclivity to over-rely on
automation [32], suitably designed notifications ought to be
appropriately issued by automation to engage our attention
whenever potentially dangerous situations arise. This raises
two related questions: “What are appropriate notifications?”,
and for “Which dangerous situations?”.
According to the NASS/GES database, rear-end collisions are
the most frequent type of vehicle accident. For example,
in the United States, rear-end collisions make up 29% of
all police-reported crashes [29]. Hence, there is undeniable
utility in focusing on notifications that allow vehicle users to
pre-empt head-on collisions with an approaching object. An
approaching object increases in visual size over time. This
first-order property is often termed “looming” and is believed
to provide an intuitive perceptual read-out of time-to-contact
that supports timely braking [23]. It has been demonstrated
that complementing visual looming with a looming sound can
speed up braking times during manual vehicle handling [13].
However, several questions remain that concern implemen-
tation of looming notifications for cuing emergency braking,
which the current study addresses.
The present study investigates the role of looming sounds in
cuing emergency braking. In particular, we targeted a scenario
whereby users experienced adaptive cruise control (ACC) that
is not coupled to an autonomous braking system, whilst trav-
eling on a straight road with automatic lane-keeping. The
current work is distinct from Gray (2011; [13]) in several
ways. First, it evaluated how looming sounds influenced be-
havior with a semi-automated vehicle and not a fully manual
vehicle. Thus, it specifically evaluated the role of looming
sounds in alerting the driver to a head-on collision and not
in switching between different aspects of vehicle handling.
Second, it exposes the driver to many repetitions of a looming
notification, thus evaluating if previous findings were due to
the novelty of a looming notification or if this is an effect
that can be expected to persist over time. Finally, it employs
electroencephalography (EEG) to evaluate how looming no-
tifications are processed by the brain. The motivation is to
achieve a basic understanding of why such sounds might result
in faster braking times than other equivalent notifications and,
thus, propose recommendations for auditory notification design.

Proceedings of the 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’18), September 23–25, 2018, Toronto, Canada.
Automated vehicle handling
While automation can greatly alleviate the workload of its
users, it can also foster over-reliance [32]. Consequently,
users might be unprepared to react in time when automation
fails. Various aspects of vehicle handling are increasingly
automated, from the ubiquitous automatic gear transmission
to semi-autonomous vehicle control.
Adaptive cruise control (ACC) is a relatively familiar technol-
ogy that continues to confound its users. A vehicle with ACC
adapts its own speed to the vehicle in front of it, within certain
parameters. Nonetheless, it is unlikely to accelerate beyond
safe speed limits in pursuit of the lead vehicle nor is it always
coupled with an autonomous emergency braking system. Un-
fortunately, drivers are prone to misunderstand the function of
ACC and are especially susceptible to overestimating the reli-
ability of ACC systems [20]. Critically, the (misplaced) trust
that this engenders can result in diminished monitoring by the
user [27], particularly to respond to emergency situations.
Driver inattention, particularly in automated vehicle scenarios,
can result in substantially slower reactions. A prominent study
investigated the takeover times of automated vehicle users in
responding to potential collision objects and recommended
notifications for potential interventions to be presented 7 sec-
onds before the need to respond [12]. In other words, this
study implied that users of automated vehicles required up
to 7 seconds to understand the driving scene and to respond
appropriately. In stark contrast, manual drivers are typically
capable of responding to (and braking for) unexpected events
within 1.5 seconds [14].
Auditory notifications and the looming benefit
Slow vehicle takeover responses, especially for emergency
scenarios, could be mitigated by better designed notifications.
For example, notification warnings can be designed, not only
to alert vehicle users to the sudden need to takeover vehicle
handling but also, to provide response recommendations [4].
In the current work, we decided to focus on auditory notifica-
tions for vehicle takeover, specifically for eliciting emergency
braking responses (for a review on auditory displays, see [30]).
Auditory notifications are believed to hold several advantages
over other modalities. First, auditory notifications have been
shown to generate shorter reaction times than visual or tactile no-
tifications, particularly for vehicle takeover warnings [35].
Second, auditory notifications have been found to be subjec-
tively preferred by participants over tactile notifications [1].
Third, potential collisions—the chosen scenario of this work—
already present a highly salient warning (or notification) in
the form of a rapidly looming visual object in the field of view
of the driver [23]. Another visual warning (e.g., presented via
the dashboard) could distract driver attention away from the
real danger [17].
Auditory notifications can be broadly dichotomized into ver-
bal and non-verbal. Non-verbal notifications can be fur-
ther divided into abstract sounds (e.g., earcons) and event-
representative sounds (e.g., auditory icons). Verbal warnings
and icons are often preferred over earcons because they are
believed to be intuitive and, hence, easier to interpret [8].
Nonetheless, verbal notifications can risk being masked by
or confused with real speech from in-vehicle or radio conver-
sations [31]. For our chosen scenario of emergency braking,
auditory icons (e.g., car horn, tires screech) could serve as
effective notification warnings. Nonetheless, such sounds can
elicit false positive responses—that is, responses from the user
even when the event that they indicate does not actually occur.
The physical properties of auditory notifications can be di-
rectly manipulated to indicate an event such as an impending
collision. Specifically, looming sounds that rise exponentially
in intensity can readily communicate time-to-contact with an
approaching object and have been shown to result in faster
braking times when presented simultaneously with a lead ve-
hicle that brakes suddenly [13]. More importantly, looming
sound warnings do not elicit as many false positives as audi-
tory icons (i.e., car horn). Thus, looming sounds have two
distinct advantages as warnings for collisions. First, they are
intuitively understood by biological agents as potential threat
stimuli. For example, rhesus monkeys and young infants
demonstrate avoidance behavior when presented with looming
stimuli [39, 41]. Second, the low false positive rates associated
with looming warnings suggest that our responses to them are
not merely reactive but active instead—that is, looming sounds
do not merely elicit an avoidance response but rather provide
relevant time-to-contact information that allows us to respond
only when it is necessary.
For these reasons, looming sounds could be ideal warnings for
potential vehicle collisions. This is supported by Gray (2011;
[13]) who compared the effectiveness of different auditory
warnings in emergency braking situations. In this study, par-
ticipants performed a manual driving task, where they were
instructed to follow a lead car at a constant headway distance.
Occasionally, the lead car came to a full stop, and this event
was either preceded by an auditory warning or no warning.
Furthermore, the auditory warning was either a sound of con-
stant intensity or one of several sounds that could vary in
intensity, including a looming sound. Gray observed that
looming sounds not only resulted in faster braking, they were
also associated with the lowest rate of false positive braking,
suggesting that looming sounds are especially effective in
preventing rear-end collisions.
Nonetheless, several reasons could account for this finding and
several open questions remain. To begin, it is unclear whether
looming sounds resulted in faster emergency braking because
they promote the re-direction of user attention from another
task or if they heightened user arousal. On the one hand, loom-
ing notification warnings could have resulted in faster braking
because they facilitated a redirection of attention away from
other aspects of manual driving (e.g., lane-keeping) to emer-
gency braking. On the other hand, they could have directly
promoted the detection, recognition, and motor response to the
potential collision event itself. To discriminate between these
two explanations, the current study investigated whether loom-
ing sounds would continue to be an effective warning even
in a semi-automated vehicle scenario that involved minimal
vehicle handling.
In our study, the participant’s car followed a lead car at a
fixed distance under simulated ACC on a straight road, and no
lateral steering was required. In contrast to a manual-driving
task, this setup required minimal engagement of resources
in the driving task itself and eliminated the need to switch
attention between continuous vehicle control and emergency braking.

Also, looming warnings could have resulted in the fastest
emergency braking times because of a novelty effect, which
could wear off with repeated presentations. The current study
presented auditory notifications more frequently than Gray
(2011; [13]), who only presented each auditory warning twice.
The current study
The current study was designed to understand the auditory
looming benefit on emergency braking. We had two major
goals: First, we investigated whether looming sounds cause
an effect on processing the potential collision event itself in
a scenario in which no attention switching from the driving
task is required and a novelty effect can be excluded. To
this end, we changed the manual driving scenario used by
Gray (2011; [13]) to a semi-automated driving scenario, in
which an ACC system ensured that the participants’ car had a
constant distance to the lead car, and participants only had to
monitor the driving scene and detect potential collision events.
Furthermore, we increased the number of trials to counteract
a potential novelty effect being caused by looming sounds.
Our second major aim was to evaluate how looming sounds
influence cognitive processing of the collisions in terms of
brain responses. To this end, we measured participants’ EEG/ERP
responses while they performed the task.
Besides identifying the primary reason for the benefit of loom-
ing warnings and measuring how it modulates cognitive pro-
cessing of collision events, we also explored how looming,
as compared to constant sounds, influences participants’ perceived
workload. To this end, we measured perceived workload
by means of a questionnaire during our study. From our
viewpoint, this latter question is especially important from an
applied perspective. Specifically, drivers’ acceptance of looming
sounds as warnings might depend critically on
whether they experience higher or lower workload when using
the warning.
The influence of looming sounds on brain responses
Besides identifying the primary reason for the benefit of loom-
ing warnings, we employed EEG/ERP methods to evaluate
how looming sounds influenced our ability to respond to colli-
sions in terms of brain responses. This would further elucidate
the role of looming sounds and allow for a better deployment
of such sounds. The EEG/ERP technique allows informa-
tion processing in the brain, in response to target events, to
be inferred from measurements of voltage potential changes
via scalp-affixed electrodes. This approach has been used to
evaluate how different auditory notifications support different
cognitive processes—for example, how verbal notifications
promoted better discrimination from background distractors
while auditory icons promoted better context-updating [11].
It has also been used to account for individual differences
in behavioral performance to auditory notifications presented
across different test environments [6]. Finally, EEG/ERP ac-
tivity has also been proposed as a potential input modality for
automobiles in order to support faster predictions of braking
intentions than manual braking [16].
EEG/ERP is particularly well-suited for this purpose, given
its high temporal resolution. It allows the researcher to evalu-
ate how the brain might process auditory stimuli even in the
absence of an explicit response [37, 38]. Furthermore, ERP
components can be associated with specific aspects of stimuli
processing, such as detection and context-updating [11]. In
other words, EEG/ERP methods grant us access to understand-
ing how notifications are processed prior to the production of
an explicit response.
In the current study, we investigated how looming auditory
warnings might influence processing of the visual event of a
potential collision, through to motor preparation of the
braking response. To begin, if looming sounds promoted the
detection of the collision event, we would expect the ERP
response evoked by an auditory looming to differentiate from
a static warning within the first 200 ms after the sound onset.
Specifically, we would expect differences in the ERP ampli-
tudes of the P1 and N1 components [28, 43], which would
suggest better discrimination of the target event against the
auditory background.
Alternatively, auditory looming and static warnings might
discriminate for late ERP components (i.e., P2, N2, P3), which
are associated with higher-level cognitive processes such as
response-inhibition and context-updating. These components
usually manifest later than 200 ms after sound onset.
The P2 is believed to be influenced by the motion
direction of sounds, with larger P2s evoked by sounds moving
towards a center position in front of the participants than by
sounds moving away from the center to the left or right side
of the participants [10]. Depending on the specific task, the N2
component is usually related to response conflict or response
inhibition in case of an infrequent stimulus [9, 19], but it is
also sensitive to the congruency of audiovisual stimuli [26].
Finally, the P3 component is assumed to reflect attentional
processes: For example, it was found to be significantly larger
for attended compared to unattended infrequent stimuli [34].
Finally, the looming benefit to braking times could result from
an enhanced preparation of the motor response, reflected by
the “readiness potential” that is a slowly rising symmetric
negativity prior to the motor response itself (e.g., braking)
[25]. Related to the current work, activity related to this
component has been shown to be predictive of the intention
to perform emergency braking, approximately 130 ms earlier
than measuring brake pedal deflections directly—this loosely
translates to 3.66 m in reduced braking distance for a car
traveling at 100 km/h [16].
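The distance figure above follows from simple kinematics. A quick check, treating the ~130 ms lead time quoted from [16] as the only input (the 132 ms value below is our rounding, chosen because it reproduces the 3.66 m in the text):

```python
def braking_distance_saved(speed_kmh: float, lead_time_s: float) -> float:
    """Distance (m) a vehicle travels at speed_kmh during lead_time_s."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # convert km/h to m/s
    return speed_ms * lead_time_s

# ~130 ms of earlier braking at 100 km/h saves roughly 3.6-3.7 m:
print(round(braking_distance_saved(100.0, 0.13), 2))   # 3.61
print(round(braking_distance_saved(100.0, 0.132), 2))  # 3.67
```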
Previous studies have demonstrated a benefit from using loom-
ing notification warnings over comparable warnings. The
reasons for this, in the context of vehicle handling, remain
unclear. The current work seeks to understand this benefit
in order to establish how it can be effectively implemented,
particularly in conjunction with automated driving systems.
To achieve this, we employed an experimental design that differed
from Gray (2011; [13]): it presented more repetitions of the
notification warnings and introduced simulated driving assistance.
Moreover, we employed EEG/ERP measurements to evalu-
ate how looming warnings might be processed by the brain,
differently from an equivalent static warning.
We seek to understand the benefit of looming warnings in
terms of brain responses, by measuring how it modulates
cognitive processing of collision events. Besides this, we
also investigated how looming warnings might influence our
participants’ perceived workload relative to the use of static
warnings. From our viewpoint, the latter question is espe-
cially important from an applied perspective. Specifically,
drivers’ acceptance of looming sounds as warnings might
depend critically on whether they experience higher
or lower workload when using the warning. Previous stud-
ies (e.g., [44]) have shown that looming sounds are already
perceived in early infancy, suggesting that they may be un-
derstood intuitively. From this perspective, looming sounds
should reduce workload, because they are easy to process. On
the other hand, looming sounds are characterized by a change
in intensity that may require additional processing resources
and, hence, increase arousal and perceived workload.
Our experimental hypotheses are formulated as follows, in
terms of our dependent variables:
1. Emergency braking times
H1: All auditory warnings will result in faster braking
reactions compared to when no auditory warning is presented.
H2: Looming warnings will result in faster braking
reactions than static warnings.
H3: Looming warnings do not result in more false
positives than static warnings.
2. ERP components
H4: Early ERP components (i.e., before 200 ms) dis-
criminate for looming and static warnings, suggesting
differences in their detectability.
H5: Late ERP components (i.e., after 200 ms) dis-
criminate for looming and static warnings, suggesting
differences in their influence on higher-level cognitive
processes such as attention and evaluation / interpre-
tation of the sensory input within the current context
(e.g., in terms of the congruency of auditory and visual information).
H6: The “readiness potential” discriminates for loom-
ing and static warnings, suggesting differences in mo-
tor preparation for braking.
3. Perceived workload
H7: Looming warnings are perceived as inducing more
workload than static warnings, commensurate with
increased performance that results from recruiting more
processing resources.
H8: Looming warnings are perceived as inducing less
workload than static warnings, commensurate with
better performance that is achieved with less effort.
Figure 1. The experimental set-up: A participant is positioned in the
simulated cockpit of an automated vehicle with headphones and EEG
cap. Three horizontally-aligned displays present the simulation of the
vehicle scenario.
Experimental design
This experiment was a within-participants design with the in-
dependent variable of auditory warning (looming, static, or
no warning). In other words, participants were presented
with two possible auditory notifications or none whenever a
critical event could occur (i.e., sudden braking of the lead
vehicle). Of all the times when auditory notifications were
presented, 20% did not coincide with a critical event. These
catch trials were introduced to discourage participants from
responding to the auditory notifications alone. Dependent vari-
ables included behavioral responses (i.e., response times, false
alarms), neural responses (i.e., amplitudes of notification- and
response-evoked ERPs), and subjective ratings (i.e., DALI).
The full experiment consisted of four separable blocks with
intervening breaks.
Participants
Twenty participants (13 female, 7 male), aged 20 to 32 years
(mean = 25 years), participated in this study. All participants
reported (corrected to) normal vision and hearing. All partici-
pants had a valid driver’s license with 1 to 13 years of driving
experience (mean=6.25 years). They reported that they drove
between 200 and 35000 km per year (mean: 5190 km). One
participant was excluded from the data analysis when experi-
ment debriefing revealed that he misunderstood the task. All
participants received monetary compensation (8 eur/hr) for
their participation.
Figure 2. Mean braking times for each warning condition with error
bars that represent 95% confidence intervals of the main effect, accord-
ing to Cousineau (2005; [7]).
Apparatus
The study was conducted in a low-fidelity driving simulator
that consisted of a driver’s seat, foot pedals, and three large
LCD-monitors that were aligned horizontally (see Figure 1).
The visual simulation was programmed in Unity 5.5, with a
display refresh rate of 60 Hz.
Auditory notifications were generated in Matlab R2014a and
had a duration of 500 ms. The looming sound was calculated
as described in [13] and modulated so that it indicated a time-
to-collision of 1.86 sec. The intensity of the looming sound
ranged from 0 to 60 dB SPL. The static sound had the mean
intensity of the looming sound (i.e., 10.26 dB SPL). Auditory
notifications were presented via stereo headphones with a
source that was head-centered.
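As a rough illustration of such a stimulus (not the authors' exact synthesis, which follows [13]): for a source approaching at constant velocity, amplitude grows inversely with the remaining time-to-contact, here 1.86 s at sound onset. The 1 kHz carrier and 44.1 kHz sample rate are our assumptions, not values from the paper.

```python
import numpy as np

FS = 44100            # sample rate (Hz), assumed
DUR = 0.5             # notification duration (s), as in the study
TTC_ONSET = 1.86      # time-to-contact indicated at sound onset (s)

t = np.arange(int(FS * DUR)) / FS
ttc = TTC_ONSET - t                        # remaining time-to-contact per sample
amp = (1.0 / ttc) / (1.0 / ttc.min())      # rising 1/ttc ramp, peak normalised to 1
carrier = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz tone (assumed carrier)
looming = amp * carrier

# The static control in the study used the looming sound's mean intensity:
static = float(amp.mean()) * carrier
```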
The EEG was recorded via 63 active electrodes that were
distributed on the scalp according to the international 10-20-
system. During measurement, the reference electrode was
positioned at FCz and the ground electrode at AFz. Addition-
ally, we recorded electro-oculographic (EOG) activity via four
electrodes placed around the eyes.
The DALI questionnaire is a modification of the NASA-TLX
questionnaire and was designed to measure the perceived
workload of participants in driving scenarios. It evaluates
six sub-components of workload (auditory demand, effort of
attention, interference, situational stress, visual demand, tem-
poral demand) and consists of two parts (i.e., direct rating on
a 100-point scale and a forced-choice comparison of all six
types of workload) [33].
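A sketch of that two-part scoring, following the NASA-TLX pattern [15, 33]: each of the 15 forced pairwise choices gives one weight point to the winning scale, and the global score is the weight-averaged rating. The scale names come from the paper; the function and example data are ours.

```python
from itertools import combinations

SCALES = ["auditory demand", "effort of attention", "interference",
          "situational stress", "visual demand", "temporal demand"]

def dali_workload(ratings, pairwise_winner):
    """ratings: scale -> 0..100; pairwise_winner: frozenset pair -> chosen scale."""
    wins = {s: 0 for s in SCALES}
    for a, b in combinations(SCALES, 2):    # the 15 forced-choice comparisons
        wins[pairwise_winner[frozenset((a, b))]] += 1
    total = sum(wins.values())              # always 15
    return sum(ratings[s] * wins[s] for s in SCALES) / total

# With equal ratings, the weighting cannot change the score:
ratings = {s: 50 for s in SCALES}
winners = {frozenset(p): min(p) for p in combinations(SCALES, 2)}
print(dali_workload(ratings, winners))  # 50.0
```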
Driving scenario
The driving scenario can be described as follows. Participants
perceived themselves as users of a vehicle with ACC that was
not coupled to an autonomous emergency braking system,
traveling on a two-lane road with no curves. There was always
a lead vehicle (xx m ahead) that continuously varied its speed
between 88.5 and 105 km/h. The participants’ ego-vehicle
maintained a distance of 50 m to the lead vehicle and adjusted
its own speed constantly to achieve this, except when the lead
vehicle braked suddenly. Depending on the lead vehicle’s
speed upon braking, the time-to-collision ranged between 1.72
and 2.03 s. When this happened, participants were expected
to step on the braking pedal as soon as possible to avoid a
collision with the lead vehicle. An auditory notification was
played 66.7% of the times that this critical event occurred
and was either a looming sound or a static sound with the
mean average intensity of the looming sound. There were 60
instances of each of these three conditions. Twenty percent of
auditory notification presentations (i.e., n=12) did not coincide
with a critical event. In other words, the notification system
simulated here could be described as having a hit rate of 66.7%
and a false alarm rate of 20%, corresponding to a sensitivity
index of d’ = 1.27. Finally, there were 12 instances when neither
notification nor lead vehicle braking occurred. Altogether,
this resulted in six different events that were equally distributed
across the four test blocks, randomly shuffled, and presented
without replacement every 18-20 seconds.
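The sensitivity index quoted above can be verified from the stated hit and false-alarm rates using the standard equal-variance signal-detection formula, d' = z(HR) − z(FA):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance signal-detection sensitivity: z(HR) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(0.667, 0.20), 2))  # 1.27, matching the value in the text
```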
Procedure
Participants began by reading the experimental instructions
and a description of safety precautions before providing
signed consent. Afterwards, we affixed EEG electrodes to
the participants’ scalp with a cap and applied conductive gel.
During EEG preparation, participants were presented with as
many practice trials as was necessary, in order to familiarize
them with the simulator environment, the notifications, and
the expected responses to critical events. Testing was divided
into four blocks, which consisted of 54 trials (i.e., emergency
braking events) each, that were separated by a 15 min break.
The DALI questionnaire was administered between the test
blocks and the participants were required to report the per-
ceived workload for only one of the three auditory notification
conditions per administration. This was counterbalanced across
participants. After testing, participants were debriefed and
remunerated for their time. Altogether, testing approximated
1 hour, while the EEG preparation and breaks amounted to
another hour.
Emergency braking times
The behavioral measure for the true warning condition was
braking time, which was defined as the elapsed time from
critical event onset to braking onset. A one-way repeated-measures
ANOVA for the factor auditory warning (looming, static,
no warning) revealed a main effect, F(2, 36) = 199.5, p < .001.
Planned comparisons were performed with two-tailed paired-samples
t-tests, which revealed that looming (t(18) = 16.45, p < .001,
Cohen’s d = 3.77) and static (t(18) = 13.44, p < .001,
Cohen’s d = 3.08) notifications resulted in significantly faster
braking times than when no notifications were presented. More
importantly, looming warnings resulted in significantly faster
reactions than static warnings, t(18) = 2.46, p = .02,
Cohen’s d = 0.57 (see Figure 2, with 95% confidence intervals
for the main effect [7]). Thus, we found a medium effect size
for an emergency braking benefit from using looming notification
warnings, relative to static notification warnings.
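The within-subject confidence intervals in Figure 2 follow Cousineau (2005; [7]): subtract each participant's mean, add back the grand mean, then compute ordinary CIs on the normalised scores. A minimal sketch, with fabricated braking times for illustration only (this is Cousineau's simple normalisation, without the later Morey correction):

```python
import numpy as np

def cousineau_normalise(data):
    """data: participants x conditions array; removes between-subject variance."""
    return data - data.mean(axis=1, keepdims=True) + data.mean()

rng = np.random.default_rng(0)
fake_rts = rng.normal(loc=[0.6, 0.7, 1.2], scale=0.1, size=(19, 3))  # seconds
norm = cousineau_normalise(fake_rts)
half_width = 1.96 * norm.std(axis=0, ddof=1) / np.sqrt(norm.shape[0])  # 95% CI
```

After normalisation every participant has the same mean, so the remaining spread reflects only the condition effect the error bars are meant to convey.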
Using a two-tailed paired-samples t-test, we compared the
number of brake activations that occurred in the absence of a
critical event across both notification conditions. This analysis
did not reveal significant differences between the looming and
the static warning (t(18) = 0.57, p = .58).
To summarize, the current behavioral results are in support of
H1, H2, and H3.
ERP components
To analyze the ERP signal, we first filtered the continuous
EEG signal and re-referenced it to the common average refer-
ence. Next, non-cortical activity was removed from the EEG
signal by using an independent component analysis (ICA) that
decomposed electrode activity into estimated source dipoles.
Thirty clusters of potential source dipoles were identified, of
which eight clusters were selectively removed from further
analyses given that they were likely to be non-cortical activity
(i.e., generated by line noise, muscle and eye-movements).
The remaining EEG activity was back-projected for analysis at
the electrode level. Data was epoched relative to notification
onset (i.e., 0 ms) and baselined to the mean activity of 1000 ms
before event onset. Since we had no a-priori hypothesis of the
ERP component that would discriminate between the condi-
tions of looming and static notifications, we analyzed our data
with a mass-univariate analysis (MUA). This is a data-driven
method that compares the ERP signal of all electrodes at regu-
lar time intervals with two-tailed paired-samples t-tests; false
discovery rate was controlled with the Benjamini-Hochberg
procedure [2]. The MUA revealed significant differences be-
tween the ERP to looming and static notifications in a time
interval between 240 and 280 ms after sound onset. Figure 3
illustrates this difference and indicates that the looming notifi-
cation evoked a significantly smaller ERP amplitude, relative
to the static notification, in this time interval that coincided
with fronto-central negativity (N2) and left parietal positivity
(P2). No significant differences were revealed in the early,
perceptual components.
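The mass-univariate procedure described above can be sketched as follows; array shapes are illustrative, and the authors' exact pipeline may differ in detail:

```python
import numpy as np
from scipy import stats

def mass_univariate(cond_a, cond_b, alpha=0.05):
    """cond_*: participants x electrodes x timepoints condition averages.
    Paired t-test at every electrode/timepoint, Benjamini-Hochberg FDR [2]."""
    t, p = stats.ttest_rel(cond_a, cond_b, axis=0)   # paired over participants
    flat = np.sort(p.ravel())
    m = flat.size
    # BH: the largest k with p_(k) <= alpha * k / m sets the threshold
    ok = flat <= alpha * np.arange(1, m + 1) / m
    thresh = flat[ok].max() if ok.any() else -1.0
    return t, p, p <= thresh   # significance mask over electrodes x times
```

Clusters of adjacent significant electrode/timepoint cells, like the 240-280 ms fronto-central window reported above, are then read off the returned mask.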
To calculate the readiness potential, we used the same pre-
processing steps as for the sound-evoked ERPs, except the
event trigger (i.e., t=0 ms) was the braking response and ERP
waveforms were baselined to the mean activity of 1000 ms
after this event. An MUA showed that there were no signif-
icant differences between the readiness potential induced by
the looming and static notifications.
To summarize, only one late ERP component (i.e., fronto-
central N2, left parietal P2) discriminated between the influ-
ence of looming and static warnings on emergency braking.
This supports H5 and suggests that the looming benefit influ-
ences emergency braking in terms of its underlying cognitive
processes. There is currently no evidence for H4 and H6,
namely that the looming benefit is associated with perceptual
and motor preparation processes of emergency braking.
Perceived Workload
To analyze perceived workload, the scores for all six categories
of the DALI were first calculated according to the standard
method for the NASA-TLX [15]. Subsequently, the mean workload
scores of the six categories were combined into a single
workload value and submitted to a one-way repeated-measures
ANOVA with the factor auditory warning (looming, static, no
warning). We did not find a significant difference in DALI
workload between the warning conditions (F(2, 36) = 2.09,
p = 0.14; see Figure 4).
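For reference, the repeated-measures F statistic reported above can be computed directly from the subject-by-condition score matrix. The following is a minimal numpy sketch, not the authors' code, and it assumes complete data for every participant:

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA on a (n_subjects, n_conditions)
    matrix, e.g. combined DALI workload per warning condition
    (looming, static, no warning). Returns (F, df_effect, df_error)."""
    n, k = data.shape
    grand = data.mean()
    # Partition total variability into condition, subject, and residual.
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj  # condition-by-subject residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error
```

With three warning conditions, the reported error degrees of freedom (F(2, 36)) imply 19 complete data sets, since df_error = (n - 1)(k - 1).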
Discussion
The current study replicated and extended the finding of
a looming benefit on emergency braking [13] to a semi-
automated driving scenario. On the behavioral level, we repli-
cated the observation that braking times in an emergency brak-
ing are faster when an auditory warning is given beforehand
as compared to when there is no warning (H1). Second, we repli-
cated the observation that looming sounds as warnings lead to
faster braking times than constant sounds (H2). Importantly,
this looming benefit was not accompanied by an increase in
error rates (i.e., false alarms) (H3), which suggests that the
observed braking time differences do not simply reflect a low-
ered response criterion, but instead reflect a true processing
benefit. With regard to brain responses, we found that loom-
ing warnings differed from static warnings in terms of
cognitive processing, as indexed by a fronto-central negativ-
ity in the time range of the N2 (H5), but did not differ
at the level of perceptual processing or response preparation.
Finally, and in contrast to our expectations, we did not find
any evidence to suggest that auditory warnings influence the
perceived workload of emergency braking.
The behavioral findings here suggest that the looming benefit
on emergency braking is supported by a direct relationship
between the auditory warning and the collision event itself.
Although looming sounds could also contribute to faster brak-
ing times by capturing attentional resources from other tasks
to promote a response to the collision event, this is unlikely to
be an explanation for the current results. If attentional capture
played a role, we would have expected the looming benefit
to be negligible after numerous repetitions, due to an adapta-
tion to the looming sound. Yet, we observed a clear looming
benefit, despite presenting it three times as often as Gray
(2011) [13]. Therefore, we believe that auditory looming warnings
will continue to offer a braking benefit even with repeat ex-
posure. This favors the real-world implementation of such
sounds, particularly for anticipated collisions.
At first glance, the looming benefit on braking times might
seem small. Nonetheless, this is a robust finding and we
believe that this looming benefit will be larger in real-world
situations. First of all, our participants had only one task (i.e.,
emergency braking) and still benefited from looming warnings.
Therefore, we can expect the looming benefit to be even larger
Session 7: Special Approaches AutomotiveUI ’18, Toronto, Canada
Figure 3. Results of the ERP analyses. a) depicts the averaged activity of all significant fronto-central electrodes in the time window from sound onset
(0 ms) to 400 ms after sound onset. The static warning condition shows a larger negative peak. b) depicts the averaged activity of all significant left
parietal electrodes in the time window from sound onset (0 ms) to 400 ms after sound onset. The static warning condition shows a larger positive peak.
c) Overview of significant electrodes at 260 ms after sound onset. Blue electrodes indicate that the static warning condition is more negative than the
looming-warning condition, red electrodes indicate the opposite.
in scenarios where participants might be distracted by other
tasks. In fact, the looming benefit was reported to approximate
115 ms in [13]. Moreover, our participants were constantly
prepared to respond to an emergency braking scenario and,
yet, the benefit of looming sounds persisted. This state of
readiness and heightened arousal is unlikely to be true in the
real world, strongly suggesting that larger looming benefits
will be observed in the real world. Finally, it is important
to note that the looming benefit does not merely induce a
braking response. If this was true, we would have observed
more false braking to looming warnings than static warnings.
Instead, looming warnings directly support our users’ (or their
brains’) ability to evaluate an impending collision event. This
is supported by the ERP results.
Differences between the looming and static warning were
found to influence the ERP waveform to the potential collision
event at a late stage. Specifically, a fronto-central negative
deflection (i.e., N2; 240 to 280 ms) was larger when the poten-
tial collision was accompanied by a static warning, relative to
when a looming warning was played instead. A left parietal
positive deflection, which discriminated between the two
warnings, was similarly observed in the same time range. Given
this timing, we infer that the warnings impacted our partici-
pants’ cognitive assessment of the collision event, instead of
their detection of it.
To account for the currently observed differences in late ERP
components, we suggest that looming warnings facilitate the
processing of the looming visual stimulus of a rapidly ap-
proaching lead vehicle, namely the time-to-collision. This
view is supported by Lindström et al. (2012) who also found
differences in the fronto-central N2 to depend on audiovisual
congruency [26]: In their experiment, the fronto-central N2
was larger when a sound appeared simultaneously with an
incongruent visual stimulus. The authors attributed the N2
difference to the processing of audiovisual associations which
is easier if the visual and the auditory stimulus are congruent.
Figure 4. Results of the DALI showing, for each warning condition,
the perceived workload in all 6 workload sub-indices and overall mean
workload (right). Error bars indicate the standard error of the mean.
A similar line of reasoning can explain the current results.
Here, a looming sound is congruent with the visual percept of
a fast approaching collision object, while a static warning is
incongruent. Therefore, the pairing of a static sound with a
collision event is incongruent and, hence, harder to process.
This, in turn, results in a larger fronto-central negativity (N2)
and longer braking times.
Analysis of ERPs preceding the braking response revealed no
significant differences in the readiness potential between static
and looming sounds. Thus, we report no evidence for the pos-
sibility that the looming benefit promotes motor preparation
for braking. This finding contrasts with previous research in
which an activation of motor related areas was observed for
looming sounds [3, 42]. We speculate that this discrepancy
in findings might be due to task differences. In the previ-
ous studies, the task was to actively listen to looming sounds.
Preferential activation of motor-related brain regions occurred
shortly before the time-to-collision. In our experiment, partici-
pants were supposed to brake as soon as possible when their
car was about to collide. Therefore, any possible collision was
already mitigated prior to the time of collision itself.
Finally, the DALI questionnaire revealed no significant differ-
ences in subjective overall perceived workload. This contra-
dicts H7 and H8, which predicted that looming sounds could
give rise to more or less workload. In retrospect, this is consis-
tent with our ERP results that suggest that the looming benefit
is due to the congruency of the auditory warning characteris-
tics with the visual event. If true, such an influence is unlikely
to correspond with notions of mental workload. Instead, it
strongly suggests that warning displays might be better evalu-
ated in terms of their perceived similarity with the events that
they are intended to notify or warn about in the first place.
As outlined above, we interpret the observed looming benefit
in terms of audio-visual congruency rather than hastened at-
tentional re-direction or motor preparation. On the grounds of
this interpretation, one might even argue that looming sounds
as warnings should be used specifically in those situations
in which the visual input is also looming. Thus, it could be
interesting to investigate if looming sounds from different di-
rections could similarly result in a heightened awareness of
neighboring collision objects that could benefit other vehicle
maneuvers. For example, one could pair a looming sound from
the left with a car approaching from the left to see whether
the looming sound results in a faster evasive maneuver.
In the context of highly automated driving, it would be prof-
itable to consider the influence of looming warnings when
users are occupied with an additional non-driving task. One
important research question for future studies might be to ask
how efficient looming warnings are in attracting attention away
from different non-driving tasks, from passive reading to ac-
tive work such as writing. Finally, it would be interesting
to examine how the looming benefit might vary when the
auditory modality is occupied either by another task (e.g., tele-
phone conversation) or ambient sounds (e.g., radio). Besides
the auditory modality, tactile looming notifications can also
be implemented as an in-vehicle display, and their influence on
driving awareness evaluated. Nonetheless, it should be noted
that previous studies have suggested that tactile warnings have
limited utility and require careful design [1, 5].
Other auditory sounds have already been demonstrated to be
effective, and for different reasons. For example, verbal warn-
ings can sometimes result in faster responses than non-verbal
warnings [18], presumably because they are more easily dis-
criminable from auditory stimuli that are not in-vehicle warn-
ings [36, 11]. The current work suggests that it is a viable
design principle to make auditory warnings consistent with
the visual event to which they are intended to elicit a fast
response. This will result in more fluent cognitive process-
ing of the event itself that can be expected to translate into
faster responses. The looming characteristic is one that can be
easily introduced to verbal and iconic sounds that are shown
to be effective in their own right. In this regard, it might
make effective warnings more salient and, hence, even more effective.
We anticipate that implicit brain responses will be an increas-
ingly important tool in trying to understand how humans re-
spond to automotive user interfaces. This is especially so, in
light of the trend of automated vehicles that will result in fewer
explicit user responses[22]. User interface designs are rarely
conceived in a vacuum. They often reflect assumptions of the
nature of operational demands placed on the users, which are
increasingly verifiable not only by more efficient behavior but
also by brain responses [21].
Acknowledgments
This work was financially supported by the German Research
Foundation (DFG) within project C03 of SFB/Transregio 161.
References
1. P Bazilinskyy, SM Petermeijer, V Petrovych, D Dodou,
and JCF De Winter. 2018. Take-over requests in highly
automated driving: A crowdsourcing survey on auditory,
vibrotactile, and visual displays. Transportation Research
Part F: Traffic Psychology and Behaviour 56 (2018),
2. Yoav Benjamini and Daniel Yekutieli. 2001. The control
of the false discovery rate in multiple testing under
dependency. Annals of statistics (2001), 1165–1188.
3. Jac Billington, Richard M Wilkie, David T Field, and
John P Wann. 2010. Neural processing of imminent
collision in humans. Proceedings of the Royal Society of
London B: Biological Sciences (2010), rspb20101895.
4. S. Borojeni, L. Chuang, W. Heuten, and S. Boll. 2016.
Assisting Drivers with Ambient Take-Over Requests in
Highly Automated Driving. In Proceedings of the 8th
International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (Automotive’UI
16). ACM, New York, NY, USA, 237–244. DOI:
5. Shadan Sadeghian Borojeni, Torben Wallbaum, Wilko
Heuten, and Susanne Boll. 2017. Comparing
Shape-Changing and Vibro-Tactile Steering Wheels for
Take-Over Requests in Highly Automated Driving. In
Proceedings of the 9th International Conference on
Automotive User Interfaces and Interactive Vehicular
Applications. ACM, 221–225.
6. Lewis L Chuang, Christiane Glatz, and Stas Krupenia.
2017. Using EEG to understand why behavior to auditory
in-vehicle notifications differs across test environments.
In Proceedings of the 9th International Conference on
Automotive User Interfaces and Interactive Vehicular
Applications. ACM, 123–133.
7. Denis Cousineau. 2005. Confidence intervals in
within-subject designs: A simpler solution to Loftus and
Masson’s method. Tutorials in quantitative methods for
psychology 1, 1 (2005), 42–45.
8. Tilman Dingler, Jeffrey Lindsay, and Bruce N Walker.
2008. Learnabiltiy of sound cues for environmental
features: Auditory icons, earcons, spearcons, and speech.
In Proceedings of the 14th International Conference on
Auditory Display. International Community for Auditory
Display, 1–6.
9. Franc CL Donkers and Geert JM Van Boxtel. 2004. The
N2 in go/no-go tasks reflects conflict monitoring not
response inhibition. Brain and cognition 56, 2 (2004),
10. Stephan Getzmann. 2011. Auditory motion perception:
onset position and motion direction are encoded in
discrete processing stages. European Journal of
Neuroscience 33, 7 (2011), 1339–1350.
11. Christiane Glatz, Stas S Krupenia, Heinrich H Bülthoff,
and Lewis L Chuang. 2018. Use the Right Sound for the
Right Job: Verbal Commands and Auditory Icons for a
Task-Management System Favor Different Information
Processes in the Brain. In Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems.
ACM, 472.
12. Christian Gold, Daniel Damböck, Lutz Lorenz, and Klaus
Bengler. 2013. “Take over!” How long does it take to get
the driver back into the loop?. In Proceedings of the
Human Factors and Ergonomics Society Annual Meeting,
Vol. 57. SAGE Publications Sage CA: Los Angeles, CA,
13. Rob Gray. 2011. Looming auditory collision warnings for
driving. Human factors 53, 1 (2011), 63–74.
14. Marc Green. 2000. "How long does it take to stop?"
Methodological analysis of driver perception-brake times.
Transportation human factors 2, 3 (2000), 195–216.
15. SG Hart and L Staveland. 1988. Development of
NASA-TLX (Task Load Index): Results of empirical and
theoretical research. Advances in psychology 52 (1988),
16. Stefan Haufe, Matthias S Treder, Manfred F Gugler, Max
Sagebaum, Gabriel Curio, and Benjamin Blankertz. 2011.
EEG potentials predict upcoming emergency brakings
during simulated driving. Journal of neural engineering
8, 5 (2011), 056001.
17. Stephen Hirst and Robert Graham. 1997. The format and
presentation of collision warnings. Ergonomics and
safety of intelligent driver interfaces (1997), 203–219.
18. Cristy Ho and Charles Spence. 2005. Assessing the
effectiveness of various auditory cues in capturing a
driver’s visual attention. Journal of experimental
psychology: Applied 11, 3 (2005), 157.
19. Eiichi Jodo and Yukihiko Kayama. 1992. Relation of a
negative ERP component to response inhibition in a
Go/No-go task. Electroencephalography and clinical
neurophysiology 82, 6 (1992), 477–482.
20. Tarannum Ayesha Kazi, Neville A Stanton, Guy H
Walker, and Mark S Young. 2007. Designer driving:
drivers’ conceptual models and level of trust in adaptive
cruise control. International journal of vehicle design 45,
3 (2007), 339–360.
21. Thomas Kosch, Markus Funk, Albrecht Schmidt, and
Lewis L Chuang. 2018. Identifying Cognitive Assistance
with Mobile Electroencephalography: A Case Study with
In-Situ Projections for Manual Assembly. Proceedings of
the ACM on Human-Computer Interaction 2, EICS
(2018), 11.
22. Andrew L Kun, Susanne Boll, and Albrecht Schmidt.
2016. Shifting gears: User interfaces in the age of
autonomous driving. IEEE Pervasive Computing 1
(2016), 32–38.
23. David N Lee. 1976. A theory of visual control of braking
based on information about time-to-collision. Perception
5, 4 (1976), 437–459.
24. S. Levin. 2018. Video released of Uber self-driving crash
that killed woman in Arizona. The Guardian (18 March
2018). Retrieved from
25. Benjamin Libet, Curtis A Gleason, Elwood W Wright,
and Dennis K Pearl. 1983. Time of conscious intention to
act in relation to onset of cerebral activity
(readiness-potential) the unconscious initiation of a freely
voluntary act. Brain 106, 3 (1983), 623–642.
26. Riikka Lindström, Petri Paavilainen, Teija Kujala, and
Mari Tervaniemi. 2012. Processing of audiovisual
associations in the human brain: dependency on
expectations and rule complexity. Frontiers in psychology
3 (2012).
27. Bonnie M Muir and Neville Moray. 1996. Trust in
automation. Part II. Experimental studies of trust and
human intervention in a process control simulation.
Ergonomics 39, 3 (1996), 429–460.
28. Risto Näätänen and Terence Picton. 1987. The N1 wave
of the human electric and magnetic response to sound: a
review and an analysis of the component structure.
Psychophysiology 24, 4 (1987), 375–425.
29. Wassim G Najm, Basav Sen, John D Smith, BN
Campbell, and others. 2003. Analysis of light vehicle
crashes and pre-crash scenarios based on the 2000
general estimates system. Technical Report. United
States. National Highway Traffic Safety Administration.
30. M. A. Nees and B. N. Walker. 2011. Auditory Displays
for In-Vehicle Technologies. Reviews of Human Factors
and Ergonomics 7, 1 (2011), 58–99. DOI:
31. Eunmi L Oh and Robert A Lutfi. 1999. Informational
masking by everyday sounds. The Journal of the
Acoustical Society of America 106, 6 (1999), 3521–3528.
32. Raja Parasuraman, Thomas B Sheridan, and
Christopher D Wickens. 2000. A model for types and
levels of human interaction with automation. IEEE
Transactions on systems, man, and cybernetics-Part A:
Systems and Humans 30, 3 (2000), 286–297.
33. Annie Pauzié. 2008. A method to assess the driver mental
workload: The driving activity load index (DALI). IET
Intelligent Transport Systems 2, 4 (2008), 315–322.
34. Terence W Picton. 2010. Human auditory evoked
potentials. Plural Publishing.
35. Ioannis Politis, Stephen Brewster, and Frank Pollick.
2015. To beep or not to beep?: Comparing abstract versus
language-based multimodal driver displays. In
Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems. ACM,
36. Ayşe Pinar Saygin, Frederic Dick, and Elizabeth Bates.
2005. An on-line task for contrasting auditory processing
in the verbal and nonverbal domains and norms for
younger and older adults. Behavior Research Methods 37,
1 (2005), 99–110.
37. Menja Scheer, Heinrich H Bülthoff, and Lewis L Chuang.
2016. Steering demands diminish the early-P3, late-P3
and RON components of the event-related potential of
task-irrelevant environmental sounds. Frontiers in human
neuroscience 10 (2016), 73.
38. Menja Scheer, Heinrich H Bülthoff, and Lewis L Chuang.
2018. Auditory Task Irrelevance: A Basis for
Inattentional Deafness. Human factors 60, 3 (2018),
39. William Schiff, James A Caviness, and James J Gibson.
1962. Persistent fear responses in rhesus monkeys to the
optical stimulus of "looming". Science 136, 3520 (1962),
40. Albrecht Schmidt and Thomas Herrmann. 2017.
Intervention user interfaces: a new interaction paradigm
for automated systems. interactions 24, 5 (2017), 40–45.
41. Mark Schmuckler, Lisa M Collimore, James L
Dannemiller, and others. 2007. Infants’ reactions to
object collision on hit and miss trajectories. Infancy 12, 1
(2007), 105–118.
42. Erich Seifritz, John G Neuhoff, Deniz Bilecen, Klaus
Scheffler, Henrietta Mustovic, Hartmut Schächinger,
Raffaele Elefante, and Francesco Di Salle. 2002. Neural
processing of auditory looming in the human brain.
Current Biology 12, 24 (2002), 2147–2151.
43. Elyse S Sussman and Mitchell Steinschneider. 2011.
Attention modifies sound level detection in young
children. Developmental cognitive neuroscience 1, 3
(2011), 351–360.
44. Audrey LH Van Der Meer, Monica Svantesson, and
FR Ruud Van Der Weel. 2012. Longitudinal study of
looming in infants with high-density EEG.
Developmental neuroscience 34, 6 (2012), 488–501.
Session 7: Special Approaches AutomotiveUI ’18, Toronto, Canada
... Finally, we designed two looming soundscapes. A research (Jeon et al., 2017;Jeon, Gable et al., 2015;Lahmer et al., 2018) introduced the plausibility of using looming sounds in addition to intermittent warnings in the vehicle environment. The first looming sound includes two dominant frequencies (around 560 Hz and 620 Hz) and the second one includes only one dominant frequency (around 560 Hz), which are similar to Tesla's sound frequency. ...
... Both indicator sounds lasted less than 2 seconds (increasing: 1.62 sec, decreasing: 1.92 sec), which can be categorized as cautionary or early cautionary sounds (National Highway Traffic Safety Administration, 2016). Literature also proposed using looming sounds until drivers take over the vehicle control (Jeon et al., 2017;Jeon, Gable, et al., 2015;Jeon, Hermann, et al., 2015;Lahmer et al., 2018). The literature shows that 5-8 (Wan et al., 2016) or 7 seconds (Sanghavi et al., 2021) is an optimal takeover time window. ...
In semi-automated vehicles, non-speech sounds have been prevalently used as auditory displays for control transitions since these sounds convey urgency well. However, there are no standards of specifications for warning sounds so that diverse non-speech sounds are being employed. To shed light on this, the effects of different non-speech auditory warnings on driver performance were investigated and quantified through the experimental study and human performance modeling approaches. Twenty-four young drivers drove in the driving simulator and experienced both handover and takeover transitions between manual and automated modes while performing a secondary task. The reaction times for handover and takeover, mental workload, and subjective responses were reported. Overall, a traditional warning sound with many repetitions and an indicator sound with decreasing polarity outperformed and were preferred. Additionally, a mathematical model, using the Queuing Network-Model Human Processor (QN-MHP) framework, was applied to quantify the effects of auditory warnings’ acoustic characteristics on drivers’ reaction times in response to takeover request displays. The acoustic characteristics, including the fundamental frequency, the number of repetitions, and the range of dominant frequencies were utilized in modeling. The model was able to explain 99.7% of the experimental data with a root mean square error (RMSE) of 0.148. The present study can contribute to establishing standards and design guidelines for takeover request displays in semi-automated vehicles.
... If a TOR occurs, the drivers have to act fast. To drag the drivers attention quickly to the TOR, in-steering wheel [12] or in-seat [134,135] vibrotactile feedback, warning sounds [95,134,185], language [141,196], ambient lights [11], jumping LED blocks [183], and even scent [173] or proprioceptive [41] cues are appropriate. On the visual channel, changing hue [92] works best. ...
... On the visual channel, changing hue [92] works best. For the auditory channel, an increasing alarm sound [95,185] leads to better TOR performance. In contrast, [210] found auditory TOR signals to be slower than visual, haptic, or multimodal signals. ...
Full-text available
Automated vehicles (AVs) are on the edge of being available on the mass market. Research often focuses on technical aspects of automation, such as computer vision, sensing, or artificial intelligence. Nevertheless, researchers also identified several challenges from a human perspective that need to be considered for a successful introduction of these technologies. In this paper, we first analyze human needs and system acceptance in the context of AVs. Then, based on a literature review, we provide a summary of current research on in-car driver-vehicle interaction and related human factor issues. This work helps researchers, designers, and practitioners to get an overview of the current state of the art.
... Newell (1990) distinguishes four bands, which again are all relevant for specific aspects of driving: Biological (actions over ms, e.g. brain processes underlying ms level differences in braking response times, Gray, 2011;Lahmer et al., 2018), cognitive (actions over seconds, such as how eye-movements affect steering movements, Salvucci and Taatgen, 2011;Kujala and Salvucci, 2015;Lee and Lee, 2019), rational (actions over multiple seconds to minutes, e.g., how to best interleave attention, Janssen et al., 2012;Janssen et al., 2019c), and social (actions over multiple minutes to years, such as development of trust, Forster et al., 2018). Again, the time scale of a model determines what types of (research) questions can be addressed and also what type of data is needed to validate such theories, as these need to be in sync with the model: milliseconds (e.g., EEG, fMRI), seconds (e.g., eye-tracking, steering actions), minutes (e.g., behavioral choices), or hours (e.g., duration of travel, fuel efficiency) (see also chapter 1 in Salvucci and Taatgen, 2011). ...
Full-text available
This paper provides a framework for examining human-vehicle interactions with respect to three dimensions that can involve models or simulations: the agents, the environments, and the scenarios. Agents are considered on a spectrum from human to artificial actors. Environments are considered on a spectrum from simulated to real. Scenarios are considered on a spectrum from constrained to unconstrained. It is argued that these three dimensions capture key differences in research approaches within the field of human-vehicle interaction, and that explicitly situating research and discussions within this framework will allow researchers to better compare and contrast research outcomes and contributions. The framework is used to locate different disciplines in the community with respect to one another, and to identify areas which are as-yet unexplored.
... Focusing on automated vehicles, a large body of research has investigated the effectiveness of providing last-minute alerts to warn drivers about situations where human assistance is needed. However, in such automated circumstances, people's susceptibility to alerts is reduced ( Van der Heiden et al., 2018;Lahmer et al., 2018;Scheer et al., 2018). Moreover, even if an alert is processed, mode confusion might limit the human driver's understanding of their role and limit their ability to take the right action . ...
Full-text available
We review the history of human-automation interaction research, assess its current status and identify future directions. We start by reviewing articles that were published on this topic in the International Journal of Human-Computer Studies during the last 50 years. We find that over the years, automated systems have been used more frequently (1) in time-sensitive or safety-critical settings, (2) in embodied and situated systems, and (3) by non-professional users. Looking to the future, there is a need for human-automation interaction research to focus on (1) issues of function and task allocation between humans and machines, (2) issues of trust, incorrect use, and confusion, (3) the balance between focus, divided attention and attention management, (4) the need for interdisciplinary approaches to cover breadth and depth, (5) regulation and explainability, (6) ethical and social dilemmas, (7) allowing a human and humane experience, and (8) radically different human-automation interaction.
Adaptive Cruise Control (ACC) is one of Advanced Driver Assistance Systems (ADAS) which takes over vehicle longitudinal control under necessary driving scenarios. Vehicle in ACC mode automatically adjusts speed to follow the preceding vehicle based on evaluation of the surrounding traffic. ACC reduces drivers’ workload as well as improves driving safety, energy economy, and traffic flow. This article provides a comprehensive review of the researches on ACC. Firstly, an overview of ACC controller and applied control theories are introduced. Their principles and performances are discussed. Secondly, several application cases of ACC control algorithms are presented. Then validation work including simulation, Hardware-in-the-Loop (HiL) test and on-road experiment is descripted to provide ideas for testing ACC systems for different aims and fidelities. In addition, studies on human-machine interaction are also summarized in this review to provide insights on development of ACC from the perspective of users. At last, challenges and potential directions in this field is discussed, including consideration of vehicle dynamics properties, contradiction between algorithm performance and computation as well as integration of ACC to other intelligent functions on vehicles.
Artificial intelligence coupled with digitally connected technologies are becoming more self-evident. These developments indicate an increasing symbiosis between human and machine, referring to a new phase of interaction—symbiotic intelligence. In this vein, the human-centred development of technologies is becoming more and more important. The detection of user’s mental states, such as cognitive processes, emotional or affective reactions, offers great potential for the development of intelligent and interactive machines. Neurophysiological signals provide the basis to estimate many facets of subtle mental user states, like attention, affect, cognitive workload and many more. This has led to extensive progress in brain-based interactions—Brain-Computer Interfaces (BCIs). While most BCI research aims at designing assistive, supportive or restorative systems for severely disabled persons, the current discussion focuses on neuroadaptive control paradigms using BCIs as a strategy to make technologies more human-centred and also usable for non-medical applications. The primary goal of our neuroadaptive technology research agenda is to consistently align the increasing intelligence and autonomy of machines with the needs and abilities of the human—a human-centred neuroadaptive technology research roadmap. Due to its far-reaching social implications, our research and developments do not only face technological but also social challenges. If neuroadaptive technologies are applied in non-medical areas, they must be consistently oriented to the needs and ethical values of the users and society.
Context: Cyber-Physical Systems (CPSs) are gradually and widely introducing autonomous capabilities into everything. However, human participation is required to accomplish tasks that are better performed with humans (often called human-in-the-loop). In this way, human-in-the-loop solutions have the potential to handle complex tasks in unstructured environments, by combining the cognitive skills of humans with autonomous systems behaviors. Objective: The objective of this paper is to provide appropriate techniques and methods to help designers analyze and design human-in-the-loop solutions. These solutions require interactions that engage the human, provide natural and understandable collaboration, and avoid disturbing the human in order to improve human experience. Method: We have analyzed several works that identified different requirements and critical factors that are relevant to the design of human-in-the-loop solutions. Based on these works, we have defined a set of design principles that are used to build our proposal. Fast-prototyping techniques have been applied to simulate the designed human-in-the-loop solutions and validate them. Results: We have identified the technological challenges of designing human-in-the-loop CPSs and have provided a method that helps designers to identify and specify how the human and the system should work together, focusing on the control strategies and interactions required. Conclusions: The use of our approach facilitates the design of human-in-the-loop solutions. Our method is practical at earlier stages of the software life cycle since it allows domain experts to focus on the problem and not on the solution.
Manual assembly in production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, assembly instructions change frequently and have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits for reducing mental workload have previously been justified with self-perceived ratings. However, there is no evidence from objective measures that in-situ assistance reduces mental workload. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload imposed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in the EEG bandwidth differ between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims about cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights into the mental demand imposed by assistive systems.
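The band-sensitive workload measure described above boils down to comparing spectral power within frequency bands. A minimal numpy sketch of that step is shown below on a synthetic trace; the sampling rate, band limits, and signal are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within the [f_lo, f_hi] Hz band via an rFFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic one-second "EEG" trace dominated by a 10 Hz (alpha-band) oscillation.
fs = 250  # assumed sampling rate in Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

alpha = band_power(eeg, fs, 8, 12)   # example workload-sensitive band
beta = band_power(eeg, fs, 13, 30)
print(alpha > beta)  # the alpha band dominates this synthetic trace
```

In practice, the band that tracks working memory load is identified per individual, as the abstract notes, rather than fixed at canonical alpha/beta limits.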
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were designed to cue long-distance truck drivers for task-management and driving conditions, particularly in automated driving scenarios. Two experiments separately evaluated naive students and professional truck drivers for their behavioral and brain responses to auditory notifications, which were either auditory icons or verbal commands. Our EEG/ERP results suggest that verbal commands were more readily recognized by the brain as relevant targets, whereas auditory icons were more likely to update contextual working memory. The two classes of notifications did not differ on behavioral measures. This suggests that auditory icons ought to be employed for communicating contextual information, and verbal commands for urgent requests.
Objective: This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background: Humans can fail to respond to auditory alarms under high-workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one's capacity for information processing. Beyond this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor, given the rarity of auditory warnings. Method: Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) generated by the environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index of our participants' awareness of the task-irrelevant auditory scene. Results: Manipulation of auditory task relevance influenced the brain's response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was selectively enhanced by auditory task relevance, independent of visuomotor workload. Conclusion: Task irrelevance in the auditory modality selectively reduces the brain's responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application: Presenting relevant auditory information more often could mitigate the risk of inattentional deafness.
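The ERP measures that this and the surrounding abstracts rely on are, at their core, averages over stimulus-locked epochs: trial-by-trial noise cancels out, leaving the event-related deflection. Below is a minimal numpy sketch of that averaging step on synthetic data; the sampling rate, epoch count, and P3-like deflection are invented for illustration and do not reproduce any study's recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                       # assumed sampling rate (Hz)
n_epochs, epoch_len = 40, fs   # forty one-second epochs, stimulus at t = 0
t = np.arange(epoch_len) / fs

# Each epoch: noise plus a positive deflection peaking around 300 ms
# post-stimulus, a crude stand-in for a P3-like component.
p3 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = p3 + 5.0 * rng.standard_normal((n_epochs, epoch_len))

erp = epochs.mean(axis=0)      # averaging attenuates noise, leaving the ERP
peak_ms = 1000 * t[np.argmax(erp)]
print(round(peak_ms))          # peak latency lands near 300 ms
```

Comparing such averaged waveforms between conditions (here, auditory task-relevant versus task-irrelevant groups) is what yields the component-amplitude differences reported above.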
Automation shortcomings in highly automated driving (level 3) might require the driver to take over vehicle control. When the vehicle issues a take-over request (TOR), the driver has to comprehend the situation immediately, which places high demands on visual attention. The haptic channel therefore has the potential to convey information about the driving context at take-over and to assist decision making. In this work, we introduce the concept and prototype design of a shape-changing steering wheel that conveys contextual information at take-over. We evaluated this concept against an identical wheel with vibration cues. Results showed that haptic cues on the steering wheel at a TOR reassure drivers of their decisions rather than assisting them in making those decisions, and that overall workload ratings decrease with vibration cues. To assist decision making, contextual haptic cues in level 3 should be located on the body or on the driver's seat.
In this study, we employ EEG methods to clarify why auditory notifications, which were designed for task management in highly automated trucks, resulted in different performance behavior when deployed in two different test settings: (a) student volunteers in a lab environment, and (b) professional truck drivers in a realistic vehicle simulator. Behavioral data showed that professional drivers were slower and less sensitive in identifying notifications than their student counterparts. Such differences can be difficult to interpret and frustrate the deployment of implementations from the laboratory to more realistic settings. Our EEG recordings of brain activity reveal that these differences were not due to differences in the detection and recognition of the notifications; instead, they were due to differences in EEG activity associated with response generation. Thus, we show how measuring brain activity can deliver insights into how notifications are processed, at a finer granularity than can be afforded by behavior alone.
Take-over situations in highly automated driving occur when drivers have to take over vehicle control due to automation shortcomings. Because of the high visual processing demand of the driving task and the time limitation of a take-over maneuver, appropriate user interface designs for take-over requests (TORs) are needed. In this paper, we propose applying ambient TORs, which address the peripheral vision of a driver. In a driving simulator experiment, we tested (a) ambient displays as TORs, (b) whether contextual information could be conveyed through ambient TORs, and (c) whether the presentation pattern (static, moving) of the contextual TORs has an effect on take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing workload. The presentation pattern, however, did not have an effect on take-over performance.
The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs) generated by these probes were analyzed for their sensitivity to the steering task's demands. The steering task required participants to counteract unpredictable roll disturbances, and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of the early P3, late P3, and the re-orientation negativity (RON) to task-irrelevant environmental sounds but not to beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs as reflecting the post-sensory detection of the task-irrelevant stimulus, engagement with it, and re-orientation back to the steering task. This interpretation is consistent with our manipulations of steering difficulty: more participants showed diminished amplitudes for these ERPs in the 'hard' steering condition than in the 'easy' condition. In sum, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. This provides a non-intrusive method for evaluating mental workload in novel steering environments.
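A mass univariate analysis like the one mentioned above computes a statistic at every time point (and, in practice, every electrode) rather than at a single predefined window. A minimal numpy sketch, using an independent-samples t statistic per sample on synthetic 'easy' versus 'hard' data, might look as follows; the amplitudes and trial counts are illustrative assumptions, and a real analysis would also correct for multiple comparisons.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250                        # assumed sampling rate (Hz)
n_trials, n_samples = 30, fs
t_axis = np.arange(n_samples) / fs

# Two synthetic conditions: 'hard' steering attenuates a P3-like deflection
# around 300 ms relative to 'easy' (all values are invented for illustration).
component = np.exp(-((t_axis - 0.3) ** 2) / (2 * 0.05 ** 2))
easy = 4.0 * component + rng.standard_normal((n_trials, n_samples))
hard = 1.0 * component + rng.standard_normal((n_trials, n_samples))

# Mass univariate step: an independent-samples t statistic per time point.
diff = easy.mean(axis=0) - hard.mean(axis=0)
se = np.sqrt(easy.var(axis=0, ddof=1) / n_trials
             + hard.var(axis=0, ddof=1) / n_trials)
t_stat = diff / se

peak_ms = 1000 * t_axis[np.argmax(np.abs(t_stat))]
print(round(peak_ms))           # the largest effect falls near the 300 ms deflection
```

The resulting t-statistic time course localizes where the conditions diverge; thresholding it (with an appropriate multiple-comparisons control) yields the component windows reported in such analyses.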
The field of automotive user interfaces has developed rapidly over the last several years. To date, the field has primarily focused on creating user interfaces that promote safe driving, including when the driver is engaged in a secondary task in addition to operating the vehicle. However, researchers now need to prepare for a major change in the automotive domain: the automated driving revolution. The authors argue for a new research agenda that focuses on four challenges for automotive user interfaces: assuring safety in the age of automation, transforming vehicles into places for productivity and play, and taking advantage of new mobility options made possible by automated vehicles, while preserving user privacy and data security throughout. This article is part of a special issue on smart vehicle spaces.