Use the Right Sound for the Right Job: Verbal Commands
and Auditory Icons for a Task-Management System Favor
Different Information Processes in the Brain
Christiane Glatz
Max Planck Institute for
Biological Cybernetics
Tübingen, Germany
IMPRS for Cognitive &
Systems Neuroscience
Tübingen, Germany
christiane.glatz@tuebingen.mpg.de
Stas S. Krupenia
Scania CV AB
Soedertaelje, Sweden
stas.krupenia@scania.com
Heinrich H. Bülthoff
Max Planck Institute for
Biological Cybernetics
Tübingen, Germany
heinrich.buelthoff@tuebingen.mpg.de
Lewis L. Chuang
Max Planck Institute for
Biological Cybernetics
Tübingen, Germany
lewis.chuang@tuebingen.mpg.de
ABSTRACT
Design recommendations for notifications are typically based
on user performance and subjective feedback. In comparison,
there has been surprisingly little research on how designed
notifications might be processed by the brain for the informa-
tion they convey. The current study uses EEG/ERP methods
to evaluate auditory notifications that were designed to cue
long-distance truck drivers for task-management and driving
conditions, particularly for automated driving scenarios. Two
experiments separately evaluated naïve students and profes-
sional truck drivers for their behavioral and brain responses
to auditory notifications, which were either auditory icons or
verbal commands. Our EEG/ERP results suggest that verbal
commands were more readily recognized by the brain as rele-
vant targets, but that auditory icons were more likely to update
contextual working memory. Both classes of notifications did
not differ on behavioral measures. This suggests that auditory
icons ought to be employed for communicating contextual
information and verbal commands, for urgent requests.
ACM Classification Keywords
H.5.2 Information Interfaces and Presentation:
User Interfaces – Input Devices and Strategies
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
CHI 2018 April 21–26, 2018, Montreal, QC, Canada
© 2018 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-5620-6/18/04.
DOI: https://doi.org/10.1145/3173574.3174046
Author Keywords
Notifications; Electroencephalography; Auditory Displays;
Autonomous Vehicles; In-Vehicle Interfaces
INTRODUCTION
Auditory notifications are used extensively by in-vehicle inter-
faces to inform the user of important events. Whilst parking,
for example, decreasing beep intervals could communicate the
nearing distance between a driver and an obstacle. This raises
the question: How should auditory notifications be designed?
Should they be verbal notifications that communicate instruc-
tions explicitly or should they be recognizable auditory icons
that denote a critical scenario?
Previous researchers have generally agreed on the essential
design guidelines for auditory notifications [46]. Auditory
notifications need to be: (1) easily detectable [24, 45, 16], (2)
readily discriminable against background noise [48, 39], (3)
able to capture attention [29, 30], and, after all these requirements
have been fulfilled, (4) easily interpretable [45, 18, 70]. How-
ever, there are many ways in which auditory notifications can
be designed to comply with these criteria. Preference between
different designs is often determined by user studies that eval-
uate performance or subjective feedback. Unfortunately, such
measures can contradict one another from one study to the next or
fail to discriminate between different designs.
Verbal commands and auditory icons represent two general
classes of auditory notifications that are commonly employed.
They are favored over synthetic sounds [38, 13] because they
are based on prior user experiences. Hence, they are eas-
ily learned in novel use settings for their intended meanings.
Nonetheless, no clear consensus has been established for pre-
ferring either verbal commands or auditory icons.
Besides performance and subjective measurements, brain re-
sponses to notifications can also serve as a way to evaluate
auditory notifications. Surprisingly, this approach is rarely em-
ployed (although, see [37]). In particular, electroencephalog-
raphy (EEG) reveals how the brain processes information –
namely, the extent to which notifications capture attention
and are interpreted – regardless of how this eventually influ-
ences observable performance or subjective feedback. In other
words, EEG provides a higher functional resolution of the
processes that take place between stimulus presentation and
the elicited response. Therefore, we can evaluate auditory
notifications based on how user brains respond to them, and
not merely on how users respond to them. The understanding
that we can gain from the additional information provided by
EEG can contribute towards the design of more appropriate
and effective notifications that are easy to use.
The increasing reliance on automation to perform tasks is
transforming the role of notifications. While notifications used
to be prized for their effectiveness as a ‘call to action’, they
are increasingly used to update and inform users on the cur-
rent situation, or to assist users in supervising automation.
This trend is especially prevalent in the context of automated
vehicles, whereby auditory notifications have been specially
designed to update drivers of automated trucks on prevailing
road conditions and to remind them to supervise logistical
tasks. Arguably, such notifications might not be readily eval-
uated by behavioral responses alone, but by how they are
processed by the brain for conveyed information.
In the current work, we answer the following four research
questions (RQ1-RQ4) using EEG analysis. RQ1: Are auditory
icons or verbal commands more easily detected? RQ2: Are
auditory icons or verbal commands more discriminable? RQ3:
Do auditory icons or verbal commands capture more attention?
RQ4: Do auditory icons or verbal commands result in more
context-updating of what has to be done next?
To summarize, this paper makes the following contributions:
1. EEG demonstrates that both types of notifications are equally detectable and orient attention to a similar extent, or even more so in an applied context.

2. By using EEG, we are able to show that verbal commands are discriminated more easily than auditory icons, but that auditory icons are more likely to update contextual working memory.

3. Evaluating brain responses reveals that auditory notifications should follow a purpose-oriented design. That is, verbal commands seem more suitable for urgent requests, while auditory icons should be used to communicate less pressing contextual information. Unlike behavioral measures, EEG gives insight into the different stages of processing a notification. Notably, these insights are obtained in a passive and unobtrusive way and cannot be gained through behavioral results alone.

4. The results of this study demonstrate that laboratory findings of auditory EEG studies can be extended to more realistic environments, such as driving simulators. Qualitative comparisons suggest that results scale with experience and that the effects of notifications on brain responses increase with increasing relevance.
RELATED WORK
Challenges for Designing In-vehicle Auditory Displays
Sound design is challenging not only technically but even more so
with regard to human perception. Human auditory perception is
influenced by a variety of factors, such as emotions, memories,
cognition, environment, previous experience, and the ability to
understand speech [49]. The design of auditory displays, i.e.,
sounds that communicate information to the user, relies on such
factors [43]. When information is communicated through non-speech
sounds rather than words, this is referred to as sonification [71].
Sound designers must therefore ensure that sounds are recognizable
and identifiable for their designed purpose. At the same time,
auditory notifications need to convey the appropriate level of
urgency without annoying the user [42]. Especially for in-vehicle notifications,
auditory displays whose meaning is ambiguous can have a
negative influence (i.e. higher perceived workload, slower
response times) [72]. Hence auditory notifications should be
evaluated, for example, based on their detectability, their po-
sition of presentation, their identifiability, and their conveyed
meaning [69].
To optimize the design of auditory notifications, a two-stage
pipeline has been proposed [39]. At an early first stage, design-
ers consult with the user audience about the designed sound
which feeds back into the design process. The second stage
is an evaluation of the auditory display in a greater auditory
context, simulating the use of the sound in its intended environ-
ment. While this is a step in the right direction for optimizing
the design of effective auditory notifications, it relies on sub-
jective user feedback, as well as performance measures, and
does not reveal the actual influence auditory notifications have
on the processing by the human brain.
Auditory Notifications
Auditory notifications can be categorized into two main groups,
speech and non-speech. We differentiate non-speech sounds
into auditory icons (representative sounds) and earcons (ab-
stract synthesized sounds) [24, 32]. Unlike earcons, verbal
commands and auditory icons are readily recognizable and
do not require learning, which typically translates to faster
responses [44].
Verbal Commands
We often rely on speech to communicate our intentions to
one another. Once proficiency is acquired in a given lan-
guage, verbal commands can be relied on to communicate
complex messages that can be unambiguously interpreted [38,
13]. Thus, it is natural for humans to prefer speech notifica-
tions [44].
Nonetheless, verbal notifications face the risk of being masked
by or confused with real conversations [50]. To circumvent
this problem, contrivances could be introduced to make verbal
notifications less human-like and more discriminable from
real speech. This could be achieved by manipulating the pitch
or other spectral properties of verbal commands. Pilots were
found to discriminate easily between natural speech and syn-
thesized speech [63]. Thus, presenting notifications in syn-
thesized speech could prevent verbal commands from being
confused with real conversations in our environment.
Another shortcoming of speech is that it is harder to spatially
localize than other sounds, presumably because of its smaller
bandwidth [68]. However, this shortcoming can be compensated for
when speech is used appropriately.
Verbal commands can present unambiguous spatial informa-
tion through their semantic context. For instance, presenting
the word ‘front’ from a front speaker results in fast response
times to potential head-on collisions [29]. However, if not
used appropriately, speech can attract attention to the extent
that it could interfere with other tasks such as driving [64].
Auditory Icons
Auditory icons are sounds that represent real world events.
These are sounds with stereotypical associations with the ob-
ject or event/action that created the sound. For example, the
sound of a car horn could indicate a safety critical situation
that requires immediate attention and action. Being familiar
sounds, auditory icons are easily learned for their intended
function.
An advantage of auditory icons is that they are not easily
masked by background speech [38, 66, 30]. For instance, au-
ditory icons are unlikely to be confused with a radio jockey’s
monologues. Nonetheless, auditory icons, like skidding tires
or the car horn, can still be confused with real environmental
occurrences. Also, auditory icons have been shown to be more
likely to generate false alarms than abstract notifications
[26]. This is most likely because humans might
have overlearned certain cues (e.g., car horn). Given that back-
ground experiences are likely to be different across different
users, auditory icons might be challenging to calibrate for their
conveyed urgency.
Auditory icons are susceptible to misinterpretation because a
single sound can represent more than one meaning [24, 45].
Depending on previous experience and the use-context, au-
ditory icons can be recognized as an object (that generates
the sound) or as the action that generated the sound [45, 23,
44]. For example, the sound of screeching tires can be inter-
preted either as a proximal collision vehicle or as a command
for braking. In complex operations, auditory icons might not
be the appropriate notification [25, 28]. However, [3] used
skidding tires and a car horn honk in highly safety critical situ-
ations, namely to signal impending collisions. The successful
use of auditory icons in this case might be due to the straight-
forward association of meaning. In establishing guidelines
for designing auditory icons, [45] suggests that auditory icons'
usability is highly affected by their identifiability. Nonetheless,
the recognition accuracy for auditory icons, but not response
times, can be significantly improved if users are aware of the
icons’ design [40]. This suggests that auditory icons should
use mappings that do not have multiple interpretations and can
be easily associated with the events they are representing.
Verbal Commands versus Auditory Icons
Auditory notifications can be evaluated on two different per-
formance measures. First, effective notifications are believed
to elicit faster reaction times. Second, effective notifications
are more accurately detected from background noise and dis-
criminated from other notifications. Previous research has
compared the effectiveness of verbal commands and auditory
icons across different scenarios and has generally found mixed
support for either sound type.
To begin, while some studies have found faster responses for
verbal notification [40, 29, 21], others found faster responses
for auditory icons [24, 60]. Other studies have found a re-
sponse time preference for neither auditory icons nor verbal
notifications [44, 70]. Overall, this would suggest that the
processing for both sounds is not different. Some reasons for
these mixed findings could be the context in which they were
presented as well as the type of task participants were asked
to do. Some studies, for example, took place in a simulated
driving context and asked participants to avoid collisions by
braking, while others required participants to match a pre-
sented sound to a description of the object/action generating
this sound on a desktop PC.
Similarly, measures for response accuracy also provide mixed
support for either verbal commands or auditory icons. [9]
found that people were more accurate at matching auditory
icons to visual context than words. This could be due to a
trade-off between response times and discrimination sensitiv-
ity since reaction times were faster for nouns than auditory
icons in this study. In contrast, [61, 51] found higher accuracy
when cuing with verbal commands than with auditory icons.
[38], on the other hand, found no difference in accuracy for
verbal commands compared to auditory icons. These different
findings could be due to the different experimental tasks par-
ticipants were asked to complete. On the other hand it could
also depend on the type of auditory notifications employed. If
time was not stressed, participants could respond when they
had fully evaluated the auditory stimulus, even if the auditory
icon was not readily interpretable at first.
Given that there will always be background noise, including
meaningful sounds, it is worthwhile to use prominent and
highly discriminable auditory notifications. Previous research
has shown that distractor sounds have a larger negative impact
on masking auditory icons than verbal commands [60]. That
is, distractors were more likely to interrupt the processing of
auditory icons than verbal commands. This suggests that non-
verbal processing is more likely to be affected by increases
in processing workload, often present in stressful and urgent
situations.
Event-Related EEG Potentials
Brain responses can be used to evaluate auditory notifications,
especially in terms of how they are processed for information.
One prominent EEG measure is the event-related potential
(ERP), which represents brain activity that follows the pre-
sentation of a given stimulus. An ERP is an average waveform
of negative and positive voltage deflections, which can be func-
tionally related to different stages of information processing of
the presented stimuli [41]. Auditory stimuli characteristically
elicit a series of evoked potentials, namely N1, P2, P3a, and
P3b components which are associated with how the presented
sound is processed [8, 54, 35]. Thus, amplitudes of these
potentials can provide insight into how the brain processes a
given sound.
N1
The N1 is the first negative deflection of the ERP waveform,
e.g. [41, 54, 74]. It reflects early involuntary sensory process-
ing and is highly sensitive to a sound’s physical properties
(e.g., pitch, intensity) [41, 65]. It is a reliable indicator to the
perceptual detection of a sound’s presentation [73, 34, 54].
P2
The P2 is the second positive deflection, prominent at frontal
and central electrode sites. It is often reported as part of the
N1-P2 complex, also called ‘vertex-potential’ [8, 54]. In the
current context, we interpret the P2 component as a measure
for discriminability. It is elicited by both attended and unattended
stimuli [8, 55]. This means that it is evoked involuntarily by
both target and non-target stimuli of an oddball paradigm.
The difference between the P2 of target and non-target stimuli
is that the P2 amplitude is larger for stimuli containing target
features [41]. This fits the theory of P2 reflecting object dis-
crimination [47, 22]. P2 amplitudes are larger after one learns
to discriminate a stimulus from other target stimuli [2]. In
this respect, a larger P2 amplitude reflects target identification
and only subsequently a P3 is elicited. Once this classification
has taken place, the resolved information can be transmitted
to higher cortical areas to be evaluated further [54]. If the
stimulus being processed does not contain target features, no
P3 is elicited and, hence, further cognitive evaluation of the
stimulus is stopped.
P3a
The P3a refers to the third positive peak that is observed at
frontal areas. It is evoked by unexpected stimuli regardless of their
task relevance and decreases in amplitude when a surprising stimulus is
presented repeatedly [59]. It is sometimes referred to as the
novelty P3 and is believed to reflect an automatic orienting
response to interesting information [37, 19, 57].
P3b
The P3b refers to the third positive peak that is observed at
centro-parietal regions. It is sensitive to the presentation of
task-relevant stimuli, especially those that occur infrequently
[15, 56]. In the current context, P3b is treated as a measure for
context-updating, a process that underlies how we update our
situational understanding when unexpected events occur [14,
20, 57]. When an interesting stimulus is recognized, which
is different from the standard background, our brains update
our mental representations of the environment. This updating
process is reflected by larger P3b amplitudes for target stimuli
than standard stimuli. Furthermore, P3b amplitudes are also
treated as an index for working memory load, whereby larger
P3bs are associated with less mental effort [6].
To date, little research has been conducted to investigate how
verbal commands might be processed differently by the brain,
compared to auditory icons. Behavioral studies have mixed
results given that they tend to differ depending on the experi-
mental task and context. The notifications that are used here
were designed for an auditory display of a highly automated
truck environment [36], for which verbal commands have been
claimed to deliver faster responses than auditory icons [21].
However, the underlying reason for this has been unclear. The
current study was designed to look specifically at how the
brain might process these auditory notifications differently,
depending on whether they were verbal commands or equiva-
lent auditory icons. For this purpose, we measured the EEG
activity of naïve participants (Experiment 1) and professional
truck drivers (Experiment 2). The current EEG dataset has
been previously analyzed for differences between the two par-
ticipant groups, showing that both groups respond to
these notifications similarly as a whole [7]. While professional
truck drivers responded slower in general, this was not due to
fundamental differences in brain responses to the auditory noti-
fications. In fact, the current analyses show similar EEG/ERP
waveforms between the two participant groups. In contrast to
previous work, this current work focuses specifically on how
verbal commands and auditory icons are processed differently
in the brain. Although both types of auditory notifications
produce similar brain responses, significant differences exist
in specific ERP components, which suggests that they should
be employed for different purposes.
STUDY METHODS
This study compared auditory icons to verbal commands in
a controlled experimental laboratory environment. It was a
within-subject design that used separable EEG/ERP compo-
nents to evaluate these notifications for how well they were
detected (N1), discriminated from other sounds (P2), cap-
tured attention (P3a), and updated contextual working memory
(P3b). The whole experiment lasted 2.5 hours, which included
training, preparation time, and debriefing. The experimental
procedure was approved by the Ethics Council at the Univer-
sity Hospital Tübingen.
Two experiments comprised this study. Experiment 1 was performed
on university students (N = 15; mean age = 26.1 ± 4.0 years; 9 males)
and a follow-up Experiment 2, on professional truck drivers
(mean age = 41.4 ± 12.1 years; 13 males). Experiment 1 was
conducted in a psychophysical laboratory setting and Experiment 2,
in a high perceptual fidelity fixed-base truck simulator.
Participants
All participants reported no known hearing deficits, normal
(or corrected-to-normal) vision, and no history of neurological
problems. They provided signed consent to written instruc-
tions, and were remunerated for their voluntary participation.
Stimuli and apparatus
Auditory notifications
The auditory notifications (duration: 500 ms) were adapted
from target sounds that were originally designed for the in-
vehicle interface of an autonomous truck cabin [21, 36]. There
were 12 notifications in total, 6 verbal commands and 6 audi-
tory icons that were complements of each other. They were
designed to remind truck drivers to perform certain tasks and
Figure 1. Experiment 1’s ERP responses (left) with scalp topography plots (right) of statistically significant differences across time and electrodes
respectively. Left: ERP waveforms are averaged across the frontal (pink) and parietal (green) electrodes and deflections are labeled for N1, P2, P3a,
and P3b. The shaded areas between the two waveforms indicate time-regions that are significantly different. Right: The scalp topographies show the
EEG activity to verbal commands and auditory icons at time-ranges A and B. Electrodes that are significantly different are represented by white dots.
of driving conditions at the appropriate times. These verbal
commands (auditory icons) were “system” (synthetic tone),
“convoy” (train whistle), “driver” (human whistle), “weather”
(raindrop), “road” (ground rumbling), and “traffic” (car horn).
Verbal commands were in Swedish, which was the mother
tongue of the professional drivers in Experiment 2 but not
the student volunteers of Experiment 1. However, the student
volunteers were extensively briefed on the auditory stimuli and
practiced discriminating them until they were 80% accurate,
prior to testing.
Ninety distractor sounds were created. Each distractor was a
simultaneous presentation of 2 verbal commands and 2 audi-
tory icons, played in reverse, with their loudness adjusted to be
comparable to the notification targets.
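For illustration, the construction of such a distractor can be sketched in MATLAB as follows; file names are placeholders and the exact mixing routine used for the original stimuli may differ.

% Sketch: build one distractor by superimposing two reversed verbal
% commands and two reversed auditory icons, loudness-matched to a target
% notification. File names are placeholders, not the actual stimulus set.
[target, fs] = audioread('target_convoy.wav');
files = {'verbal_system.wav', 'verbal_driver.wav', ...
         'icon_raindrop.wav', 'icon_carhorn.wav'};
mix = zeros(size(target, 1), 1);
for k = 1:numel(files)
    [s, ~] = audioread(files{k});
    s = flipud(mean(s, 2));              % collapse to mono and reverse
    s = s(1:min(end, numel(mix)));       % trim to the 500 ms target length
    mix(1:numel(s)) = mix(1:numel(s)) + s;
end
% match root-mean-square loudness to the target notification
mix = mix * (sqrt(mean(target(:).^2)) / sqrt(mean(mix.^2)));
audiowrite('distractor_001.wav', mix, fs);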
Experiment 1: Psychophysics laboratory
The experimental laboratory was a dark room which was insu-
lated against external sounds. The visualization was presented
on a desktop screen (ViewPixx Screen, 60.5 x 36.3 resolu-
tion; 120 Hz) at a fixed distance of 45 cm from the partici-
pant who was in a chin-rest. The experiment was controlled
with customized software (MATLAB 8.2.0.701, R2013b) and
Psychophysics Toolbox 3.0.12 [5, 52, 33]. An ASIO 2.0 com-
patible sound card was used to control sound presentation
(SoundBlaster ZxR; Creative Labs). The auditory stimuli were
presented via stereo speakers, one placed on the left and the other
on the right side of the desktop display.
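A minimal sketch of the sound presentation with Psychophysics Toolbox's PsychPortAudio driver, assuming a low-latency ASIO device; the stimulus file, device index, and latency class are illustrative rather than our exact settings.

% Sketch: low-latency auditory stimulus presentation via PsychPortAudio.
InitializePsychSound(1);                        % push for low latency
[snd, fs] = audioread('target_convoy.wav');     % placeholder stimulus file
snd = mean(snd, 2)';                            % mono row vector
pahandle = PsychPortAudio('Open', [], 1, 1, fs, 2);   % playback, 2 channels
PsychPortAudio('FillBuffer', pahandle, [snd; snd]);   % identical L/R channels
onset = PsychPortAudio('Start', pahandle, 1, 0, 1);   % returns actual onset time
% a parallel-port trigger would be sent here to mark the onset in the EEG
WaitSecs(0.5);                                  % stimulus duration: 500 ms
PsychPortAudio('Close', pahandle);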
Experiment 2: High fidelity truck simulator
Professional truck drivers sat in a driving simulator that con-
sisted of a realistic truck cabin. This contained a steering
wheel, dashboard with instruments, and a pneumatic seat. The
visualization consisted of an automated drive on the high-
way from Linköping to Norrköping with minimal traffic. The
highway had two lanes, one for each antagonistic traffic di-
rection. A three wall display (approx. 150 deg field-of-view)
presented the frontal visualization (450 cm distance to head).
Two vertically-aligned displays were attached to the outside
of the cabin, to simulate side mirrors for displaying the rear
traffic scene. Using OpenDRIVE road network files (.xodr) and
an additional file describing the landscape (xml), a customized
graphical engine (i.e., VISIR) rendered the presented visual-
ization. Buttons located on the left and right on the steering
wheel collected the participants’ behavioral responses. An
ASIO 2.0 compatible sound card (RME HDSP 9632; RME
Intelligent Audio Solutions) controlled the presentation of au-
ditory notifications. A 5.1 surround sound system, installed in
the truck cabin, was used to display the sounds.
Task and Procedure
During testing, all participants observed an automated driving
scene that was visually presented. They had to respond to au-
ditory notifications whenever one was presented, with a button
press using either their left or right index fingers. Six notifi-
cations (i.e., 3 complementary pairs of verbal commands and
auditory icons) were pre-assigned to a left index-finger press
and the remaining six, to a right index-finger press. In other
words, each button corresponded to three events, which were
represented by a verbal command as well as an auditory icon.
Button press assignment was randomized across participants.
Prior to testing, all participants practiced until they were able
to achieve accuracy levels of at least 80% in this task.
Approximately 980 sounds were presented over the course of
testing. All sound presentations were separated by a
time-interval randomly selected from a uniform range of 2300-
2700 ms. Twenty percent of these were target notifications
and eighty percent were distractors. No button presses were
necessary when distractors were presented. Target notifica-
tions were evenly divided for verbal commands and auditory
icons. Participants had 2000 ms to respond after each auditory
notification was presented.
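The trial structure described above can be sketched as follows; the proportions and interval range are taken from the text, while the variable names and labels are illustrative.

% Sketch: a randomized trial list of ~980 sounds, 20% targets (evenly split
% between verbal commands and auditory icons) and 80% distractors, with an
% inter-stimulus interval drawn uniformly from 2300-2700 ms.
nTrials  = 980;
nTargets = round(0.2 * nTrials);
labels   = [repmat({'verbal'},     nTargets/2, 1); ...
            repmat({'icon'},       nTargets/2, 1); ...
            repmat({'distractor'}, nTrials - nTargets, 1)];
labels   = labels(randperm(nTrials));           % randomize presentation order
isi_ms   = 2300 + 400 * rand(nTrials, 1);       % jittered interval per trial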
EEG recording
The EEG was recorded using 59 active electrodes mounted to
the scalp using an elastic cap according to the international 10-
20 system (ActiCap System, Brain Products GmbH, Munich,
Germany). Four additional electrodes were used to record
the vertical and horizontal electrooculogram from the right
and left canthi as well as above and below the left eye. All
signals were recorded with the online reference FCz and AFz
as the ground. EEG signals were digitized with a sampling
rate of 1000 Hz. Electrode gel was applied to each electrode to
ensure an impedance below 20 kΩ. A parallel port connection
between recording PC and experimental PC synchronized the
EEG recording with the experimental events, such as the sound
onset and button press.
EEG data processing and analysis
To analyze the EEG data, MATLAB (8.2.0.701, R2013b) and
EEGLAB v.14.0.0 (https://sccn.ucsd.edu/eeglab/), an open
source software to analyze electrophysiological data, was used
[10]. Before analyzing the ERP to the auditory stimuli, the
data was preprocessed for every subject according to the fol-
lowing steps. To reduce the computational costs, the data
recorded at 1000 Hz was downsampled to 250 Hz. To remove
any slow drifts, a high-pass filter (cut-off = 0.5 Hz) was ap-
plied subsequently to the data. Using CleanLine, a plugin in
EEGLAB, 50 Hz electrical noise picked up from the environ-
ment when recording electrical brain activity was removed.
Then, bad channels (e.g., channels with flat lines) were removed
using artifact subspace reconstruction. Following these clean-
ing steps, the data were re-referenced offline to the common
average reference and then submitted to the Adaptive Mix-
ture ICA (AMICA, [11]). This algorithm decomposes the
electrical activity recorded at sensor level (electrodes) into
source-resolved activity, also called independent components
(ICs). These ICs were subjected to equivalent dipole estimation.
A MNI Boundary Element Method head model was used to fit
an equivalent dipole to each IC [53]. ICs with dipole locations
outside the brain, as well as ICs with a residual variance larger
than 15%, were excluded. Next, the ICs of all participants were
grouped into 30 clusters using k-means based on their power
spectrum. These clusters were then inspected for non-cortical
electrical activity such as eye-related activity, muscle-related
activity, line noise, and unresolved components. Clusters, con-
taining such non-cortical activity, were determined based on
their power spectrum, their scalp topography, and their dipole
location in a volumetric brain model. These non-cortical ac-
tivity clusters, present across the group of participants, were
removed from the EEG data (for examples, see [7]). Finally,
this EEG data for cortical activity was backprojected to the
sensor level, and analyzed for potential differences between
verbal commands and auditory icons.
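For reference, the per-subject preprocessing pipeline described above could be sketched in EEGLAB roughly as follows, assuming the CleanLine, clean_rawdata/ASR, and AMICA plugins are installed; file names and parameter values are indicative, not our exact settings.

% Sketch: per-subject EEG preprocessing prior to ERP analysis.
EEG = pop_loadset('subject01.set');               % placeholder file name
EEG = pop_resample(EEG, 250);                     % downsample 1000 Hz -> 250 Hz
EEG = pop_eegfiltnew(EEG, 0.5, []);               % 0.5 Hz high-pass, removes drifts
EEG = pop_cleanline(EEG, 'LineFrequencies', 50);  % remove 50 Hz line noise
EEG = clean_artifacts(EEG, 'ChannelCriterion', 0.8, ...   % drop bad/flat channels
                      'BurstCriterion', 'off', 'WindowCriterion', 'off');
EEG = pop_reref(EEG, []);                         % common average reference
runamica15(EEG.data, 'outdir', fullfile(pwd, 'amicaout'));  % AMICA decomposition
% Dipole fitting (dipfit), k-means clustering of ICs, removal of non-cortical
% clusters, and back-projection to the sensor level would follow.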
ERPs were computed for each participant, and for every elec-
trode, by extracting an epoch of EEG activity around the
notification presentation. The presentation onset of the noti-
fications was the trigger event for an epoch that consisted of
500 ms of baseline activity pre-trigger and 1000 ms of brain
response post-trigger. All epochs that belonged to either ver-
bal commands or auditory icons were mean-averaged for each
electrode. We further grouped the frontal and parietal elec-
trodes into two separate groups for visualization (see Figures
1 and 2, right). These group averaged waveforms depict dis-
tinct ERP components (i.e., N1, P2, P3a, and P3b) that serve
as established neural correlates for perceptual and cognitive
mechanisms. With regards to auditory information processing,
they relate to detection (N1), discrimination (P2), attentional
capture (P3a), and context-updating (P3b).
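The epoching and averaging step can be sketched as follows; the event labels 'verbal' and 'icon' and the plotted channel subset are placeholders.

% Sketch: extract -500..1000 ms epochs around notification onsets and
% average them into per-condition ERPs.
EEG_v = pop_rmbase(pop_epoch(EEG, {'verbal'}, [-0.5 1.0]), [-500 0]);
EEG_i = pop_rmbase(pop_epoch(EEG, {'icon'},   [-0.5 1.0]), [-500 0]);
erp_v = mean(EEG_v.data, 3);                      % channels x time, verbal commands
erp_i = mean(EEG_i.data, 3);                      % channels x time, auditory icons
frontal = {'F5','F3','F1','Fz','F2','F4','F6', ...
           'FC5','FC3','FC1','FC2','FC4','FC6'};
idx = ismember({EEG_v.chanlocs.labels}, frontal);
plot(EEG_v.times, mean(erp_v(idx, :), 1));        % frontal-average waveform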
We performed mass-univariate analysis (MUA) to statistically
evaluate EEG differences between verbal commands and au-
ditory icons [27]. Simply put, this method compares the two
conditions at every electrode and time-point and performs a
t-test for significant differences. A false discovery rate proce-
dure (i.e. FDR-BH; [4]) was applied to control for multiple
comparisons.
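A minimal sketch of this mass-univariate procedure, assuming per-subject ERPs are stacked into subject x channel x time arrays; the variable names are illustrative.

% Sketch: paired t-test at every electrode and time point, with
% Benjamini-Hochberg FDR correction over the resulting p-value map.
[~, p] = ttest(erps_v, erps_i);          % test along the subject dimension
p = squeeze(p);                          % channels x time p-values
q = 0.05;                                % desired false discovery rate
[ps, order] = sort(p(:));                % BH step-up procedure
crit = (1:numel(ps))' / numel(ps) * q;
k = find(ps <= crit, 1, 'last');         % largest i with p(i) <= (i/m)*q
sig = false(size(p));
if ~isempty(k), sig(order(1:k)) = true; end
% 'sig' marks electrode/time points whose condition difference survives FDR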
RESULTS
Behavioral performance
Behavioral performance was evaluated in terms of discrimination
sensitivity and correct response times. All scores were
submitted to a within-subjects t-test, and the Bayes Factor
(BF01) was calculated for the likelihood of the null-hypothesis
relative to the alternative-hypothesis. The behavioral data of
one participant from Experiment 2 had to be excluded due to
missing button presses.
Discrimination sensitivity (d′) was computed for each participant
as the difference between the z-scores of correct recognition and
false recognition. In Experiment 1, d′ scores were not significantly
different for verbal commands and auditory icons
(t(14) = 0.17, p = 0.87, Cohen's d = 0.04), respectively
3.82 ± 1.15 and 3.78 ± 1.10. The null-hypothesis was favored by a
Bayes Factor of 3.7. In Experiment 2, d′ scores were not
significantly different for verbal commands and auditory icons
(t(13) = 1.53, p = 0.15, Cohen's d = 0.41), respectively
2.45 ± 1.44 and 2.17 ± 1.12. The null-hypothesis was
marginally favored by a Bayes Factor of 1.4.
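For completeness, a sketch of this discrimination-sensitivity analysis; the per-participant count vectors are placeholders, and the log-linear correction is an assumption to avoid infinite z-scores.

% Sketch: d' = z(hit rate) - z(false-alarm rate) per participant, followed
% by a paired within-subjects t-test between notification types.
zdiff = @(h, m, fa, cr) norminv((h + 0.5) ./ (h + m + 1)) ...
                      - norminv((fa + 0.5) ./ (fa + cr + 1));
dprime_v = zdiff(hits_v, misses_v, fas_v, crs_v);   % verbal commands
dprime_i = zdiff(hits_i, misses_i, fas_i, crs_i);   % auditory icons
[~, p, ~, stats] = ttest(dprime_v, dprime_i);       % paired t-test
diffs    = dprime_v - dprime_i;
cohens_d = mean(diffs) / std(diffs);                % within-subjects effect size
% The Bayes factors (BF01) reported above would be obtained separately,
% e.g., with a default-prior Bayesian t-test in external software.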
Response times were calculated for all correct responses that
occurred within 2500 ms of notification onset, for each participant.
In Experiment 1, participants were not significantly faster in
responding to verbal commands compared to auditory icons
(t(14) = 0.68, p = 0.51, Cohen's d = 0.18), respectively
1070 ± 131 and 1094 ± 182 ms. The null-hypothesis was favored
by a Bayes Factor of 3.1. In Experiment 2, participants were not
significantly faster in responding to verbal commands compared to
auditory icons (t(13) = 0.46, p = 0.65, Cohen's d = 0.12),
respectively 1236 ± 101 and 1251 ± 156 ms. The null-hypothesis
was favored by a Bayes Factor of 3.3.
Figure 2. Experiment 2’s ERP responses (left) with scalp topography plots (right) of statistically significant differences across time and electrodes
respectively. Left: ERP waveforms are averaged across the frontal (pink) and parietal (green) electrodes and deflections are labeled for N1, P2, P3a,
and P3b. The shaded areas between the two waveforms indicate time-regions that are significantly different. Right: The scalp topographies show the
EEG activity to verbal commands and auditory icons at time-ranges A and B. Electrodes that are significantly different are represented by white dots.
To summarize, the current behavioral results do not argue in
favor of either verbal commands or auditory icons.
EEG/ERP responses
The EEG/ERP activity elicited by verbal commands and au-
ditory icons were similar in general morphology, latency, and
scalp distribution in the anterior-posterior dimension for both
Experiments 1 and 2 (Figs. 1 and 2). Statistically significant
differences were revealed in the EEG/ERP activity generated
by auditory icons and verbal commands in the frontal as well
as parietal electrodes (electrode labels are set in italics to
distinguish them from ERP component labels). The frontal group of
electrodes is: F5, F3, F1, Fz, F2, F4, F6, FC5, FC3, FC1, FC2,
FC4, FC6. The parietal group of electrodes is: P5, P3, P1, Pz,
P2, P4, P6, CP5, CP3, CP1, CPz, CP2, CP4, CP6.
In Experiment 1, student participants showed significant dif-
ferences in their EEG/ERP responses to these notifications,
even though the verbal commands were not presented in their
native language nor did they have contextual meaning. Specifi-
cally, the amplitude of the P2 component (236–304 ms; frontal
electrodes) was significantly larger for verbal commands than
for auditory icons. This suggests that verbal commands were
more discriminable from the other presented sounds than auditory icons were.
The P3b amplitude (512–640 ms; parietal electrodes) was sig-
nificantly larger for auditory icons than for verbal commands.
This suggests that auditory icons induced more context updat-
ing than verbal commands did.
In Experiment 2, the professional truck drivers generated results
similar to those of the naïve participants of Experiment 1. Similarly,
verbal commands generated larger P2 component deflections
(212–352 ms; frontal electrodes) than auditory icons, and au-
ditory icons generated larger P3b deflections (412–624 ms)
than verbal commands. However, EEG/ERP activity in the
frontal electrodes revealed that auditory icons generated larger
N1 deflections (160–212 ms) than verbal commands, and that
verbal commands generated larger P3a deflections (352–468
ms) than auditory icons. Differences in N1 deflections suggest
that auditory icons were more likely to be detected against the
general auditory background. Differences in P3a deflections
suggest that verbal commands were more likely than auditory
icons to capture observer attention.
DISCUSSION
The design of auditory displays faces the challenge of incorporating
human perception to make notifications effective
for their designed purpose (see Sect. 1 and 2). Behavioral per-
formance measures are limited in discriminating between noti-
fications for the various purposes that they might be designed
for. Some notifications might be designed for the purposes
of being highly detectable while others might be designed to
communicate a given context or scenario. Performance mea-
sures, e.g., response times or accuracy, do not discriminate for
how the brain processes notifications for information.
To summarize, we evaluated verbal commands and auditory
icons that were especially designed for in-vehicle information
displays related to aspects of task management and context-
updating in highly automated trucks [21, 36]. We were moti-
vated to do so according to the guidelines for auditory displays
[46], which state that auditory displays ought to be highly
detectable in terms of their physical properties (N1), support
learned discrimination from other target notifications (P2),
have the potential for capturing attention (P3a), and com-
municate their intended purpose (e.g., updating contextual
working memory; P3b). Experiment 1 tested naïve students,
as is often the case during design and prototyping phases, and
Experiment 2 tested professional truck drivers, as is often
the case when performing an evaluation and validation. Both
types of notifications were effective and did not significantly
differ in terms of response times or accuracy. However, ver-
bal commands are more easily discriminable from other target
notifications at an early perceptual stage (i.e., larger P2 compo-
nent), and auditory icons are more likely to update contextual
working memory (i.e., larger P3b). This is consistently true
regardless of testing environment or participant groups. In fact,
discriminable and significant trends in EEG/ERP waveforms
are amplified in professional truck drivers. Professional truck
drivers additionally show differences in detection (N1) and
attentional capture (P3a), the latter favoring verbal commands,
presumably driven by a familiarity with the given language.
Therefore, we advocate the use of both verbal commands and
auditory icons. However, they should be employed accord-
ing to the job that they are intended for. We suggest that
verbal commands should be employed in critical situations
that require immediate action while auditory icons seem more
appropriate to notify the user of non-urgent environmental
updates. In the context of highly automated vehicles, ver-
bal commands should be used for time-critical situations that
cannot afford ambiguity, such as ‘low fuel’, while auditory
icons might be better employed in indicating that driving con-
ditions are changing, such as the sound of a thunderstorm for
inclement weather.
Justification for the Current Interpretation
Our recommendations for using verbal commands and auditory icons
for different purposes are grounded in the EEG/ERP responses to
these auditory notifications, for the following reasons. We do not
address the performance results given
that they are shown to be equally effective in terms of discrim-
inability and correct response times.
To begin, we do not emphasize the detectability of either
notification types (in terms of their physical properties). This is
because the timing of the N1 was identical in both Experiments
1 and 2. Interestingly, the amplitude of N1 was larger for
auditory icons in Experiment 2. We believe that this reflects
the larger variability of the spectral properties of auditory
icons relative to verbal commands, which renders them more
detectable against a richer (i.e., noisier) background. While
this could be treated in favor of auditory icons, we believe
that the consistently larger amplitudes in P2 components for
learned notification discriminability across both experiments
compensates for this minor advantage.
Next, the frontal P2 component is larger for verbal commands
than for auditory icons. P2 is believed to reflect learned object
discrimination [8, 47, 22]. In this regard, P2’s amplitude indi-
cates the efficiency in recognizing the associated notification
as a discriminable target, relative to other target notifications.
Trained discriminability is known to have an effect on P2
amplitude. For example, musicians who are trained to dis-
criminate sounds for pitch and timbre generate larger P2s than
non-musicians, especially for musical sounds [62]. In a similar
fashion, most of us are highly trained to discriminate between
different verbal sounds for their intended meanings and associ-
ations. This reflects the natural advantage rendered by the use
of verbal commands over auditory icons. The current findings
suggest that even if certain auditory icons are determined by
sound designers as being highly discriminable and recogniz-
able as targets, they should be matched to the standards of
verbal commands, which is quantitatively measurable in terms
of P2 component amplitude.
Thirdly, we believe that verbal commands capture attention
more readily than auditory icons. P3a amplitudes are indica-
tive of an involuntary orienting response to surprising and
novel events [57]. While the larger P3a amplitude for verbal
commands was not significant in Experiment 1 (Figure 1),
it was for the participants of Experiment 2. We believe that
this was because the professional truck drivers understood
the verbal commands and their operational implications more
readily, which increased the potential of verbal commands for
capturing attention (Figure 2).
Last, but not least, the P3b component reflects the updating
of one’s mental representation of relevant information [14].
P3b amplitudes are larger when a task-relevant event occurs
that is different from one’s expectation. For this reason, it is
believed to underlie context-updating [57]. Related to this,
P3b amplitudes have also been used to evaluate for working
memory load or mental workload [6]. Larger P3bs are asso-
ciated with low mental workload and smaller P3bs, with high
mental workload. In the current context, this would suggest
that auditory icons are more memorable and result in stronger
context-updating than verbal commands, in a way that requires
significantly less mental effort.
Auditory displays are designed to capture attention and to
clearly communicate events. On the one hand, notifications
that are readily recognized as task-relevant targets and cap-
ture attention are necessary for urgent events. On the other
hand, notifications to indicate changing circumstances are also
required to assist in updating a user’s situational awareness.
The current EEG/ERP results indicate that verbal commands
are more discriminable and better at capturing attention than
auditory icons. This suggests that verbal commands should
be used in critical situations where quick action is required.
Previous research based on behavioral results is in agree-
ment with our conclusion. For example, verbal notifications
are claimed to be especially effective in stressful situations
because speech is processed automatically [24]. The current
EEG/ERP results also suggest that auditory icons result in
less effortful context-updating than verbal commands. Hence,
auditory icons appear to be more suitable in communicating
environmental circumstances that are less urgent. [17, 32] have
suggested using auditory icons as notifications that inform and
advise about background events. Their recommendation, based
on behavioral results, agrees with our current findings. It is
worth noting that auditory icons are also believed to produce
higher compliance levels than verbal commands, if they are
understood in the first place [16]. In conclusion, our results
advocate that different types of notification should be designed
in accordance with their intended purpose. This agrees with pre-
vious findings that, until now, have been mixed due to the
imprecision of behavioral results in discriminating for how
these notifications might be processed by the brain.
Limitations of the Current Study
One might argue that we only observe the changes in the
ERPs because the neural origins of auditory icons and verbal
commands are possibly oriented differently. This is a known
limitation of ERP analysis, namely that it has limited spatial
resolution for localizing brain regions that give rise to detected
activity. However, this line of argument is unlikely. Previous
work that relied on neuroimaging with better spatial resolution
(i.e., fMRI) has demonstrated that words and object sounds
involve the same neural region for processing information
content [12]. In addition, verbal commands and auditory icons
produce equivalent scalp topographies [9], which we have also
observed in the current experimental paradigm. Therefore, it
is unlikely that we observed the current results because verbal
commands and auditory icons were not processed by identical
neural regions.
The differences between the participant groups of Experiments
1 and 2 were intended to reflect the different stages of notifica-
tion development, namely design and prototyping (Experiment
1) and validation (Experiment 2). Nonetheless, some differ-
ences between the participant groups might be of concern.
In particular, the student volunteers in Experiment 1 were
not proficient in the verbal commands. In spite of this, we
note that their brain responses to verbal commands and audi-
tory icons replicated in Experiment 2, which employed truck
drivers. One reason is that the native language of Experiment
1’s participants was highly similar to that of Experiment 2’s
participants. Another reason is that the current task focused on the
participants’ ability to recognize notifications and to respond
appropriately, regardless of the extent to which they under-
stood the implications of the notifications. Professional truck
drivers who understood the language and the implications of
the notifications showed stronger neural discriminations be-
tween verbal commands and auditory icons. Nonetheless, we
can argue that the trends observed in Experiment 1 are likely
to be generalizable trends while those in Experiment 2 are
culture- and profession-specific. The current study does not
directly compare the two participant groups for their discrimi-
nation ability of the given sounds. Therefore, we do not make
any inferences concerning either group’s proficiency in dis-
criminating the notifications from one another. The focus of
this work is in evaluating how verbal commands and auditory
icons are responded to at the level of information processing
(i.e., brain responses).
CONCLUSION AND FUTURE WORK
Taken together, the current work contributes by showing that
auditory notifications can be evaluated and functionally dis-
criminated for how they are processed by the brain for in-
formation. This has implications for the operational context
as well as design. Choices for which notifications to use for
which purpose can be based not only on response times and
discrimination accuracy, which are not necessarily
the operational objective, but also on how the notifications
are: (1) detected against the auditory scene, (2) discriminated
against other notification targets, (3) likely to capture attention,
and (4) capable of updating contextual working memory.
To date, most studies have questioned whether verbal com-
mands or auditory icons serve better as notifications, namely
in terms of how well they elicit a speeded and accurate re-
sponse. The current findings suggest that this question, while
well-intentioned, is misplaced. Our results demonstrate that
verbal commands and auditory icons have different qualities.
While verbal commands are better discriminated against other
notifications, auditory icons can update contextual working
memory with less effort. Practically speaking, this suggests
that verbal commands are ideally used for time-critical informa-
tion where there is no leeway for ambiguity, e.g., collision
warnings. Meanwhile, auditory icons are likely to be more
effective in communicating contextual information, such as en-
try into a poorly maintained road section or changing weather
conditions. In other words, verbal commands and auditory
icons should be used as complementary (and not competing)
notifications.
Previous research has recommended using auditory icons to
notify users of environmental events [17, 32]. More specif-
ically, auditory icons have been suggested to enhance situa-
tional awareness [1, 31]. For example, a walking sound can
more effectively indicate a nearing pedestrian. In addition,
auditory icons might be favored because it is believed that
they can be processed in parallel to other auditory events [1].
These findings so far converge with our current results and
interpretation. Nonetheless, there are works that do not. For
example, contrary to our current belief that verbal com-
mands capture attention, some work has shown that certain
auditory icons (i.e., car horn) result in significantly faster re-
sponse times (e.g., [24]). We might account for this by the fact
that some auditory icons are overlearned to indicate danger.
It should be noted that verbal processing is known to differ
for different word classes (i.e., verbs, nouns) [58, 67]. The
current study only uses nouns for verbal commands and, thus,
future studies should verify whether verbal commands attract
attention preferentially for all word classes, relative to audi-
tory icons. In this work, we present EEG/ERP evidence that
discriminates for how auditory icons and verbal commands
are processed by the brain. Nonetheless, we do not doubt that
nuances in how auditory notifications are engineered could
ultimately render an auditory icon attention-grabbing and/or
a verbal command more suited for communicating context.
Our current results contribute by providing a starting point for
understanding what type of sounds ought to be employed for
which purposes, bearing in mind the brain’s likely response to
them.
The participants in Experiment 1 possessed neither a language
proficiency for the verbal commands nor an expert understand-
ing of the operational tasks that the notifications indicated.
Therefore, the EEG/ERP differences (i.e., P2, P3b) found
between verbal commands and auditory icons can be con-
sidered as general differences between the two notification
classes. In contrast, Experiment 2 was performed on profes-
sional truck drivers in a highly realistic test environment. A
comparison between the two experiments reveals that these
differences in brain responses scale with realism and user pro-
ficiency. Thus, the current approach of evaluating notification
designs on the basis of brain responses is robust, even when
behavioral responses do not differ. Notifications that are first
designed in sterile lab environments could also be evaluated
for the EEG/ERP responses that they elicit. This would narrow
down the candidates for deployment and validation in high
fidelity simulation environments or field-testing. Besides this,
EEG/ERP methods could also be used to discriminate between
different instantiations of the same target notification. One ex-
ample would be to determine the preferability of semantically
comparable verbal commands, such as tank or fuel.
To conclude, the current work suggests that verbal commands
and auditory icons serve different purposes, at least from the
standpoint of how they are processed by the brain. Thus, eval-
uations that directly compare them in terms of performance
measures might not be appropriate. This might also explain
the mixed evidence from previous studies in support of ei-
ther auditory notifications. The growing accessibility of brain
recording methods (i.e., EEG) means that the current approach
can be used to support finer functional discriminations for noti-
fications and can be effectively deployed, even in challenging
deployment scenarios such as high fidelity truck simulators.
ACKNOWLEDGMENTS
We thank the reviewers for their valuable feedback, which
was helpful in revising the paper. We also thank K-Marie
Lahmer and Rickard Leandertz for their assistance in data
collection, and BrainProducts GmbH (Munich, Germany) for
loaning us the necessary equipment for this study. This work
was supported by the German Research Foundation through
SFB/Transregio 161 projects, as well as by Scania CV AB,
Sweden.
REFERENCES
1. Matt Adcock and Stephen Barrass. 2004. Cultivating
Design Patterns for Auditory Displays. In Proceedings of
ICAD 04-Tenth Meeting of the International Conference
on Auditory Display. 4–7.
2. Claude Alain. 2007. Breaking the wave: effects of
attention and learning on concurrent sound perception.
Hearing research 229, 1 (2007), 225–236.
3. Steven M Belz, Gary S Robinson, and John G Casali.
1999. A new class of auditory warning signals for
complex systems: Auditory icons. Human Factors 41, 4
(1999), 608–618. DOI:
http://dx.doi.org/10.1518/001872099779656734
4. Yoav Benjamini and Yosef Hochberg. 1995. Controlling
the false discovery rate: a practical and powerful
approach to multiple testing. Journal of the royal
statistical society. Series B (Methodological) (1995),
289–300.
5. David H Brainard. 1997. The Psychophysics Toolbox.
Spatial vision 10, 4 (1997), 433–436. DOI:
http://dx.doi.org/10.1163/156856897X00357
6. Anne-Marie Brouwer, Maarten A Hogervorst, Jan BF
Van Erp, Tobias Heffelaar, Patrick H Zimmerman, and
Robert Oostenveld. 2012. Estimating workload using
EEG spectral power and ERPs in the n-back task. Journal
of neural engineering 9, 4 (2012), 045008.
7. Lewis Chuang, Christiane Glatz, and Stas Krupenia.
2017. Using EEG to Understand why Behavior to
Auditory In-vehicle Notifications Differs Across Test
Environments. In Proceedings of the 9th International
Conference on Automotive User Interfaces and
Interactive Vehicular Applications (Automotive’UI 17).
ACM, New York, NY, USA.
8. Kate E Crowley and Ian M Colrain. 2004. A review of the
evidence for P2 being an independent component process:
Age, sleep and modality. Clinical Neurophysiology 115, 4
(2004), 732–744. DOI:
http://dx.doi.org/10.1016/j.clinph.2003.11.021
9. A Cummings, Rita Čeponiene, Alain Koyama, Ayşe Pinar
Saygin, Jeanne Townsend, and Frederic Dick. 2006.
Auditory semantic networks for words and natural
sounds. Brain Research 1115, 1 (2006), 92–107. DOI:
http://dx.doi.org/10.1016/j.brainres.2006.07.050
10. Arnaud Delorme and Scott Makeig. 2004. EEGLAB: an
open source toolbox for analysis of single-trial EEG
dynamics including independent component analysis.
Journal of neuroscience methods 134, 1 (2004), 9–21.
11. Arnaud Delorme, Jason Palmer, Julie Onton, Robert
Oostenveld, and Scott Makeig. 2012. Independent EEG
sources are dipolar. PloS one 7, 2 (2012), e30135.
12. Frederic Dick, Ay¸se Pinar Saygin, Gaspare Galati,
Sabrina Pitzalis, Simone Bentrovato, Simona D’Amico,
Stephen Wilson, Elizabeth Bates, and Luigi Pizzamiglio.
2007. What is Involved and What is Necessary for
Complex Linguistic and Nonlinguistic Auditory
Processing: Evidence from Functional Magnetic
Resonance Imaging and Lesion Data. Journal of
Cognitive Neuroscience 19, 5 (2007), 799–816. DOI:
http://dx.doi.org/10.1162/jocn.2007.19.5.799
13. Tilman Dingler, Jeffrey Lindsay, and Bruce N Walker.
2008. Learnability of Sound Cues for Environmental
Features: Auditory Icons, Earcons, Spearcons, and
Speech. 14th International Conference on Auditory
Display (2008), 1–6.
14. Emanuel Donchin and Michael G H Coles. 1988. Is the
P300 component a manifestation of context updating?
Behavioral and brain sciences 11, 3 (1988), 357–374.
15. Connie C Duncan-Johnson and Emanuel Donchin. 1977.
Effects of a priori and sequential probability of stimuli on
event-related potential. In Psychophysiology, Vol. 14.
Cambridge Univ Press 40 West 20th Street, New York,
NY 10011-4211, 95–95.
CHI 2018 Paper
CHI 2018, April 21–26, 2018, Montréal, QC, Canada
Paper 472
Page 10
16.
Judy Edworthy. 1994. The design and implementation of
non-verbal auditory warnings. Applied Ergonomics 25, 4
(1994), 202–210. DOI:
http://dx.doi.org/10.1016/0003-6870(94)90001- 9
17. Judy Edworthy and Rachael Hards. 1999. Learning
auditory warnings: The effects of sound type, verbal
labelling and imagery on the identification of alarm
sounds. International Journal of Industrial Ergonomics
24, 6 (1999), 603–618.
18. Judy Edworthy and Elizabeth Hellier. 2006. Alarms and
human behaviour: Implications for medical alarms.
British Journal of Anaesthesia 97, 1 (2006), 12–17. DOI:
http://dx.doi.org/10.1093/bja/ael114
19. Carles Escera, Kimmo Alho, Erich Schröger, and
István Winkler Winkler. 2000. Involuntary attention and
distractibility as evaluated with event-related brain
potentials. Audiology and Neurotology 5, 3-4 (2000),
151–166.
20.
Monica Fabiani, Demetrios Karis, and Emanuel Donchin.
1986. P300 and recall in an incidental memory paradigm.
Psychophysiology 23, 3 (1986), 298–308.
21. Johan Fagerlönn, Stefan Lindberg, and Anna Sirkka.
2015. Combined Auditory Warnings For Driving-Related
Information. In Proceedings of the Audio Mostly 2015 on
Interaction With Sound (AM ’15). ACM, New York, NY,
USA, Article 11, 5 pages. DOI:
http://dx.doi.org/10.1145/2814895.2814924
22. Luis García-Larrea, Anne Claire Lukaszewicz, and
François Mauguiére. 1992. Revisiting the oddball
paradigm. Non-target vs neutral stimuli and the
evaluation of ERP attentional effects. Neuropsychologia
30, 8 (1992), 723–741. DOI:
http://dx.doi.org/10.1016/0028-3932(92)90042- K
23. William Gaver. 1989. The SonicFinder: An Interface
That Uses Auditory Icons. Human-Computer Interaction
4, 1 (1989), 67–94. DOI:
http://dx.doi.org/10.1207/s15327051hci0401_3
24. Robert Graham. 1999. Use of auditory icons as
emergency warnings: evaluation within a vehicle
collision avoidance application. Ergonomics 42, 9 (Sept.
1999), 1233–48. DOI:
http://dx.doi.org/10.1080/001401399185108
25. Robert Graham, S J Hirst, and C Carter. 1995. Auditory
icons for collision-avoidance warnings. In Intelligent
Transportation: Serving the User Through Deployment.
Proceedings of the 1995 Annual Meeting of ITS America.
26. Rob Gray. 2011. Looming Auditory Collision Warnings
for Driving. Human Factors: The Journal of the Human
Factors and Ergonomics Society 53, 1 (2011), 63–74.
DOI:http://dx.doi.org/10.1177/0018720810397833
27. David M Groppe, Thomas P Urbach, and Marta Kutas.
2011. Mass univariate analysis of event-related brain
potentials/fields I: a critical tutorial review.
Psychophysiology 48, 12 (Dec. 2011), 1711–25. DOI:
http://dx.doi.org/10.1111/j.1469-8986.2011.01273.x
28. Ellen Haas and Jeffrey Schmidt. 1995. Auditory icons as
warning and advisory signals in the US Army Battlefield
Combat Identification System (BCIS). In Proceedings of
the Human Factors and Ergonomics Society Annual
Meeting, Vol. 39. SAGE Publications Sage CA: Los
Angeles, CA, 999–1003.
29. Cristy Ho and Charles Spence. 2005. Assessing the
effectiveness of various auditory cues in capturing a
driver’s visual attention. Journal of Experimental
Psychology: Applied 11, 3 (2005), 157–74. DOI:
http://dx.doi.org/10.1037/1076-898X.11.3.157
30. Cristy Ho and Charles Spence. 2006. Verbal interface
design: Do verbal directional cues automatically orient
visual spatial attention? Computers in Human Behavior
22, 4 (2006), 733–748. DOI:
http://dx.doi.org/10.1016/j.chb.2005.12.008
31. Mandana L N Kazem, Janet M Noyes, and Nicholas J
Lieven. 2003. Design Considerations for a Background
Auditory Display to Aid Pilot Situation Awareness. In
Proceedings of the 2003 International Conference on
Auditory Display. 6–9.
32.
Peter Keller and Catherine Stevens. 2004. Meaning From
Environmental Sounds: Types of Signal-Referent
Relations and Their Effect on Recognizing Auditory
Icons. Journal of Experimental Psychology: Applied 10,
1 (2004), 3–12. DOI:
http://dx.doi.org/10.1037/1076-898X.10.1.3
33. Mario Kleiner, David Brainard, Denis Pelli, Allen
Ingling, Richard Murray, and Christopher Broussard.
2007. What ’ s new in Psychtoolbox-3 ? Perception 36,
14 (2007), 1.
34. Sonja A Kotz. 2013. Electrophysiological Indices of
Speech Processing. In Encyclopedia of Computational
Neuroscience. 1–5. DOI:
http://dx.doi.org/10.1007/978-1- 4614-7320- 6
35. Nina Kraus and Trent Nicol. 2008. Auditory evoked
potentials. In Encyclopedia of Neuroscience. Springer,
214–218.
36. Stas Krupenia, Anna Selmarker, Johan Fagerlönn,
Katarina Delsing, Anders Jansson, Bengt Sandblad, and
Camilla Grane. 2014. The Methods for Designing Future
Autonomous Systems’ (MODAS) project: Developing
the cab for a highly autonomous truck. In Proceedings of
the 5th International Conference on Applied Human
Factors and Ergonomics (AHFE2014) (Krakow, Poland).
19–23.
37.
Yi-Chieh Lee, Wen-Chieh Lin, Jung-Tai King, Li-Wei Ko,
Yu-Ting Huang, and Fu-Yin Cherng. 2014. An
EEG-based approach for evaluating audio notifications
under ambient sounds. Proceedings of the 32nd annual
ACM conference on Human factors in computing systems
- CHI ’14 (2014), 3817–3826. DOI:
http://dx.doi.org/10.1145/2556288.2557076
CHI 2018 Paper
CHI 2018, April 21–26, 2018, Montréal, QC, Canada
Paper 472
Page 11
38. Ying K Leung, Sean Smith, Simon Parker, and Russell
Martin. 1997. Learning and retention of auditory
warnings. Proceedings of the Third International
Conference on Auditory Display (1997).
39. Mats Liljedahl and Johan Fagerlönn. 2010. Methods for
sound design: a review and implications for research and
practice. In Proceedings of the 5th Audio Mostly
Conference: A Conference on Interaction with Sound.
ACM, 2.
40.
Paul A Lucas. 1994. An evaluation of the communicative
ability of auditory icons and earcons. Georgia Institute of
Technology.
41. Steven J Luck. 2005. An Introduction to the
Event-Related Potential Technique (Cognitive
Neuroscience). (2005).
42. Dawn C Marshall, John D Lee, and P Albert Austria.
2007. Alerts for in-vehicle information systems:
Annoyance, urgency, and appropriateness. Human factors
49, 1 (2007), 145–157.
43. David K McGookin and Stephen A Brewster. 2004.
Understanding concurrent earcons: Applying auditory
scene analysis principles to concurrent earcon
recognition. ACM Transactions on Applied Perception
(TAP) 1, 2 (2004), 130–155.
44. Denis McKeown. 2005. Candidates for within-vehicle
auditory displays. Georgia Institute of Technology.
45.
Elizabeth D Mynatt. 1994. Designing with auditory icons:
how well do we identify auditory cues?. In Conference
companion on Human factors in computing systems.
ACM, 269–270.
46. Michael A Nees and Bruce N Walker. 2011. Auditory
displays for in-vehicle technologies. Reviews of human
factors and ergonomics 7, 1 (2011), 58–99.
47. Gerald Novak, Walter Ritter, and Herbert G Vaughan.
1992. Mismatch detection and the latency of temporal
judgments. Psychophysiology 29, 4 (1992), 398–411.
48. Arne Nykänen. 2008. Methods for product sound design.
Ph.D. Dissertation. Luleå tekniska universitet.
49. Casey O’Callaghan. 2009. Auditory perception. (2009).
50. Eunmi L Oh and Robert A Lutfi. 1999. Informational
masking by everyday sounds. The Journal of the
Acoustical Society of America 106, 6 (1999), 3521–3528.
DOI:http://dx.doi.org/10.1121/1.428205
51. Guido Orgs, Kathrin Lange, Jan Henryk Dombrowski,
and Martin Heil. 2006. Conceptual priming for
environmental sounds and words: An ERP study. Brain
and Cognition 62, 3 (2006), 267–272. DOI:
http://dx.doi.org/10.1016/j.bandc.2006.05.003
52. Denis G Pelli. 1997. The VideoToolbox software for
visual psychophysics: transforming numbers into movies.
(1997).
DOI:http://dx.doi.org/10.1163/156856897X00366
53.
Caterina Piazza, Makoto Miyakoshi, Zyenab Akalin-Acar,
Chiara Cantiani, Gianluigi Reni, Anna Maria Bianchi,
and Scott Makeig. 2016. An Automated Function for
Identifying EEG Independent Componetns Representing
Bilateral Source Activity. XIV Mediterranean Conference
on Medical and Biological Engineering and Computing
2016, IFMBE Proceedings 57 (2016), 105–109. DOI:
http://dx.doi.org/10.1007/978-3- 319-32703- 7
54. Terence W Picton. 2010. Human auditory evoked
potentials. Plural Publishing.
55. Terence W Picton. 2014. Auditory event-related
potentials. Encyclopedia of Computational Neuroscience
(2014), 1–6.
56. Terence W Picton and Steven A Hillyard. 1974. Human
auditory evoked potentials. II: effects of attention.
Electroencephalography and clinical Neurophysiology 36
(1974), 191–199. DOI:
http://dx.doi.org/10.1016/0013-4694(74)90156- 4
57.
John Polich. 2007. Updating P300: An integrative theory
of P3a and P3b. Clinical Neurophysiology 118 (2007),
2128–2148. DOI:
http://dx.doi.org/10.1016/j.clinph.2007.04.019
58. Friedemann Pulvermüller. 1999. Words in the brain’s
language. Behavioral and Brain Sciences 22, 1999
(1999), 253–336. DOI:
http://dx.doi.org/10.1017/S0140525X9900182X
59. Brock R Riggins and John Polich. 2002. Habituation of
P3a and P3b from visual stimuli. The International
Journal of Creativity & Problem Solving 12, 1 (2002),
71–81.
60. Ay¸se Pinar Saygin, Frederic Dick, and Elizabeth Bates.
2005. An on-line task for contrasting auditory processing
in the verbal and nonverbal domains and norms for
younger and older adults. Behavior Research Methods 37,
1 (2005), 99–110.
61. Ay¸se Pinar Saygin, Frederic Dick, Stephen W Wilson,
Nina F Dronkers, and Elizabeth Bates. 2003. Neural
resources for processing language and environmental
sounds: Evidence from aphasia. Brain 126, 4 (2003),
928–945. DOI:http://dx.doi.org/10.1093/brain/awg082
62. Antoine Shahin, Daniel J Bosnyak, Laurel J Trainor, and
Larry E Roberts. 2003. Enhancement of neuroplastic P2
and N1c auditory evoked potentials in musicians. Journal
of Neuroscience 23, 13 (2003), 5545–5552.
63. Carol A Simpson and Kristine Marchionda-Frost. 1984.
Synthesized speech rate and pitch effects on intelligibility
of warning messages for pilots. Human factors 26, 5
(1984), 509–517. DOI:
http://dx.doi.org/10.1177/001872088402600503
64. Charles Spence and Liliana Read. 2003. Speech
shadowing while driving: On the difficulty of splitting
attention between eye and ear. Psychological science 14,
3 (2003), 251–256.
CHI 2018 Paper
CHI 2018, April 21–26, 2018, Montréal, QC, Canada
Paper 472
Page 12
65. Mitchell Steinschneider and Michelle Dunn. 2002.
Electrophysiology in developmental neuropsychology.
Handbook of neuropsychology 8, 1 (2002), 91–146.
66.
David L Strayer and William A Johnston. 2001. Driven to
Distraction: Dual-Task Studies of Simulated Driving and
Conversing on a Cellular Telephone. Psychological
Science 12, 6 (2001), 462–466. DOI:
http://dx.doi.org/10.1111/1467-9280.00386
67.
Anna Szekely, Simonetta D’Amico, Antonella Devescovi,
Kara Federmeier, Dan Herron, Gowri Iyer, Thomas
Jacobsen, L Arévalo Anal’a, Andras Vargha, and
Elizabeth Bates. 2005. Timed action and object naming.
Cortex 41, 1 (2005), 7–25.
68.
Tuyen V Tran, Tomasz Letowski, and Kim S Abouchacra.
2000. Evaluation of acoustic beacon characteristics for
navigation tasks. Ergonomics 43, 6 (2000), 807–827.
69. Kai Tuuri, Manne-Sakari Mustonen, and Antti Pirhonen.
2007. Same sound–different meanings: A novel scheme
for modes of listening. Proceedings of Audio Mostly
(2007), 13–18.
70. Pernilla Ulfvengren. 2003. Design of natural warning
sounds in human-machine systems. Ph.D. Dissertation.
KTH.
71. Bruce N Walker and Michael A Nees. 2011. Theory of
sonification. The sonification handbook (2011), 9–39.
72. Emily E Wiese and John D Lee. 2007. Attention
grounding: a new approach to in-vehicle information
system implementation. Theoretical Issues in Ergonomics
Science 8, 3 (2007), 255–276.
73. Istvan Winkler, Susan L Denham, and Carles Escera.
2013. Auditory Event-related Potentials. In Encyclopedia
of Computational Neuroscience. 1–29. DOI:
http://dx.doi.org/10.1007/978-1- 4614-7320- 6
74. David L Woods. 1995. The component structure of the
N1 wave of the human auditory evoked potential.
Electroencephalography and Clinical
Neurophysiology-Supplements Only 44 (1995), 102–109.
CHI 2018 Paper
CHI 2018, April 21–26, 2018, Montréal, QC, Canada
Paper 472
Page 13