RESEARCH ARTICLE
Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort
Corinna Bernarding (1) · Daniel J. Strauss (1,2,5) · Ronny Hannemann (3) · Harald Seidler (4) · Farah I. Corona-Strauss (1,5)

Received: 9 August 2016 / Revised: 3 February 2017 / Accepted: 7 February 2017 / Published online: 16 February 2017
© The Author(s) 2017. This article is published with open access at Springerlink.com

Cogn Neurodyn (2017) 11:203–215, DOI 10.1007/s11571-017-9425-5

Corresponding author: Daniel J. Strauss, daniel.strauss@uni-saarland.de

(1) Systems Neuroscience and Neurotechnology Unit, Neurocenter, Saarland University, Medical Faculty & Saarland University of Applied Sciences, School of Engineering, Building 90.5, 66421 Homburg/Saar, Germany
(2) Leibniz-Institute for New Materials, Saarbrücken, Germany
(3) Sivantos GmbH, Erlangen, Germany
(4) MediClin Bosenberg Kliniken, St. Wendel, Germany
(5) Key Numerics GmbH, Saarbrücken, Germany
Abstract In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool to map the exerted effort of the participants. In addition, the results indicate that a directional processing mode can reduce the listening effort in multitalker listening situations.
Keywords Listening effort · Hearing loss · Hearing aids · EEG
Introduction
"Listening effort" can be described as the exertion listeners experience when processing naturally occurring auditory signals in demanding environments (Pichora-Fuller and Singh 2006; McGarrigle et al. 2014). This definition can be complemented by looking closely at the first part of the term "listening effort". Kiessling et al. (2003) characterized "listening" as the process of hearing with intention and attention. Compared to the purely physiological, passive process of hearing, which enables access to the auditory system, listening requires mental effort and the allocation of attentional as well as cognitive resources (Hicks and Tharpe 2002; Kiessling et al. 2003; Hornsby 2013). Moreover, this goal-directed attentional effort can be considered as a means to support the optimization of cognitive processes (Sarter et al. 2006).
In the case of a hearing loss, the incoming auditory information is degraded by elevated hearing thresholds and a reduced spectrotemporal resolution (Pichora-Fuller and Singh 2006; Shinn-Cunningham and Best 2008). As a result, people with hearing loss experience an increased processing effort (Downs 1982; Arlinger 2003). Until now, mainly subjective procedures, like questionnaires (Gatehouse and Noble 2004; Ahlstrom et al. 2013), rating scales (Humes 1999) or self-reports, have been applied to estimate listening effort in hearing aid (HA) fitting procedures or in studies related to the assessment of listening effort. Subjective procedures give some indication of the individual's perceived listening effort, but it is still uncertain to what extent the subjective data reflect the actually experienced effort (Zekveld et al. 2010).
An alternative approach to estimating listening effort objectively is the dual task paradigm (Downs 1982; Sarampalis et al. 2009), which is based on a limited capacity model of cognitive resources (Kahneman 1973). The participants have to perform two competing tasks: a primary listening task and a secondary task which is mostly visual or memory related. It is assumed that the two tasks compete for a single limited pool of resources, so that the performance of the secondary task decreases when more resources are allocated to the primary task. This reduction in secondary task efficiency serves as a measure of listening effort. However, this complex method is influenced by many factors such as motivation or task strategy (Hornsby 2013), and requires considerable cooperation from the participant. Further indicators of listening effort, for example the pupil response (Zekveld et al. 2010; Goldwater 1972) and the galvanic skin response (Mackersie and Cones 2011), have also been investigated.
Modern HAs have features like noise reduction schemes, which are assumed to ease speech understanding in complex environments. As a result, the listening effort should be reduced (Lunner et al. 2009). There are a number of studies examining the effects of HA use on listening effort (Downs 1982; Sarampalis et al. 2009; Hornsby 2013; Gatehouse and Gordon 1990; Ahlstrom et al. 2013). The general finding of these studies was that the amplification of the relevant auditory information improved the audibility of the speech signal, resulting in a decreased listening effort.
In previous studies (Strauss et al. 2010; Bernarding et al. 2013), we proposed a new method for the quantification of listening effort by means of evoked electroencephalographic (EEG) activity, which is based on a neurodynamical model. Besides other promising models that can be applied (e.g., Wang et al. 2017), we have used a neurophysical multiscale model which maps auditory late responses as large-scale listening effort correlates. There, we have shown that the instantaneous phase of the N1 component could serve as an index of the amount of listening effort needed to detect an auditory event, such as a target syllable or a toneburst. A higher phase synchronization occurred due to an increased attentional modulation in the range of the theta band, which reflected a higher cognitive effort to solve the auditory task. For more information about the theory of theta-regulated attention, we refer to Haab et al. (2011). In these studies, the N1 component was taken into account as this component reflects selective attention effects related to an endogenous modulation of the incoming information (Hillyard et al. 1973; Rao et al. 2010; Hillyard et al. 1998). Furthermore, the instantaneous phase of single trials in the alpha/theta range was analyzed, as it provides more information on auditory information processing than averaged responses (Brockhaus-Dumke et al. 2008; Ponjavic-Conte et al. 2012). Based on the findings of these studies, it can be assumed that a measure derived from the cortical response is an appropriate way to estimate listening effort. However, there are some limitations in the study of auditory evoked responses (AERs) regarding the design of stimulation paradigms, like the restriction of the auditory stimulation to signals of short duration (Hall 2007, pp. 490ff.) or the dependency on physical stimulus properties (exogenous effects). Therefore, the AERs cannot be analyzed during longer listening periods, for instance during a speech intelligibility test. Furthermore, the exogenous effects have to be minimized. This minimization constrains the comparability of the results to be obtained: different noise types, SNRs or HA settings, which always modify the incoming auditory signal, cannot be compared directly to each other. To overcome the restriction to signals of short duration, the current study deals with the ongoing oscillatory activity. Here, the EEG can be analyzed during longer listening periods. Thus, the listening effort correlates can be extracted using noise-embedded sentences or during a sentence recognition test. As the HA always alters the auditory signals, different HA features were tested to create varying hearing impressions. Evaluating the estimated effort against a subjective rating scale, we expected to see the same pattern in the subjective and the electroencephalographic estimate. If this were the case, the influence of the exogenous effects would be minor. These degrees of freedom in the design of the auditory stimulation are essential requirements for a possible prospective EEG-aided HA adjustment in clinical settings.
The link between the previous studies investigating the instantaneous phase of the N1 component and the current study using the instantaneous phase extracted from the ongoing EEG can be established via the phase reset model (Sauseng et al. 2007). The phase reset model suggests that the evoked potentials are generated by a phase reset of the ongoing EEG activity. A widely debated topic in EEG (Kerlin et al. 2010; Ng et al. 2012), electrocorticographic (ECoG) (Zion Golumbic et al. 2013; Mesgarani and Chang 2012) and magnetoencephalographic (MEG) (Peelle et al. 2013; Ding and Simon 2012) research is the phase entrainment of cortical oscillations. Two main hypotheses regarding the functional role of cortical entrainment are under discussion: (1) the cortical entrainment emerges due to physical characteristics of the external stimuli; (2) the phase locking is a modulatory effect on the cortical response triggered by top-down cognitive functions (Ding and Simon 2014). The first hypothesis is supported by the finding that theta oscillations in the auditory cortex entrain to the envelope of sound (Ng et al. 2012; Kerlin et al. 2010; Weisz and Obleser 2014); this low-frequency activity can be seen as a reflection of the fluctuations of the speech envelope (Zion Golumbic et al. 2013). The second concerns a modulatory effect on the phase via top-down processes: here, the synchronization of the phase in auditory processing regions acts as a mechanism of attentional selection (Peelle et al. 2013). This theory of an attentional modulation of the neural oscillations at lower frequencies (4–8 Hz) is supported by studies in the auditory (Kerlin et al. 2010) as well as in the visual domain (Busch and VanRullen 2010). Regarding such a possible attentional, effortful modulation of the neural responses via phase locking or synchronization, the proposed method for the extraction of listening effort correlates relies on the instantaneous phase information of the ongoing EEG activity. The hypothesis is that in a non-effortful listening environment the phase is more uniformly distributed on the unit circle than in a demanding condition. For the latter, it is assumed that the phase is more clustered on the unit circle due to an endogenous effortful modulation caused by an increased auditory attention to the relevant auditory signal.
In this work, the proposed EEG method for the extraction of listening effort correlates was tested in people with moderate hearing loss. This was done to examine whether the proposed EEG method could serve as a novel measure of listening effort. The new method was evaluated against the results of the subjective listening effort and speech intelligibility scales. Additionally, we investigated the effects of different HA settings on the listening effort. These settings included a new feature which combines a directional microphone technique with a noise reduction algorithm and was tested in a medium and a strong setting. In a further setting, this feature was turned off, and a configuration using omnidirectional microphones without any noise reduction was tested.
Methods
Ethics statement and recruitment of the participants
The study was approved as a scientific study by the local ethics committee (Ärztekammer des Saarlandes; Medical Council of the Saarland). The decisions of the ethics committee are made in accordance with the Declaration of Helsinki.
The participants were recruited from a hearing rehabilitation center. They were informed about the content of the study in a one-to-one appointment. There, the procedures were explained orally and all questions of the participants related to the procedure and the consent form were answered in detail. After this, all participants provided written informed consent for the investigation and the subsequent data analysis. The participants were compensated for their time with a voucher.
Participants and inclusion criteria
Two listening conditions were tested in a single session (conditions I and II). A total of 14 experienced HA users with a moderate hearing loss participated in this study. All participants reported wearing their own HA regularly in different acoustic environments. We expected that experienced HA users would be able to recognize even minor differences between the different HA settings. Furthermore, Ng et al. (2014) showed that new hearing aid users need more cognitive processing to understand speech processed by the HA. All 14 participants were native German speakers and took part in condition I of this study (mean age: M = 65.64 years, SD = 7.93 years; seven female/seven male). Two participants quit the experiment after completing condition I. Thus, a total of 12 participants (mean age: M = 66.25 years, SD = 7.74 years; five female/seven male) took part in condition II. Participants were included if they had at least 80% artifact-free EEG data.

In the end, 13 participants were included for condition I (mean age: M = 65.54 years, SD = 8.24 years; six female/seven male). One participant was excluded due to artifacts. For condition II, a total of 10 participants were included (mean age: M = 67.1 years, SD = 7.92 years; four female/six male). Here, one participant could not solve a part of the auditory task and the other was excluded due to artifactual EEG data. Before the EEG session started, the unaided hearing threshold was determined. For this, a standard audiometric examination using a clinical audiometer (tested pure tone frequencies: 0.25, 0.5, 1, 1.5, 2, 4, and 8 kHz) was conducted. The pure tones were presented monaurally via headphones. Figure 1 depicts the mean pure tone audiograms and the corresponding standard deviations of the included participants for both parts of the study.

Fig. 1 Mean pure tone audiograms and corresponding standard deviations of the included participants of both conditions of the study (condition I = black, condition II = gray)
Hearing aid fitting
Commercially available behind-the-ear HAs connected to double ear-tips (double domes) were tested. The devices were fitted according to the hearing loss of the participant using a proprietary fitting formula. The HA amplification was set to an experienced level. The effects of the HA setting directional speech enhancement (DSE) on the participants' listening effort were examined. The DSE setting is a combination of a directional microphone technique and a Wiener filter noise reduction.

Four HA settings were investigated to observe the differences regarding the listening effort. For this, the devices were fitted with the DSE feature set to a strong (DSEstr) and a medium setting (DSEmed). In a further setting, the DSE feature was turned off (DSEoff), so that only the directional microphone setting was active. All settings were compared to an omnidirectional microphone setting (ODM) without additional noise reduction algorithms. Additionally, a short training session with each hearing aid setting was performed before the single tests started. This was done to guarantee that the participants understood and could solve the tasks.
Stimulus materials and calibration of the auditory stimuli
To extract the possible listening effort correlates, two conditions were generated. In condition I, the participants had to perform a task immediately after each stimulus presentation. The speech material was taken from a German sentence test [Oldenburg Sentence Test (OlSa); Wagener et al. (1999)], which is principally applied in clinical settings for the detection of the speech intelligibility threshold. Each sentence is spoken by a male voice and has the following structure: subject–verb–numeral–adjective–object (e.g., "Peter buys three red cups"). Additionally, the content of the sentences is not predictable from context (Wagener et al. 1999). The task is explained in detail in the 'Experimental design' section.
In condition II, the participants had to complete the task after the presentation of the speech material. In this part, the speech materials were two short stories taken from a German listening comprehension test ["Der Taubenfütterer und andere Geschichten"; Thoma (2007), level B1 according to the Common European Framework of Reference for Languages: Learning, Teaching, Assessment; Modern Language Division (2007)], also recorded by a male speaker. Each short story had a duration of approximately 10 min. Two HA features were tested per short story. For more details regarding the task, see the 'Experimental design' section.
For both cases, the speech material was embedded in multitalker babble noise composed of international speech tokens naturally produced by six female voices [International Speech Test Signal (ISTS); Holube et al. 2010]. Additionally, a cafeteria noise consisting of clattering dishes and cutlery was added to the audio signals (downloaded from a database of auditory signals; Data Base: AudioMicro 2013). Furthermore, for condition II, the intensity of the cafeteria and the multitalker babble noise varied between two intensity levels at random time intervals between 5 and 15 s. The SNR was equally distributed over the conditions and the variations were the same for each participant.
The auditory stimuli were calibrated using a hand-held sound level meter (type 2250, Brüel & Kjær, Denmark) connected to a pre-polarized free field 1/2" microphone (type 4189, Brüel & Kjær, Denmark). To measure a single sound source (signal or noise), the loudspeaker for the calibration was placed 1 m in front of the sound level meter at the level of the participant's head. Overlapping sound sources were measured at a distance of 1 m in the center of the loudspeakers. The levels for the OlSa and the short stories are stated for a single loudspeaker, and the levels for the overlapping noises are given for all speakers.
To assess the fluctuating noise levels of the speech material, the "equivalent continuous sound level" (Leq) was selected (Brüel and Kjær 2013). Furthermore, an A-weighting filter was applied, as is common for the calibration of test stimuli in sound field audiometry (BSA Education Committee 2008). The calibrated intensities were set to the following values: the intensities of the OlSa and the short stories were fixed at a conversational speech level of 65 dB LAeq (Schmidt 2012). For condition I, the ISTS noise had a level of 60 dB LAeq and the cafeteria noise was set to 67 dB LAFmax. To create a different listening environment, the ISTS noise used in condition II fluctuated between 64 and 66 dB LAeq. Likewise, the cafeteria noise changed dynamically between 64 and 66 dB LAFmax. These dynamic changes were used to create a realistic listening environment.
Experimental design
To test the DSE feature, a total of four loudspeakers (Control One, JBL) were used. The speakers were positioned at a distance of 1 m from the participant's head at 0°, 135°, 180°, and 225° in the horizontal plane. To extract the possible listening effort correlates, two listening situations (conditions I and II) were generated.
Condition I
For this part, 50 OlSa sentences together with the ISTS noise were played from the frontal loudspeaker at 0°. For condition I, a total of 200 OlSa sentences were presented to test the four HA settings. Additionally, distracting noises were generated by two time-delayed ISTS and cafeteria noise sequences on each loudspeaker and played behind the participant at the positions 135°, 180° and 225°. During the experiment, the task was to repeat the words that were heard in the sentence played at 0°. A sinusoidal tone (1 kHz, duration: 40 ms) was added after each sentence to indicate the point in time at which the participants' response was expected, followed by a gap in the sentence stream with a duration of 5 s. The gap was only present in the sentence stream at the 0° loudspeaker. During the gap, the distracting noises were played continuously at 0°, 135°, 180° and 225°. The responses were written down by the experimenter.
Condition II
In this part, the audiobook taken from the German listening comprehension test was played through the frontal loudspeaker at 0°. The loudspeakers at the rear (at the positions 135°, 180° and 225°) simultaneously presented the two time-delayed ISTS noise sequences plus the cafeteria noise. The participant's task was to answer simple questions related to the short story after the complete presentation of the audiobook, more precisely after the presentation of all HA settings. This questionnaire consisted of 24 items. For each listening part, the participants answered between four and seven questions. Here, the participant was instructed to respond after the listening condition.

Condition I was designed to be the more controllable part. The participants had to repeat the sentence directly after its presentation. This made it easier to detect a drop in performance or to note if the participants quit the task. In condition II, the participants could listen to longer speech sequences, as is usually the case in daily situations (e.g., listening to the radio or to a talk).
In both conditions, the four different HA configurations (a) DSEstr, (b) DSEmed, (c) DSEoff, (d) ODM were tested in a randomized order. Note also that the presentation of conditions I and II was randomized and the conditions were presented in separate blocks.

In both cases, the participants were asked to rate their perceived effort directly after each tested HA setting using a seven-point scale (LE-Scale: no effort – very little effort – little effort – moderate effort – considerable effort – much effort – extreme effort; adapted from Schulte (2009)) and their experienced speech intelligibility (SI-Scale: excellent – very good – good – satisfactory – sufficient – unsatisfactory – insufficient; Volberg et al. 2001). Additionally, after the completion of each part, the participants were asked to state their preferred HA setting for a listening situation like the presented one. During both conditions, the continuous EEG was recorded from the persons with hearing loss.
Data acquisition and preprocessing
The EEG was recorded using a commercially available biosignal amplifier (g.tec USBamp, Guger Technologies, Austria) with a sampling frequency of 512 Hz. Sixteen active electrodes were placed according to the international 10–20 system, with Cz as reference and a ground electrode placed at the upper forehead. The data were filtered offline using a linear phase finite impulse response bandpass filter from 0.5 to 40 Hz (filter order: 1000). For condition I of the study, a trigger signal indicated the onset and offset of each sentence. Thus, the EEG data could be analyzed during the presentation of the sentences (duration approx. 2 s, total of 50 sentences per hearing aid setting). After extraction of the EEG data for each sentence, artifactual EEG segments were rejected if the maximum amplitude exceeded a threshold of 70 µV. The artifact-free EEG segments were recombined into a vector. This procedure was done for each EEG channel independently. Finally, the recombined EEG vectors were cut to an equal length of 80 s (minimum of 40 artifact-free EEG segments in all EEG channels × 2 s duration of a sentence). In condition II, artifacts were removed using a moving time window (duration: 2 s) and the same artifact threshold of 70 µV. The artifact-free EEG segments were also recombined into a vector. The length of each EEG vector was equalized to 320 s (minimum of 160 artifact-free EEG segments in all EEG channels × window size of 2 s).
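To make the segmentation and recombination step concrete, here is a minimal Python sketch of the procedure described above (the original analysis was performed in Matlab); the zero-phase filter application and all function names are our own assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 512             # sampling frequency (Hz), as stated above
THRESHOLD_UV = 70.0  # artifact rejection threshold (microvolts)

def bandpass(eeg, lo=0.5, hi=40.0, order=1000, fs=FS):
    """Offline linear-phase FIR bandpass from 0.5 to 40 Hz."""
    taps = firwin(order + 1, [lo, hi], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], eeg)  # zero-phase application (assumption)

def recombine(channel, seg_s=2.0, target_s=80.0, fs=FS):
    """Cut one channel into 2-s segments, drop segments whose peak
    amplitude exceeds 70 uV, concatenate the survivors, and crop to a
    common length (80 s in condition I, 320 s in condition II)."""
    n = int(seg_s * fs)
    segs = [channel[i:i + n] for i in range(0, len(channel) - n + 1, n)]
    clean = [s for s in segs if np.max(np.abs(s)) <= THRESHOLD_UV]
    return np.concatenate(clean)[:int(target_s * fs)]
```

Applying recombine to every channel independently reproduces the per-channel vectors whose lengths are then equalized across participants.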
Data analysis
The data analysis was performed using software for technical computing (Matlab 2013a and Simulink, MathWorks Inc., USA). For the quantification of phase synchronization processes of the oscillatory EEG, the distribution of the instantaneous phase on the unit circle was investigated. The instantaneous phase $\phi_{a,b}$ of each artifact-free recombined EEG channel was extracted by applying the complex continuous wavelet transform; that is, the phase was extracted over the time samples of each EEG channel. Before the phase was extracted, the Hilbert transform was applied to the data to ensure a Hardy-space mapping.
Let
$$\psi_{a,b} = |a|^{-1/2}\,\psi\!\left(\frac{\cdot - b}{a}\right) \qquad (1)$$
where $\psi \in L^2(\mathbb{R})$ is the wavelet with
$$0 < \int_{\mathbb{R}} |\Psi(\omega)|^2\, |\omega|^{-1}\, \mathrm{d}\omega < \infty, \qquad (2)$$
$\Psi(\omega)$ is the Fourier transform of the wavelet, and $a, b \in \mathbb{R}$, $a \neq 0$. The wavelet transform
$$W_\psi : L^2(\mathbb{R}) \to L^2\!\left(\mathbb{R}^2, \frac{\mathrm{d}a\,\mathrm{d}b}{a^2}\right) \qquad (3)$$
of a signal $x \in L^2(\mathbb{R})$ with respect to the wavelet $\psi$ is given by the inner $L^2$-product
$$(W_\psi x)(a,b) = \langle x, \psi_{a,b} \rangle_{L^2}. \qquad (4)$$
The instantaneous phase of a signal $x \in L^2(\mathbb{R})$ is given by the complex argument of the complex wavelet transform of the signal:
$$\phi_{a,b} = \arg\,(W_\psi x)(a,b). \qquad (5)$$
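As an illustration, the single-scale phase extraction of Eqs. (1)–(5) can be sketched as follows; this is a hypothetical reconstruction assuming a complex Morlet mother wavelet, with its per-sample center frequency FC = 0.6 inferred from the correspondence a = 40 ↔ 7.68 Hz at 512 Hz reported in the Results, and the bandwidth B chosen arbitrarily.

```python
import numpy as np
from scipy.signal import hilbert, fftconvolve

FS = 512       # sampling frequency (Hz)
SCALE = 40.0   # scale a; pseudo-frequency = FC * FS / SCALE = 7.68 Hz
FC = 0.6       # Morlet center frequency per sample (inferred, see text)
B = 1.5        # Gaussian bandwidth parameter (assumed)

def instantaneous_phase(x, a=SCALE):
    """phi_{a,b} = arg (W_psi x)(a,b) for one EEG channel x at scale a."""
    xa = hilbert(x)                          # Hardy-space (analytic) mapping
    half = int(np.ceil(4 * a * np.sqrt(B)))  # wavelet support in samples
    u = np.arange(-half, half + 1) / a       # scaled time axis (. - b)/a
    psi = (np.pi * B) ** -0.5 * np.exp(2j * np.pi * FC * u - u ** 2 / B)
    psi *= np.abs(a) ** -0.5                 # |a|^{-1/2} factor of eq. (1)
    # correlation <x, psi_{a,b}> over all shifts b, eq. (4)
    w = fftconvolve(xa, np.conj(psi)[::-1], mode="same")
    return np.angle(w)                       # eq. (5)
```

With these values the pseudo-frequency is FC · FS / SCALE = 0.6 · 512 / 40 = 7.68 Hz, i.e., the alpha–theta border used later in the Results.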
For the quantification of listening effort correlates, the mean resultant vector $\bar{R}$ was mapped to an exponential function (Fisher approximation of the Rayleigh equation). This mapping was used as it is bounded between 0 and 1 and, compared to the previously examined angular entropy (Bernarding et al. 2012), it turned out to be more robust against the sampling effect described below.
The mean resultant vector $\bar{R}$ of the phase values can be determined as follows. Assuming we have a set of unit vectors $x_1, \ldots, x_N$ with corresponding phase angles $\phi_n$, $n = 1, \ldots, N$, the mean resultant vector is given by
$$\bar{R} = \frac{1}{N}\left|\sum_{n=1}^{N} e^{\imath \phi_n}\right|. \qquad (6)$$
The mean resultant vector $\bar{R}$ can be interpreted as a measure of the concentration of a data set. The two schematics of Fig. 2 depict the phase values of a rather uniform (Fig. 2a) and a non-uniform distribution (Fig. 2b) projected onto the unit circle, together with their corresponding mean resultant vector $\bar{R}$. If $\bar{R}$ is close to 0 (see Fig. 2a), the phase values are dispersed over the unit circle, which means that the data are distributed uniformly. Otherwise, if $\bar{R}$ is close to 1 (see Fig. 2b), the phase is more clustered on the unit circle and has a common mean direction. Note that in large data sets the clustered phases are embedded in rather uniformly distributed phases, which is related to the sampling of the signal. If the data are sampled at consecutive and equidistant time points, we have a rather uniform distribution of the phases. If a phase reset occurs, we have a clustering of the phases which is embedded in the preceding uniformly distributed phases. To be more robust against this sampling effect, the mean resultant vector is mapped to an exponential function.

Fig. 2 Schematic of the phase distribution of two theoretical data sets (black circles) together with their corresponding mean resultant vector $\bar{R}$ on the unit circle, showing (a) a uniform distribution and (b) a non-uniform distribution
The electroencephalographic correlate of listening effort can be defined for a specific scale $a$ and a suitable auditory paradigm by
$$\text{objective listening effort (OLEosc)} \propto 1 - e^{-N\bar{R}^2}. \qquad (7)$$
A high value of the OLEosc corresponds to a higher listening effort.

To compensate for individual EEG differences, the individual's OLEosc was normalized to the range [0,1] according to
$$\text{OLEosc}' = \frac{\text{OLEosc} - \min(\text{OLEosc})}{\max(\text{OLEosc}) - \min(\text{OLEosc})}. \qquad (8)$$
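Given the extracted phase samples, Eqs. (6)–(8) amount to only a few lines; a sketch under the same assumptions as above:

```python
import numpy as np

def ole_osc(phase):
    """Eqs. (6)-(7): mean resultant vector of the phase samples mapped
    through the Fisher approximation of the Rayleigh statistic."""
    N = phase.size
    R = np.abs(np.mean(np.exp(1j * phase)))  # mean resultant vector, eq. (6)
    return 1.0 - np.exp(-N * R ** 2)         # OLEosc, eq. (7)

def normalize(ole_values):
    """Eq. (8): min-max normalization of one participant's OLEosc
    values across the tested HA settings."""
    v = np.asarray(ole_values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
```

For one participant, normalize([ole_osc(p) for p in phases_per_setting]) yields the normalized values that enter the statistical analysis.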
Statistical analysis
For a statistical comparison of the OLEosc with respect to the different HA configurations, a repeated measures analysis of variance (ANOVA) was applied to the data to detect differences in the listening effort measure across the applied HA settings. As post-hoc test, multiple pairwise comparisons with a Bonferroni adjustment were performed. The Friedman test was performed on the ordinal data of the LE- and SI-scales as well as on the percentage of correctly repeated words. The post-hoc analysis of these data was also performed using multiple pairwise comparisons with a Bonferroni adjustment.
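A minimal sketch of this statistical pipeline in Python (the study itself used Matlab), assuming ole is a participants × settings array of normalized OLEosc values and le the corresponding ordinal LE-Scale ratings; the Bonferroni-corrected pairwise post-hoc tests are omitted for brevity.

```python
import numpy as np
import pandas as pd
from scipy.stats import friedmanchisquare, spearmanr
from statsmodels.stats.anova import AnovaRM

SETTINGS = ("DSEstr", "DSEmed", "DSEoff", "ODM")

def analyze(ole, le):
    n, k = ole.shape
    long = pd.DataFrame({
        "subject": np.repeat(np.arange(n), k),
        "setting": np.tile(SETTINGS, n),
        "ole": ole.ravel(),
    })
    # repeated measures ANOVA on the normalized OLEosc
    anova = AnovaRM(long, depvar="ole", subject="subject",
                    within=["setting"]).fit()
    # Friedman test on the ordinal LE-Scale ratings
    chi2, p = friedmanchisquare(*(le[:, j] for j in range(k)))
    # Spearman correlation between mean OLEosc and mean LE rating
    rho, _ = spearmanr(ole.mean(axis=0), le.mean(axis=0))
    return anova, (chi2, p), rho
```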
Results
The analysis was performed on the instantaneous phase extracted from the right mastoid electrode by the wavelet transform for a scale $a = 40$, which corresponds to a pseudo-frequency of 7.68 Hz (alpha–theta border). The scale $a = 40$ and the electrode channel were identified in previous studies as best reflecting correlates of an attentional effortful modulation. In these former studies, the listening effort correlates were gained from the evoked EEG activity (Strauss et al. 2010; Bernarding et al. 2010). There, it was shown that the best result can be obtained in the frequency range from 6 to 8 Hz. Additionally, effects of an attentional, effortful modulation were noticeable in this lower frequency range (cf. 'Introduction' section).
For the analysis of the subjective listening effort scale, a number was assigned to each level of the LE-Scale (ranging from 1 = very little effort to 7 = extreme effort). Then, the mean and the standard deviation were calculated. The same was done to interpret the results of the subjective speech intelligibility scale; there, the numbers assigned to the levels of the SI-Scale ranged from 1 = excellent to 7 = insufficient.
Electroencephalographic and subjective listening effort estimation
A repeated measures ANOVA was conducted on the normalized OLEosc values to test whether differences in the listening effort existed with regard to the applied HA settings. There was a statistically significant effect of HA setting on the electroencephalographic estimate of listening effort for condition I [F(3,36) = 2.84, p = 0.05] and for condition II [F(3,27) = 4.57, p = 0.01]. The results of the post-hoc multiple pairwise comparisons with Bonferroni correction are shown in Table 1. Significant differences in the OLEosc were found between the ODM setting and the DSEstr (p = 0.01) as well as the DSEoff (p = 0.04) setting for condition I; for condition II, the OLEosc was significantly different for the ODM and the DSEmed setting (p = 0.008) as well as for the ODM and the DSEoff setting (p = 0.04).

There was also a statistically significant effect on the subjectively rated listening effort with respect to the tested HA setting for condition I, χ²(3) = 22.04, p < 0.001, as well as for condition II, χ²(3) = 20.14, p < 0.001. The multiple pairwise comparisons showed significant differences with respect to the subjectively rated listening effort between the ODM and the other three HA settings (DSEoff, DSEmed, DSEstr) for condition I and condition II (cf. Table 1).
Figure 3 illustrates the mean results of the electroencephalographic listening effort measure (black squares; left y-axis) together with the mean results of the subjective listening effort rating (gray circles; right y-axis) over the four tested HA configurations for condition I (Fig. 3a) and condition II (Fig. 3b) of the study. Note that higher values of the OLEosc indicate a higher listening effort.

Fig. 3 Mean and standard deviation values of the normalized electroencephalographic listening effort measure (OLEosc; black squares; left y-axis) and the subjective listening effort rating (gray circles; right y-axis) for (a) condition I (mean over 13 participants) and (b) condition II (mean over ten participants). Note that higher values of the OLEosc indicate a higher listening effort

Table 2 shows an overview of the preferred HA settings for conditions I and II. It can be noted that none of the participants preferred the ODM condition. Furthermore, no significant differences were noticeable in these preference data (Friedman test).

Table 2 Overview: number of preferred HA settings for conditions I and II

            DSEmed   DSEoff   DSEstr   ODM   No preference
Cond. I     4        4        3        -     2
Cond. II*   4.5      3.5      2        -     -

* In condition II, two participants preferred two HA features; for these participants, each feature was scored with 0.5 instead of 1

The electroencephalographic estimate of listening effort was highly correlated (Spearman's correlation) with the subjectively perceived listening effort over all tested HA settings for condition I (r = 0.8) and condition II (r = 0.94). In the ODM setting, which should require the largest listening effort in this study, the participants showed the largest listening effort with respect to the electroencephalographic estimate (OLEosc, condition I: M = 0.87, SD = 1.93; condition II: M = 0.90, SD = 1.57) and the subjectively rated listening effort (LE-Scale, condition I: M = 6.15, SD = 0.90; condition II: M = 5.80, SD = 1.03). The subjectively rated listening effort lies on the LE-Scale between considerable and extreme effort.
Speech intelligibility
Table 1 Results of the post-hoc multiple pairwise comparisons (Bonferroni corrected), alpha level = 0.05; each cell gives p for condition I / condition II

Comparison          Normalized OLEosc   LE rating              SI rating                     Score
DSEoff vs DSEstr    1.00 / 1.00         0.74 / 1.00            0.96 / 1.00                   1.00 / 1.00
DSEoff vs DSEmed    1.00 / 1.00         1.00 / 1.00            0.69 / 1.00                   1.00 / 1.00
DSEoff vs ODM       0.04 / 0.04         0.017 / 0.01           0.017 / 0.011                 0.009 / 0.246
DSEstr vs DSEmed    1.00 / 1.00         1.00 / 0.83            1.00 / 1.00                   1.00 / 1.00
DSEstr vs ODM       0.01 / 1.00         3.6 × 10^-5 / 0.025    7.31 × 10^-5 / 0.0014         0.005 / 1.00
DSEmed vs ODM       0.07 / 0.008        0.0064 / 8.4 × 10^-5   3.22 × 10^-5 / 5.05 × 10^-5   0.0234 / 0.785

The right side of Fig. 4 depicts the mean percentage of correctly repeated words over the four HA configurations for condition I of the study. Significant effects of the tested HA settings were found, χ²(3) = 17.58, p < 0.001. Here, the multiple pairwise comparison was significant for the differences between the ODM and all other HA settings (DSEmed: p = 0.0234, DSEstr: p = 0.005, DSEoff: p = 0.009). Apart from the ODM setting, the participants reached a mean percentage of correctly repeated words of around 80% for the other three settings.
The electroencephalographic estimate of listening effort and the word score data were also negatively correlated (Pearson's correlation, condition I: r = -0.96). Regarding the SI-scales, there was a statistically significant effect with respect to the tested HA setting for condition I, χ²(3) = 26.57, p < 0.001, and condition II, χ²(3) = 22.88, p < 0.001. On the left side of Fig. 4, the mean results of the subjective speech intelligibility scale over the HA configurations for condition I are shown. Again, the ODM achieved the poorest results. Significant differences between the SI-scales were found for the ODM setting versus DSEmed, DSEstr and DSEoff (DSEmed: p = 3.22 × 10^-5, DSEstr: p = 7.31 × 10^-5, DSEoff: p = 0.017). The mean subjective speech intelligibility rating is between "sufficient" and "unsatisfactory" (SI-Scale, M = 5.77, SD = 1.01). In Fig. 5 (left), a similar behavior of the rated speech intelligibility can be seen for condition II. Again, only the difference between the ODM and the three other settings was significant (DSEmed: p = 5.05 × 10^-5, DSEstr: p = 0.0014, DSEoff: p = 0.011). Compared to condition I, the speech intelligibility for the DSEmed, DSEstr and DSEoff configurations is rated slightly better; the SI is in the range between "good" and "satisfactory". On the right side of Fig. 5, the mean and standard deviations of correctly answered questions are shown. Here, the differences between the four hearing aid settings were not significant.

Fig. 4 Left: mean and standard deviation values of the subjective speech intelligibility scale for condition I. Right: mean and standard deviation values of the percentage of correctly repeated words for each HA setting for condition I

Fig. 5 Left: mean and standard deviation values of the subjective speech intelligibility scale for condition II. Right: mean and standard deviation of correctly answered questions
Effects of the presentation order on the electroencephalographic listening effort measure
To analyze possible influences of the measurement time on the OLEosc, like fatigue effects or a decrease in motivation, the OLEosc values of each participant were sorted according to the presentation order. After this, the mean and standard deviation values were calculated for the two parts of the study. A repeated measures ANOVA was conducted on the OLEosc values to test whether an effect of the presentation order on the listening effort measure exists. Only in condition I was a statistically significant effect noticeable [condition I: F(3,36) = 3.85, p = 0.017; condition II: F(3,27) = 1.76, p = 0.17]. There, the difference between the second and the third presentation was statistically significant (p = 0.03). Note that this analysis was done in addition to the randomized testing of the HA settings during the experiments. The results of this analysis are depicted in Fig. 6.

The upper panel (Fig. 6a) represents the individual and the mean values of the normalized OLEosc sorted by the order of the applied HA configurations (x-axis, 1st to 4th setting, black to white bars) for condition I. The lower panel (Fig. 6b) shows the same for condition II. Apart from participant 1 (condition I, Fig. 6a) and participant 10 (condition II, Fig. 6b), there is no increasing or decreasing tendency of the electroencephalographic listening effort measure related to the presentation order. In the case of the two aforementioned participants, the presented HA configurations also required an increasing degree of listening effort (cf. Fig. 3; presentation order of participant 1: DSEmed, DSEstr, DSEoff, ODM; presentation order of participant 10: DSEstr, DSEmed, DSEoff, ODM). This means that the ODM setting, which was expected to require the largest effort, was presented last. The statistical analysis using the presentation order as a covariate showed similar results as the uncorrected ANOVA test (see Table 1): for condition I, the DSEoff versus ODM setting (p = 0.05) and the DSEstr versus ODM setting (p = 0.02) were significantly different, as well as, for condition II, the DSEmed versus ODM setting (p = 0.008). Here, the DSEoff versus ODM setting had a significance level of p = 0.06.

Fig. 6 Individual and mean results of the normalized electroencephalographic listening effort measure sorted by the presentation order of the HA settings for (a) condition I and (b) condition II. Below the x-axis of each panel, it is also shown whether the participants solved condition I or II in the first or second step of the experiment. Note that the ascending order tendencies for participants 1 (conditions I and II) and 10 (condition II) were related to the fact that the ODM condition, which was expected to require the largest listening effort, was presented at the end
Discussion
The main objectives of this study were: (1) to estimate listening effort by means of EEG data; and (2) to investigate the effects of different HA configurations on the listening effort.

The most important finding of this study is that the new electroencephalographic estimate of listening effort reflects the subjectively perceived effort of the participants with hearing loss in both listening conditions.

The results indicate that a higher value of the proposed listening effort measure OLEosc mirrors a higher subjectively rated effort. This suggests that the distribution of the instantaneous phase of the EEG in the range of the theta band is correlated with cognitive effort, in the sense that the phase is more clustered for a demanding condition. Regarding neuronal entrainment, the cortical oscillations can be modulated by exogenous stimuli or an endogenous source (Weisz and Obleser 2014).
Peelle et al. (2013) showed in an MEG study using noise-vocoded speech that slow cortical oscillations become entrained when linguistic information is available. They argued that this phase-locking relies not only on sensory characteristics, but also on the integration of multiple sources of knowledge, like top-down processes. Similar to these findings, Kerlin et al. (2010) found in their EEG study an attentional enhancement of the 4–8 Hz signal in the auditory cortex. They argued that for a successful encoding of the speech, the phase-locked cortical representation of the relevant speech stream is enhanced via an attentional gain mechanism. Regarding these aspects, the EEG phase clustering in the frequency range of the theta band, reflected in a high OLEosc value, can be interpreted as the result of an increased effortful endogenous modulation.

Furthermore, we can hypothesize that the defined measure can be linked to our previous findings on the phase synchronization stability of evoked responses (ERPs) via the phase reset theory (Strauss et al. 2010; Low and Strauss 2009; Corona-Strauss and Strauss 2017). In Low and Strauss (2009), the connection between the ERPs and the EEG was investigated. There, tone-evoked ERPs were recorded from participants focusing their attention on a specific target, as well as in an unfocused condition. It was shown that an artificial phase reset at a specific frequency in the range of the alpha–theta band of the unfocused data resulted in an increased N1 amplitude. This modified N1 amplitude was similar to the one gained from the attentional condition. Additionally, it was demonstrated that smaller variations in the instantaneous phase of the EEG lead to an enhancement of the attention-dependent N1 amplitude (cf. 'Introduction' section). Regarding this ERP phase clustering due to focused attention, we can hypothesize that there is a similar attention-related modulation of the ongoing EEG. We assume that both processes originate from the same attention networks (Raz and Buhle 2006).
The results show that, besides the correlation between the OLEosc and the subjective listening effort rating scale, there is also a correlation between the OLEosc and the speech intelligibility score. Furthermore, a benefit of the directional microphones (with and without noise reduction algorithm) over omnidirectional microphones was demonstrated. Ricketts (2005) discussed in a review that the use of the directional microphone technique can be an advantage in particular listening environments, for instance, environments where an increase of the SNR between 4 and 6 dB leads to an adequate level of speech intelligibility. Since directional microphones effectively improve the SNR, the audibility of the speech signal is enhanced, which is accompanied by a reduced listening effort. On the other hand, Hornsby (2013) found no additional benefit of the usage of a directional processing mode. There, the listening effort was assessed by subjective listening effort ratings, word recall and the visual reaction time gained from a dual-task paradigm. The next step would be to investigate the OLEosc and the subjective listening effort rating at an individually adjusted speech level, or at an SNR where the speech is highly intelligible in all the test modes. In such cases, the listening effort required to achieve a similar speech level could be examined (Brons et al. 2013). In addition, significant differences between the three directional microphone settings, i.e., an improvement due to the noise reduction algorithm, could not be shown, neither by the subjective rating scales and the speech scores nor by the OLEosc.
Sarampalis et al. (2009) examined the benefit of a noise reduction algorithm for the listening effort. They tested people with normal hearing sensitivity with processed and unprocessed speech samples. However, in that study, solely the noise reduction setting was tested, and not a combination of a directional microphone and a noise reduction algorithm. Regarding this aspect, it is possible that in the current study the additional effects of the noise reduction algorithm on the listening effort are not trackable with the applied experimental paradigm. Additionally, the results of the individually preferred HA settings showed no clear trend towards an overall favored HA setting. This could be related to individual preferences, like a highly individualized noise annoyance (Brons et al. 2013). It is also possible that the differences between the HA settings are marginal and therefore not detectable with the applied paradigm. Thus, a general recommendation as to which of the tested noise reduction settings reduces the listening effort maximally cannot be made.
Although a randomized presentation order of the HA settings was applied, we cannot fully exclude possible order effects on the subjective as well as the objective estimates, as the randomization was not fully balanced. However, the (individual) results show no systematic change over the measurement time, like an increasing or decreasing tendency of the OLEosc measure. Such tendencies could be expected due to fatigue effects (Boksem et al. 2005), stress or a lack of concentration over the measurement time. As a result, the participants would either expend additional effort to solve the auditory task or lose the motivation to perform the task (Sarter et al. 2006).
Comparing the perceived speech intelligibility and listening effort of conditions I and II with each other, there is a tendency towards increased values for condition I. This means that condition I required slightly more effort and the audibility was also reduced in this case. Nevertheless, the difference between conditions I and II for the same participants (ten participants) was not statistically significant. At first glance, this result is not expected, as a better SNR was used in condition I; in terms of the physical part of the speech discrimination process, the speech intelligibility should be poorer for condition II. However, if speech information is inaudible, the cognitive system also makes use of context and linguistic information to support speech understanding, i.e., the context information can help to fill in the missing auditory information (Edwards 2007). In condition I, sentences from a speech intelligibility test were used, whose content is not predictable from context (duration approx. 2 s), and the responses were expected directly after each sentence. Thus, we could assume that the participants realized how much of the information was inaudible for them. In condition II, the speech material consisted of a continuous audiobook; there, the participant listened for 5 min to each part of the audiobook, the listening period was much longer, and the participants had to answer text-related questions only after listening to the whole part. We could interpret that in this case the participants could make use of the context information to support speech understanding. With respect to this aspect, we could also assume that they had a vaguer idea of how much of the information they really missed.
An advantage of the new measure is that we obtain the listening effort directly during the auditory task. The benefit of such an objective method is that it is not subjectively biased. Additionally, the listening effort could be measured continuously and on finer levels compared to a discrete rating scale with a limited number of categories. However, the investigation of whether the OLEosc can differentiate marginal effort differences was beyond the scope of this study.

Nevertheless, we still have to test this measure in different HA configurations, and it also has to be validated in future studies that are more closely related to standard clinical practice on an individual basis. Further work should also analyze the temporal progress of this measure during the listening process.
Conclusion
In this study, we have presented a novel electroencephalographic method to estimate listening effort using ongoing EEG data. The results suggest that the new listening effort measure, which is based on the distribution of the instantaneous phase of the EEG, reflects the exerted listening effort of people with hearing loss. Furthermore, different directional processing modes of the HAs were tested with respect to a reduction of the listening effort. The new estimate of listening effort indicates that a directional processing mode can reduce the listening effort in specific listening situations.
Acknowledgements This work has been partially supported by DFG-Grant STR 994/1-1, BMBF-Grant 03FH036I3, and BMBF-Grant 03FH004IN3.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Ahlstrom JB, Horwitz AR, Dubno JR (2013) Spatial separation
benefit for unaided and aided listening. Ear Hear 35:72–85
Arlinger S (2003) Negative consequences of uncorrected hearing
loss—a review. Int J Audiol 42(Suppl 2):17–20
Bernarding C, Corona-Strauss FI, Latzel M, Strauss DJ (2010)
Auditory streaming and listening effort: an event related
potential study. Conf Proc IEEE Eng Med Biol Soc 2010:
6817–6820
Bernarding C, Strauss D, Hannemann R, Seidler H, Corona-Strauss F
(2013) Neural correlates of listening effort related factors:
influence of age and hearing impairment. Brain Res Bull
91:21–30
Bernarding C, Strauss DJ, Hannemann R, Corona-Strauss FI (2012)
Quantification of listening effort correlates in the oscillatory eeg
activity: a feasibility study. In: Proceedings of the annual
international conference of the IEEE engineering in medicine
and biology society, EMBS, pp. 4615–4618
Boksem MAS, Meijman TF, Lorist MM (2005) Effects of mental
fatigue on attention: an erp study. Cogn Brain Res 25(1):107–116
Bru
¨el, Kjær (2013) Hand-held analyzer types 2250 and 2270–user
manual, Denmark
Brockhaus-Dumke A, Mueller R, Faigle U, Klosterkoetter J (2008)
Sensory gating revisited: Relation between brain oscillations and
auditory evoked potentials in schizophrenia. Schizophr Res
99(1–3):238–249
Brons I, Houben R, Dreschler WA (2013) Perceptual effects of noise
reduction with respects to personal preference, speech intelligi-
bility, and listening effort. Ear Hear 34(1):29–41
BSA Education Committee (2008) Guidelines on the acoustics of
sound field audiometry in clinical audiological applications.
Technical Report, British Society of Audiology (BSA)
Busch NA, VanRullen R (2010) Spontaneous eeg oscillations reveal
periodic sampling of visual attention. Proc Natl Acad Sci
107(37):16048–16053
Corona-Strauss FI, Strauss DJ (2017) Circular organization of the
instantaneous phase in erps and the oscillatory eeg due to
selective attention. IEEE NER (in press)
Data Base: AudioMicro I (2013) Stock audio library. http://soundb
ible.com/. Online—30 Jan 2014
Ding N, Simon JZ (2012) Emergence of neural encoding of auditory
objects while listening to competing speakers. Proc Natl Acad
Sci USA 109(29):11854–11859
Ding N, Simon JZ (2014) Cortical entrainment to continuous speech:
functional roles and interpretations. Front Hum Neurosci
8(MAY):1–7
214 Cogn Neurodyn (2017) 11:203–215
123
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Downs DW (1982) Effects of hearing aid use on speech discrimina-
tion and listening effort. J Speech Hear Disord 47:189–193
Edwards E (2007) The future of hearing aid technology. Trends
Amplif 11:31–45
Gatehouse S, Gordon J (1990) Response times to speech stimuli as
measures of benefit from amplification. Br J Audiol 24(1):63–68
Gatehouse S, Noble W (2004) The speech, spatial and qualities of
hearing scale (ssq). Int J Audiol 43:85–99
Goldwater BC (1972) Psychological significance of pupillary move-
ments. Psychol Bull 77(5):340–355
Haab L, Trenado C, Mariam M, Strauss DJ (2011) Neurofunctional
model of large-scale correlates of selective attention governed by
stimulus-novelty. Cogn Neurodyn 5:103–111
Hall J (2007) New handbook for auditory evoked responses. Pearson
Allyn and Bacon, Boston
Hicks CB, Tharpe AM (2002) Listening effort and fatigue in school-
age children with and without hearing loss. J Speech Lang Hear
Res 45(3):573–584
Hillyard SA, Hink RF, Schwent VL, Picton TW (1973) Electrical
signs of selective attention in the human brain. Science
182:177–180
Hillyard SA, Vogel EK, Luck SJ (1998) Sensory gain control as a
mechanism of selective attention: electrophysiological and
neuroimaging evidence. Philos Trans R Soc Lond B Biol Sci
353(1373):1257–1270
Holube I, Fredelake S, Vlaming M, Kollmeier B (2010) Development
and analysis of an International Speech Test Signal (ISTS). Int J
Audiol 49(12):891–903
Hornsby BW (2013) The effects of hearing aid use on listening effort
and mental fatigue associated with sustained speech processing
demands. Ear Hear 34(5):523–534
Humes LE (1999) Dimensions of hearing aid outcome. J Am Acad
Audiol 10:26–39
Kahneman D (1973) Attention and Effort. Prentice Hall, Englewood
Cliffs, NJ
Kerlin JR, Shahin AJ, Miller LM (2010) Attentional gain control of ongoing cortical speech representations in a "cocktail party". J Neurosci 30(2):620–628
Kiessling J, Pichora-Fuller MK, Gatehouse S, Stephens D, Arlinger S,
Chisolm T, Davis AC, Erber NP, Hickson L, Holmes A,
Rosenhall U, von Wedel H (2003) Candidature for and delivery
of audiological services: special needs of older people. Int J
Audiol 42(Suppl 2):2S92–2S101
Low YF, Strauss DJ (2009) EEG phase reset due to auditory attention: an inverse time-scale approach. Physiol Meas 30(8):821–832
Lunner T, Rudner M, Rönnberg J (2009) Cognition and hearing aids. Scand J Psychol 50(5):395–403
Mackersie CL, Cones H (2011) Subjective and psychophysiological
indexes of listening effort in a competing-talker task. J Am Acad
Audiol 22:113–122
McGarrigle R, Munro KJ, Dawes P, Stewart AJ, Moore DR, Barry JG, Amitay S (2014) Listening effort and fatigue: what exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group 'white paper'. Int J Audiol 53(7):433–440
Mesgarani N, Chang EF (2012) Selective cortical representation of
attended speaker in multi-talker speech perception. Nature
485(7397):233–236
Council of Europe, Modern Languages Division, Strasbourg (2007) Common European Framework of Reference for Languages: learning, teaching, assessment. Cambridge University Press, Cambridge
Ng BSW, Kayser C, Schroeder T (2012) A precluding but not ensuring role of entrained low-frequency oscillations for auditory perception. J Neurosci 32(35):12268–12276
Ng E, Classon E, Larsby B, Arlinger S, Lunner T, Rudner M, Rönnberg J (2014) Dynamic relation between working memory capacity and speech recognition in noise during the first 6 months of hearing aid use. Trends Hear 18:1–10
Peelle J, Gross J, Davis M (2013) Phase-locked responses to speech in
human auditory cortex are enhanced during comprehension.
Cereb Cortex 23(6):1378–1387
Pichora-Fuller MK, Singh G (2006) Effects of age on auditory and
cognitive processing: implications for hearing aid fitting and
audiologic rehabilitation. Trends Amplif 10:29–59
Ponjavic-Conte KD, Dowdall JR, Hambrook DA, Luczak A, Tata MS (2012) Neural correlates of auditory distraction revealed in theta-band EEG. Neuroreport 23(4):240–245
Rao A, Zhang Y, Miller S (2010) Selective listening of concurrent
auditory stimuli: an event-related potential study. Hear Res
268(1–2):123–132
Raz A, Buhle J (2006) Typologies of attentional networks. Nat Rev
Neurosci 7(5):367–379
Ricketts TA (2005) Directional hearing aids: then and now. J Rehabil Res Dev 42(4 Suppl 2):133–144
Sarampalis A, Kalluri S, Edwards B, Hafter E (2009) Objective
measures of listening effort: effects of background noise and
noise reduction. J Speech Lang Hear Res 52:1230–1240
Sarter M, Gehring W, Kozak R (2006) More attention must be paid:
the neurobiology of attentional effort. Brain Res Rev
51(2):145–160
Sauseng P, Klimesch W, Gruber WR, Hanslmayr S, Freunberger R,
Doppelmayr M (2007) Are event-related potential components
generated by phase resetting of brain oscillations? A critical
discussion. Neuroscience 146:1435–1444
Schmidt M (2012) Musicians and hearing aid design: is your hearing instrument being overworked? Trends Amplif 16:140–145
Schulte M (2009) Listening effort scaling and preference rating for hearing aid evaluation. In: Workshop hearing screening and technology, HearCom, Brussels. http://hearcom.eu/about/DisseminationandExploitation/Workshop.html. Online—29 Jan 2014
Shinn-Cunningham BG, Best V (2008) Selective attention in normal
and impaired hearing. Trends Amplif 12:283–299
Strauss DJ, Corona-Strauss FI, Trenado C, Bernarding C, Reith W,
Latzel M, Froehlich M (2010) Electrophysiological correlates of
listening effort: neurodynamical modeling and measurement.
Cogn Neurodyn 4:119–131
Thoma L (2007) Lesehefte: Deutsch als Fremdsprache – Niveaustufe B1: Der Taubenfütterer und andere Geschichten [The pigeon feeder and other stories]. Hueber Verlag GmbH & Co. KG
Volberg L, Kulka M, Sust CA, Lazarus H (2001) Ergonomische Bewertung der Sprachverständlichkeit [Ergonomic evaluation of speech intelligibility]. In: Fortschritte der Akustik – DAGA 2001, Hamburg
Wagener K, Kühnel V, Kollmeier B (1999) Entwicklung und Evaluation eines Satztests in deutscher Sprache I: Design des Oldenburger Satztests [Development and evaluation of a German sentence test I: design of the Oldenburg sentence test]. Z Audiol 38(1):4–15
Wang Y, Wang R, Zhu Y (2017) Optimal path-finding through mental
exploration based on neural energy field gradients. Cogn
Neurodyn 11:99–111
Weisz N, Obleser J (2014) Synchronisation signatures in the listening
brain: a perspective from non-invasive neuroelectrophysiology.
Hear Res 307:16–28
Zekveld AA, Kramer SE, Festen JM (2010) Pupil response as an
indication of effortful listening: the influence of sentence
intelligibility. Ear Hear 31:480–490
Zion Golumbic EM, Ding N, Bickel S, Lakatos P, Schevon CA, McKhann GM, Goodman RR, Emerson R, Mehta AD, Simon JZ, Poeppel D, Schroeder CE (2013) Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party". Neuron 77(5):980–991
The development of hearing aid technology has accelerated over the past decade. Hearing aids are converging with consumer electronics in the area of hearables, and a recent government law mandating the creation of an over-the-counter hearing aid category will continue to bring a more consumer electronics focus to hearing aids. Meanwhile, new advances in hearing science are redefining the criteria of who needs hearing help. This talk will review these new intersections of technology and hearing need. It will also detail the technological and psychoacoustical challenges that face the ability of these new technologies to meet the needs of current hearing aid wearers, the needs of this emerging segment of the hearing impaired, and changes to hearing health delivery.