CASE REPORT
published: 21 February 2017
doi: 10.3389/fnagi.2017.00030
Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals
Luodi Yu1, Aparna Rao2, Yang Zhang1*, Philip C. Burton3, Dania Rishiq4 and Harvey Abrams4
1 Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, USA, 2 Department of Speech and Hearing Sciences, Arizona State University, Tempe, AZ, USA, 3 Office of the Associate Dean for Research, College of Liberal Arts, University of Minnesota, Minneapolis, MN, USA, 4 Department of Speech Pathology and Audiology, University of South Alabama, Mobile, AL, USA
Edited by:
Aurel Popa-Wagner,
University of Rostock, Germany
Reviewed by:
Erin Ingvalson,
Florida State University, USA
Eliane Schochat,
University of São Paulo, Brazil
Ahmad Nazlim Bin Yusoff,
Universiti Kebangsaan Malaysia,
Malaysia
*Correspondence:
Yang Zhang
zhanglab@umn.edu
Received: 21 October 2016
Accepted: 06 February 2017
Published: 21 February 2017
Citation:
Yu L, Rao A, Zhang Y, Burton PC,
Rishiq D and Abrams H
(2017) Neuromodulatory Effects of
Auditory Training and Hearing Aid
Use on Audiovisual Speech
Perception in Elderly Individuals.
Front. Aging Neurosci. 9:30.
doi: 10.3389/fnagi.2017.00030
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ), targeting speechreading, during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROIs), including auditory cortex and visual cortex for uni-sensory processing and the superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving the auditory cortex ROI, the STS ROI, and the functional connectivity between the uni-sensory ROIs and the STS from pretest to posttest in both cases. These data provide initial evidence of experience-driven malleability of cortical function for AV speech perception in elderly hearing-impaired people and call for further studies with much larger samples and systematic controls to fill the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population.
Keywords: brain plasticity, auditory training, hearing aid, audiovisual integration, speech perception, fMRI,
functional connectivity
INTRODUCTION
Hearing loss is common among older people. Over 30% of the adult population between the ages
of 65 and 74 and nearly 50% of people older than 75 have a hearing loss that affects communication
and consequently psychosocial health (National Institute on Deafness and Other Communication
Disorders, https://www.nidcd.nih.gov). Despite gains achieved through the advanced signal-processing technology of hearing aids (HAs), users report persistent problems with speech perception in noise relative to their premorbid experience (Kochkin, 2007), and rehabilitative training has been proposed to address these problems (Boothroyd, 2007; Moore and Amitay, 2007).
A topic of current interest in audiology and aging neuroscience concerns the benefits and neuromodulatory effects of HA use and auditory training (Pichora-Fuller and Levitt, 2012; Anderson et al., 2013; Ferguson and Henshaw, 2015; Morais et al., 2015; Rao et al., 2017). Electroencephalography (EEG) studies have shown mixed results at the subcortical (Philibert et al., 2005; Dawes et al., 2013) and cortical levels (Bertoli et al., 2011; Dawes et al., 2014). Although functional magnetic resonance imaging (fMRI) can provide millimeter-level spatial resolution for investigating the neuroanatomical basis of auditory plasticity (Hall, 2006), only one fMRI study has documented neuromodulatory effects of HA use, observed after 3 months in eight adults aged 30–53 who had congenital sensorineural hearing loss (SNHL; Hwang et al., 2006).
As speech perception is inherently a multi-sensory process (McGurk and MacDonald, 1976; see review in Rosenblum, 2008), aural rehabilitation involving speechreading can be designed to better utilize visual articulation cues. Speech training that includes visual articulation has been found to facilitate second-language learning in adulthood (Zhang et al., 2009). In particular, the addition of visual cues can improve speech recognition by as much as 60%, depending on the materials used (Erber, 1969; Summerfield, 1979; Middelweerd and Plomp, 1987; Bernstein et al., 2013), which is equivalent to an increase of 5–18 dB in signal-to-noise ratio (S/N). However, there have been no imaging data from individuals with age-related SNHL to elucidate the cortical mechanisms mediating the auditory rehabilitation process.
In this report, we present fMRI data from two patients with age-related SNHL to examine the effects of HA use and audiovisual (AV) training. Our experiment adopted the well-known McGurk effect, in which a fused /da/ is perceived from visual articulation of /ga/ dubbed with a /ba/ sound (McGurk and MacDonald, 1976). Previous research on normal-hearing listeners has identified the posterior superior temporal sulcus (pSTS) as the cortical locus of McGurk perception (Beauchamp et al., 2010; Matchin et al., 2014), with activity within the left pSTS correlated with the magnitude of the McGurk effect (Nath and Beauchamp, 2011, 2012). Moreover, connectivity between the superior temporal sulcus (STS) and sensory regions was found to vary dynamically with the S/N of the sensory input. Based on these findings and the exploratory nature of the current case report, we expected to see neuromodulatory effects associated with three regions of interest (ROIs) within the left hemisphere: an auditory ROI within Heschl's gyrus and a visual ROI within the occipitotemporal lobe, representing uni-sensory regions, and an AV ROI within the pSTS.
PATIENTS AND METHODS
Subjects and Hearing Aids
Two volunteers were recruited from an audiology clinic. Both were part of a larger-scale behavioral study (Rishiq et al., 2016). Case 1 (C1) was a 68-year-old male with bilaterally normal thresholds through 1 kHz, precipitously sloping to moderately severe SNHL in the left ear and severe SNHL in the right ear. Case 2 (C2) was a 52-year-old female with bilateral mild-to-moderate, relatively flat SNHL (for audiometric thresholds, see Figure 1 and Table 1). C1 received the HA trial only, whereas C2 received the HA trial plus AV training. These treatments were administered as part of the Rishiq et al. (2016) study of HA use with and without ReadMyQuips™ (RMQ) training, in which C1 and C2 were participants. Both patients were first-time HA users, native speakers of American English, and right-handed as measured using the Edinburgh Handedness Inventory (EHI; Oldfield, 1971).
FIGURE 1 | Air-conduction audiometric thresholds in dB HL for the two cases. Red circle represents right ear, and blue cross represents left ear.
TABLE 1 | Air-conduction audiometric thresholds in dB HL for the two cases.

Frequency (Hz)   250  500  750  1000  1500  2000  3000  4000  6000  8000
Case 1   R        20   20   15    15    20    35    70    65    70    75
         L        20   20   20    20    45    65    60    60    60    60
Case 2   R        15   15   30    35    45    40    40    35    40    50
         L        10   15   25    35    40    35    35    35    40    50

R stands for right ear, and L stands for left ear.
Behavioral screening with a protocol from Nath and Beauchamp (2012) showed that neither of them
was a perfect McGurk perceiver. Medical histories showed no
cognitive, speech-language, or other chronic medical disorders.
They passed the safety screening requirements for the fMRI
procedure at the Center for Magnetic Resonance Research of the
University of Minnesota, MN, USA, and informed consent was
obtained from each participant following a protocol approved by
the Institutional Review Board of the University of Minnesota.
Both patients were fitted binaurally with Starkey 3 Series i110 receiver-in-the-canal (RIC) 13 HAs (Eden Prairie, MN, USA) according to National Acoustic Laboratories Non-Linear 2 (NAL-NL2) prescription targets, which were verified with real-ear probe microphone measurements. The participants wore the HAs for 1 week, after which parameters were adjusted as needed based upon the participants' feedback. Both patients wore the HAs for at least 6 h/day throughout the study period, which was verified using the HA data-logging feature. C2 was instructed to use the computerized training program for at least 30 min/day, 5 days/week, during the second 4 weeks of the 8-week study period. Compliance was logged daily in a journal.
Training Program
The auditory rehabilitation used RMQ (http://sensesynergy.com/). RMQ is a computerized program designed to improve speech understanding through AV training in the presence of background noise. RMQ training has been shown to improve HA users' speech-in-noise perception as well as their confidence in target detection in an auditory selective attention task (Abrams et al., 2015; Rao et al., 2017).
Stimuli and fMRI Data Collection
The event-related fMRI experiment contained the following
stimuli presented in five runs: 50 auditory-only /ba/ and /ga/
syllables (AO condition), 50 visual-only /ba/ and /ga/ syllables
(VO condition), 50 AV /ba/ and /ga/ syllables (congruent
condition), 50 McGurk incongruent AV syllables (i.e., visual
/ga/ with auditory /ba/; McGurk incongruent condition), 50
non-McGurk incongruent AV syllables (i.e., visual /ba/ with
auditory /ga/; non-McGurk incongruent condition). In addition, 25 AV /la/ syllables were presented randomly as decoy trials to maintain the participants' attention. The participants were instructed to watch and listen to the stimuli carefully and
press a button upon hearing a /la/ sound. Each 1-s trial contained one syllable, with a random inter-stimulus interval of 2, 4, or 6 s. Auditory stimuli were delivered through Avotec Silent Scan® headphones (Avotec, Inc., Stuart, FL, USA) at the
participants’ comfortable level (about 108 dB SPL). Visual stimuli
were presented through a projector screen.
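For illustration, the following Python sketch generates one possible randomized trial sequence consistent with this design (condition counts, 1-s trials, and a jittered 2-, 4- or 6-s inter-stimulus interval). The condition labels, the uniform sampling of intervals, and the pooling of trials across the five runs are illustrative assumptions rather than the exact randomization scheme used in the experiment.

import random

# Trial counts as described in the text (totals across the five runs).
CONDITIONS = {
    "AO": 50,                     # auditory-only /ba/ and /ga/
    "VO": 50,                     # visual-only /ba/ and /ga/
    "AV_congruent": 50,           # matching auditory and visual syllables
    "McGurk_incongruent": 50,     # visual /ga/ dubbed with auditory /ba/
    "nonMcGurk_incongruent": 50,  # visual /ba/ dubbed with auditory /ga/
    "decoy_la": 25,               # AV /la/ decoy trials (button press)
}

def make_trial_sequence(seed=0):
    """Return a shuffled list of (condition, isi_in_seconds) tuples.

    Each 1-s trial is followed by an inter-stimulus interval of 2, 4,
    or 6 s; uniform random sampling of the interval is assumed here.
    """
    rng = random.Random(seed)
    trials = [cond for cond, n in CONDITIONS.items() for _ in range(n)]
    rng.shuffle(trials)
    return [(cond, rng.choice([2, 4, 6])) for cond in trials]

if __name__ == "__main__":
    sequence = make_trial_sequence()
    print(len(sequence), "trials; first three:", sequence[:3])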
C1’s fMRI data were collected before (pretest) and after
8 weeks (posttest) of HA use. The same time frame of data
collection applied to C2 with the identical protocol. fMRI
scans were acquired using a Siemens 3-Tesla MR scanner with a 12-channel head coil. For each session, the participants underwent eight scans: a T1-weighted MPRAGE anatomical scan to obtain a structural volume (TR = 2600 ms, TE = 3.02 ms, flip angle = 8°) with 176 sagittal slices; an independent functional localizer for identification of ROIs; five main experimental T2-weighted gradient echo-planar imaging (EPI) scans for detection of McGurk-related BOLD effects; and a reversed-phase EPI scan for distortion correction (Smith et al., 2004). EPI parameters were as follows: TR = 2000 ms, TE = 28 ms, flip angle = 80°, 34 axial slices/volume, 150 volumes for the functional localizer, and 138 volumes/run for the main experiment.
To determine individualized ROIs, an independent functional localizer task was adapted from Nath and Beauchamp (2012). It included blocks of word stimuli presented visually or auditorily (five auditory-only and five visual-only blocks, in random order), each lasting 20 s, with 10 s of fixation baseline between blocks. Each block contained ten 2-s trials with one word per trial. The participants were instructed to watch and listen to the stimuli carefully.
fMRI Data Analysis
Analyses were performed using the Analysis of Functional
NeuroImages software (AFNI; Cox, 1996). The data were
analyzed individually following the procedures described
below. Pre- to post-test changes were examined through two
levels of analyses: ROI analysis and functional connectivity
analysis.
All EPI data underwent standard preprocessing steps, including registration to the T1-weighted anatomical scan, smoothing with a Gaussian blur of 4 mm FWHM, and distortion correction using FSL's topup tool (Smith et al., 2004). Functional localizers from the two sessions were combined for ROI definition (Figure 2). Specifically, clusters of significant voxels (corrected for multiple comparisons using False Discovery Rate thresholding at q < 0.05) were used to functionally define ROIs for each participant separately within the left hemisphere using FreeSurfer (Dale et al., 1999) and AFNI's Surface Mapper
FIGURE 2 | (A) Functionally defined regions of interest (ROIs) identified through the functional localizer for the two cases. The audiovisual (AV) ROI (red) contains voxels responsive to both auditory and visual words in the posterior STS (pSTS). The auditory ROI (green) contains voxels responsive to auditory words within Heschl's gyrus. The visual ROI (yellow) contains voxels responsive to visual words within extrastriate lateral occipitotemporal cortex. (B) The patients' surface mapping showing activity in each condition. Clusters were identified through voxel-wise statistics corrected for multiple comparisons using the False Discovery Rate algorithm with q (adjusted p) < 0.05.
(SUMA; Saad and Reynolds, 2012). Three ROIs were chosen
based on previous literature on McGurk perception: the AV
ROI included voxels responsive to both auditory and visual
words in the posterior STS; the auditory (A) ROI included
voxels responsive to auditory words only within Heschl’s
gyrus; and the visual (V) ROI included voxels responsive to
visual words only within extrastriate lateral occipitotemporal
cortex.
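To make the ROI-definition logic concrete, a simplified volumetric sketch in Python follows. In the actual analysis, ROIs were defined on the cortical surface with FreeSurfer and SUMA; here, the localizer p-value maps and the anatomical masks (Heschl's gyrus, occipitotemporal cortex, pSTS) are assumed to be available as NumPy arrays, and a standard Benjamini-Hochberg procedure stands in for the FDR thresholding.

import numpy as np

def fdr_mask(p, q=0.05):
    """Benjamini-Hochberg FDR: boolean mask of voxels surviving level q."""
    p_sorted = np.sort(p.ravel())
    m = p_sorted.size
    passed = p_sorted <= q * np.arange(1, m + 1) / m
    cutoff = p_sorted[np.nonzero(passed)[0].max()] if passed.any() else -1.0
    return p <= cutoff

def define_rois(p_auditory, p_visual, heschl, occipitotemporal, psts, q=0.05):
    """Define the three ROIs from localizer significance maps.

    p_auditory, p_visual : voxel-wise p values for the auditory-word and
    visual-word localizer contrasts. heschl, occipitotemporal, psts :
    boolean anatomical masks (assumed inputs for this sketch).
    """
    sig_a = fdr_mask(p_auditory, q)
    sig_v = fdr_mask(p_visual, q)
    return {
        "AV": sig_a & sig_v & psts,     # responsive to both, within pSTS
        "A": sig_a & heschl,            # auditory-responsive, Heschl's gyrus
        "V": sig_v & occipitotemporal,  # visual-responsive, occipitotemporal
    }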
Beta coefficients were first obtained using general linear modeling (GLM) for each stimulus condition, scaled such that the units were percentage signal change relative to the voxel mean, and then averaged across voxels within each ROI. These mean beta values served as the dependent variables in the ROI analyses.
We then performed voxel-wise functional connectivity analyses between the multi-sensory ROI and the uni-sensory ROIs using a beta series method (Rissman et al., 2004), in which the multi-sensory (AV) ROI served as the seed.
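A minimal sketch of this computation is given below, assuming that single-trial beta estimates for one condition are available as a trials-by-voxels NumPy array (the actual analysis was carried out in AFNI). The seed series is the trial-by-trial mean beta within the AV ROI, and connectivity is summarized as the mean correlation between that series and each uni-sensory ROI voxel's beta series, matching the connectivity values reported in Tables 2, 3.

import numpy as np

def beta_series_connectivity(betas, seed_mask, target_mask):
    """Beta series connectivity (Rissman et al., 2004), sketched.

    betas       : (n_trials, n_voxels) single-trial GLM beta estimates
                  for one condition, in percent-signal-change units.
    seed_mask   : boolean voxel mask for the seed (AV) ROI.
    target_mask : boolean voxel mask for a uni-sensory ROI.
    Returns the mean correlation between the seed ROI's trial-by-trial
    mean beta series and each target-ROI voxel's beta series.
    """
    seed_series = betas[:, seed_mask].mean(axis=1)  # (n_trials,)
    target = betas[:, target_mask]                  # (n_trials, k)
    rs = [np.corrcoef(seed_series, target[:, v])[0, 1]
          for v in range(target.shape[1])]
    return float(np.mean(rs))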
To better quantify changes from pretest to posttest, individual-level statistics were obtained by bootstrapping beta series within each ROI across trials for each condition. For example, to test whether the auditory ROI in the AO condition showed a significant change from pretest to posttest, we resampled the beta coefficients across trials 1,000 times with replacement for pretest and posttest separately and then compared whether the distributions from the two test sessions differed significantly. Based on the overall pattern of increased activity and functional connectivity from pretest to posttest in both cases, a one-tailed test with a significance level of 0.05 was used for the current case report (see Tables 2, 3).
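As an illustration of this procedure, a minimal Python sketch follows. The exact resampling and comparison rules used in the original analysis may differ in detail, so the comparison rule below is an assumption.

import numpy as np

def bootstrap_increase_p(pre, post, n_boot=1000, seed=0):
    """One-tailed bootstrap p value for a pretest-to-posttest increase.

    pre, post : 1-D arrays of per-trial values (e.g., ROI-mean betas for
    one condition at each session). Each session's trials are resampled
    with replacement n_boot times; p is estimated as the proportion of
    resamples in which the posttest mean fails to exceed the pretest
    mean (an illustrative comparison rule).
    """
    rng = np.random.default_rng(seed)
    pre_means = np.array([rng.choice(pre, size=pre.size).mean()
                          for _ in range(n_boot)])
    post_means = np.array([rng.choice(post, size=post.size).mean()
                           for _ in range(n_boot)])
    return float(np.mean(post_means <= pre_means))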
RESULTS
Case 1
The only significant change in ROI activity from pretest to posttest occurred in the AO condition, where activity within the AV ROI increased significantly (AV: p < 0.05; Table 2 and Figure 2).
In the functional connectivity analysis, both uni-sensory ROIs became significantly more synchronized with the multi-sensory ROI from pretest to posttest in the AO condition (A-AV: p < 0.001; V-AV: p < 0.01). However, because the visual ROI showed barely positive activation in the AO condition, the observed V-AV connectivity change in this condition might reflect an artifact of increased activity in the AV ROI rather than a genuine change in functional connectivity between the two ROIs. Similarly, in the VO condition, the trend toward increasing synchronization between the auditory ROI and the AV ROI from pretest to posttest (A-AV: p = 0.075) might merely reflect slight stimulus-driven changes in the same direction in both ROIs. In the AV congruent condition, only the visual ROI became significantly more synchronized with the AV ROI from pretest to posttest (V-AV: p < 0.05). In the McGurk incongruent condition, only the visual ROI displayed a trend toward increased synchronization with the AV ROI from pretest to posttest (V-AV: p = 0.075). In the non-McGurk incongruent condition, both uni-sensory ROIs became significantly more synchronized with the AV ROI (A-AV: p < 0.01; V-AV: p < 0.01).
Case 2
In the AO condition, activity in the auditory ROI and the AV ROI increased significantly from pretest to posttest (A: p < 0.05; AV: p < 0.01), with no significant change in the visual ROI (Table 3 and Figure 2). In the AV congruent condition, activity in the auditory ROI increased significantly from pretest to posttest (A: p < 0.001), with no significant change in the visual and AV ROIs. In the McGurk incongruent condition, activity in the auditory ROI and the AV ROI increased significantly from pretest to posttest (A: p < 0.001; AV: p < 0.001), with no significant change in the visual ROI. In the non-McGurk incongruent condition, activity in the AV ROI increased significantly from pretest to posttest (AV: p < 0.01), along with a trend toward increased activity in the auditory ROI (A: p = 0.068) and no significant change in the visual ROI.
In the AO condition, no significant change in functional connectivity was observed. In the VO condition, the auditory ROI showed a trend toward increased synchronization with the AV ROI (A-AV: p = 0.056). Again, however, this trend might simply reflect slight stimulus-driven changes in the same direction in both ROIs rather than a genuine connectivity change. In the McGurk incongruent condition, the auditory ROI became significantly more synchronized with the AV ROI from pretest to posttest (A-AV: p < 0.01), and the visual ROI showed a trend toward increased synchronization with the AV ROI (V-AV: p = 0.099). In the non-McGurk incongruent condition, both uni-sensory ROIs became significantly more synchronized with the AV ROI (A-AV: p < 0.01; V-AV: p < 0.01).
DISCUSSION
Case 1: Cortical Plasticity Associated with
Hearing Aid Use
The results showed that C1's AV ROI became more responsive during listening to AO syllables after HA use. This finding is novel, as our report is the first to examine effects of HA use from the perspective of AV speech perception and neural plasticity involving multi-sensory integration. Moreover, whether there is an "acclimatization" effect, in the sense of a change in electrophysiological responses to acoustic input after HA use, remains controversial (Dawes et al., 2014). We suggest that the observed enhancement in the STS following HA use might reflect an increased tendency to match speech sounds with corresponding abstract phonological representations in multi-sensory form (Barraclough et al., 2005).
TABLE 2 | Case 1 (hearing aid (HA) use) data showing activity of the three regions of interest (ROIs)—auditory ROI, visual ROI and audiovisual (AV) ROI—and functional connectivity between the uni-sensory ROIs and the AV ROI, in the five stimulus conditions at pretest and posttest.

Condition                Auditory ROI            Visual ROI             AV ROI                  Auditory-AV              Visual-AV
                         Pre   Post  p           Pre   Post  p          Pre   Post  p           Pre   Post  p            Pre   Post  p
Auditory                 0.39  0.43  0.528       0.05  0.15  0.704      0.23  0.39  <0.05*      0.19  0.72  <0.001***    0.18  0.47  <0.01**
Visual                   0.09  0.00  0.295       0.21  0.15  0.713      0.24  0.36  0.185       0.49  0.58  0.075        0.25  0.30  0.242
Congruent                0.38  0.49  0.260       0.19  0.11  0.293      0.33  0.38  0.189       0.49  0.58  0.221        0.24  0.39  <0.05*
McGurk incongruent       0.42  0.45  0.626       0.30  0.13  0.910      0.29  0.45  0.175       0.48  0.56  0.346        0.14  0.30  0.075
Non-McGurk incongruent   0.42  0.61  0.136       0.31  0.20  0.512      0.39  0.46  0.198       0.59  0.78  <0.01**      0.22  0.46  <0.01**

For each ROI, the first two columns give percentage signal change in activity relative to baseline at pretest and posttest; the third column gives the significance of the change in activity from pretest to posttest obtained from bootstrapping. For each pair of ROIs, the first two columns give connectivity measured as the averaged correlation coefficient between voxels within the uni-sensory ROI and the AV ROI at pretest and posttest; the third column gives the significance (p value) of the change in functional connectivity from pretest to posttest obtained from bootstrapping. *p < 0.05; **p < 0.01; ***p < 0.001.
TABLE 3 | Case 2 (HA use + AV training) data showing activity of the three ROIs—auditory ROI, visual ROI and AV ROI—and functional connectivity between the uni-sensory ROIs and the AV ROI, in the five stimulus conditions at pretest and posttest.

Condition                Auditory ROI            Visual ROI             AV ROI                  Auditory-AV              Visual-AV
                         Pre   Post  p           Pre   Post  p          Pre   Post  p           Pre   Post  p            Pre   Post  p
Auditory                 0.19  0.42  <0.05*      0.06  0.06  0.496      0.20  0.35  <0.01**     0.29  0.45  0.222        0.14  0.31  0.115
Visual                   0.00  0.09  0.991       0.26  0.20  0.875      0.24  0.20  0.768       0.09  0.29  0.056        0.14  0.15  0.255
Congruent                0.30  0.49  <0.001***   0.25  0.22  0.883      0.38  0.39  0.351       0.23  0.31  0.363        0.17  0.18  0.347
McGurk incongruent       0.25  0.42  <0.001***   0.24  0.25  0.334      0.32  0.44  <0.001***   0.19  0.39  <0.01**      0.09  0.17  0.099
Non-McGurk incongruent   0.21  0.39  0.068       0.18  0.19  0.451      0.26  0.40  <0.01**     0.22  0.42  <0.01**      0.11  0.26  <0.01**

For each ROI, the first two columns give percentage signal change in activity relative to baseline at pretest and posttest; the third column gives the significance of the change in activity from pretest to posttest obtained from bootstrapping. For each pair of ROIs, the first two columns give connectivity measured as the averaged correlation coefficient between voxels within the uni-sensory ROI and the AV ROI at pretest and posttest; the third column gives the significance (p value) of the change in functional connectivity from pretest to posttest obtained from bootstrapping. *p < 0.05; **p < 0.01; ***p < 0.001.
Although speculative, this finding reminds us to consider the role of multi-sensory representations of speech sounds in aural rehabilitation via amplification devices. For example, adding visual cues to speech signals benefits elderly HA users but not elderly normal-hearing listeners during speech identification (Moradi et al., 2016).
The functional connectivity results showed more pervasive effects across conditions. Specifically, all three AV conditions showed a significant or suggestive increase in V-AV connectivity after HA use. A study of AV perception has shown that the modality with the higher S/N tends to show greater connectivity with the STS than the modality with the lower S/N (Nath and Beauchamp, 2011). Note that C1 had severe hearing loss at higher frequencies, which means that input from the visual modality could be more reliable for him than input from the auditory modality. Given that HA users oftentimes rely on visual cues in noisy environments, the observed increase in V-AV connectivity might reflect a greater perceptual weighting of visual cues for AV speech processing as an adaptive strategy accompanying HA use.
Moreover, in the non-McGurk incongruent condition, with visual /ba/ and auditory /ga/, the A-AV connectivity was also strengthened. In this condition, although the auditory and visual cues were mismatched, there was low fusibility between the two modalities because the auditory cue typically dominates the percept (listeners hear auditory /ga/ despite the visual /ba/). Therefore, the strengthened A-AV connectivity may suggest more efficient use of auditory cues in auditory-dominant listening situations due to adaptation to acoustic amplification through the HA.
Case 2: Cortical Plasticity Associated with
Hearing Aid Use and Rehabilitative Training
This patient showed significant or suggestive increases in activity within the auditory ROI from pretest to posttest in all conditions except the VO condition. This pattern may reflect greater involvement of the auditory modality in response to acoustic signals after HA use and auditory training. In addition, the AV ROI showed significantly increased responsiveness to the AO syllables, McGurk incongruent syllables, and non-McGurk incongruent syllables, which might indicate a greater tendency to match speech sounds with corresponding abstract phonological representations in multi-sensory form when visual cues are not available or when AV incongruity is present.
The functional connectivity results revealed a clear pattern in which the uni-sensory ROIs became more synchronized with the multi-sensory ROI from pretest to posttest in the two AV incongruent conditions. Recall that in the RMQ training, the presence of noise forces the listener to rely on lip movements for successful speech understanding. The current observation of enhanced A/V-AV connectivity might indicate that the uni-sensory modalities engaged the AV integration mechanism to a greater extent in the presence of AV incongruity, which might be associated with the explicit practice of speechreading through the RMQ training in addition to adaptation to HA use.
In addition to the fMRI data, we sought to examine behavioral plasticity through the Multimodal Lexical Sentence Test for Adults (MLST-A; Kirk et al., 2012); however, neither of the listeners showed noticeable improvement from pretest to posttest (see Table S1 in the Supplementary Material), suggesting a potential dissociation between neural plasticity and behavioral plasticity as measured by the MLST-A. Notably, the current cases were part of a larger-scale behavioral study with a similar finding on behavioral plasticity at the group level (Rishiq et al., 2016).
The current two-case report adds to the literature that has
consistently demonstrated substantial brain plasticity induced
by auditory training (including musical training) across the
lifespan beyond the early sensitive period of learning (Zhang
and Wang, 2007; Anderson and Kraus, 2013; Penhune and
de Villers-Sidani, 2014; Yotsumoto et al., 2014). In particular,
our fMRI data provided insights into the neural plasticity related to HA use and auditory training, as well as the role of AV integration in the rehabilitation process. The experimental design with identical pretest and posttest protocols allowed us to examine pre-to-post changes with each participant as his or her own baseline, enabling fine-grained comparison at the individual level. Despite the analytical approach and the novelty of the findings, we must acknowledge the limitations of the current case report. First, it cannot tease apart the effects of HA use from those of auditory training, given the overlapping timeline of the two treatments. Second, the speculative nature of our interpretations should be noted. For instance, the potential residual hearing of the trained patient (C2) might have contributed to her responsiveness to HA use and auditory training. As the two volunteers were not matched on characteristics such as age, gender, and degree of hearing loss, direct comparisons between the cases are impossible. Given the nature and scope of the current case report, we must exercise caution and not overgeneralize the findings about cortical plasticity associated with HA use and AV training.
CONCLUSION
This is the first fMRI report to examine neural plasticity associated with HA use and auditory training targeting AV speech processing. Our data from two patients provide initial evidence of cortical plastic changes involving the auditory cortex, the STS, and the functional connectivity between uni-sensory regions and the STS. As auditory training has been shown to be an effective rehabilitative tool that can potentially optimize speech processing and systematically improve speech communication in elderly individuals (Pichora-Fuller and Levitt, 2012; Ferguson and Henshaw, 2015; Morais et al., 2015), further investigation is warranted into the neural basis of the short-term and long-term effects of specific auditory training protocols and their real-world benefits. Our case report results underscore the malleability of brain function in elderly hearing-impaired people and highlight AV speech perception as a topic for future research and practice in aging neuroscience and aural rehabilitation.
AUTHOR CONTRIBUTIONS
YZ and AR conceived this study. LY, AR, PCB and YZ designed the study. DR and HA recruited participants. LY, AR and PCB collected the data. LY and PCB analyzed the data. LY prepared the first draft, and all co-authors contributed to writing the manuscript.
ACKNOWLEDGMENTS
This project received funding from Starkey Hearing
Technologies (AR and YZ), the University of Minnesota’s
(UMN) Brain Imaging Research Project Award from the
College of Liberal Arts (YZ) and the UMN Grand Challenges
Exploratory Research Grant Award (YZ). We would like to thank our study volunteers for their contributions, the Center for Magnetic Resonance Research of the University of Minnesota for providing the facilities for fMRI data collection, and Martin McKinney for his assistance with statistics.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fnagi.2017.00030/full#supplementary-material
REFERENCES
Abrams, H. B., Bock, K., and Irey, R. L. (2015). Can a remotely delivered auditory
training program improve speech-in-noise understanding? Am. J. Audiol. 24,
333–337. doi: 10.1044/2015_aja-15-0002
Anderson, S., and Kraus, N. (2013). Auditory training: evidence for neural plasticity in older adults. Perspect. Hear. Hear. Disord. Res. Diagn. 17, 37–57. doi: 10.1044/hhd17.1.37
Anderson, S., White-Schwoch, T., Choi, H. J., and Kraus, N. (2013). Training
changes processing of speech cues in older adults with hearing loss. Front. Syst.
Neurosci. 7:97. doi: 10.3389/fnsys.2013.00097
Barraclough, N. E., Xiao, D., Baker, C. I., Oram, M. W., and Perrett, D. I. (2005).
Integration of visual and auditory information by superior temporal sulcus
neurons responsive to the sight of actions. J. Cogn. Neurosci. 17, 377–391.
doi: 10.1162/0898929053279586
Beauchamp, M. S., Nath, A. R., and Pasalar, S. (2010). fMRI-Guided transcranial
magnetic stimulation reveals that the superior temporal sulcus is a cortical locus
of the McGurk effect. J. Neurosci. 30, 2414–2417. doi: 10.1523/JNEUROSCI.4865-09.2010
Bernstein, L. E., Auer, E. T. Jr., Eberhardt, S. P., and Jiang, J. (2013).
Auditory perceptual learning for speech perception can be enhanced
by audiovisual training. Front. Neurosci. 7:34. doi: 10.3389/fnins.2013.00034
Bertoli, S., Probst, R., and Bodmer, D. (2011). Late auditory evoked potentials in elderly long-term hearing-aid users with unilateral or bilateral fittings. Hear. Res. 280, 58–69. doi: 10.1016/j.heares.2011.04.013
Boothroyd, A. (2007). Adult aural rehabilitation: what is it and does it work?
Trends Amplif. 11, 63–71. doi: 10.1177/1084713807301073
Cox, R. W. (1996). AFNI: software for analysis and visualization of functional
magnetic resonance neuroimages. Comput. Biomed. Res. 29, 162–173.
doi: 10.1006/cbmr.1996.0014
Dale, A. M., Fischl, B., and Sereno, M. I. (1999). Cortical surface-based
analysis: I. Segmentation and surface reconstruction. Neuroimage 9, 179–194.
doi: 10.1006/nimg.1998.0395
Dawes, P., Munro, K. J., Kalluri, S., and Edwards, B. (2013). Brainstem processing
following unilateral and bilateral hearing-aid amplification. Neuroreport 24,
271–275. doi: 10.1097/WNR.0b013e32835f8b30
Dawes, P., Munro, K. J., Kalluri, S., and Edwards, B. (2014). Auditory
acclimatization and hearing aids: late auditory evoked potentials and speech
recognition following unilateral and bilateral amplification. J. Acoust. Soc. Am.
135, 3560–3569. doi: 10.1121/1.4874629
Erber, N. P. (1969). Interaction of audition and vision in the recognition of oral
speech stimuli. J. Speech Hear. Res. 12, 423–425. doi: 10.1044/jshr.1202.423
Ferguson, M. A., and Henshaw, H. (2015). Auditory training can improve working
memory, attention and communication in adverse conditions for adults with
hearing loss. Front. Psychol. 6:556. doi: 10.3389/fpsyg.2015.00556
Hall, D. A. (2006). ‘‘fMRI of the auditory cortex,’’ in Functional MRI: Basic
Principles and Clinical Applications, eds S. H. Faro and F. B. Mohamed (New
York, NY: Springer), 364–393.
Hwang, J. H., Wu, C. W., Chen, J. H., and Liu, T. C. (2006). Changes
in activation of the auditory cortex following long-term amplification: an
fMRI study. Acta Otolaryngol. 126, 1275–1280. doi: 10.1080/00016480600794503
Kirk, K., Prusick, L., French, B., Eisenberg, L., Young, N., and Giuliani, N. (2012).
‘‘Evaluating Multimodal Speech Perception in Adults with Cochlear Implants
and Hearing Aids,’’ in Paper Presented at the 12th Conference on Cochlear
Implant and Other Implantable Auditory Technology (Baltimore).
Kochkin, S. (2007). Increasing hearing aid adoption through multiple
environmental listening utility. Hear. J. 60, 28–31. doi: 10.1097/01.hj.0000299169.03743.33
Matchin, W., Groulx, K., and Hickok, G. (2014). Audiovisual speech
integration does not rely on the motor system: evidence from articulatory
suppression, the McGurk effect and fMRI. J. Cogn. Neurosci. 26, 606–620.
doi: 10.1162/jocn_a_00515
McGurk, H., and MacDonald, J. (1976). Hearing lips and seeing voices. Nature
264, 746–748. doi: 10.1038/264746a0
Middelweerd, M., and Plomp, R. (1987). The effect of speechreading on the
speech-reception threshold of sentences in noise. J. Acoust. Soc. Am. 82,
2145–2147. doi: 10.1121/1.395659
Moore, D. R., and Amitay, S. (2007). Auditory training: rules and applications.
Semin. Hear. 28, 99–109. doi: 10.1055/s-2007-973436
Moradi, S., Lidestam, B., and Rönnberg, J. (2016). Comparison of gated
audiovisual speech identification in elderly hearing aid users and elderly
normal-hearing individuals: effects of adding visual cues to auditory speech
stimuli. Trends Hear. 20. doi: 10.1177/2331216516653355
Morais, A. A., Rocha-Muniz, C. N., and Schochat, E. (2015). Efficacy of auditory
training in elderly subjects. Front. Aging Neurosci. 7:78. doi: 10.3389/fnagi.2015.00078
Nath, A. R., and Beauchamp, M. S. (2011). Dynamic changes in superior temporal
sulcus connectivity during perception of noisy audiovisual speech. J. Neurosci.
31, 1704–1714. doi: 10.1523/JNEUROSCI.4853-10.2011
Nath, A. R., and Beauchamp, M. S. (2012). A neural basis for interindividual
differences in the McGurk effect, a multisensory speech illusion. Neuroimage
59, 781–787. doi: 10.1016/j.neuroimage.2011.07.024
Oldfield, R. C. (1971). The assessment and analysis of handedness: the Edinburgh
inventory. Neuropsychologia 9, 97–113. doi: 10.1016/0028-3932(71)90067-4
Penhune, V., and de Villers-Sidani, E. (2014). Time for new thinking
about sensitive periods. Front. Syst. Neurosci. 8:55. doi: 10.3389/fnsys.2014.00055
Philibert, B., Collet, L., Vesson, J. F., and Veuillet, E. (2005). The auditory
acclimatization effect in sensorineural hearing-impaired listeners: evidence for
functional plasticity. Hear. Res. 205, 131–142. doi: 10.1016/j.heares.2005.03.013
Pichora-Fuller, M. K., and Levitt, H. (2012). Speech comprehension training and
auditory and cognitive processing in older adults. Am. J. Audiol. 21, 351–357.
doi: 10.1044/1059-0889(2012/12-0025)
Rao, A., Rishiq, D., Yu, L., Zhang, Y., and Abrams, H. (2017). Neural correlates
of selective attention with hearing aid use followed by ReadMyQuips auditory training program. Ear Hear. 38, 28–41. doi: 10.1097/AUD.0000000000000348
Rishiq, D., Rao, A., Koerner, T., and Abrams, H. (2016). Can a commercially
available auditory training program improve audiovisual speech performance?
Am. J. Audiol. 25, 308–312. doi: 10.1044/2016_AJA-16-0017
Rissman, J., Gazzaley, A., and D’Esposito, M. (2004). Measuring functional
connectivity during distinct stages of a cognitive task. Neuroimage 23, 752–763.
doi: 10.1016/j.neuroimage.2004.06.035
Rosenblum, L. D. (2008). Speech perception as a multimodal phenomenon. Curr.
Dir. Psychol. Sci. 17, 405–409. doi: 10.1111/j.1467-8721.2008.00615.x
Saad, Z. S., and Reynolds, R. C. (2012). SUMA. Neuroimage 62, 768–773. doi: 10.1016/j.neuroimage.2011.09.016
Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E.,
Johansen-Berg, H., et al. (2004). Advances in functional and structural MR
image analysis and implementation as FSL. Neuroimage 23, S208–S219.
doi: 10.1016/j.neuroimage.2004.07.051
Summerfield, Q. (1979). Use of visual information for phonetic perception.
Phonetica 36, 314–331. doi: 10.1159/000259969
Yotsumoto, Y., Chang, L. H., Ni, R., Pierce, R., Andersen, G. J., Watanabe, T., et al.
(2014). White matter in the older brain is more plastic than in the younger
brain. Nat. Commun. 5:5504. doi: 10.1038/ncomms6504
Zhang, Y., Kuhl, P. K., Imada, T., Iverson, P., Pruitt, J., Stevens, E. B.,
et al. (2009). Neural signatures of phonetic learning in adulthood: a
magnetoencephalography study. Neuroimage 46, 226–240. doi: 10.1016/j.neuroimage.2009.01.028
Zhang, Y., and Wang, Y. (2007). Neural plasticity in speech learning and
acquisition. Bilingualism 10, 147–160. doi: 10.1017/s1366728907002908
Conflict of Interest Statement: The authors declare that the research was
conducted in the absence of any commercial or financial relationships that could
be construed as a potential conflict of interest.
Copyright © 2017 Yu, Rao, Zhang, Burton, Rishiq and Abrams. This is an
open-access article distributed under the terms of the Creative Commons Attribution
License (CC BY). The use, distribution and reproduction in other forums is
permitted, provided the original author(s) or licensor are credited and that the
original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply
with these terms.

Supplementary resource (1)

... Studies with new HA users reported auditory training benefits. For example, Yu et al. (2017) recruited two new HA users and worked with them on auditory cognitive training ReadMyQuips (RMQ) software for 8 weeks. The stimuli were syllables/ba//ga//la/, which were presented in different conditions (auditory only, visual only, auditory and visual congruent, and auditory and visual incongruent). ...
... Although in the current study, there was no group of new HA users who did not undergo the program, previous reports have shown that speech in noise performance after using HAs for 6 months did not show large perceptual improvements (Karawani et al., 2018a). Even though it is evident that enhanced technology can partially address specific hearing difficulties, additional auditory exercises could be another means of improving speech-in-noise recognition (Anderson and Kraus, 2013;Lessa et al., 2013;Kuchinsky et al., 2014;Yu et al., 2017;Sattari et al., 2020). A recent study by Kang et al. (2020) also showed that after an 8-week auditory training program, with 10 HA users who wore their HAs for more than 10 months, improvements were observed in speech recognition in noisy situations, and in subjective measurements of HA satisfaction. ...
Article
Full-text available
Older adults with age-related hearing loss often use hearing aids (HAs) to compensate. However, certain challenges in speech perception, especially in noise still exist, despite today’s HA technology. The current study presents an evaluation of a home-based auditory exercises program that can be used during the adaptation process for HA use. The home-based program was developed at a time when telemedicine became prominent in part due to the COVID-19 pandemic. The study included 53 older adults with age-related symmetrical sensorineural hearing loss. They were divided into three groups depending on their experience using HAs. Group 1: Experienced users (participants who used bilateral HAs for at least 2 years). Group 2: New users (participants who were fitted with bilateral HAs for the first time). Group 3: Non-users. These three groups underwent auditory exercises for 3 weeks. The auditory tasks included auditory detection, auditory discrimination, and auditory identification, as well as comprehension with basic (syllables) and more complex (sentences) stimuli, presented in quiet and in noisy listening conditions. All participants completed self-assessment questionnaires before and after the auditory exercises program and underwent a cognitive test at the end. Self-assessed improvements in hearing ability were observed across the HA users groups, with significant changes described by new users. Overall, speech perception in noise was poorer than in quiet. Speech perception accuracy was poorer in the non-users group compared to the users in all tasks. In sessions where stimuli were presented in quiet, similar performance was observed among new and experienced uses. New users performed significantly better than non-users in all speech in noise tasks; however, compared to the experienced users, performance differences depended on task difficulty. The findings indicate that HA users, even new users, had better perceptual performance than their peers who did not receive hearing aids.
... There has been empirical evidence suggesting that the accuracy rate of a listening comprehension task can be improved by 60% in noisy environments with auditory stimuli and visual aids simultaneously presented, as compared with an auditory-only experimental condition (Erber, 1969;Summerfield, 1979). Similarly, it has been found that MSI is conducive to the daily communication of a number of groups of participants bearing disadvantages in language comprehension and production, including secondor foreign-language learners (Hardison, 2010;Hazan et al., 2005;Zhang et al., 2009), people with hearing impairments (Holt et al., 2011;Moradi et al., 2017;Rao et al., 2017;Yu et al., 2017), and children with language or learning disabilities (Irwin and DiBlasi, 2017;Veuillet et al., 2007). In addition to speech communication, multimodal approaches have been demonstrated to facilitate various research and practical fields ranging from educational, forensic, and financial to engineering domains (Bornik et al., 2018;Egger et al., 2019;Hassett and Curwood, 2009;Mohr et al., 2010;Vilko and Hallikas, 2012). ...
... To our knowledge, the feasibility and efficacy of multimodal training approaches have been demonstrated in several studies in the field of second language acquisition and speech pathology (Hardison, 2010;HeinonenGuzejev et al., 2014;Moradi et al., 2017;Rao et al., 2017;Yu et al., 2017;Zhang et al., 2009). However, there is very little application of cross-modal emotion integration tests in clinical practice for schizophrenics, despite numerous efforts contributed to examining multimodal emotion processing in mentally impaired patients. ...
Article
Full-text available
Multisensory integration (MSI) of emotion has been increasingly recognized as an essential element of schizophrenic patients’ impairments, leading to the breakdown of their interpersonal functioning. The present review provides an updated synopsis of schizophrenics’ MSI abilities in emotion processing by examining relevant behavioral and neurological research. Existing behavioral studies have adopted well-established experimental paradigms to investigate how participants understand multisensory emotion stimuli, and interpret their reciprocal interactions. Yet it remains controversial with regard to congruence-induced facilitation effects, modality dominance effects, and generalized vs. specific impairment hypotheses. Such inconsistencies are likely due to differences and variations in experimental manipulations, participants’ clinical symptomatology, and cognitive abilities. Recent electrophysiological and neuroimaging research has revealed aberrant indices in event-related potential (ERP) and brain activation patterns, further suggesting impaired temporal processing and dysfunctional brain regions, connectivity and circuities at different stages of MSI in emotion processing. The limitations of existing studies and implications for future MSI work are discussed in light of research designs and techniques, study samples and stimuli, and clinical applications.
... This suggests that hearing aid intervention alone is not beneficial enough to improving SiN listening (Johnson et al., 2011). More recent studies have suggested that a combination of auditory and cognitive training might be a beneficial route in improving SiN listening in aided listening Rudner, 2016;Tremblay et al., 2016;Yu et al., 2017). However, further research is needed in understanding the link between auditory and cognitive training and SiN perception. ...
... Therefore, it appears that younger and older listeners deploy different listening strategies and these strategies depend on SiN listening condition and hearing sensitivity.These findings have possible implications for hearing intervention and highlight an explanation as to why hearing restored by a hearing aid cannot necessarily fully restore SiN perception, i.e., if there is cognitive decline and/or other contributing common factors. In this light, there is growing evidence and call for auditory and cognitive training to be administered alongside hearing interventionRudner, 2016;Tremblay et al., 2016;Yu et al., 2017). Furthermore, if sensory deprivation hypothesis accounts for even a proportion age-related declines this would support an argument for intervention as early as possible. ...
Article
Full-text available
This thesis investigated the role of cognition and hearing sensitivity in Speech-in-Noise (SiN) perception across different listener groups and SiN listening conditions. A typical approach to investigating the contribution of cognition is correlating cognitive ability to SiN intelligibility in populations controlled for or varied in age and/or hearing sensitivity. However, using this approach to advance our understanding of the contribution of cognition, and its potential interaction with age and hearing loss, for SiN perception has been limited by a combination of: A lack of systematicity in selection of SiN perception tests and a lack of theoretical rigor in selection of cognitive tests, a lack of comparability across studies due to differences in both cognitive test and SiN perception test selections, and in differences in age or hearing sensitivity ranges among tested populations, and the limitations of using a correlation study approach. Therefore, the main focus of the thesis will be to generate evidence to overcome these limitations in three purpose-designed investigations, discussed in chapters two, three and four respectively. In chapter two I report a systematic review and meta-analysis which took a systematic and theory driven approach to comprehensively and quantitatively assess published evidence for the role of cognition in SiN perception. The results of this chapter suggest a general association of r~.3 between cognitive performance and SiN perception, although some variability in association appeared to exist depending on cognitive domain and SiN target or masker assessed. In chapter three I present a study which used a theory-driven and systematic approach to investigate the contribution of cognition and listener characteristics (namely age and hearing sensitivity differences across younger and older listener groups) for SiN perception in different SiN conditions, using an association study design. The study revealed that the Central Executive contributed to SiN perception performance in older, but not younger listeners, regardless of SiN condition. Phonological Loop processing was important for both listener groups, but with a different role depending on age group and masker type. Episodic Buffer ability only contributed to SiN performance for older listeners, and was modulated by hearing sensitivity and background masker. In chapter four, building on the association study findings, I report a dual-task study that manipulated the availability of specific cognitive abilities for SiN perception for younger adult listeners. Here I provided further evidence to show Phonological Loop ability is more important than Central Executive ability and Episodic Buffer ability for SiN perception for this listener group, using a carefully controlled experimental design. In summary, the evidence from this thesis indicates that the role of different cognitive abilities for SiN perception can differ depending on age, hearing sensitivity and listening condition. Additionally, using a systematic approach and combining multiple methodological techniques has been informative in investigating these roles to a greater extent than has previously been achieved in the literature.
... A recent study using cortical visual evoked potentials reported that after hearing aid use for a period of 6 months, reduced cortical activation in temporal and frontal regions with increased activation in visual regions were observed for visual stimuli processing (compared to baseline non-hearing aid use), suggesting neural plastic changes in the cortex after the use of hearing aids for 6 months (Glick and Sharma, 2020). Using Functional magnetic resonance imaging (fMRI), Yu et al. (2017) reported neural changes assessed by fMRI in a clinical case study. An older adult with bilateral sensorineural hearing loss was fit for the first time with hearing aids and was tested at baseline and after 8 weeks. ...
Article
Full-text available
Age-related hearing loss is one of the most prevalent health conditions in older adults. Although hearing aid technology has advanced dramatically, a large percentage of older adults do not use hearing aids. This untreated hearing loss may accelerate declines in cognitive and neural function and dramatically affect the quality of life. Our previous findings have shown that the use of hearing aids improves cortical and cognitive function and offsets subcortical physiological decline. The current study tested the time course of neural adaptation to hearing aids over the course of 6 months and aimed to determine whether early measures of cortical processing predict the capacity for neural plasticity. Seventeen (9 females) older adults (mean age = 75 years) with age-related hearing loss with no history of hearing aid use were fit with bilateral hearing aids and tested in six testing sessions. Neural changes were observed as early as 2 weeks following the initial fitting of hearing aids. Increases in N1 amplitudes were observed as early as 2 weeks following the hearing aid fitting, whereas changes in P2 amplitudes were not observed until 12 weeks of hearing aid use. The findings suggest that increased audibility through hearing aids may facilitate rapid increases in cortical detection, but a longer time period of exposure to amplified sound may be required to integrate features of the signal and form auditory object representations. The results also showed a relationship between neural responses in earlier sessions and the change predicted after 6 months of the use of hearing aids. This study demonstrates rapid cortical adaptation to increased auditory input. Knowledge of the time course of neural adaptation may aid audiologists in counseling their patients, especially those who are struggling to adjust to amplification. A future comparison of a control group with no use of hearing aids that undergoes the same testing sessions as the study’s group will validate these findings.
... Concerning cognition, beneficial effects of auditory training were demonstrated in processing speed, working memory, and attention [60,61]. The observed benefits persisted as short as two to four weeks [62] or even as long as two to six months [63]. ...
Objectives: The present study explored the auditory benefits of abacus-training using a battery of tests (auditory acuity, clarity, and cognition). The study also aimed to identify the relative contributions of auditory processing tests that are most sensitive to the effects of abacus-training. Materials and methods: The study was conducted on 60 children aged between 9 – 14 years. These participants were divided into two groups (abacus trained and untrained) of 30 each, who underwent a series of auditory functioning tests. The battery of tests included: auditory acuity (frequency, intensity, temporal, binaural and spatial resolution), auditory clarity (speech perception in noise), and auditory cognition (working digit and syllable memory). Results: Statistically (t-test and Mann Whitney U test), significant changes were observed in the spatial resolution, auditory clarity, and cognition tests, suggestive of positive outcomes of abacus training at the higher-order auditory processing. This finding was complemented by the discriminant function (DF) analyses, which showed that clarity and cognitive measures helped for effective group segregation (abacus-trained and un-trained). These measures had significantly higher contributions to the DF. Conclusions: The findings of the study provide evidence of the multi-component benefits of abacus training in children and the transferability of learning effects to the auditory modality
... For instance, individuals with hearing loss have degraded auditory input and can benefit more from combining visual information in emotion perception and speech processing. However, much more work is needed to discover how to optimize audiovisual training to mitigate the negative effects of hearing impairment (Picou et al., 2018;Yu et al., 2017). If successful, clinical applications have the potential to shape the intervention trajectory of emotion cognition and advance speech communication and social life for a large number of special populations such as patients with psychotic disorders (e.g., schizophrenia, autism, Alzheimer's dementia), people with hearing impairments (e.g., severe-profound hearing loss, recipients of cochlear implants), the elderly people, and children with learning disabilities (e.g., dyslexia; Agustí et al., 2017;de Jong et al., 2009;Diamond & Zhang, 2016;Irwin & DiBlasi, 2017). ...
Article
Full-text available
Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies of linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel, auditory-only task (a semantics-prosody Stroop task) and a cross-modal audiovisual task (a semantics-prosody-face Stroop task). Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated the auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expressions during auditory stimulus presentation. Participants judged the emotional information in each trial according to selective attention instructions. Results: Accuracy and reaction time data indicated that, despite the increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal-language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploration of the brain mechanisms of cross-channel/cross-modal emotion integration, with potential clinical applications.
... Growing evidence has demonstrated that audiovisual integration can be influenced by development, aging, attention, training and listening experience (Lippert et al., 2007;McNorgan and Booth, 2015;Paraskevopoulos et al., 2015;Yu et al., 2017). For example, Koelewijn et al. (2010) focused on the question of whether multisensory integration is an automatic process, suggesting that multisensory integration is accompanied by attentional processes and that the two can interact in multiple areas of the brain. ...
Article
Full-text available
Audiovisual integration changes significantly over the lifespan, but age-related differences in functional connectivity during audiovisual temporal asynchrony integration tasks remain underexplored. In the present study, electroencephalograms (EEGs) of 27 young adults (22–25 years) and 25 older adults (61–76 years) were recorded during an audiovisual temporal asynchrony integration task with seven conditions [auditory (A), visual (V), AV, A50V, A100V, V50A, and V100A]. We calculated phase lag index (PLI)-weighted connectivity networks modulated by the audiovisual tasks and found that the PLI connections showed pronounced dynamic changes after stimulus onset. In the theta (4–7 Hz) and alpha (8–13 Hz) bands, the AV and V50A conditions induced stronger functional connections and higher global and local efficiencies, reflecting a stronger audiovisual integration effect, attributable to auditory information arriving at primary auditory cortex before visual information reaches primary visual cortex. Importantly, the functional connectivity networks of older adults showed higher global and local efficiencies and higher degree in both the theta and alpha bands. These larger network efficiencies indicate that older adults may experience more difficulty with attention and cognitive control during audiovisual integration with temporal asynchrony than young adults. Significant associations between network efficiencies and peak time of integration were found only in young adults. We propose that an audiovisual task with multiple conditions may recruit appropriate attention in young adults but produce a ceiling effect in older adults. Our findings provide new insights into the network topography of older adults during audiovisual integration and highlight higher functional connectivity and network efficiencies attributable to greater cognitive demand.
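For readers unfamiliar with the connectivity metric used above, the following is a minimal sketch of the phase lag index (PLI) between two band-limited signals. The simulated 10 Hz signals, noise level, and sampling rate are hypothetical, not parameters from the study.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """PLI = |mean(sign(wrapped phase difference))| across time points.
    0 = no consistent lead/lag; 1 = a perfectly consistent lag."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    dphi = np.angle(np.exp(1j * dphi))  # wrap to (-pi, pi]
    return np.abs(np.mean(np.sign(dphi)))

# Example: two noisy alpha-band (10 Hz) signals with a fixed phase offset
t = np.arange(0, 2, 1 / 250)  # 2 s at 250 Hz
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t - 0.8) + 0.3 * rng.standard_normal(t.size)
print(phase_lag_index(x, y))  # approaches 1 for a consistent lag
```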
... Auditory training programs are designed to exploit brain plasticity in order to improve speech perception in complex listening situations. Brain imaging tools can be useful in tracking these neurophysiological changes induced by perceptual learning, including measures of neural activation, oscillations, and functional connectivity patterns in the neural substrate dedicated to speech processing (Miller, Zhang, & Nelson, 2016;Rao et al., 2017;Song, Skoe, Banai, & Kraus, 2012;Yu et al., 2017;Zhang & Wang, 2010). Additionally, electrophysiological measures may provide useful information for the development of effective auditory training strategies, as they could track improvements in sensory or cognitive processes underlying speech perception in background noise. ...
... Furthermore, there is evidence of visual cortical activity in audiovisual tasks, which increases when the auditory input is distorted (Schepers et al. 2015). Moreover, hearing aid users who received audiovisual training (Yu et al. 2017) and normal-hearing adults who learned American Sign Language (ASL) as a form of multisensory training (Williams et al. 2016) showed increased use of lip reading after training, along with increased connectivity between auditory and visual cortices. Taken together, this evidence suggests that during difficult hearing situations, even in the absence of visual cues, the visual cortex may play a role in hearing. ...
Article
Full-text available
To gain more insight into central hearing loss, we investigated the relationships among cortical thickness and surface area, speech-relevant resting-state EEG power, and suprathreshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery measuring not only traditional pure-tone thresholds but also suprathreshold temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image was obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher-level speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting-state EEG from all participants, along with measures of intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis of speech processing (Poeppel, Speech Commun 41:245–255, 2003). Methodologically, we calculated age-related differences in behavior, anatomy, and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors of auditory performance. We then identified anatomical regressors for theta and gamma lateralization and constructed all regressions to test age as a moderator variable. Behaviorally, older adults performed worse on temporal and spectral auditory tasks and on SiN, despite having normal peripheral hearing as indicated by the audiogram. These age-related behavioral differences were accompanied by lower CT in all ROIs, whereas CSA did not differ between the two age groups. Age moderated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of right auditory areas in older adults. The question of how age-related cortical thinning and intrinsic EEG architecture relate to central hearing loss had not previously been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency-band power relate to the extent of age-related central hearing loss in older adults. The results are discussed within current frameworks of speech processing and aging.
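The moderation analysis described above (age group moderating the link between cortical thickness and auditory performance) can be expressed as a regression with an interaction term. The sketch below uses simulated, hypothetical values; the variable names and effect sizes are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 36
df = pd.DataFrame({
    "thickness": rng.normal(2.5, 0.2, n),  # right auditory ROI CT (mm)
    "older": np.repeat([0, 1], n // 2),    # 0 = younger, 1 = older
})
# Simulate a CT-performance link that is present only in the older group
df["performance"] = 0.5 * df["thickness"] * df["older"] + rng.normal(0, 0.1, n)

# The interaction term tests whether age group moderates the CT effect
model = smf.ols("performance ~ thickness * older", data=df).fit()
print(model.summary().tables[1])  # look at the thickness:older coefficient
```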
Article
Visual cues usually play a vital role in social interaction. As well as being the primary cue for identifying other people, visual cues provide crucial non-verbal social information via both facial expressions and body language. One consequence of vision loss is the need to rely on non-visual cues during social interaction. Although verbal cues can carry a significant amount of information, this information is often not available to an untrained listener. Here, we review the current literature examining potential ways that the loss of social information due to vision loss might impact social functioning. A large number of studies suggest that low vision and blindness are risk factors for anxiety and depression. This relationship has been attributed to multiple factors, including anxiety about disease progression and impairments to quality of life such as difficulty reading and a lack of access to work and social activities. However, our review suggests an additional contributing factor to reduced quality of life that has hitherto been overlooked: blindness may make it more difficult to engage effectively in social interactions because of a loss of visual information. The current literature suggests it might be worth considering training in voice discrimination and/or recognition when carrying out rehabilitative training in late-blind individuals.
Article
Full-text available
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisually gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of the audiovisual speech stimuli from the present study with auditory-only IPs extracted from a previous study to determine the impact of adding visual cues. Both participant groups achieved ceiling levels of accuracy in audiovisual identification of the gated speech stimuli; however, the EHA group needed longer IPs for audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation shortened the IPs only for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group performed worse than the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed a longer initial portion of the audiovisual speech signal than their normal-hearing counterparts to reach the same level of accuracy in the absence of semantic context.
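As a small illustration of the isolation point (IP) measure defined above, this sketch scores the shortest gate duration from which a listener's identification is correct and remains correct at all longer gates. The gate durations and responses are hypothetical.

```python
def isolation_point(gates):
    """gates: list of (duration_ms, correct) pairs in ascending duration.
    Returns the earliest duration after which all responses stay correct."""
    ip = None
    for duration, correct in gates:
        if correct and ip is None:
            ip = duration  # candidate IP: first correct gate
        elif not correct:
            ip = None      # a later error resets the candidate
    return ip

trials = [(100, False), (150, False), (200, True), (250, True), (300, True)]
print(isolation_point(trials))  # 200 ms
```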
Article
Full-text available
Auditory training (AT) has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test–retest effects. Thus, we investigated the efficacy of short-term AT [acoustically controlled auditory training (ACAT)] in elderly subjects through behavioral measures and the P300. Sixteen elderly individuals with auditory processing disorder (APD) received an initial evaluation (evaluation 1, E1) consisting of behavioral and electrophysiological tests (P300 evoked by tone bursts and speech sounds) to assess their auditory processing. The individuals were divided into two groups: an Active Control Group (n = 8) that underwent placebo training and a Passive Control Group (n = 8) that received no intervention. After 12 weeks, the subjects were re-evaluated (evaluation 2, E2). Then, all subjects underwent ACAT. After another 12 weeks (eight training sessions), they underwent the final evaluation (evaluation 3, E3). There was no significant difference between E1 and E2 on the behavioral tests [F(9,6) = 0.06, p = 0.92, Wilks' λ = 0.65] or the P300 [F(8,7) = 2.11, p = 0.17, Wilks' λ = 0.29], ruling out placebo and test–retest effects. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3) for all auditory skills according to the behavioral measures [F(4,27) = 0.18, p = 0.94, Wilks' λ = 0.97]. However, the same result was not observed for the P300 in any condition, and there was no significant difference between P300 stimulus types. The ACAT improved the behavioral performance of the elderly participants across all auditory skills and was an effective method for hearing rehabilitation.
Article
Full-text available
Auditory training (AT) helps compensate for degradation of the auditory signal. A series of three high-quality training studies is discussed: (i) a randomized controlled trial (RCT) of phoneme discrimination training in quiet for adults with mild hearing loss (n = 44); (ii) a repeated-measures study of phoneme discrimination training in noise for hearing aid (HA) users (n = 30); and (iii) a double-blind RCT of direct working memory (WM) training for HA users (n = 57). AT resulted in generalized improvements on measures of self-reported hearing, competing speech, and complex cognitive tasks that all index executive functions. This suggests that, for AT-related benefits, the development of complex cognitive skills may be more important than the refinement of sensory processing. Furthermore, outcome measures should be sensitive to the functional benefits of AT. For WM training, the lack of far transfer to untrained outcomes suggests no generalized benefit to real-world listening abilities. We propose that combined auditory-cognitive training approaches, in which cognitive enhancement is embedded within auditory tasks, are most likely to offer generalized benefits to the real-world listening abilities of adults with hearing loss.
Article
Full-text available
Visual perceptual learning (VPL) with younger subjects is associated with changes in functional activation of the early visual cortex. Although overall brain properties decline with age, it is unclear whether these declines are associated with visual perceptual learning. Here we use diffusion tensor imaging to test whether changes in white matter are involved in VPL for older adults. After training on a texture discrimination task for three daily sessions, both older and younger subjects show performance improvements. While the older subjects show significant changes in fractional anisotropy (FA) in the white matter beneath the early visual cortex after training, no significant change in FA is observed for younger subjects. These results suggest that the mechanism for VPL in older individuals is considerably different from that in younger individuals and that VPL of older individuals involves reorganization of white matter.
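Fractional anisotropy, the white-matter measure reported above, has a closed-form definition in terms of the diffusion tensor's three eigenvalues. Below is a minimal sketch of that computation; the example eigenvalues are hypothetical, in mm²/s.

```python
import numpy as np

def fractional_anisotropy(ev1, ev2, ev3):
    """FA ranges from 0 (isotropic diffusion) to 1 (fully anisotropic)."""
    num = np.sqrt((ev1 - ev2) ** 2 + (ev2 - ev3) ** 2 + (ev3 - ev1) ** 2)
    den = np.sqrt(2.0 * (ev1 ** 2 + ev2 ** 2 + ev3 ** 2))
    return num / den

# Coherent white matter typically shows much higher FA than gray matter.
print(fractional_anisotropy(1.7e-3, 0.4e-3, 0.3e-3))  # ~0.76, strongly anisotropic
```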
Article
Purpose: The goal of this study was to determine whether hearing aids in combination with computer-based auditory training improve audiovisual (AV) performance compared with the use of hearing aids alone. Method: Twenty-four participants were randomized into an experimental group (hearing aids plus ReadMyQuips [RMQ] training) and a control group (hearing aids only). The Multimodal Lexical Sentence Test for Adults (Kirk et al., 2012) was used to measure auditory-only (AO) and AV speech perception at three signal-to-noise ratios (SNRs). Participants were tested at the time of hearing aid fitting (pretest), after 4 weeks of hearing aid use (posttest I), and again after 4 weeks of RMQ training (posttest II). Results: The data did not reveal an effect of training. As expected, interactions were found between (a) modality (AO vs. AV) and SNR and (b) test (pretest vs. posttests) and SNR. Conclusion: The data do not show a significant effect of RMQ training on AO or AV performance as measured with the Multimodal Lexical Sentence Test for Adults.
Article
The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training over 4 weeks, while the control group received listening practice with audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess the effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') on the selective attention task, and increased P3b amplitudes were also correlated with improvement in d'. After training, the correlation between P3b and d' remained in the experimental group but not in the control group. Similarly, HINT testing showed improved speech perception post-training only in the experimental group, and the criterion calculated in the auditory selective attention task was reduced only in the experimental group after training. ERP measures in the auditory selective attention task did not show any training-related changes. Overall, hearing aid use was associated with a reduction in involuntary attention switching to distractors in the auditory selective attention task, and RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task.
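The sensitivity (d') and criterion measures reported above come from standard signal detection theory. The sketch below computes both from hit and false-alarm counts, with a log-linear correction to avoid infinite z-scores; the trial counts are hypothetical.

```python
from scipy.stats import norm

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion) from signal detection trial counts."""
    # Log-linear correction keeps rates strictly between 0 and 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # sensitivity: target/distractor separation
    criterion = -0.5 * (z_hit + z_fa)  # response bias: negative = liberal
    return d_prime, criterion

print(d_prime_and_criterion(45, 5, 10, 40))  # e.g., 45/50 hits, 10/50 false alarms
```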
Chapter
During the last two decades, auditory neuroscience has made significant progress in understanding the functional organization of the auditory system in both normally hearing listeners and patients with sensorineural hearing impairments. Modern brain imaging techniques have made an enormous contribution to that progress by enabling the in vivo study of human central auditory function.
Article
Purpose: The aims of this study were to determine whether a remotely delivered, Internet-based auditory training (AT) program improved speech-in-noise understanding and whether the number of hours spent engaged in the program influenced postintervention speech-in-noise understanding. Method: Twenty-nine first-time hearing aid users were randomized into an AT group (hearing aids plus a 3-week remotely delivered, Internet-based auditory training program) or a control group (hearing aids alone). The Hearing in Noise Test (Nilsson, Soli, & Sullivan, 1994) and the Words-in-Noise test (Wilson, 2003) were administered to both groups at baseline plus 1 week and immediately upon completion of the 3 weeks of auditory training. Results: Speech-in-noise understanding improved for both groups by the end of the study; however, there was no statistically significant difference in postintervention improvement between the AT and control groups. Although the participants spent far fewer hours on the AT program than prescribed, time on task influenced postintervention Words-in-Noise scores but not Hearing in Noise Test scores. Conclusion: Although remotely delivered, Internet-based AT programs represent an attractive alternative to resource-intensive, clinic-based interventions, demonstrating their efficacy remains a challenge, due in part to issues with compliance.
Article
Improvements in digital amplification, cochlear implants, and other innovations have extended the potential for improving hearing function; yet there remains a need for further improvement in challenging listening situations, such as understanding speech in noise or listening to music. Here, we review evidence from animal and human models of plasticity in the brain's ability to process speech and other meaningful stimuli. We considered studies spanning younger through older adults, emphasizing those that employed randomized controlled designs and connected neural changes to behavioral changes. Overall, the results indicate that the brain remains malleable through older adulthood, provided that treatment algorithms are modified to accommodate age-related changes in learning. Improvements in speech-in-noise perception and cognitive function accompany neural changes in auditory processing. The training-related improvements noted across studies support the need to consider auditory training strategies in the management of individuals who express concerns about hearing in difficult listening situations. Given evidence from studies engaging the brain's reward centers, future research should consider how these centers can be naturally activated during training.