Thesis

Behavioral and Electrophysiological Measures of Speech-in-Noise Perception in Normal Hearing and Hearing Impaired Adults

References
Article · Full-text available
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take repeated measures and multivariate predictor variables into account are essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, because it treats the neural responses across listening conditions as independent measures. In contrast, the LME models provide a systematic way to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate both the advantages and the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
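To make the contrast concrete, here is a minimal sketch in Python of the two approaches, assuming a hypothetical long-format table with columns subject, condition, neural, and behavior (none of these names come from the study itself):

    # Sketch: naive Pearson correlation vs. a linear mixed-effects model.
    # The data file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr
    import statsmodels.formula.api as smf

    data = pd.read_csv("measures_long.csv")  # one row per subject x condition

    # Naive approach: pools repeated measures as if they were independent.
    r, p = pearsonr(data["neural"], data["behavior"])
    print(f"Pearson r = {r:.3f}, p = {p:.4f}")

    # LME approach: fixed effects for the neural measure and listening
    # condition, plus a per-subject random intercept that absorbs
    # between-subject baseline differences.
    model = smf.mixedlm("behavior ~ neural * condition", data,
                        groups=data["subject"])
    print(model.fit().summary())

The random intercept is what distinguishes the two analyses: it keeps each participant's repeated measures tied together instead of treating every condition as a new, independent observation.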
Article · Full-text available
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ) targeting speechreading during the second half of the study period, for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROIs), including auditory cortex and visual cortex for unisensory processing and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving the auditory cortex ROI, STS, and functional connectivity between unisensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for experience-driven malleability of cortical function for AV speech perception in elderly hearing-impaired people, and they call for further studies with much larger samples and systematic controls to fill the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population.
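ROI-to-ROI functional connectivity of the kind compared across test sessions here is commonly quantified as the correlation between mean ROI time series. A minimal sketch under that assumption (the arrays below are random stand-ins, not the study's data or pipeline):

    # Sketch: functional connectivity as the Pearson correlation between
    # mean ROI time series. Inputs are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels = 200, 50

    # Stand-ins for preprocessed BOLD signals from two ROIs,
    # e.g., auditory cortex and STS; shape (time, voxels).
    roi_auditory = rng.standard_normal((n_timepoints, n_voxels))
    roi_sts = rng.standard_normal((n_timepoints, n_voxels))

    # Average across voxels to get one time series per ROI,
    # then correlate the two series.
    fc = np.corrcoef(roi_auditory.mean(axis=1), roi_sts.mean(axis=1))[0, 1]
    print(f"auditory-STS connectivity: r = {fc:.3f}")

Comparing this coefficient between the pretest and posttest scans would yield the kind of experience-dependent connectivity change reported above.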
Article · Full-text available
This magnetoencephalography (MEG) study investigated evoked ON and OFF responses to ramped and damped sounds in normal-hearing human adults. Two pairs of stimuli that differed in spectral complexity were used in a passive listening task; the stimuli in each pair had identical acoustical properties except for the intensity envelope. Behavioral duration judgment was conducted in separate sessions, which replicated the perceptual bias in favor of the ramped sounds and the effect of spectral complexity on perceived duration asymmetry. MEG results showed similar cortical sites for the ON and OFF responses. There was a dominant ON response with a stronger phase-locking factor (PLF) in the alpha (8–14 Hz) and theta (4–8 Hz) bands for the damped sounds. In contrast, the OFF response for sounds with rising intensity was associated with stronger PLF in the gamma band (30–70 Hz). Exploratory correlation analysis showed that the OFF response in the left auditory cortex was a good predictor of the perceived temporal asymmetry for the spectrally simpler pair. The results indicate distinct asymmetry in ON and OFF responses and in the neural oscillation patterns associated with dynamic intensity changes. These findings provide important preliminary data for future studies examining how the auditory system develops such asymmetry as a function of age and learning experience, and whether an absent asymmetry or abnormal ON and OFF responses can serve as a biomarker for neurological conditions associated with auditory processing deficits.
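Phase-locking factor as used here is the magnitude of the across-trials average of unit phase vectors at each time point, ranging from 0 (random phase) to 1 (perfect locking). A minimal single-band sketch using a band-pass filter and the Hilbert transform (the sampling rate, filter order, and random input are all assumptions):

    # Sketch: phase-locking factor (PLF) in the alpha band (8-14 Hz).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                # sampling rate in Hz (assumed)
    rng = np.random.default_rng(1)
    trials = rng.standard_normal((60, 1000))   # (n_trials, n_samples) stand-in

    # Band-pass filter for the alpha band.
    b, a = butter(4, [8 / (fs / 2), 14 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)

    # Instantaneous phase from the analytic signal.
    phase = np.angle(hilbert(filtered, axis=1))

    # PLF: length of the mean unit phase vector across trials.
    # 0 = random phases, 1 = perfect phase locking at every sample.
    plf = np.abs(np.mean(np.exp(1j * phase), axis=0))

The same computation with different filter bands yields the theta- and gamma-band measures contrasted between the ON and OFF responses above.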
Book
How does the brain code and process incoming information, how does it recognize a certain object, how does a certain Gestalt come into our awareness? One key issue for the conscious realization of an object, of a Gestalt, is the attention devoted to the corresponding sensory input, which evokes the neural pattern underlying the Gestalt. This requires that attention be devoted to one set of objects at a time. However, attention may be switched quickly between different objects or ongoing input processes. Such mechanisms can be expected to be reflected in the neural dynamics: neurons or neuronal assemblies that pertain to one object may fire, possibly in rapid bursts. Such firing bursts may enhance the synaptic strength in the corresponding cell assembly and thereby form the substrate of short-term memory. However, we may well become aware of two different objects at a time. How do the firing patterns that relate to, say, a certain type of movement (columns in V5) or a color (V4) of one object avoid becoming mixed with those of another object? Such a blend may only happen if the presentation times become very short (below 20-30 ms). One possibility is that neurons pertaining to one cell assembly fire synchronously; different cell assemblies firing at different rates may then code different information.
Article
This study examined how speech-babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and a vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in the delta, theta, and alpha frequency bands for the P3 response. Behavioral measures included percent correct phoneme detection and reaction time as well as percent correct IEEE sentence recognition in quiet and in noise. Linear mixed-effects models were applied to determine possible brain-behavior correlates. A significant noise-induced reduction in P3 amplitude was found, accompanied by significantly longer P3 latency and decreases in ITPC across all frequency bands of interest. There was a differential effect of noise on consonant discrimination and vowel discrimination in both ERP and behavioral measures, such that noise impacted the detection of the consonant change more than the vowel change. The P3 amplitude and some of the ITPC and ERSP measures were significant predictors of speech perception at the segmental and sentence levels across listening conditions and stimuli. These data demonstrate that the P3 response, with its associated cortical oscillations, represents a potential neurophysiological marker for speech perception in noise.
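Of the two oscillatory measures named above, ITPC is computed exactly like the phase-locking factor sketched earlier, while ERSP is trial-averaged time-frequency power expressed in dB relative to a pre-stimulus baseline. A minimal single-band ERSP sketch (the epoch layout, sampling rate, and band edges are illustrative assumptions):

    # Sketch: event-related spectral perturbation (ERSP) in the theta band,
    # as power in dB relative to a pre-stimulus baseline.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                 # sampling rate (assumed)
    rng = np.random.default_rng(2)
    epochs = rng.standard_normal((40, 500))    # (n_trials, n_samples) stand-in
    baseline = slice(0, 100)                   # assume first 100 samples are pre-stimulus

    # Theta-band (4-8 Hz) power via filtering and the Hilbert envelope.
    b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2

    # Trial-average the power, then normalize to the baseline mean in dB.
    mean_power = power.mean(axis=0)
    ersp_db = 10 * np.log10(mean_power / mean_power[baseline].mean())

Noise-induced drops in ITPC of the kind reported here show up as shorter mean phase vectors, while ERSP changes appear as deviations of ersp_db from zero after stimulus onset.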
Article
Purpose: The purpose of this study was to evaluate the effects of hearing loss and age on subjective ratings of emotional valence and arousal in response to nonspeech sounds. Method: Three groups of adults participated: 20 younger listeners with normal hearing (M = 24.8 years), 20 older listeners with normal hearing (M = 55.8 years), and 20 older listeners with mild-to-severe acquired hearing loss (M = 65.6 years). Stimuli were presented via headphones at either 35 and 65 dB SPL or 50 and 80 dB SPL on the basis of random assignment within each group. Participants rated the emotional valence and arousal for previously normed nonspeech auditory stimuli. Results: Linear mixed model analyses were conducted separately for ratings of valence and arousal. Results revealed that listeners with hearing loss exhibited a reduced range of emotional ratings. Furthermore, for stimuli presented at 80 dB SPL, valence ratings from listeners with hearing loss were significantly lower than ratings from listeners with normal hearing. Conclusions: Acquired hearing loss, not increased age, affected emotional responses by reducing the range of subjective ratings and by reducing the reported valence of the highest intensity stimuli. These results have potentially important clinical implications for aural rehabilitation.
Article
Objectives: The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Design: Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training over 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess the effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. Results: After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d′) in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d′ in the selective attention task. After training, this correlation between P3b and d′ remained in the experimental group but not in the control group. Similarly, HINT testing showed improved speech perception post-training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Conclusions: Hearing aid use was associated with a decrement in involuntary attention switching to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task.
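The d′ and criterion values referred to above follow standard signal detection theory: d′ is the separation between the z-transformed hit and false-alarm rates, and the criterion c is their negated average. A minimal sketch with made-up counts:

    # Sketch: sensitivity (d') and criterion (c) from a selective
    # attention task. The counts below are invented for illustration.
    from scipy.stats import norm

    hits, misses = 78, 22               # responses to targets
    false_alarms, correct_rej = 12, 88  # responses to distractors

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")

A higher d′ reflects better target-distractor discrimination, while the criterion reduction reported in the trained group corresponds to a more liberal, more confident response tendency.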