Article

What can the pure-tone audiogram tell us about a patient's SNR loss?

Authors:
  • ETYMOTIC RESEARCH
To read the full-text of this research, you can request a copy directly from the authors.

No full-text available



... For example, the speech-in-noise (SPIN) test [2] and the auditory figure-ground subtest of the SCAN-A [3]. The outcome measure of such tests is a score expressed as a percentage. The ability to understand speech in the presence of noise has also been measured using adaptive tests. ...
... The SNR is varied in adaptive tests to estimate the poorest SNR at which the listener can sustain 50% performance; this is referred to as SNR-50 [3]. The advantage of SNR-50 is the elimination of floor and ceiling effects, both of which impose procedural and interpretative limitations on traditional tests. The Hearing in Noise Test [4] and the QuickSIN [5] are examples of such tests. ...
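The adaptive logic behind tests like HINT and QuickSIN can be illustrated with a minimal simulation. The sketch below is an assumption for illustration, not the actual scoring rules of either test: a simple 1-down/1-up staircase is run against a simulated listener whose responses follow a logistic psychometric function, and SNR-50 is estimated as the mean of the SNRs at the reversal points (a 1-down/1-up rule converges on the 50%-correct point).

```python
import math
import random

def simulate_snr50(true_srt, slope=0.5, start_snr=12.0, step=2.0,
                   n_trials=80, seed=42):
    """Estimate SNR-50 with a 1-down/1-up adaptive staircase.

    The simulated listener answers correctly with probability given by a
    logistic psychometric function centred on `true_srt` (the listener's
    real SNR-50). Correct responses make the next trial harder (lower SNR);
    incorrect responses make it easier (higher SNR).
    """
    rng = random.Random(seed)
    snr = start_snr
    reversal_snrs = []
    last_direction = None
    for _ in range(n_trials):
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt)))
        correct = rng.random() < p_correct
        direction = -1 if correct else +1
        if last_direction is not None and direction != last_direction:
            reversal_snrs.append(snr)  # track direction reversals
        last_direction = direction
        snr += direction * step
    # SNR-50 estimate: mean SNR across reversal points
    return sum(reversal_snrs) / len(reversal_snrs)
```

With these assumed parameters the estimate typically lands within a couple of decibels of the simulated listener's true SNR-50, which is all such a toy track is meant to show.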
... Killion and Niquette stated that SNR loss and sensitivity loss are independent measures [3]. Grant and Walden [6] reported that pure-tone thresholds accounted for only 50% of the variance in SNR loss. They postulated that ... Wagener et al. [25]: name-verb-number-adjective-noun sentences, SRT −8.43 ± 0.9 dB ...
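The "50% of variance" figure from Grant and Walden is the coefficient of determination, i.e. the square of the Pearson correlation between pure-tone thresholds and SNR loss. A tiny sketch of that arithmetic (the correlation value of 0.71 is back-calculated here for illustration, not quoted from the excerpt):

```python
def variance_explained(r):
    """Fraction of variance in one measure explained by another,
    given their Pearson correlation r (coefficient of determination r^2)."""
    return r * r

# A correlation of ~0.71 between pure-tone thresholds and SNR loss
# corresponds to roughly 50% of variance explained.
```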
Article
Introduction There is a dearth of standardized recorded tests for the assessment of speech-in-noise in Marathi. This study aims to fill this lacuna by developing a computerized Marathi open-set sentence-in-noise test; to investigate the significance of the difference between signal-to-noise ratio-50 (SNR-50) of adults with hearing loss (AWHL) and adults with normal hearing sensitivity (AWNH); and to investigate the difference between aided and unaided SNR-50 in AWHL. Method A multi-centric study was conducted to develop normative data for SNR-50 in 130 AWNH using MISHA-Random Adaptive Marathi Sentence in Noise (M-RAMSIN). SNR-50 was compared in AWHL and their age-matched controls, and SNR loss was determined. Thirty AWHL were tested for SNR-50 in unaided and aided conditions to determine if there was a significant difference between the two conditions. Results Normative values for SNR-50 under headphones and in sound field were 2 dB and 0 dB, respectively. There was a significant difference in SNR-50 of AWNH and AWHL. The median SNR loss of AWHL was 6 dB. There was a low positive correlation between SNR-50 and audiometric thresholds in AWHL. Aided SNR-50 was significantly better than unaided SNR-50 in AWHL. Aided SNR-50 was better with binaural hearing aids than with monaural hearing aids. Conclusion M-RAMSIN is a time-efficient and reliable tool with good construct validity, since it documented the difference in performance between AWNH and AWHL. The poor correlation of SNR-50 with audiometric thresholds signifies the differential effects of SNHL on audibility and distortion. The test can document the benefit from hearing aid fitting and thus has the potential to be incorporated into the hearing aid validation process for Marathi-speaking AWHL.
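The SNR-loss arithmetic underlying this abstract is simple: subtract the normative SNR-50 from the patient's measured SNR-50 in the same test condition. A minimal sketch follows; the 2 dB and 0 dB norms are taken from the abstract, while the patient score is a hypothetical value chosen so the result matches the reported 6 dB median loss.

```python
# Normative SNR-50 values reported for M-RAMSIN (from the abstract)
M_RAMSIN_NORM_HEADPHONES_DB = 2.0
M_RAMSIN_NORM_SOUNDFIELD_DB = 0.0

def snr_loss(patient_snr50_db, normative_snr50_db):
    """SNR loss: the extra signal-to-noise ratio (in dB) a listener needs
    relative to the normative SNR-50 for the same test condition."""
    return patient_snr50_db - normative_snr50_db

# A hypothetical patient with an SNR-50 of 8 dB under headphones:
# 8 - 2 = 6 dB SNR loss, equal to the median reported for AWHL.
```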
... In comparison to pure-tone audiometry, speech-in-noise tests can be performed in noisy environments outside the clinic, and they can be self-administered in the absence of calibrated equipment and with no supervision by hearing healthcare providers [9][10][11][12][13]. The increasing use of speech-in-noise tests raised awareness of the limitations of diagnostic validity in specific testing conditions and applications. ...
... The VCV stimuli used in this study could yield optimal screening for a number of reasons. Firstly, decreased VCV recognition may indicate decreased ability to recognize consonants and fast transients, which are among the first clues of the presence of age-related high-frequency hearing loss [11]. Moreover, nonsense syllable recognition tests such as VCV tests present good test-retest reliability and precision under multiple experimental conditions, such as stimulus randomization and uncontrolled background noise [30]. ...
Article
Full-text available
The purpose of this study is to characterize the intelligibility of a corpus of Vowel–Consonant–Vowel (VCV) stimuli recorded in five languages (English, French, German, Italian and Portuguese) in order to identify a subset of stimuli for screening individuals of unknown language during speech-in-noise tests. The intelligibility of VCV stimuli was estimated by combining the psychometric functions derived from the Short-Time Objective Intelligibility (STOI) measure with those derived from listening tests. To compensate for the potential increase in speech recognition effort in non-native listeners, stimuli were selected based on three criteria: (i) higher intelligibility; (ii) lower variability of intelligibility; and (iii) shallower psychometric function. The observed intelligibility estimates show that the three criteria for application in multilingual settings were fulfilled by the set of VCVs in English (average intelligibility from 1% to 8% higher; SRT from 4.01 to 2.04 dB SNR lower; average variability up to four times lower; slope from 0.35 to 0.68%/dB SNR lower). Further research is needed to characterize the intelligibility of these stimuli in a large sample of non-native listeners with varying degrees of hearing loss and to determine the possible effects of hearing loss and native language on VCV recognition.
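The selection criteria above refer to two parameters of a psychometric function: its 50% point (the SRT) and its slope. The study combined STOI-derived and listening-test functions; the sketch below only illustrates the logistic form commonly assumed for such functions and how it is inverted to find the SNR for any target intelligibility.

```python
import math

def intelligibility(snr_db, srt_db, slope):
    """Logistic psychometric function: proportion correct at a given SNR.
    `srt_db` is the 50%-correct point (SRT); `slope` controls steepness."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_db)))

def snr_for_target(p_target, srt_db, slope):
    """Invert the logistic to find the SNR yielding `p_target` correct."""
    return srt_db + math.log(p_target / (1.0 - p_target)) / slope
```

Under this form, a shallower slope spreads recognition scores over a wider SNR range, which is why the slope criterion matters when stimuli must work for listeners of unknown language.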
... Although widespread, such a univariate approach can present limitations. First, there is a well-known mismatch between pure-tone thresholds and SRTs, as individuals with normal pure-tone thresholds may have difficulties in speech understanding and, vice versa, individuals with hearing loss may be able to reach satisfactory speech recognition performance (Humes, 2013; Killion & Niquette, 2000). Second, other features, in addition to SRT, might be valid predictors of hearing loss, for example, the subject's age or the average reaction time (Humes, 2013; Nuesse et al., 2018; Polo, Zanet, Lenatti, et al., 2021). ...
... and .72 between pure-tone hearing thresholds at 0.5, 1, 2, and 4 kHz and SRTs extracted from three different speech-in-noise tests, the Dutch version of the digit triplet test, Earcheck, and Occupational Earcheck, respectively, in 98 subjects, half of whom had different degrees of noise-induced hearing loss. Decreased consonant recognition is one of the first signs of age-related hearing loss (Killion & Niquette, 2000), and lower speech recognition abilities with age have been widely demonstrated in the literature (e.g., Heidari et al., 2018). The correlation between SRT and pure-tone hearing thresholds, as derived from this study, is equal to .63. ...
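Correlations like the .63 reported here are plain Pearson coefficients between paired SRT and pure-tone-average values. A self-contained sketch of that computation (the paired data in the test are invented for illustration, not taken from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. per-subject SRTs and pure-tone averages."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```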
Article
Full-text available
Purpose The aim of this study was to analyze the performance of multivariate machine learning (ML) models applied to a speech-in-noise hearing screening test and investigate the contribution of the measured features toward hearing loss detection using explainability techniques. Method Seven different ML techniques, including transparent (i.e., decision tree and logistic regression) and opaque (e.g., random forest) models, were trained and evaluated on a data set including 215 tested ears (99 with hearing loss of mild degree or higher and 116 with no hearing loss). Post hoc explainability techniques were applied to highlight the role of each feature in predicting hearing loss. Results Random forest (accuracy = .85, sensitivity = .86, specificity = .85, precision = .84) performed, on average, better than decision tree (accuracy = .82, sensitivity = .84, specificity = .80, precision = .79). Support vector machine, logistic regression, and gradient boosting had similar performance as random forest. According to post hoc explainability analysis on models generated using random forest, the features with the highest relevance in predicting hearing loss were age, number and percentage of correct responses, and average reaction time, whereas the total test time had the lowest relevance. Conclusions This study demonstrates that a multivariate approach can help detect hearing loss with satisfactory performance. Further research on a bigger sample and using more complex ML algorithms and explainability techniques is needed to fully investigate the role of input features (including additional features such as risk factors and individual responses to low-/high-frequency stimuli) in predicting hearing loss.
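The accuracy, sensitivity, specificity, and precision figures reported above all follow directly from a binary confusion matrix. A minimal sketch of that bookkeeping; the counts in the test below are hypothetical, not the study's data.

```python
def screening_metrics(tp, fn, tn, fp):
    """Standard binary screening metrics from confusion-matrix counts.
    tp/fn: ears with hearing loss correctly flagged / missed by the test;
    tn/fp: ears without hearing loss correctly passed / wrongly flagged."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision": tp / (tp + fp),     # positive predictive value
    }
```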
... The ability to understand speech in noise depends upon multiple factors, including the characteristics of the speech signal, the signal-to-noise ratio (SNR), and the listener's degree of hearing impairment. The measurement of signal-to-noise ratio loss (SNR loss), which is defined as the dB increase in signal-to-noise ratio required by a hearing-impaired person to understand speech in noise as well as a normal hearer, is important because speech understanding in noise cannot be reliably predicted from the pure-tone audiogram (Killion and Niquette, 2000). ...
... Sentence-length materials may redress limitations associated with single-word-length tests. Specifically, single-word materials do not include the coarticulation effects or dynamic range of conversational speech, and single words lack the real-world relevance provided by sentence-length stimuli (Killion & Niquette, 2000; Nilsson, Sullivan, & Soli, 1990). ...
Thesis
Abstract Introduction: Speech-in-noise measures are gaining relevance as audiologists understand the advantages of using outcome measures that demonstrate the need for and benefit from amplification. Two such speech-in-noise measures are the Arabic Hearing in Noise Test (HINT) and the Arabic Quick Speech-in-Noise (QuickSIN) test. This work aimed to compare the performance of two adaptive speech-in-noise tests (the Arabic QuickSIN test and the Arabic HINT) in adults with sensorineural hearing loss. Subjects and method: This study included a control group of 25 normal-hearing subjects aged 18-50 years and a study group of 50 subjects further divided into three subgroups. Subgroup (IIa): 20 subjects with sensorineural hearing loss. Subgroup (IIb): 20 subjects with HA use. Subgroup (IIc): 10 subjects with unilateral cochlear implantation. Materials: Arabic QuickSIN, Arabic HINT sentence lists and the Arabic Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. Results: The QuickSIN was found to have some advantages over the HINT in terms of clinical use. The correlations of the QuickSIN test with the APHAB background noise subscale were higher than the correlations observed for the HINT with the same subscale in the HL group and the HA group. However, neither test was correlated with the APHAB subscales in the CI group. In addition, the sensitivity of the QuickSIN test was higher than that of the HINT in all subgroups, while the specificity of the HINT was higher in subgroup IIb. We therefore conclude that the QuickSIN signal-to-noise ratio loss is a more sensitive objective measure of speech perception in noise than the HINT and is more reflective of patients' subjective perceptions in real life, based on their responses to the self-assessment questionnaire (APHAB), in both unaided and aided conditions.
Keywords: Hearing in Noise Test (HINT), Quick speech in noise test (QuickSIN), Arabic Abbreviated Profile of Hearing Aid Benefit (APHAB), Hearing Loss (HL) and Hearing Aid (HA).
... Speech-in-noise (SiN) recognition difficulties in older adults can partly be explained by the highly prevalent pure-tone hearing loss (i.e., elevated pure-tone thresholds, particularly in higher frequencies) (Cruickshanks et al., 1998; Dubno et al., 1984; Gordon-Salant and Fitzgibbons, 1999; Killion and Niquette, 2000). Age-related pure-tone hearing loss often results from damage to cochlear outer hair cells and the stria vascularis in the auditory periphery (Dubno et al., 2013; Mills et al., 2006). ...
... In our sample, and in previous research (Dubno et al., 1984;Giroud et al., 2018;Gordon-Salant and Fitzgibbons, 1993;Killion and Niquette, 2000), evidence suggests a strong decline of SiN recognition in older compared to younger adults, independently of pure-tone hearing loss. In our sample, the age group difference in SiN recognition equals about 3.2 dB SNR on average. ...
Article
Full-text available
Many older adults are struggling with understanding spoken language, particularly when background noise interferes with comprehension. In the present study, we investigated a potential interaction between two well-known factors associated with greater speech-in-noise (SiN) reception thresholds in older adults, namely a) lower working memory capacity and b) age-related structural decline of frontal lobe regions. In a sample of older adults (N=25) and younger controls (N=13) with normal pure-tone thresholds, SiN reception thresholds and working memory capacity were assessed. Furthermore, T1-weighted structural MR-images were recorded to analyze neuroanatomical traits (i.e., cortical thickness (CT) and cortical surface area (CSA)) of the cortex. As expected, the older group showed greater SiN reception thresholds compared to the younger group. We also found consistent age-related atrophy (i.e., lower CT) in brain regions associated with SiN recognition, namely the superior temporal lobe bilaterally, the right inferior frontal and precentral gyrus, as well as the left superior frontal gyrus. Those older participants with greater atrophy in these brain regions also showed greater SiN reception thresholds. Interestingly, the association between CT in the left superior frontal gyrus and SiN reception thresholds was moderated by individual working memory capacity. Older adults with greater working memory capacity benefitted more strongly from thicker frontal lobe regions when it comes to improving SiN recognition. Overall, our results fit well into the literature showing that age-related structural decline in auditory- and cognition-related brain areas is associated with greater SiN reception thresholds in older adults. However, we highlight that this association changes as a function of individual working memory capacity.
We therefore believe that future interventions to improve SiN recognition in older adults should take into account the role of the frontal lobe as well as individual working memory capacity.
... The background noises that are present in everyday life can sometimes make listening difficult, especially when trying to understand speech. In this respect, Speech-in-Noise tests are designed to mimic real-life circumstances [1]. As a person with sensorineural hearing loss may be unable to understand speech, especially in noisy situations, Speech-in-Noise tests can provide valuable information about a person's hearing ability [2]. ...
... All participants were subjected to: (1) Full audiological history, otological examination and basic audiological evaluation, including pure tone audiometry, speech audiometry and immittancemetry. ...
Article
Full-text available
Objectives: The purpose of this study was to compare the two newly developed Arabic speech-in-noise tests (QuickSIN and HINT) and to study the clinical utility of both tests in adults with sensorineural hearing loss. Patients and Methods: Seventy-five subjects, aged 18-50 years, were divided into two groups: a control group of 25 normal-hearing subjects and a study group of 50 subjects, who were further divided into three subgroups. Subgroup (IIa): 20 subjects with moderate and moderately severe sensorineural hearing loss. Subgroup (IIb): 20 subjects with moderate and moderately severe sensorineural hearing loss who were HA users. Subgroup (IIc): 10 subjects with unilateral cochlear implantation (CI). Materials: Arabic QuickSIN, Arabic HINT and the Arabic Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. Results: The QuickSIN test had some advantages over the HINT in terms of clinical use. The QuickSIN test showed better separation in recognition performance between normal hearing and hearing loss than the HINT. The sensitivity of the QuickSIN was higher than that of the HINT in all subgroups. The correlation of the QuickSIN test with the APHAB background noise (BN) subscale was higher than that of the HINT in the HL and HA subgroups. However, neither test was correlated with the APHAB (BN) subscale in the CI group. Conclusion: Both tests reflect the listener's experience of hearing in background noise. However, the QuickSIN test is a more sensitive measure of speech perception in noise than the HINT in both unaided and aided conditions. CI subjects had the lowest performance on both tests.
... The value of speech-in-noise (SIN) tests for adult hearing screening is well known. SIN tests can support implementation of widespread hearing screening in adults and can be helpful to identify real-life communication problems and to promote awareness (Humes, 2013;Killion & Niquette, 2000;Smits et al., 2004). Moreover, SIN tests can overcome some limitations of pure-tone audiometry (e.g., need for experienced operator, high cost of audiometers, and need for low-noise environment), can be implemented in an automated way on user interfaces, and can be self-administered either locally, via handheld devices or personal computers, or at a distance via web applications or smartphone apps (De Sousa et al., 2018;Paglialonga et al., 2014Paglialonga et al., , 2015. ...
... Stimuli in the form of VCVs (intervocalic consonants) can be helpful in adult screening because decreased consonant recognition performance is among the first clues of age-related hearing loss (Killion & Niquette, 2000). Moreover, VCV recognition is largely independent of semantics, and no effort to encode the meaning of stimuli is required, especially in a multiple-choice task that can be executed by individuals with limited knowledge of the spoken language, as long as they are familiar with the written transcription of the stimuli. ...
Article
Full-text available
Purpose The aim of this study was to develop and evaluate a novel, automated speech-in-noise test viable for widespread in situ and remote screening. Method Vowel–consonant–vowel sounds in a multiple-choice consonant discrimination task were used. Recordings from a professional male native English speaker were used. A novel adaptive staircase procedure was developed, based on the estimated intelligibility of stimuli rather than on theoretical binomial models. Test performance was assessed in a population of 26 young adults (YAs) with normal hearing and in 72 unscreened adults (UAs), including native and nonnative English listeners. Results The proposed test provided accurate estimates of the speech recognition threshold (SRT) compared to a conventional adaptive procedure. Consistent outcomes were observed in YAs in test/retest and in controlled/uncontrolled conditions and in UAs in native and nonnative listeners. The SRT increased with increasing age, hearing loss, and self-reported hearing handicap in UAs. Test duration was similar in YAs and UAs irrespective of age and hearing loss. The test–retest repeatability of SRTs was high (Pearson correlation coefficient = .84), and the pass/fail outcomes of the test were reliable in repeated measures (Cohen's κ = .8). The test was accurate in identifying ears with pure-tone thresholds > 25 dB HL (accuracy = 0.82). Conclusion This study demonstrated the viability of the proposed test in subjects of varying language in terms of accuracy, reliability, and short test time. Further research is needed to validate the test in a larger population across a wider range of languages and hearing loss and to identify optimal classification criteria for screening purposes.
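The pass/fail repeatability above is reported as Cohen's κ = .8, which corrects observed agreement between test and retest for the agreement expected by chance. A minimal sketch of that statistic for binary outcomes; the outcome sequences in the test are invented for illustration, not the study's data.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two paired sequences of binary pass(1)/fail(0)
    outcomes, e.g. test vs. retest results for the same ears."""
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n  # observed agreement
    pa = sum(a) / n                                  # pass rate, first run
    pb = sum(b) / n                                  # pass rate, second run
    pe = pa * pb + (1 - pa) * (1 - pb)               # chance agreement
    return (po - pe) / (1 - pe)
```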
... The degree to which PTA accurately measures functional hearing has long been questioned (Musiek et al., 2017, Tremblay et al., 2015, Killion and Niquette, 2000, Ferman et al., 1993, Middelweerd et al., 1990. PTA is a measure of hearing threshold levels, which, for diagnostic and rehabilitative purposes, is fundamental to audiology. ...
... This is partly explained by the role of audibility in functional hearing; a listener's hearing acuity needs to be sufficient that the signal of interest can be accessed (Pavlovic, 2006). However, hearing acuity alone cannot predict functional hearing on an individual basis, especially in noisy environments and for listeners with sensorineural hearing loss (Tremblay et al., 2015; Killion and Niquette, 2000; Plack et al., 2014; Musiek et al., 2017; Ferman et al., 1993; Middelweerd et al., 1990; Phatak et al., 2018). This widely accepted view is based on the involvement of additional factors when listening to complex sounds in complex environments (Plomp, 1986; Houtgast and Festen, 2008; Surprenant and Watson, 2001). ...
Thesis
Military personnel sometimes have to operate without being detected. In these situations, it is important that the individual has an awareness of their own detectability, both visually and acoustically, in order to operate effectively. Acoustic stealth awareness (ASA) refers to an individual’s judgement of their aural detectability with respect to a detector (target). It has been suggested that hearing impairment might affect ASA due to reduced auditory feedback; however, there has been very little research on ASA, including how to measure it and the factors that influence it. Given the potential implications of hearing impairment for an individual’s auditory fitness for duty (AFFD), a better understanding of the role of hearing in ASA and how hearing impairment impacts an individual’s AFFD is required. The aim of this study was to develop a method for investigating ASA, explore factors that affect judgements and assess the accuracy of judgements. A number of potential experimental approaches were considered to balance the need to control variables that might influence ASA (background noise, wind noise, etc.) and ecological validity. Experiment 1 investigated egocentric visual distance estimation in virtual reality (VR) and reality in an outdoor open field environment. The results showed that distance estimation was similar on average in the two viewing environments over 25 – 125 m. This suggested that VR could be used in applications where similar distance estimation to reality over these ranges is likely to be important, such as ASA. Using the same VR environment as for Experiment 1, a novel method for measuring aural detectability judgements was developed. This required subjects to: 1) view a distant target, 2) listen to a sound produced near them, and 3) judge, yes or no, whether the target would be able to detect that sound. 
Experiment 2 used this developed method to investigate aural detectability judgements for various subject-target distances (25, 50, 100 m) and stimulus types (Gaussian noise, pine cone crunching, whispered digits), measured using normal-hearing civilians. The results showed that judgements were repeatable, sensitive to sound level, sensitive to subject-target distance and dependent on the stimulus type. Experiment 3 measured the absolute thresholds for each of the sounds that people judged in the previous experiment. The results of these two experiments were combined in order to assess the accuracy of aural detectability judgements. In general, people did not make accurate judgements; rather, subjects tended to report sounds as undetectable when they probably were detectable. Judgements were found to get less accurate as subject-target distance increased, and were markedly poorer for whispered digits, suggesting that prior experience of sounds might affect judgements. It is concluded that aural detectability judgements are sensitive to relevant factors such as distance and sound level, but are generally inaccurate, at least for normal-hearing civilians; the degree of error associated with judgements is variable between people, but mostly in the direction that suggests people do not have accurate ASA. This may have implications for the military; further research is required in order to understand if these findings are replicated by military personnel and in real-life acoustic stealth situations, and how hearing impairment affects judgements.
... Many individuals report difficulty understanding speech, or face communication breakdowns, in situations like busy restaurants or cocktail parties where there are multiple sound sources in the background, despite having clinically normal hearing thresholds. This could be because hearing thresholds at different frequencies are not a good measure for accurately predicting speech recognition performance in the presence of a competing background, regardless of the age of the participant (Killion and Niquette 2000; Souza et al. 2011). In the cocktail-party effect, one needs to actively attend to the relevant messages while simultaneously filtering out and ignoring the irrelevant background noise. ...
... The results of the present study showed that there was a significant effect of aging on each of the processes assessed, including working memory capacity. As expected, older adults showed poorer performance compared to younger adults on all the tests administered, despite having normal hearing thresholds, which suggests that the role of peripheral hearing sensitivity in understanding speech in the presence of background noise is limited (Killion and Niquette 2000). ...
Article
Full-text available
The present study aimed to investigate the effect of age and suprathreshold processing on cocktail-party listening in individuals with normal hearing sensitivity. A total of 92 participants with normal hearing sensitivity were included in the study and divided into two groups based on age: 52 young normal-hearing adults in the age range of 20–40 years and 40 older normal-hearing adults in the age range of 60–80 years. Tests administered included a speech perception in noise test, spatial selective attention, gap detection thresholds, temporal modulation transfer function, inter-aural time difference, difference limen of frequency and ripple noise discrimination. Results showed that older adults performed more poorly than younger adults on all the tests. Also, temporal cues showed a stronger relation with speech perception in noise than spectral cues. This can be attributed to disrupted neural synchrony arising from poor frequency selectivity, as observed through ripple noise discrimination. Individuals rely more on temporal cues, owing to poorer frequency resolution and phase locking, and also on top-down processes such as selective attention. A degraded speech input leads them to rely more on higher cognition.
... Participants' pure tone thresholds correlated positively with performance in the QSIN and WIN, such that greater acuity in hearing was associated with better performance in the speech-in-noise tests. This is consistent with prior research demonstrating poorer speech perception in hearing-impaired populations (Saunders, Odgear, Cosgrove, & Frederick, 2018), although differences in hearing sensitivity cannot completely account for differences in speech perception (Humes & Dubno, 2010;Humes & Roberts, 1990;Killion & Niquette, 2000). Interestingly, the WIN, but not the QSIN, was significantly correlated with cognitive status as measured by the MoCA; participants with diminished word-in-noise comprehension tended to score lower on the MoCA, consistent with earlier research (Lee, Park, Kim, & Kim, 2018;Panza, Solfrizzi, & Logroscino, 2015). ...
... Large effect sizes were found in both the speech-in-noise assessments (QSIN, WIN) between younger and older adults, suggesting an overall age deficit in speech-in-noise comprehension as expected. As previously mentioned, greater hearing acuity was associated with more accurate performance in the speech-in-noise tests, consistent with prior research demonstrating poorer speech perception in hearing-impaired populations, and that differences in hearing sensitivity cannot completely account for differences in speech perception (Humes & Dubno, 2010;Humes & Roberts, 1990;Killion & Niquette, 2000;Saunders et al., 2018). Future studies should investigate reflective attention in older adults with moderate to severe hearing loss; this population may find this retro-cue paradigm challenging as hearing loss should further decrease cognitive resources available for reflective attention. ...
Article
Background/Study Context: Attention can be reflectively oriented to a visual or auditory representation in short-term memory, but it is not clear how aging and hearing acuity affect reflective attention. The purpose of the present study was to examine whether performance in auditory and visual reflective attention tasks varies as a function of participants’ age and hearing status. Methods: Young (19 to 33 years) and older adults with normal or mild to moderate hearing loss (62–90 years) completed a delayed match-to-sample task in which participants were first presented with a memory array of four different digits to hold in memory. Two digits were presented visually (left and right hemifield), and two were presented aurally (left and right ears simultaneously). During the retention interval, participants were presented with a cue (dubbed retro-cue), which was either uninformative or instructed participants to retrospectively orient their attention to either auditory short-term memory (ASTM) or visual short-term memory (VSTM). The cue was followed by another delay, after which a single item was presented (i.e., test probe) for comparison (match or no match) with the items held in ASTM and/or VSTM. Results: Overall, informative retro-cues yielded faster response times than uninformative retro-cues. The retro-cue benefit in response time was comparable for auditory- and visual-orienting retro-cues and similar in young and older adults. Regression analyses showed that only the auditory-orienting retro-cue benefit was predicted by hearing status rather than age per se. Conclusion: Both younger and older adults can benefit from visual- and auditory-orienting retro-cues, but the auditory-orienting retro-cue benefit decreases with poorer hearing acuity.
This finding highlights changes in cognitive processes that come with age, even in those with only mild-to-moderate hearing loss, and suggests that older adults’ performance in working memory tasks is sensitive to low-level auditory scene analysis (i.e., concurrent sound segregation).
... an increased number of older adults and, as a result, an increased number of people with hearing loss, which often goes undiagnosed and unaddressed. The most common form of hearing loss among adults is sensorineural loss, arising from hair cell damage in the cochlea stemming from aging, noise exposure, illness, disease, and injury [6]. Cochlear damage results in a loss of sensitivity to sound, or a decrease in hearing acuity. ...
Article
Full-text available
Age-related hearing loss is becoming more prevalent as the aging population continues to rise worldwide. Left untreated, hearing loss is a significantly under-reported concern that negatively impacts quality of life including mental health, cognition, and healthcare communication. Since many older adults may not report hearing concerns to their primary physicians, allied healthcare providers (AHPs) have an important role in recognizing communication challenges due to potential hearing loss, screening for hearing issues, and making referrals as needed. Moreover, AHPs may need to address hearing loss, at least temporarily, to provide their services when communication problems are present. The purpose of this study was to examine knowledge and practice patterns of AHPs regarding hearing loss among their patients. Results of a national survey indicated that many AHPs understand the negative implications of unaddressed hearing loss and the importance of hearing screening, but they are unsure of who, when, and how to address it. Consequently, immediate and innovative solutions are offered to AHPs to enhance communication with patients who might have unaddressed hearing loss. Moreover, findings can be used to develop training and policies to ensure that professionals are well positioned to address the complex needs of individuals with unaddressed hearing loss.
... The former are used to obtain the minimum intensity levels at which a subject can perceive acoustic stimuli presented as pure tones, thereby establishing the presence or absence of possible hearing loss, its degree, and the initial location of the causative lesion (Asociación Española de Audiología AEDA, 2002); the latter, to qualitatively assess a subject's hearing by evaluating the ability to auditorily discriminate, identify, recognize, and comprehend spoken words (Huarte and Girón, 2014). However, there is no correlation between performance on these tests and the patient's performance in real-world environments where background noise is present (Killion & Niquette, 2000; Taylor, 2003; Vermiglio et al., 2012; Wilson & Weakley, 2005). This is because there is no direct relationship between the pure-tone audiogram and a subject's discrimination ability, since the mechanisms of perception are far more complex than the neurosensory function measured by pure-tone audiometry (Huarte and Girón, 2014). ...
Article
Full-text available
Difficulty recognizing speech in the presence of background noise is one of the main complaints of people with hearing loss and/or of advanced age, making it one of the main reasons this population seeks hearing consultation. This is one of the reasons why speech-in-noise hearing tests are a useful tool in the assessment, diagnosis, and intervention of patients with hearing loss. This study aims to describe the main characteristics of speech-in-noise hearing tests, as well as the different tests available for the Spanish-speaking population. To this end, a literature review was carried out through a search of the Web of Science database and Google Scholar, including the terms "speech", "test", "noise", and "Spanish" in both Spanish and English. The search revealed 12 speech-in-noise tests for the Spanish-speaking population, 11 of them for adults. These tests differ from one another in the defining characteristics of speech-in-noise tests, as well as in their possible uses.
... Plomp (1978) demonstrated that hearing loss impacts speech perception in quiet and noisy environments [25,26,27]. Attenuation relates to the inaudibility of speech signals due to pure-tone hearing loss, while distortion concerns clarity in auditory processing [28]. ...
... This would suggest dependency on the pure-tone audiogram to interpret a patient's ability to hear speech sounds, or reliance on disability-type questionnaires (Goh et al., 2018; Rajan Devesahayam et al., 2018). Using pure-tone thresholds to make assumptions about the perception of speech sounds is especially problematic in noise, as discussed in Killion & Niquette (2000). Another method of using audiometric data to estimate accessibility to speech sounds is called the articulation index (AI). ...
Article
Full-text available
Introduction and objective: Aided pure-tone audiometry is often performed on cochlear implant (CI) users to evaluate speech sound accessibility. This study examines the relationship of speech recognition thresholds (SRT) of CI users using 1) Bisyllabic Malay Speech Audiometry (BMSA) and 2) Malay Matrix Sentence Test (MMST) with pure-tone audiometry aided thresholds (PTAAT) as well as the articulation index (AI). Methods: In this cross-sectional study, SRT measurements for all three speech tests were collected from nineteen (average device age of 4.2 ± 3.7 years) post-lingual adult CI users. Participants had a median age of 37 years old (IQR = 17.5) and PTAAT of 34 dB HL (IQR = 5.5). Results: Median SRT of BMSA and MMST were 45 dB SPL (IQR = 7.5 dB) and −4 dB SNR (IQR = 4.9 dB SNR), respectively. Spearman's rank-order correlation revealed no statistically significant correlations between average PTAAT and the SRT of BMSA (rs(19) = 0.396, p = 0.09) and MMST (rs(19) = 0.135, p = 0.582). Spearman's rank-order correlation also revealed no statistically significant correlations between average AI and the SRT of BMSA (rs(19) = −0.169, p = 0.489) and MMST (rs(19) = 0.035, p = 0.887). Conclusion: Both PTAAT and AI are poor estimators of speech perception abilities with and without competing noise. Speech tests should be routinely performed on CI users as neither aided thresholds nor AI are reliable measures of speech-sound accessibility.
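The correlations reported above are Spearman rank-order coefficients, i.e., Pearson correlations computed on ranks. As a point of reference, here is a minimal pure-Python sketch of the statistic (simplified with no tie correction; the PTAAT and SRT values are invented for illustration, not data from the study):

```python
def spearman_rho(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Simplified sketch that assumes no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: aided pure-tone averages (dB HL) vs. SRTs (dB SPL).
ptaat = [30, 32, 34, 35, 38, 40, 41]
srt = [44, 41, 47, 43, 50, 42, 46]
print(round(spearman_rho(ptaat, srt), 3))  # → 0.214
```

A weak coefficient like this, tested against its sampling distribution, is what underlies the non-significant rs values quoted in the abstract.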
... Currently, the individualized fitting of advanced hearing-aid parameters, such as the degree of noise reduction, is typically estimated by the clinician based on conversations with the patient, patient characteristics (e.g., age, lifestyle) and intuition, or based on the conventional speech audiometry in quiet (ISO 8253-3, 2012). However, a number of studies suggest poor predictive power of the speech-in-quiet measures with respect to any speech-in-noise measure (Duquesnoy, 1983;Killion et al., 2004;Killion and Niquette, 2000;Nilsson et al., 1994;Smoorenburg, 1992). Thus, speech-in-quiet tests have limited utility for the individualization of the hearingaid fitting. ...
Preprint
Full-text available
Over the last decade, multiple studies have shown that hearing-impaired listeners’ speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented binaurally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient’s threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise “waves” (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the “normalized Contrast Level” (in dB nCL) scale was defined, where 0±4 dB nCL corresponds to normal performance and the greater the positive value of dB nCL, the greater the audible contrast loss. 
Overall, the results of the present study indicate that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same set up, adding only a few minutes to the process. CITE AS 1) In text: Zaar/Simonsen et al. (2023) 2) In reference list: Zaar, J./Simonsen L. B., Sanchez-Lopez, R., and Laugesen, S. (2023): “The Audible Contrast Threshold (ACT ™ ) test: a clinical spectro-temporal modulation detection test,” medRxiv.
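The refinement step described above, where tracked yes/no responses are post-processed with a logistic function, can be sketched as a maximum-likelihood fit of a two-parameter psychometric function. The grid ranges, step sizes, and trial data below are invented for illustration and are not the ACT procedure's actual parameters:

```python
import math

def logistic(x, x50, slope):
    """Probability of detecting the target at stimulus level x (dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (x - x50)))

def fit_threshold(trials):
    """Maximum-likelihood grid search for the midpoint (50% point) and slope
    of a logistic psychometric function. trials: list of (level_dB, detected)."""
    best_params, best_ll = None, -math.inf
    for x50 in [t / 10 for t in range(-200, 201)]:    # -20..20 dB, 0.1 dB steps
        for slope in [s / 10 for s in range(2, 31)]:  # 0.2..3.0 per dB
            ll = sum(
                math.log(max(logistic(x, x50, slope) if hit
                             else 1 - logistic(x, x50, slope), 1e-12))
                for x, hit in trials)
            if ll > best_ll:
                best_params, best_ll = (x50, slope), ll
    return best_params

# Hypothetical tracking data: mostly misses at low levels, hits at high levels.
trials = [(-6, False), (-4, False), (-2, False), (0, False),
          (0, True), (2, True), (2, False), (4, True), (4, True),
          (6, True), (8, True)]
x50, slope = fit_threshold(trials)
print(round(x50, 1))  # estimated 50% detection point in dB
```

Fitting the whole response track this way uses every trial, which is why it can refine the coarser estimate produced by the Hughson-Westlake tracking rule alone.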
... In particular, after the 2000 study by Killion et al. (Killion & Niquette, 2000), which reported no correlation between pure-tone thresholds and hearing performance in the presence of noise, and hence that pure-tone audiograms are inadequate for evaluating individuals with hearing loss, many studies have been presented in the literature aimed at a complete assessment of functional hearing. Functional hearing can be defined as auditory performance in normal daily life; this performance includes skills such as sound perception, discrimination, and localization. ...
Article
Objective: The aim of this study was to develop a fast and practical Turkish Speech-Understanding-in-Noise Test (GKAT) for use in adult hearing screening. Materials and Methods: The GKAT was constructed by embedding 25 phonetically balanced monosyllabic words in multi-talker babble at fixed signal-to-noise ratios. The signal was fixed at 70 dB sound pressure level, and the noise level was varied to create six signal-to-noise ratios (SNRs): +10 dB, +6 dB, +3 dB, 0 dB, -3 dB, -6 dB. 106 individuals aged 20-60 years were included in the study. Results: Mean GKAT values were established for the 20-29, 30-39, 40-49, and 50-60 age groups at SNRs of +10, +6, +3, 0, -3, and -6 dB. Within each age group, the change in GKAT scores with SNR was evaluated; in all age groups, the decrease in speech discrimination scores with decreasing SNR was statistically significant (p<0.01). Conclusion: A Turkish Speech-Understanding-in-Noise Test, designed to reflect auditory performance in daily life, is presented for use in adult hearing screening.
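A fixed-SNR test of this kind yields a percent-correct score at each SNR; a single SNR-50 summary can then be read off by interpolating where the performance curve crosses 50%. A minimal sketch, using invented scores rather than the test's normative values:

```python
def snr50_from_scores(points):
    """Linearly interpolate the SNR at which performance crosses 50%.
    points: list of (snr_dB, percent_correct) pairs, in any order."""
    pts = sorted(points)
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        # A crossing exists where 50% lies between adjacent scores.
        if (p0 - 50) * (p1 - 50) <= 0 and p0 != p1:
            return s0 + (50 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("scores do not cross 50% within the tested SNRs")

# Hypothetical word-recognition scores at the six fixed SNRs of the test.
scores = [(10, 92), (6, 84), (3, 70), (0, 55), (-3, 38), (-6, 20)]
print(round(snr50_from_scores(scores), 1))  # → -0.9
```

Comparing such an interpolated SNR-50 against age-group norms is one way a fixed-SNR screening result can be reduced to the single-number summary used by adaptive tests.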
... In addition, we know hearing threshold levels are not good predictors of some aspects of hearing, especially the common task of listening to speech in the presence of background noise (the latter often disproportionately affected by neural hearing loss). 19,33 This is because the impaired auditory system not only results in decreased audibility, as revealed by elevated hearing thresholds, but also poorer discrimination. 34 The significance of this is that even when speech and noise have different frequencies, the brain is unable to untangle the speech from the noise. ...
Article
Full-text available
Background Evidence on hearing outcome measures when assessing hearing preservation following stereotactic radiosurgery (SRS) for adults with vestibular schwannoma (VS) has not previously been collated in a structured review. Objective The objective of the present study was to perform a scoping review of the evidence regarding the choice of hearing outcomes and other methodological characteristics following SRS for adults with VS. Methods The protocol was registered in the International Platform of Registered Systematic Review and Meta-Analysis Protocols (INPLASY) and reported according to the Preferred Reporting Items for Systematic Review and Meta-Analyses extension guidelines for scoping reviews. A systematic search of five online databases revealed 1,591 studies, 247 of which met the inclusion criteria. Results The majority of studies (n = 213, 86%) were retrospective cohort or case series with the remainder (n = 34, 14%) prospective cohort. Pure-tone audiometry and speech intelligibility were included in 222 (90%) and 158 (64%) studies, respectively, often summarized within a classification scheme and lacking procedural details. Fifty-nine (24%) studies included self-report measures. The median duration of follow-up, when reported, was 43 months (interquartile range: 29, 4–150). Conclusion Evidence on hearing disability after SRS for VS is based on low-quality studies which are inherently susceptible to bias. This review has highlighted an urgent need for a randomized controlled trial assessing hearing outcomes in patients with VS managed with radiosurgery or radiological observation. Similarly, consensus and coproduction of a core outcome set to determine relevant hearing and communication outcome domains is required. This will ensure that patient priorities, including communication abilities in the presence of background noise and reduced participation restrictions, are addressed.
... It is suitable for clinic as it is fast to administer and provides intuitive, standardised results that can be used to categorise a hearing loss. However, it is suggested in many studies that PTA is a poor predictor of a person's speech recognition performance in noise (Killion and Niquette, 2000;Middelweerd et al., 1990;Carhart and Tillman, 1970). ...
Thesis
Hearing aid and cochlear implant users struggle to understand speech in noisy places, such as classrooms and busy workplaces, with their performance typically being significantly worse than for normal-hearing listeners. This thesis details development of two new methods for improvement of speech-in-noise performance outcomes. The first addresses shortcomings in current techniques for assessing speech-in-noise performance and the second proposes a new intervention to improve performance. Chapters 3 and 4 present modifications to a new electrophysiological assessment method, using the temporal response function (TRF), for prediction of speech-in-noise performance. The TRF offers information not provided by behavioural speech-in-noise measures (the gold standard for speech-in-noise research and clinical assessment), which may be used for automated intervention fitting and further analysis of the mechanisms of speech-in-noise performance. Alterations to methodology for applying the TRF are proposed, which may provide the groundwork for further development of the TRF as a method for assessing speech-in-noise performance. Chapters 5 and 6 investigate the efficacy of a new intervention to improve speech-in-noise performance in cochlear implant users by providing missing sound-information through tactile stimulation on the wrists. This section focuses on developing and testing initial prototype devices that could rapidly be adapted for real-world use. These prototypes represent the first step towards the realisation of a wearable device, with accompanying results demonstrating the potential for their use in improving speech-in-noise performance. This thesis highlights two techniques that could be further developed for assessing and enhancing speech-in-noise performance, and outlines future steps to be taken for the realisation and combination of these techniques for improved treatment of the hearing impaired.
... SNR loss is neither reflected in, nor predictable from, pure-tone audiometry [6][7][8]. However, there does appear to be a relationship between the degree of hearing loss and the SNR loss. ...
Article
Full-text available
Background: There are few hearing tests in Spanish that assess speech discrimination in noise in the adult population that take into account the Lombard effect. This study presents the design and development of a Spanish hearing test for speech in noise (Prueba Auditiva de Habla en Ruido en Español (PAHRE) in Spanish). The pattern of the Quick Speech in Noise test was followed when drafting sentences with five key words each grouped in lists of six sentences. It was necessary to take into account the differences between English and Spanish. Methods: A total of 61 people (24 men and 37 women) with an average age of 46.9 (range 18–84 years) participated in the study. The work was carried out in two phases. In the first phase, a list of Spanish sentences was drafted and subjected to a familiarity test based on the semantic and syntactic characteristics of the sentences; as a result, a list of sentences was selected for the final test. In the second phase, the selected sentences were recorded with and without the Lombard effect, the equivalence between both lists was analysed, and the test was applied to a first reference population. Results: The results obtained allow us to affirm that it is representative of the Spanish spoken in its variety in peninsular Spain. Conclusions: In addition, these results point to the usefulness of the PAHRE test in assessing speech in noise by maintaining a fixed speech intensity while varying the intensity of the multi-speaker background noise. The incorporation of the Lombard effect in the test shows discrimination differences with the same signal-to-noise ratio compared to the test without the Lombard effect.
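Tests built on the QuickSIN pattern (six sentences, five key words each, SNR descending in 5 dB steps) are typically scored with a Spearman-Kärber estimate, in which SNR-50 falls out of a simple count of key words repeated correctly. A sketch under those assumptions, with invented listener data (the constants match the published QuickSIN scoring, but whether PAHRE adopts identical values is an assumption here):

```python
def spearman_karber_snr50(correct_per_snr, top_snr=25, step=5, words_per_level=5):
    """Spearman-Karber estimate of SNR-50 from a descending-SNR word count
    (QuickSIN-style: 6 sentences, 5 key words each, 25 down to 0 dB SNR)."""
    total = sum(correct_per_snr)
    return top_snr + step / 2 - step * total / words_per_level

# Hypothetical listener: key words correct at 25, 20, 15, 10, 5, 0 dB SNR.
correct = [5, 5, 4, 3, 1, 0]
snr50 = spearman_karber_snr50(correct)
snr_loss = snr50 - 2.0   # relative to the ~2 dB SNR-50 typical of normal hearers
print(snr50, snr_loss)   # → 9.5 7.5
```

With these constants the formula collapses to SNR-50 = 27.5 minus the total words correct, which is why such tests can be scored by hand in seconds.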
... All adults performed within the normal range on the BKB-SIN scale, with SNR loss values less than 3 dB. This scale is provided in the BKB-SIN manual produced by Etymotic Research (2005) and is based on data published by Killion and Niquette (2000). The child group performed significantly worse on the BKB-SIN measure than adults, requiring a higher average SNR of 3.7 dB (vs. the adult −1.75 dB) to correctly repeat 50% of the target words (SNR 50; U = 132, Z = 4.07, p < .001; ...
Article
Full-text available
Purpose This study investigated whether sensory inhibition in children may be associated with speech perception-in-noise performance. Additionally, gating networks associated with sensory inhibition were identified via standardized low-resolution brain electromagnetic tomography (sLORETA), and the detectability of the cortical auditory evoked potential (CAEP) N1 response was enhanced using a 4- to 30-Hz bandpass filter. Method CAEP gating responses, reflective of inhibition, were evoked via click pairs and recorded using high-density electroencephalography in neurotypical 5- to 8-year-olds and 22- to 24-year-olds. Amplitude gating indices were calculated and correlated with speech perception in noise. Gating generators were estimated using sLORETA. A 4- to 30-Hz filter was applied to detect the N1 gating component. Results Preliminary findings indicate children showed reduced gating, but there was a correlational trend between better speech perception and decreased N2 gating. Commensurate with decreased gating, children presented with incomplete compensatory gating networks. The 4- to 30-Hz filter identified the N1 response in a subset of children. Conclusions There was a tenuous relationship between children's speech perception and sensory inhibition. This may suggest that sensory inhibition is only implicated in atypically poor speech perception. Finally, the 4- to 30-Hz filter settings are critical in N1 detectability. Significance Gating may help evaluate reduced sensory inhibition in children with clinically poor speech perception using the appropriate methodology. Cortical gating generators in typically developing children are also newly identified.
... Older listeners suffer from impaired consonant identification, even when using hearing aids [33][34][35]. Although the speech signal is fully audible, intelligibility might not be restored entirely [31,36], and deficits remain at a relatively high SNR [18]. The common complaint, 'I can hear you, but I cannot understand what you said,' indicates SIN deficits in audible speech. ...
Article
Full-text available
Acoustic-phonetic speech training mitigates confusion between consonants and improves phoneme identification in noise. A novel training paradigm addressed two principles of perceptual learning. First, training benefits are often specific to the trained material; therefore, stimulus variability was reduced by training small sets of phonetically similar consonant–vowel–consonant syllables. Second, the training is most efficient at an optimal difficulty level; accordingly, the noise level was adapted to the participant’s competency. Fifty-two adults aged between sixty and ninety years with normal hearing or moderate hearing loss participated in five training sessions within two weeks. Training sets of phonetically similar syllables contained voiced and voiceless stop and fricative consonants, as well as voiced nasals and liquids. Listeners identified consonants at the onset or the coda syllable position by matching the syllables with their orthographic equivalent within a closed set of three alternative symbols. The noise level was adjusted in a staircase procedure. Pre–post-training benefits were quantified as increased accuracy and a decrease in the required signal-to-noise ratio (SNR) and analyzed with regard to the stimulus sets and the participant’s hearing abilities. The adaptive training was feasible for older adults with various degrees of hearing loss. Normal-hearing listeners performed with high accuracy at lower SNR after the training. Participants with hearing loss improved consonant accuracy but still required a high SNR. Phoneme identification improved for all stimulus sets. However, syllables within a set required noticeably different SNRs. Most significant gains occurred for voiced and voiceless stop and (af)fricative consonants. The training was beneficial for difficult consonants, but the easiest to identify consonants improved most prominently. 
The training enabled older listeners with different capabilities to train and improve at an individual ‘edge of competence’.
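The adaptive element described above, where the noise level tracks the participant's competency, can be illustrated with a simple 1-down/1-up staircase that converges on the 50% point of the psychometric function. All parameters and the simulated listener below are invented for illustration and are not the study's actual procedure:

```python
import random

def staircase_snr50(respond, start_snr=10.0, step=2.0, reversals_needed=8):
    """1-down/1-up adaptive staircase: the SNR is lowered after a correct
    response and raised after an error, so the track oscillates around the
    50% point. respond(snr) returns True for a correct identification."""
    snr, direction, reversals = start_snr, 0, []
    while len(reversals) < reversals_needed:
        new_dir = -1 if respond(snr) else 1      # harder if correct, easier if not
        if direction and new_dir != direction:   # direction change = reversal
            reversals.append(snr)
        direction = new_dir
        snr += new_dir * step
    return sum(reversals[2:]) / len(reversals[2:])  # mean of later reversals

# Hypothetical listener whose true 50% point is 0 dB SNR.
random.seed(1)
def listener(snr):
    return random.random() < 1 / (1 + 10 ** (-snr / 4))

print(round(staircase_snr50(listener), 1))
```

Discarding the first reversals before averaging, as done here, is a common way to remove the bias introduced by the deliberately easy starting level.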
... The primary complaint of hearing-impaired persons is difficulty hearing in background noise. Measuring signal-to-noise ratio loss (SNR loss) is important because speech understanding in noise cannot be reliably predicted from the pure-tone audiogram [1]. ...
... Historically, hearing problems that affect the processing of primarily suprathreshold sounds have been referred to by a variety of names, depending on hypotheses regarding the different underlying causes or specific source locations of the dysfunction. Thus, peripheral distortion of a suprathreshold nature has been commonly referred to as signal to noise ratio (SNR) loss (Plomp 1986;Killion & Niquette 2000), hidden hearing loss or synaptopathy (Kujawa & Liberman 2009) when the hypothesized problem resides at the synaptic junction of the hair cell and the inner ear to the primary auditory neuron of the eighth cranial nerve, and auditory processing disorder [also referred to as central auditory processing disorder (CAPD)] when the central nervous system has problems processing information that comes through the peripheral auditory system. Abnormalities in any one or more stages of auditory processing have been implicated in impaired sound localization, reduced frequency resolution, hyperacusis, tinnitus, reduced speech understanding in noise, and the distorted perception of sounds. ...
Article
Objectives: Over the past decade, U.S. Department of Defense and Veterans Affairs audiologists have reported large numbers of relatively young adult patients who have normal to near-normal audiometric thresholds but who report difficulty understanding speech in noisy environments. Many of these service members also reported having experienced exposure to explosive blasts as part of their military service. Recent studies suggest that some blast-exposed patients with normal to near-normal-hearing thresholds not only have an awareness of increased hearing difficulties, but also poor performance on various auditory tasks (sound source localization, speech recognition in noise, binaural integration, gap detection in noise, etc.). The purpose of this study was to determine the prevalence of functional hearing and communication deficits (FHCD) among healthy Active-Duty service men and women with normal to near-normal audiometric thresholds. Design: To estimate the prevalence of such FHCD in the overall military population, the performance of roughly 3400 Active-Duty service members with hearing thresholds mostly within the normal range was measured on 4 hearing tests and a brief 6-question survey to assess FHCD. Subjects were subdivided into 6 groups depending on the severity of the blast exposure (3 levels: none, far away, or close enough to feel heat or pressure) and hearing thresholds (2 levels: audiometric thresholds of 20 dB HL or better, slight elevation in 1 or more thresholds between 500 and 4000 Hz in either ear). Results: While the probability of having hearing difficulty was low (≈4.2%) for the overall population tested, that probability increased by 2 to 3 times if the service member was blast-exposed from a close distance or had slightly elevated hearing thresholds (>20 dB HL).
Service members having both blast exposure and mildly elevated hearing thresholds exhibited up to 4 times higher risk for performing abnormally on auditory tasks and more than 5 times higher risk for reporting abnormally low ratings on the subjective questionnaire, compared with service members with no history of blast exposure and audiometric thresholds ≤20 dB HL. Blast-exposed listeners were roughly 2.5 times more likely to experience subjective or objective hearing deficits than those with no-blast history. Conclusions: These elevated rates of abnormal performance suggest that roughly 33.6% of Active-Duty service members (or approximately 423,000) with normal to near-normal-hearing thresholds (i.e., H1 profile) are at some risk for FHCD, and about 5.7% (approximately 72,000) are at high risk, but are currently untested and undetected within the current fitness-for-duty standards. Service members identified as "at risk" for FHCD according to the metrics used in the present study, in spite of their excellent hearing thresholds, require further testing to determine whether they have sustained damage to peripheral and early-stage auditory processing (bottom-up processing), damage to cognitive processes for speech (top-down processing), or both. Understanding the extent of damage due to noise and blast exposures and the balance between bottom-up processing deficits and top-down deficits will likely lead to better therapeutic strategies.
... In addition to their extensive role in spatial localisation, the integration of ITD and ILD cues is also essential for segregating speech in diverse hearing environments, as in the "cocktail party" situation. Pure-tone thresholds are not sensitive enough to predict SNR loss, especially in environments with competing noise [161]. Patients with UHL have difficulty segregating speech in noisy environments and require higher SRTs for adequate speech understanding [162]. ...
Thesis
This thesis investigates different spatial hearing functions in three types of populations: normal-hearing subjects (NHS), unilateral hearing loss patients (UHL), and bilateral hearing loss patients (BHL), in order to discover the mechanisms underlying the adaptive strategies observed in UHL with acquired deafness. The main aim of the thesis is to verify whether spatial mismatch negativity (MMN) could be a neuronal marker of the spatial auditory plasticity observed in UHL patients, and whether these neural correlates are consistent with spatial auditory performance. Two types of investigations were applied to 20 NHS, 21 UHL, and 14 BHL. The first is a sound-source identification task measured by the root mean square error (RMS). The second is an electroencephalography (EEG) study in which we analysed the amplitude and latency of the MMN. The MMN is an auditory evoked potential that reflects the brain's ability to detect a change in one physical property of a sound. We used a standard sound in a reference position (50°) with three deviations from the standard (10°, 20°, and 100°), in binaural and monaural conditions. UHL patients were divided into three groups according to their spatial performance. The group of good performers (UHL {low rms}) showed better RMS scores than NHS with earplugs (NHS-mon), with performances similar to those of NHS in the binaural condition. A progressive increase of the MMN with the angle of deviation from the standard was noted in all groups, with a significant reduction of MMN amplitude in monaural NHS when the earplug was applied on the side ipsilateral to the standard. The MMN varied consistently with the behavioural observations: UHL {low rms} patients had larger MMN amplitudes than monaural NHS, similar to those of binaural NHS. UHL patients thus have adaptive spatial auditory strategies.
Our study demonstrates that the spatial auditory plasticity that occurs after deafness can be reflected by the MMN. Neural observations (i.e. the MMN) are correlated with behavioral observations of spatial source identification. This means that the spatial cortical plasticity that took place in these subjects is not limited to identification of the sound source, but extends to more complex mechanisms, such as deviance detection and short-term memory, that are involved in spatial discrimination.
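The localisation metric used in the thesis above, the root mean square (RMS) error between target and response azimuths, is straightforward to compute. A minimal sketch, with all function and variable names hypothetical:

```python
import math

def rms_localization_error(targets_deg, responses_deg):
    """Root-mean-square error (in degrees) between target and response
    azimuths in a sound source identification task."""
    if len(targets_deg) != len(responses_deg):
        raise ValueError("target and response lists must match in length")
    sq_errors = [(r - t) ** 2 for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

For example, responses of 50° and 70° to targets at 50° and 60° give an RMS error of √50 ≈ 7.07°.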
... Although a correlation can be found between PTA thresholds and the SRT in noise, the variation in SRT between individuals with similar PTA configurations cannot be fully explained by the PTA thresholds (e.g. Lyregaard, 1982; Killion and Niquette, 2000). Non-auditory factors such as cognitive abilities, including attention and working memory, also play a role when listening to SIN. These factors cannot be captured by PTA, which uses only pure tones. ...
Thesis
Sensorineural hearing loss (SNHL) in children can have serious long-term effects if not identified early after onset. To identify SNHL in children early, childhood hearing screening programmes at birth and at later ages have been recommended. In the Kingdom of Saudi Arabia (KSA), until recently, there was no nationwide commitment to a universal neonatal hearing screening programme (UNHS). Even if full coverage were achieved, UNHS might fail to identify a high percentage of cases of SNHL, so further options for hearing screening later in childhood are needed. To better understand the situation of children with SNHL in the KSA, the first two studies of this research estimated the age of identification (AOI) of SNHL in children in the KSA prior to the UNHS, which had not been investigated before, and examined the characteristics of the affected children. The two cross-sectional studies were a review of children's medical records (n=1226) and surveys of parents (n=174). The main findings included: (1) a high AOI of SNHL in children (around 3 years old, range approximately 0.1-10 years); (2) a strong association between SNHL and consanguinity, which is known to cause late-onset SNHL (present in >70% of the children with SNHL); (3) parental concern about a child's hearing identified for the first time as a predictor of SNHL in Saudi children; and (4) parents finding it difficult to access audiology clinics. These findings indicate that late-onset SNHL is likely to be prevalent among children in the KSA, and that difficulty accessing audiology clinics may play a role in delaying identification. This motivated the development of a hearing screening tool that is suitable for young children, is easily accessible, can be used by non-audiologists, and is sensitive to SNHL.
The developed test, called the Paediatric Arabic Auditory Speech Test (PAAST), is a speech-in-noise (SIN) test inspired by the McCormick Toy Discrimination Test, which suits children from the age of 2 years onwards. It was implemented in a downloadable iPad application that runs the test automatically, widening the possibilities for implementing a hearing screening test. The development of the PAAST included five studies to determine the following: (1) pre-recorded speech material equalised for intelligibility; (2) the test-retest reliability of the PAAST with normal-hearing Arabic-speaking adults (n=30); (3) the normal range and test-retest reliability of the PAAST with normal-hearing Arabic-speaking children (n=40, 3-12 years old); (4) typical results in children with mild to severe SNHL (n=16, 6-14 years old) in the KSA; and (5) the usability and feasibility of the tablet application at home and school by parents (n=26) and teachers (n=24) in the KSA. The studies also explored the normal developmental trajectory of speech intelligibility and the supra-threshold effects of SNHL in Arabic-speaking children. The PAAST showed good test-retest reliability when tested with adults, older children (>6-12 years), and young children (3-6 years) (intra-class correlation coefficient = 0.7, 0.8, and 0.7, respectively). It could differentiate between normal-hearing and hearing-impaired children. A high system usability score (>80/100) was found for parents and teachers. It seems feasible to use the PAAST as a hearing test in schools in the KSA. There was also probably a developmental age effect on the performance of Arabic-speaking normal-hearing children on SIN tests, and Arabic-speaking children with SNHL showed substantial difficulties in performing SIN tests, which had not been documented previously.
In general, it could be concluded that the PAAST provides a useable platform for speech intelligibility testing in noise, and the SIN test seems to provide a useful assessment of speech intelligibility in Arabic-speaking children with SNHL.
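Adaptive SIN tests of the kind described above typically track the 50% correct point by raising the SNR after an error and lowering it after a correct response. A minimal one-up/one-down staircase sketch, illustrative only and not the PAAST's actual procedure (all names hypothetical):

```python
def staircase_snr50(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """Estimate the SNR for 50% correct with a one-up/one-down adaptive
    track: lower the SNR after a correct response, raise it after an
    error, and average the SNRs at which the track reverses direction."""
    snr, last_dir, reversals = start_snr, None, []
    while len(reversals) < n_reversals:
        direction = -1 if respond(snr) else +1   # correct -> make it harder
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)                # track reversed here
        last_dir = direction
        snr += direction * step
    return sum(reversals) / len(reversals)
```

With a deterministic simulated listener who is correct whenever the SNR is at or above 0 dB, the track settles into alternating reversals of 0 and -2 dB, so the estimate converges to -1 dB (threshold minus half a step).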
... Most studies of HFHL and NIHL have used pure-tone audiometry as the sole basis of measurement. However, pure-tone audiometry may not be the best, and should not be the only, predictor of the difficulty a person will have listening to speech in a challenging environment (Killion & Niquette 2000; Vermiglio et al. 2012; Moore et al. 2014, 2017). Speech signals have a spectrotemporal complexity that changes with limited predictability over time, and accurate speech coding and recognition requires multiple auditory discrimination skills (Summers et al. 2013). ...
Article
Objectives: Hearing loss is most commonly observed at high frequencies. High-frequency hearing loss (HFHL) precedes and predicts hearing loss at lower frequencies. It was previously shown that an automated, self-administered digits-in-noise (DIN) test can be sensitized for detection of HFHL by low-pass filtering the speech-shaped masking noise at 1.5 kHz. This study was designed to investigate whether sensitivity of the DIN to HFHL can be enhanced further using low-pass noise filters with higher cutoff frequencies. Design: The US-English digits 0 to 9, homogenized for audibility, were binaurally presented in different noise maskers, including one broadband and three low-pass (cutoff at 2, 4, and 8 kHz) filtered speech-shaped noises. DIN speech reception thresholds (SRTs) were obtained from 60 normal-hearing (NH) and 40 mildly hearing-impaired listeners with bilateral symmetric sensorineural hearing loss. Standard and extended high-frequency audiometric pure-tone averages (PTAs) were compared with the DIN-SRTs. Results: Narrower masking noise bandwidth generally produced better (more sensitive) mean DIN-SRTs. There were strong and significant correlations between SRT and PTA in the hearing-impaired group. The lower-frequency PTA-LF (0.5, 1, 2, 4 kHz) had the highest correlation and the steepest slope with SRTs obtained with the 2-kHz filter. The higher-frequency PTA-HF (4, 8, 10, 12.5 kHz) correlated best with SRTs obtained with 4- and 8-kHz filtered noise. The 4-kHz low-pass filter also had the highest sensitivity (92%) and equally highest (with the 8-kHz filter) specificity (90%) for detecting an average PTA-HF of 20 dB or more. Conclusions: Of the filters used, DIN sensitivity to higher-frequency hearing loss was greatest using the 4-kHz low-pass filter. These results suggest that low-pass filtered noise may be usefully substituted for broadband noise to improve earlier detection of HFHL using DIN.
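Screening accuracy figures like the 92% sensitivity and 90% specificity reported above come from comparing the screening measure against an audiometric criterion (here, an average PTA-HF of 20 dB or more). A minimal sketch of that tally, with hypothetical names and cutoffs:

```python
def sensitivity_specificity(srt_db, pta_hf_db, srt_cutoff, pta_criterion=20.0):
    """Classify each ear as screen-positive when its DIN-SRT is at or
    above a cutoff, as truly impaired when its PTA-HF meets the
    audiometric criterion, and tally the resulting 2x2 table."""
    tp = fn = tn = fp = 0
    for srt, pta in zip(srt_db, pta_hf_db):
        impaired = pta >= pta_criterion
        positive = srt >= srt_cutoff
        if impaired and positive:
            tp += 1
        elif impaired:
            fn += 1
        elif positive:
            fp += 1
        else:
            tn += 1
    # Assumes at least one impaired and one unimpaired ear in the sample.
    return tp / (tp + fn), tn / (tn + fp)
```

In practice, sweeping `srt_cutoff` over a range of values and plotting sensitivity against 1 - specificity traces out the ROC curve from which the AUC-style comparisons are made.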
... The effects of cochlear damage are multifaceted, including impairments in absolute sensitivity, frequency selectivity, loudness perception and intensity discrimination, temporal resolution, temporal integration, pitch perception and frequency discrimination, as well as sound localization and other aspects of binaural and spatial hearing (Moore 1996). People with hearing loss have a higher susceptibility to interference from noise or competing sources, often requiring 5 to 10 dB better signal-to-noise ratios (SNRs) than normal-hearing (NH) listeners to achieve the same speech-in-noise performance (e.g., Killion & Niquette 2000), even if the loss of sensitivity/audibility is rectified (Humes 2007). ...
Article
Full-text available
An augmented reality (AR) platform combines several technologies in a system that can render individual "digital objects" that can be manipulated for a given purpose. In the audio domain, these may, for example, be generated by speaker separation, noise suppression, and signal enhancement. Access to the "digital objects" could be used to augment auditory objects that the user wants to hear better. Such AR platforms in conjunction with traditional hearing aids may contribute to closing the gap for people with hearing loss through multimodal sensor integration, leveraging extensive current artificial intelligence research, and machine-learning frameworks. This could take the form of an attention-driven signal enhancement and noise suppression platform, together with context awareness, which would improve the interpersonal communication experience in complex real-life situations. In that sense, an AR platform could serve as a frontend to current and future hearing solutions. The AR device would enhance the signals to be attended, but the hearing amplification would still be handled by hearing aids. In this article, suggestions are made about why AR platforms may offer ideal affordances to compensate for hearing loss, and how research-focused AR platforms could help toward better understanding of the role of hearing in everyday life.
... The neurophysiological factors that influence SIN recognition are not well understood. Regardless of age, the audiogram, the conventional behavioral test of hearing, does not always predict speech perception skills, especially in background noise [7]. Event-related potentials (ERPs), phase-locking value, power spectral density (PSD), and connectivity analysis are commonly used to understand brain function and to distinguish normal from disordered status [8]. ...
Conference Paper
Speech-in-noise (SIN) comprehension declines with age, and these declines have been related to social isolation, depression, and dementia in the elderly. In this work, we build models to distinguish normal hearing (NH) from mild hearing impairment (HI) using several families of machine-learning classifiers. We compute band-wise power spectral density (PSD) of source-derived EEGs as features for support vector machine (SVM), k-nearest neighbors (KNN), and AdaBoost classifiers, and compare their performance while listeners perceived clear or noise-degraded sounds. Combining the features of all frequency bands obtained from the whole brain, the SVM registered the best performance. The group classification accuracy was 94.90% [area under the curve (AUC) 94.75%; F1-score 95.00%] for clear speech, and 92.52% (AUC 91.12%; F1-score 93.00%) for noise-degraded speech. Remarkably, individual frequency-band analysis on whole-brain data showed that the γ band segregated the groups best, with an accuracy of 96.78% (AUC 96.79%) for clear speech and a slightly lower accuracy of 93.62% (AUC 93.17%) for noise-degraded speech using the SVM. A separate analysis of left-hemisphere (LH) and right-hemisphere (RH) data showed that LH activity is a better predictor of group membership than RH activity. These results are consistent with the dominance of the LH in auditory-linguistic processing. Our results demonstrate that spectral features of the γ band could be used to differentiate NH and HI older adults in terms of their ability to process speech sounds. These findings could inform the design of attentional and listening assistive devices that selectively amplify specific sounds.
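The pipeline described above, band-wise PSD features feeding an SVM, can be sketched with standard scientific-Python tools (scipy's Welch estimator and scikit-learn's SVC). This is an illustrative reconstruction, not the authors' code; the band edges, sampling rate, and all names are assumptions:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def band_power(eeg, fs, band=(30.0, 50.0)):
    """Sum the Welch PSD over a frequency band (here a nominal gamma
    range) to obtain one feature per channel/trial."""
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 256))
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.sum(psd[mask]))

# Hypothetical demo: trials with vs. without a strong 40 Hz component.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
trials = [np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(fs)
          for _ in range(10)]
trials += [0.1 * rng.standard_normal(fs) for _ in range(10)]
X = np.array([[band_power(tr, fs)] for tr in trials])  # one gamma feature each
y = np.array([1] * 10 + [0] * 10)

clf = SVC(kernel="rbf").fit(X, y)  # RBF SVM, as in the paper
```

A real pipeline would extract one such feature per EEG band and source/channel, concatenate them, and evaluate with held-out data rather than training accuracy.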
... The standard audiometric test battery does not measure speech intelligibility in noise (8). The SRT is a test based on the signal-to-noise ratio. Although they are currently used to diagnose hearing loss, speech tests with single-syllable word lists do not reflect everyday listening conditions. ...
... For this reason, short meaningless words have to be used to avoid the influence of word combinations and the rules of grammar and semantics of the language used. The tone audiogram cannot predict the intelligibility of speech communication under real-life conditions (i.e., in environments with background noise) (Killion and Niquette, 2000). Therefore, numerous methods for measuring hearing loss have been developed in which the primary indicator is speech intelligibility in noisy conditions. ...
Chapter
Full-text available
Recently, many so-called HINT (Hearing in Noise Test) screening methods for hearing checks have appeared. Common to all these methods is that they try to simulate real-life conditions, i.e. speech communication in the presence of ambient noise. QuickSIN is one of the HINT methods and is very simple to use. Because of that simplicity, QuickSIN can be used as a self-testing method for checking hearing impairment (a personal testing method). The method does not require expensive professional audio equipment or complex ambient conditions and can therefore be performed at home. A personal computer with an audio card and headphones is quite enough for performing the QuickSIN test. The method is suitable for implementation over the internet, or as an application for mobile phones. In addition to these benefits, the method has some disadvantages. One disadvantage of the QuickSIN test is that a corpus of test sentences must be created for each particular language in order to assess the degree of hearing impairment. In the process of creating the test sentence corpus for the Serbian language, a whole series of problems appeared. The most important among them are problems related to the creation of test sentences for the QuickSIN method: the selection of keywords, the composition of sentences, the creation of masking noise, the definition of criteria for qualifying hearing impairment, and others. All these problems are explained and discussed in detail. As a result of our work, we obtained a corpus of 126 sentences: twelve blocks of sentences intended for hearing impairment testing and nine sets of sentences for demonstration and subjects' training.
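QuickSIN's scoring is simple enough to express directly: six sentences are presented at SNRs from 25 down to 0 dB in 5 dB steps with five key words each, and the Spearman-Kärber estimate gives SNR-50 = 27.5 - (total words correct); SNR loss is this value minus the roughly 2 dB normal-hearing SNR-50. A sketch of that scoring (the constants are from the English-language QuickSIN literature; a Serbian or other-language adaptation may differ):

```python
def quicksin_score(correct_per_sentence):
    """Score one QuickSIN list: six sentences at 25..0 dB SNR in 5 dB
    steps, five key words each.  Returns (SNR-50, SNR loss) in dB."""
    if len(correct_per_sentence) != 6 or not all(0 <= c <= 5 for c in correct_per_sentence):
        raise ValueError("expected six sentence scores of 0-5 key words each")
    total = sum(correct_per_sentence)
    snr50 = 27.5 - total      # Spearman-Karber midpoint estimate
    snr_loss = snr50 - 2.0    # relative to the ~2 dB normal-hearing SNR-50
    return snr50, snr_loss
```

For example, a listener scoring [5, 5, 5, 4, 2, 1] (22 key words correct) has an SNR-50 of 5.5 dB and an SNR loss of 3.5 dB, i.e. a mild SNR loss.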
... This was necessary because the sensitivities for Noise1(LL) and Noise2(HL) were higher than for the other noise types, and the comparisons of noise types in the present study were conducted well above classical just-noticeable differences. Patients with profoundly impaired hearing have difficulty hearing speech against noisy backgrounds, with thresholds elevated by 15-20 dB SNR, and the use of hearing aids improves this by ~10 dB SNR (Killion and Niquette, 2000; Taylor, 2003). Thus, the 9 ± 4 dB (mean ± SD) improvements in our present study indicate a clinically significant effect size. ...
Article
Full-text available
During critical periods, neural circuits develop to form receptive fields that adapt to the sensory environment and enable optimal performance of relevant tasks. We hypothesized that early exposure to background noise can improve signal-in-noise processing, and the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task. We raised rat pups in different spectro-temporal noise statistics during their auditory critical period. As adults, they showed enhanced behavioral performance in detecting vocalizations in noise. Concomitantly, encoding of vocalizations in noise in the primary auditory cortex improves with noise-rearing. Significantly, spectro-temporal modulation plasticity shifts cortical preferences away from the exposed noise statistics, thus reducing noise interference with the foreground sound representation. Auditory cortical plasticity shapes receptive field preferences to optimally extract foreground information in noisy environments during noise-rearing. Early noise exposure induces cortical circuits to implement efficient coding in the joint spectral and temporal modulation domain.
... A similar limitation is found with the audiogram. The audiogram can provide a fairly accurate measure of an auditory system's ability to respond to various frequencies at various intensities, but it cannot reliably be used to gauge how much difficulty a person will have in listening in a noisy environment (Killion & Niquette, 2000). Structure-level assessments of auditory function will not necessarily accurately measure the activity level of auditory ability. ...
Article
Looking at hearing loss through the WHO-ICD model of disability reveals that auditory interventions do not necessarily address all of the components of auditory disability. Auditory training has been proposed as a solution to address activity-level deficits. The purpose of this study was to examine structure- and activity-level changes resulting from auditory training in noise for normal-hearing individuals. Thirty adults with normal hearing were placed into three experimental groups: a group engaging in active auditory training in noise, a group listening to speech material in noise, and a control group that performed no activity in noise. The rate of and errors made during the auditory training were measured. Performance on a word-recognition task in noise (the QuickSIN), electrophysiological changes in an analysis of portions of the frequency following response (FFR), and self-reported measures from the Speech and Spatial Qualities of Sound (SSQ) were all collected before and after training to monitor changes. Results show significant improvement on the auditory training task in terms of both rate and number of errors made. ANOVAs reveal a significant effect of test condition on QuickSIN performance. There were mixed results in analyses of the differences in the electrophysiological measurements. There were no significant effects of training condition on answers to the SSQ. As in other auditory training studies, results are mixed. This research serves as a proof of concept on normal-hearing subjects; the next step is to examine disordered populations.
... This option should be given serious consideration given the claim made by some (van Rooij & Plomp 1990, 1992) that the contribution of cognition to speech perception in older participants, although real, is relatively small. Likewise, PTA tests, or modifications thereof, might simply not be the right kind of measure for assessing everyday listening, a possibility that has been voiced by hearing researchers (Plomp 1978; Killion & Niquette 2000; Heinrich et al. 2016), practitioners, and patients. ...
Article
Full-text available
Objectives: Cognitive load (CL) impairs listeners' ability to comprehend sentences, recognize words, and identify speech sounds. Recent findings suggest that this effect originates in a disruption of low-level perception of acoustic details. Here, we attempted to quantify such a disruption by measuring the effect of CL (a two-back task) on pure-tone audiometry (PTA) thresholds. We also asked whether the effect of CL on PTA was greater in older adults, on account of their reduced ability to divide cognitive resources between simultaneous tasks. To specify the mechanisms and representations underlying the interface between auditory and cognitive processes, we contrasted CL requiring visual encoding with CL requiring auditory encoding. Finally, the link between the cost of performing PTA under CL, working memory, and speech-in-noise (SiN) perception was investigated and compared between younger and older participants. Design: Younger and older adults (44 in each group) did a PTA test at 0.5, 1, 2, and 4 kHz pure tones under CL and no CL. CL consisted of a visual two-back task running throughout the PTA test. The two-back task involved either visual encoding of the stimuli (meaningless images) or subvocal auditory encoding (a rhyme task on written nonwords). Participants also underwent a battery of SiN tests and a working memory test (letter number sequencing). Results: Younger adults showed elevated PTA thresholds under CL, but only when CL involved subvocal auditory encoding. CL had no effect when it involved purely visual encoding. In contrast, older adults showed elevated thresholds under both types of CL. When present, the PTA CL cost was broadly comparable in younger and older adults (approximately 2 dB HL). The magnitude of PTA CL cost did not correlate significantly with SiN perception or working memory in either age group. In contrast, PTA alone showed strong links to both SiN and letter number sequencing in older adults. 
Conclusions: The results show that CL can exert its effect at the level of hearing sensitivity. However, in younger adults, this effect is only found when CL involves auditory mental representations. When CL involves visual representations, it has virtually no impact on hearing thresholds. In older adults, interference is found in both conditions. The results suggest that hearing progresses from engaging primarily modality-specific cognition in early adulthood to engaging cognition in a more undifferentiated way in older age. Moreover, hearing thresholds measured under CL did not predict SiN perception more accurately than standard PTA thresholds.
... More generally, standard pure-tone audiometry is unable to predict with precision the level of difficulty a person will have listening to speech in a challenging environment (17, 20-22). Speech signals have a spectrotemporal complexity that changes with limited predictability over time. ...
Article
Full-text available
Significance Understanding speech in noisy environments is an essential communication skill that varies widely between individuals and is poorly understood. We show here that extended high-frequency (EHF) hearing, beyond the currently tested range of clinical audiometry, contributes to speech perception in noise. EHF hearing loss is common in otherwise normally hearing young adults and predicts self-reported difficulty hearing speech in noise. The data suggest that EHF hearing is a long sought missing link between audiometry and speech perception and may be a sensitive predictor of age-related hearing loss much earlier in life when preventive measures can be effectively deployed.
Objectives This study compared a simplified in situ self-administered hearing screening test, conducted with a neckband-type self-fitting device, with conventional pure-tone audiometry. It evaluated the maximum speech-shaped noise level for screening (MSNLS), crucial for evaluating the feasibility of this in situ screening test in quiet environments. Methods This study included 30 adults with normal hearing and 30 adults with mild to moderately severe hearing impairment. A binaural neckband-type self-fitting device was developed. The results of an in situ hearing screening test conducted using the self-fitting device were compared with those obtained using traditional pure-tone audiometry conducted using TDH-50 earphones. Subsequently, MSNLS was determined by assessing noise-masking effects on screening outcomes. All tests were conducted in an audiometric booth, with the hearing screening test conducted in the booth with the door open. Results Strong positive correlations were observed between the results of pure-tone audiometry and those of hearing screening tests across all test frequencies, with the strongest correlation observed at 2000 Hz (rs = 0.793, P < .001) and the weakest correlation observed at 500 Hz (rs = 0.625, P < .001). Comparisons of screening tests results with pure-tone thresholds across all test frequencies revealed differences of approximately 10 dB HL for 80% of all ears. The sensitivity and specificity of the hearing screening test in detecting candidates with hearing loss (>30 dB HL) who are suitable for this device were 93% and 90%, respectively. The hearing-impaired group exhibited MSNLSs, such as 57 dB SPL at 500 Hz, exceeding ambient noise levels in an empty classroom. Conclusion The in situ hearing screening test, conducted using a self-fitting device, exhibited reasonable accuracy for self-fitting scenarios in general quiet environments. 
This test can be used for monitoring mild to moderate hearing loss or fluctuating hearing loss, such as that associated with Ménière’s disease.
Article
Full-text available
We aimed to test whether hearing speech in phonetic categories (as opposed to a continuous/gradient fashion) affords benefits to “cocktail party” speech perception. We measured speech perception performance (recognition, localization, and source monitoring) in a simulated 3D cocktail party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1–4 talkers) and via forward vs. time-reversed maskers, the latter promoting a release from masking. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more/less categorical hearing and thus test putative links between categorization and real-world speech-in-noise skills. We first show cocktail party speech recognition accuracy and speed decline with additional competing talkers and amidst forward compared to reverse maskers. Dividing listeners into “discrete” vs. “continuous” categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show the degree of release from masking experienced at the cocktail party is predicted by their degree of categoricity in phoneme labeling and not high-frequency audiometric thresholds; more discrete listeners make less effective use of time-reversal and show less release from masking than their gradient responding peers. Our results suggest a link between speech categorization skills and cocktail party processing, with a gradient (rather than discrete) listening strategy benefiting degraded speech perception. These findings suggest that less flexibility in binning sounds into categories may be one factor that contributes to figure-ground deficits.
Article
Full-text available
Background The difficulty in understanding speech becomes worse in the presence of background noise for individuals with sensorineural hearing loss. Speech-in-noise tests help to assess this difficulty. Previously, the Tulu sentence lists have been assessed for their equivalency to measure speech recognition threshold in noise among individuals with normal hearing. The present study aimed to determine the equivalence and test–retest reliability of Tulu sentence lists for measuring speech recognition threshold in noise among individuals with sensorineural hearing loss. Results The SNR-50 was measured for 13 sentence lists in 20 Tulu-speaking individuals with mild to moderate sensorineural hearing loss. Retesting was done by administering all lists to eight participants after an average of 25.25 days (SD = 19.44). Friedman test was administered to check for the list equivalency. Intraclass correlation coefficient was measured to assess test–retest reliability. A regression analysis was performed to understand the influence of pure-tone average on SNR-50. A Kruskal–Wallis test was administered to check the statistical significance of the SNR-50 obtained across different configurations and degrees of hearing loss. Nine of the 13 Tulu sentence lists (lists 2, 4, 5, 6, 9, 10, 11, 12, and 13) were equivalent in individuals with sensorineural hearing loss. The mean SNR-50 for these nine lists was 1.13 dB (SD = 2.04 dB). The test–retest reliability was moderate (ICC = 0.727). The regression analysis showed that a pure-tone average accounted for 24.7% of the variance in SNR-50 data (p = 0.026). Individuals with mild to moderate hearing loss obtained the worst SNR-50, followed by mild and high-frequency hearing loss. Conclusion Nine Tulu sentence lists are equivalent and reliable and can be used to measure speech recognition threshold in noise among individuals with sensorineural hearing loss who are Tulu speakers.
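Test-retest reliability figures such as the ICC = 0.727 above come from a two-way ANOVA decomposition of the subjects-by-sessions score matrix. A minimal sketch of a consistency-type ICC(3,1) in the Shrout & Fleiss taxonomy, which may not be the exact ICC form used in the study:

```python
import numpy as np

def icc_consistency(scores):
    """ICC(3,1), two-way mixed, consistency: `scores` is an
    (n_subjects, k_sessions) array, e.g. test vs. retest SNR-50."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because this form discounts a constant session offset, a uniform practice effect (every retest score shifted by the same amount) leaves the ICC at 1.0; absolute-agreement variants, ICC(2,1), would penalize it.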
Article
Full-text available
Simple Summary A significant portion of adults with clinically normal hearing sensitivity have difficulty understanding speech in background noise. Current clinical assessments fail to explain this phenomenon, prompting the exploration of auditory mechanisms beyond those covered by routine clinical testing. One mechanism important for separating sound sources—a key task for understanding speech-in-noise—is temporal processing, or the extraction and organization of acoustic timing characteristics. Here, we investigate the hypothesis that deficits in temporal processing contribute to difficulties in understanding speech-in-noise. We explore this in middle-aged adults—an under-investigated group, despite their high prevalence of speech-in-noise difficulties. In this study, we found that differences in speech-in-noise abilities were associated with deficits in two aspects of temporal processing: the neural encoding of periodic speech features, such as pitch, and perceptual sensitivity to rapid acoustic timing differences between ears. Interestingly, the use of these mechanisms was task-dependent, suggesting various aspects of temporal processing differentially contribute to speech-in-noise perception based on the characteristics of the listening environment. These findings contribute to our overall understanding of which auditory mechanisms play a role in speech-in-noise difficulties in normal hearing listeners, and can inform future clinical practice to serve this population. Abstract Auditory temporal processing is a vital component of auditory stream segregation, or the process in which complex sounds are separated and organized into perceptually meaningful objects. Temporal processing can degrade prior to hearing loss, and is suggested to be a contributing factor to difficulties with speech-in-noise perception in normal-hearing listeners. 
The current study tested this hypothesis in middle-aged adults—an under-investigated cohort, despite being the age group where speech-in-noise difficulties are first reported. In 76 participants, three mechanisms of temporal processing were measured: peripheral auditory nerve function using electrocochleography, subcortical encoding of periodic speech cues (i.e., fundamental frequency; F0) using the frequency following response, and binaural sensitivity to temporal fine structure (TFS) using a dichotic frequency modulation detection task. Two measures of speech-in-noise perception were administered to explore how contributions of temporal processing may be mediated by different sensory demands present in the speech perception task. This study supported the hypothesis that temporal coding deficits contribute to speech-in-noise difficulties in middle-aged listeners. Poorer speech-in-noise perception was associated with weaker subcortical F0 encoding and binaural TFS sensitivity, but in different contexts, highlighting that diverse aspects of temporal processing are differentially utilized based on speech-in-noise task characteristics.
Article
Introduction: Difficulty in understanding speech in noise is the most common complaint of people with hearing impairment. There is thus a need for clinical tests of speech-in-noise ability, which must be validated for each language. Here, a reference dataset is presented for a quick speech-in-noise test in the French language (Vocale Rapide dans le Bruit, VRB; Leclercq, Renard & Vincent, 2018). Methods: A large cohort (N=641) was tested in a nationwide multicentric study. The cohort comprised normal-hearing individuals and individuals with a broad range of symmetrical hearing losses. Short everyday sentences embedded in babble noise were presented over a spatial array of loudspeakers. Speech level was kept constant while noise level was progressively increased over a range of signal-to-noise ratios. The signal-to-noise ratio at which 50% of keywords could be correctly reported (Speech Reception Threshold, SRT) was derived from psychometric functions. Other audiometric measures, such as audiograms and speech-in-quiet performance, were collected for the cohort. Results: The VRB test was both sensitive and reliable, as shown by the steep slope of the psychometric functions and by the high test-retest consistency across sentence lists. Correlation analyses showed that pure-tone averages derived from the audiograms explained 74% of the SRT variance over the whole cohort, but only 29% for individuals with clinically normal audiograms. SRTs were then compared to recent guidelines from the French Society of Audiology (Joly et al., 2021). Among individuals who would not have qualified for hearing aid prescription based on their audiogram or speech intelligibility in quiet, 18.4% were now eligible because they displayed SRTs in noise impaired by 3 dB or more. For individuals with borderline audiograms, between 20 dB HL and 30 dB HL, the prevalence of impaired SRTs increased to 71.4%.
Finally, even though five lists are recommended for clinical use, a minute-long screening using only one VRB list detected 98.6% of impaired SRTs. Conclusion: The reference data suggest that VRB testing can be used to identify individuals with speech-in-noise impairment.
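Deriving an SRT from psychometric functions, as described above, amounts to fitting a sigmoid to percent-correct scores measured at several SNRs and reading off its 50% point. A minimal sketch with a plain least-squares grid search follows; the data points, candidate ranges, and grid resolution are illustrative assumptions, not VRB cohort values or the authors' fitting procedure:

```python
import math

def logistic(snr, srt, slope):
    """Psychometric function: proportion of keywords correct at a given SNR."""
    return 1.0 / (1.0 + math.exp(-slope * (snr - srt)))

def fit_srt(snrs, proportions):
    """Least-squares grid search for the SRT (50%-correct SNR) and slope."""
    best = (float("inf"), None, None)
    for srt10 in range(-100, 101):       # SRT candidates: -10.0 .. 10.0 dB in 0.1 dB steps
        for slope10 in range(1, 31):     # slope candidates: 0.1 .. 3.0 per dB
            srt, slope = srt10 / 10.0, slope10 / 10.0
            err = sum((logistic(s, srt, slope) - p) ** 2
                      for s, p in zip(snrs, proportions))
            if err < best[0]:
                best = (err, srt, slope)
    return best[1], best[2]

# Hypothetical proportion-correct scores at six fixed SNRs (dB)
snrs = [-9, -6, -3, 0, 3, 6]
props = [0.05, 0.16, 0.42, 0.71, 0.90, 0.97]
srt, slope = fit_srt(snrs, props)
```

The fitted slope is what the abstract's "steep slope of the psychometric functions" refers to: the steeper the function, the more precisely the 50% crossing, and hence the SRT, is pinned down.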
Article
Full-text available
Background and Aims The quick speech-in-noise (Q-SIN) test quantifies the difficulty of perceiving speech in noise by determining signal-to-noise ratio (SNR) loss. Lists composed of high-frequency words are better able to identify SNR loss. Although a Persian version of the Q-SIN with emphasis on high frequencies is available, there are no Persian Q-SIN lists composed of high-frequency words; therefore, this study aimed to develop new lists, including lists with high-frequency words, for the Q-SIN test and to determine their equivalency in normal-hearing people. The study was conducted at Tehran University of Medical Sciences. Methods The sentences were first developed, and their content validity and face validity were determined. In this regard, 36 sentences were used to make new Q-SIN lists and 36 sentences were used to make Q-SIN lists with high-frequency words. Based on the Q-SIN test development criteria, six regular lists (lists 1-6) and six lists with high-frequency words (lists 7-12) were tested on 46 people (23 males and 23 females) aged 18-35 with normal hearing. Results The content validity indices for the new and high-frequency word lists were 0.74 and 0.736, respectively. The equivalency test results showed that among the first six lists, lists 1, 2, 3, and 4 were equivalent. Among the six lists with high-frequency words, lists 7, 8, 10, and 11 were equivalent. There were no gender differences for either the regular or the high-frequency lists (P>0.05). Conclusion The equivalent Q-SIN word lists can be used with normal-hearing people in clinical practice.
Chapter
In the area of smartphone-based hearing screening, the number of available speech-in-noise tests is growing rapidly. However, the available tests are typically based on a univariate classification approach, for example using the speech recognition threshold (SRT) or the number of correct responses. There is still a lack of multivariate approaches to screen for hearing loss (HL). Moreover, none of the screening methods developed so far assess the degree of HL, despite the potential importance of this information for patient education and clinical follow-up. The aim of this study was to characterize multivariate approaches to identify mild and moderate HL using a recently developed, validated speech-in-noise test for hearing screening at a distance, namely the WHISPER (Widespread Hearing Impairment Screening and PrEvention of Risk) test. The WHISPER test is automated, minimally dependent on the listener's native language, based on an optimized, efficient adaptive procedure, and uses a multivariate approach. The results showed that age and SRT were the features with the highest performance in identifying mild and moderate HL, respectively. Multivariate classifiers using all the WHISPER features achieved better performance than univariate classifiers, reaching an accuracy of 0.82 for mild and 0.87 for moderate HL. Overall, this study suggests that mild and moderate HL may be discriminated with high accuracy using a set of features extracted from the WHISPER test, laying the groundwork for future self-administered speech-in-noise tests able to provide specific recommendations based on the degree of HL.
Keywords: Hearing loss; Hearing screening; Machine learning; Smartphone-based screening; Multivariate classifiers
Article
One of the current gaps in teleaudiology is the lack of adult hearing screening methods viable for individuals of unknown language and in varying environments. We have developed a novel automated speech-in-noise test that uses stimuli suitable for non-native listeners. The test's reliability has been demonstrated in previous studies, both in laboratory settings and in uncontrolled environmental noise. The aims of this study were: (i) to evaluate the ability of the test to identify hearing loss using multivariate logistic regression classifiers in a population of 148 unscreened adults, and (ii) to evaluate the ear-level sound pressure levels generated by different earphones and headphones as a function of the test volume. The multivariate classifiers had a sensitivity of 0.79 and a specificity of 0.79 using both the full set of features extracted from the test and a subset of three features (speech recognition threshold, age, and number of correct responses). The analysis of ear-level sound pressure levels showed substantial variability across transducer types and models, with earphone levels up to 22 dB lower than those of headphones. Overall, these results suggest that the proposed approach might be viable for hearing screening in varying environments if an option to self-adjust the test volume is included and if headphones are used. Future research is needed to assess the viability of the test for screening at a distance, for example by addressing the influence of user interface, device, and settings on a large sample of subjects with varying hearing loss.
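Sensitivity and specificity figures like those reported above come from tallying a 2x2 confusion of screening outcomes against the gold-standard (audiometric) classification. A minimal sketch follows; the labels are made up for illustration, not data from this study:

```python
def sensitivity_specificity(gold, screen):
    """gold/screen: lists of booleans, True = hearing loss (refer).
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(g and s for g, s in zip(gold, screen))
    fn = sum(g and not s for g, s in zip(gold, screen))
    tn = sum((not g) and (not s) for g, s in zip(gold, screen))
    fp = sum((not g) and s for g, s in zip(gold, screen))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical outcomes for 10 listeners: gold standard vs. screening decision
gold   = [True, True, True, True, False, False, False, False, False, False]
screen = [True, True, True, False, False, False, False, False, True, False]
sens, spec = sensitivity_specificity(gold, screen)
# sens = 3/4 = 0.75 (one miss); spec = 5/6 ≈ 0.83 (one false referral)
```

In a multivariate classifier such as the logistic regression described above, the same tally is applied to thresholded predicted probabilities, and the decision threshold trades sensitivity against specificity.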
Preprint
Full-text available
The frequency-following response (FFR) is a scalp-recorded potential reflecting a mixture of phase-locked neural activity generated from several nuclei along the auditory pathway. FFRs have been widely used as a neural barometer of complex listening skills, especially performance on speech-in-noise (SIN) tasks: across listeners with various hearing profiles and ages, more robust speech-evoked FFRs are associated with improved SIN perception. Applying individually optimized source reconstruction to speech-FFRs recorded via EEG (FFR_EEG), we assessed the relative contributions of subcortical [auditory nerve (AN), brainstem (BS)] and cortical [bilateral primary auditory cortex, PAC] generators to the scalp response, with the aim of identifying which source(s) drive the brain-behavior relation between FFRs and perceptual SIN skills. We found that FFR strength declined precipitously from AN to PAC, consistent with the roll-off of phase-locking at progressively higher stages of the auditory neuraxis. FFRs at the speech fundamental frequency (F0) were resistant to moderate noise interference across all sources, but FFRs were largest in BS relative to all other sources (BS > AN >> PAC). Cortical PAC FFRs were only weakly observed above the noise floor, in a restricted bandwidth around the low pitch of the speech stimuli (F0 ~100 Hz). Brain-behavior regressions revealed that (i) AN and BS FFRs were sufficient to describe listeners' QuickSIN scores and (ii) contrary to neuromagnetic (MEG) FFRs, neither left nor right PAC FFR_EEG predicted SIN performance. Our findings suggest subcortical sources dominate not only the electrical FFR but also the link between speech-FFRs and SIN processing observed in previous EEG studies.
Article
Objective To report the incidences of secondary lip and nose operations, otolaryngology procedures, speech-language therapy, neurodevelopmental concerns, and dental and orthodontic issues in children with isolated cleft lip to inform multidisciplinary cleft team protocols. Setting An American Cleft Palate-Craniofacial Association–approved team at a tertiary academic children’s hospital. Design Retrospective cohort study of patients evaluated through longitudinal clinic visits by a multidisciplinary cleft palate and craniofacial team between January 2000 and June 2018. Patients, Participants Children with nonsyndromic cleft lip with or without cleft alveolus (n = 92). Results Median age at final team visit was 4.9 years (interquartile range: 2.4-8.2 years). Secondary plastic surgery procedures were most common between ages 3 and 5 (135 per 1000 person-years), and the majority of these procedures were minor lip revisions. The rate of tympanostomy tube insertion was highest before age 3 (122 per 1000 person-years). By their final team visit, 88% of patients had normal hearing and 11% had only slight to mild conductive hearing loss. No patients had speech errors attributable to lip abnormalities. Psychological interventions, learning disabilities, and dental or orthodontic concerns were uncommon. Conclusions Most patients with isolated cleft lip may not require long-term, longitudinal evaluation by cleft team specialists. Cleft teams should develop limited follow-up protocols for these children to improve resource allocation and promote value-based care in this patient population.
Article
Objective: The present study aimed to establish the test-retest reliability and validity of a tablet-based automated pure-tone screening test and a word-in-noise test as hearing screening tools for older Hong Kong Cantonese-speaking adults. Design and study sample: This was a cross-sectional within-subject study. One hundred and thirty-two older adults participated, and 112 of them completed the automated pure-tone screening test, the word-in-noise test, and conventional pure-tone audiometry. A pure-tone threshold of 40 dB HL at each of the tested frequencies (500, 1000, 2000, and 4000 Hz), obtained with conventional pure-tone audiometry, was set as the pass/refer criterion for calculating the sensitivity and specificity of the tablet-based screening tools. Results: The tablet-based automated pure-tone screening test yielded a sensitivity of 0.93 and a specificity of 0.82, while the word-in-noise test yielded a sensitivity of 0.81 and a specificity of 0.70 with the cut-off chosen as a speech reception threshold of −3.5 dB signal-to-noise ratio. Both tests take around 3 minutes to complete for both ears. Conclusions: The tablet-based pure-tone test and word-in-noise test are reliable and valid screening tools for hearing loss in the Hong Kong Cantonese-speaking elderly.