Article

Remote Microphone System Use at Home: Impact on Child-Directed Speech


Abstract

Purpose: The impact of home use of a remote microphone system (RMS) on the caregiver production of, and child access to, child-directed speech (CDS) in families with a young child with hearing loss was investigated. Method: We drew upon extant data that were collected via Language ENvironment Analysis (LENA) recorders used with 9 families during 2 consecutive weekends (RMS weekend and no-RMS weekend). Audio recordings of primary caregivers and their children with hearing loss, obtained while wearing and not wearing an RMS, were manually coded to estimate the amount of CDS produced. The proportion of CDS that was likely accessible to children with hearing loss under both conditions was determined. Results: Caregivers produced the same amount of CDS when using and when not using the RMS. However, children with hearing loss, on average, could potentially access 12% more CDS if caregivers used an RMS, because of the distance between caregivers and children when talking. Conclusion: Given our understanding of typical child language development, findings from this investigation suggest that children with hearing loss could receive auditory, speech, and language benefits from the use of an RMS in the home environment.


... These automatized data have been deployed for a variety of purposes. For instance, researchers used LENA recordings to identify distinctive features of vocal development in children with autism [11], to explore the linguistic experiences of children with hearing loss in the home environment [12], to examine the effects of peer-to-peer talk in preschool classrooms on children's language growth [13], and to assess the effectiveness of interventions designed to increase parents' talk to their children [14]. In addition, researchers recently have used the AWC data from day-long LENA recordings to link variability in children's language exposure in the home environment to language-related brain structure in terms of neural connectivity [2] and cortical surface area [15]. ...
... First, the system does not distinguish CDS from non-CDS among adult talk captured in audio recordings. Therefore, researchers concerned with identifying children's exposure to CDS must utilize extensive additional analyses and hand-coding, leading to only modest amounts of audio-recorded talk to be evaluated (e.g., [12]). Second, the system does not provide nuanced measures of linguistic complexity, such as number of different words and mean length of utterance, which represent important aspects of CDS that correlate with children's language development [6]. ...
Article
Full-text available
The present study explored whether a tool for automatic detection and recognition of interactions and child-directed speech (CDS) in preschool classrooms could be developed, validated, and applied to non-coded video recordings representing children’s classroom experiences. Using first-person video recordings collected by 13 preschool children during a morning in their classrooms, we extracted high-level audiovisual features from recordings using automatic speech recognition and computer vision services from a cloud computing provider. Using manual coding for interactions and transcriptions of CDS as reference, we trained and tested supervised classifiers and linear mappings to measure five variables of interest. We show that the supervised classifiers trained with speech activity, proximity, and high-level facial features achieve adequate accuracy in detecting interactions. Furthermore, in combination with an automatic speech recognition service, the supervised classifier achieved error rates for CDS measures that are in line with other open-source automatic decoding tools in early childhood settings. Finally, we demonstrate our tool’s applicability by using it to automatically code and transcribe children’s interactions and CDS exposure vertically within a classroom day (morning to afternoon) and horizontally over time (fall to winter). Developing and scaling tools for automatized capture of children’s interactions with others in the preschool classroom, as well as exposure to CDS, may revolutionize scientific efforts to identify precise mechanisms that foster young children’s language development.
... Schools are excessively noisy environments, and noise interferes with students' academic performance, especially that of students with hearing loss, with a direct impact on their listening effort (10,14). Reverberation in classrooms makes it difficult to understand the message conveyed by the interlocutor, requiring more energy to understand what is being said. ...
Article
Full-text available
Purpose: To identify relationships between Remote Microphone System (RMS) use in the classroom and the schools' and teachers' characteristics. Methods: We analyzed 120 subjects aged 5 to 17 years with hearing loss who had received an RMS from a health service accredited by the Unified Health System (SUS). The teachers of RMS users were the other subjects in the study. We analyzed the patients' medical records and interviewed their parents/guardians at the follow-up visit to verify issues related to the RMS and its use at school. We contacted the schools over the phone and visited some of them. Results: Of the students, 54% used the device at school; 22% involuntarily did not use it; and 24% voluntarily did not use it. The Speech Intelligibility Index pattern of those who used the RMS was similar to that of those who involuntarily did not use it. There was a significant difference regarding type of school and educational level: regular-school students (86%) and elementary school students (62%) tended to use the device more often. Conclusion: Most subjects use the RMS at school. The students' educational level also interfered with adherence to RMS use, as elementary school students had higher adherence. The data suggest that coordination between health services and schools favors RMS use. However, when parents mediate this relationship, other factors interfere with systematic RMS use in the school routine.
... It is noteworthy that, in the cases of younger children whose language is still developing and who cannot objectively report the device's benefit, the electroacoustic data may be the only indication of the effective functioning of the FM system [11]. Nevertheless, the benefit of RMS in early childhood for language acquisition and development is already strongly evidenced by the literature [11,[34][35][36][37][38]. ...
Article
Full-text available
The remote microphone system (RMS) must be working appropriately when fitted to a person with hearing loss. For this verification process, the concept of transparency is adopted. If the system is not transparent, the hearing aid (HA) may not appropriately capture the user's voice and the voices of their peers, or the RMS may not have the gain advantage needed to emphasize the speaker's voice. This study investigates the influence of the receiver's gain setting on the transparency of different brands and models of RMS and HAs. It is a retrospective chart review with 277 RMS from three distinct brands (RMA, RMB, and RMC) and HAs. There was an association of the receiver's gain setting with the variables: brand of the transmitter/receiver (p = 0.005), neck loop's receiver vs. universal and dedicated receivers (p = 0.022), and between brands of HA and transmitter/receiver (p < 0.001). The RMS transmitter (odds ratio [OR = 7.9]) and the type of receiver (neckloop [OR = 3.4]; universal [OR = 0.78]) presented a higher risk of not achieving transparency at default gain, confirming and underscoring the need to include electroacoustic verification in the protocol of fitting, verification, and validation of RMS and HA.
... One approach attempts to quantify aspects of the listener's actual listening environment. This category includes body-worn devices that capture acoustic characteristics of the listener's environment [1][2][3][4] and ecological momentary assessment, in which the listener is periodically prompted to answer questions about their listening experience [5,6]. In a different approach, researchers have attempted to create ecologically relevant environments within a laboratory space. ...
Article
Background: Clinics are increasingly turning toward using virtual environments to demonstrate and validate hearing aid fittings in "realistic" listening situations before the patient leaves the clinic. One of the most cost-effective and straightforward ways to create such an environment is through the use of a small speaker array and amplitude panning. Amplitude panning is a signal playback method used to change the perceived location of a source by changing the level of two or more loudspeakers. The perceptual consequences (i.e., perceived source width and location) of amplitude panning have been well-documented for listeners with normal hearing but not for listeners with hearing impairment. Purpose: The purpose of this study was to examine the perceptual consequences of amplitude panning for listeners with hearing status ranging from normal hearing to moderate sensorineural hearing loss. Research design: Listeners performed a localization task. Sound sources were broadband 4 Hz amplitude-modulated white noise bursts. Thirty-nine sources (14 physical) were produced by either physical loudspeakers or via amplitude panning. Listeners completed a training block of 39 trials (one for each source) before completing three test blocks of 39 trials each. Source production method was randomized within block. Study sample: Twenty-seven adult listeners (mean age 52.79, standard deviation 27.36, 10 males, 17 females) with hearing ranging from within normal limits to moderate bilateral sensorineural hearing loss participated in the study. Listeners were recruited from a laboratory database of listeners who consented to being informed about available studies. Data collection and analysis: Listeners indicated the perceived source location via touch screen. Outcome variables were azimuth error, elevation error, and total angular error (Euclidean distance in degrees between perceived and correct location).
Listeners' pure-tone averages (PTAs) were calculated and used in mixed-effects models along with source type and the interaction between source type and PTA as predictors. Subject was included as a random variable. Results: Significant interactions between PTA and source production method were observed for total and elevation errors. Listeners with higher PTAs (i.e., worse hearing) did not localize physical and panned sources differently whereas listeners with lower PTAs (i.e., better hearing) did. No interaction was observed for azimuth errors; however, there was a significant main effect of PTA. Conclusion: As hearing impairment becomes more severe, listeners localize physical and panned sources with similar errors. Because physical and panned sources are not localized differently by adults with hearing loss, amplitude panning could be an appropriate method for constructing virtual environments for these listeners.
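Amplitude panning, as described above, is commonly implemented with a constant-power (sine/cosine) pan law. The sketch below is an illustrative assumption in Python, not the study's actual playback code; the function name and the [0, 1] pan parameterization are hypothetical.

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law for a stereo loudspeaker pair.

    pan is in [0, 1]: 0 = full left, 1 = full right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1,
    so total radiated power (and roughly perceived loudness) stays
    constant as the virtual source moves between the speakers.
    """
    theta = pan * math.pi / 2  # map [0, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A centered source (pan = 0.5) feeds both speakers at ~0.707, i.e. -3 dB each.
left, right = constant_power_pan(0.5)
```

Scaling two loudspeakers by these gains shifts the perceived source location between them, which is the effect whose localization accuracy the study measures in listeners with and without hearing loss.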
... Moreover, as advocated by some authors (21)(22)(23)(24)(25), the benefits of the FM System have likewise been demonstrated outside the school environment. This inference supports the idea of reassessing the reach of the public health policy in question, considering the scope and depth of the fundamental right to health together with the evidence of the health benefits that this device provides to people with hearing impairment. ...
Article
Full-text available
Objective: to understand how public policies on hearing health were created in Brazil, as well as the influence of the Judiciary in securing access by people with hearing impairment to the Frequency Modulation System (FM System) for use in the school environment. Methods: exploratory qualitative study in which, initially, a survey of legal norms was conducted on the websites of the Presidency of the Republic, the Chamber of Deputies, and the Ministry of Health to identify, for the period from October 1988 to October 2019, norms concerning the creation of public policies on hearing health. In addition, a survey of case law was conducted on the websites of the State Courts of Justice, Federal Regional Courts, and Superior Courts to identify, for the period from January 2000 to October 2019, judicial decisions concerning access to the FM System through the Unified Health System (SUS). Results: ten normative instruments specifically addressing the creation of public policies on hearing health were identified, along with six judicial decisions whose merits concerned access to the FM System through SUS. Conclusion: the Judiciary plays a fundamental role in securing access to the FM System for people with hearing impairment, since its action remedies omissions by the other branches of government and prevents already-established public policies from containing restrictions contrary to the Federal Constitution.
... Fourteen different 5-min "far distance" segments of time were selected from the two recording days for the two study weekends per family (i.e., seven segments each from RMS and No-RMS conditions, distributed evenly across study days). These 5-min audio segments had been extracted from the caregivers' DLPs, according to selection criteria delineated by Benítez-Barrera et al. (2019). In the aforementioned study, eligible segments were analyzed for both conditions (RMS and No-RMS) over two different distance categories (close distance and far distance) to represent key caregiver talk produced relative to the position of the key child. ...
Article
Purpose: This study examined the impact of home use of remote microphone systems (RMSs) on caregiver communication and child vocalizations in families of children with hearing loss. Method: We drew on data from a prior study in which Language ENvironment Analysis (LENA) recorders were used with 9 families during 2 consecutive weekends, 1 that involved using an RMS and 1 that did not. Audio samples from the LENA recorders were (a) manually coded to quantify the frequency of verbal repetitions and alert phrases caregivers utilized in communicating to children with hearing loss and (b) automatically analyzed to quantify children's vocalization rate, duration, complexity, and reciprocity when using and not using an RMS. Results: When using an RMS at home, caregivers did not repeat or clarify their statements as often as when not using an RMS while communicating with their children with hearing loss. However, no between-condition differences were observed in children's vocal characteristics. Conclusions: Results provide further support for home RMS use for children with hearing loss. Specifically, findings lend empirical support to prior parental reports suggesting that RMS use eases caregiver communication in the home setting. Studies exploring RMS use over a longer duration of time might provide further insight into potential long-term effects on children's vocal production.
Article
Objectives: This study examined whether remote microphone (RM) systems improved listening-in-noise performance in youth with autism. We explored effects of RM system use on both listening-in-noise accuracy and listening effort in a well-characterized sample of participants with autism. We hypothesized that listening-in-noise accuracy would be enhanced and listening effort reduced, on average, when participants used the RM system. Furthermore, we predicted that effects of RM system use on listening-in-noise accuracy and listening effort would vary according to participant characteristics. Specifically, we hypothesized that participants who were chronologically older, had greater nonverbal cognitive and language ability, displayed fewer features of autism, and presented with more typical sensory and multisensory profiles might exhibit greater benefits of RM system use than participants who were younger, had less nonverbal cognitive or language ability, displayed more features of autism, and presented with greater sensory and multisensory disruptions. Design: We implemented a within-subjects design to investigate our hypotheses, wherein 32 youth with autism completed listening-in-noise testing with and without an RM system. Listening-in-noise accuracy and listening effort were evaluated simultaneously using a dual-task paradigm for stimuli varying in complexity (i.e., syllable-, word-, sentence-, and passage-level). In addition, several putative moderators of RM system effects (i.e., sensory and multisensory function, language, nonverbal cognition, and broader features of autism) on outcomes of interest were evaluated. Results: Overall, RM system use resulted in higher listening-in-noise accuracy in youth with autism compared with no RM system use. 
The observed benefits were all large in magnitude, although the benefits on average were greater for more complex stimuli (e.g., key words embedded in sentences) and relatively smaller for less complex stimuli (e.g., syllables). Notably, none of the putative moderators significantly influenced the effects of the RM system on listening-in-noise accuracy, indicating that RM system benefits did not vary according to any of the participant characteristics assessed. On average, RM system use did not have an effect on listening effort across all youth with autism compared with no RM system use but instead yielded effects that varied according to participant profile. Specifically, moderated effects indicated that RM system use was associated with increased listening effort for youth who had (a) average to below-average nonverbal cognitive ability, (b) below-average language ability, and (c) reduced audiovisual integration. RM system use was also associated with decreased listening effort for youth with very high nonverbal cognitive ability. Conclusions: This study extends prior work by showing that RM systems have the potential to boost listening-in-noise accuracy for youth with autism. However, this boost in accuracy was coupled with increased listening effort, as indexed by longer reaction times while using an RM system, for some youth with autism, perhaps suggesting greater engagement in the listening-in-noise tasks when using the RM system for youth who had lower cognitive abilities, were less linguistically able, and/or have difficulty integrating seen and heard speech. These findings have important implications for clinical practice, suggesting RM system use in classrooms could potentially improve listening-in-noise performance for some youth with autism.
Article
Full-text available
Objective: This work presents the design and verification of a simplified measurement setup for wireless remote microphone systems (WRMSs), which has been incorporated into guidelines of the European Union of Hearing Aid Acousticians (EUHA). Design: Three studies were conducted. First, speech intelligibility scores within the simplified setup were compared to those in an actual classroom. Second, different WRMSs were compared in the simplified setup, and third, normative data for normal-hearing people with and without a WRMS were collected. Study sample: The first two studies included 40 older hearing-impaired adults, and the third study included 20 young normal-hearing adults. Results: Speech intelligibility with a WRMS was not different across the actual classroom and the simplified setup. An analog omnidirectional WRMS showed poorer speech intelligibility and poorer quality ratings than digital WRMSs. The usage of a WRMS in the simplified setup resulted in significantly higher speech intelligibility across all tested background noise levels. Conclusions: Despite being a simplified measurement setup, it realistically emulates a situation where people are listening to speech in noise from a distance, such as in a classroom or meeting room. Hence, with standard audiological equipment, the individual benefit of WRMSs can be measured and experienced by the user in clinical practice.
Article
Objectives: The objective of this study was to characterize the acoustics of the home environment of young children with hearing loss. Specifically, we aimed to quantify the range of speech levels, noise levels, and signal-to-noise ratios (SNRs) encountered by children with hearing loss in their homes. Design: Nine families participated in the study. The children with hearing loss in these families were between 2 and 5 years of age. Acoustic recordings were made in the children's homes over one weekend (Saturday and Sunday) using Language ENvironment Analysis (LENA) recorders. These recordings were analyzed using LENA's proprietary software to determine the range of speech and noise levels in the child's home. A custom Matlab program analyzed the LENA output to estimate the SNRs in the children's homes. Results: The average SNR encountered by children with hearing loss in our sample was approximately +7.9 dB. It is important to note that our analyses revealed that approximately 84% of the SNRs experienced by these children with hearing loss were below the +15 dB SNR recommended by the American Speech-Language-Hearing Association. Averaged across families, speech and noise levels were 70.1 and 62.2 C-weighted decibels, respectively. Conclusions: These data show that, for much of the time, young children with hearing loss are forced to listen under suboptimal conditions in their home environments. This has important implications, as listening under these conditions could negatively affect learning opportunities for young children with hearing loss. To mitigate these potential negative effects, the use of assistive listening devices that improve the SNR (e.g., remote microphone systems) should be considered for use at home by young children with hearing loss.
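The SNR figures above follow from simple level arithmetic: an SNR in decibels is the difference between the speech level and the noise level measured on the same dB scale. A minimal sketch, with hypothetical helper names (the study itself used a proprietary custom Matlab program, which is not reproduced here):

```python
def snr_db(speech_level_db, noise_level_db):
    """SNR in dB: difference of speech and noise levels on the same scale
    (here, C-weighted decibels)."""
    return speech_level_db - noise_level_db

def proportion_below(snrs, threshold_db=15.0):
    """Fraction of SNR samples falling below a recommended threshold,
    such as the +15 dB SNR recommended for children."""
    return sum(s < threshold_db for s in snrs) / len(snrs)

# Using the averages reported above: 70.1 dBC speech over 62.2 dBC noise.
print(round(snr_db(70.1, 62.2), 1))  # 7.9, matching the study's average SNR
```

Applying `proportion_below` to the per-sample SNR estimates is the kind of calculation behind the reported finding that roughly 84% of SNRs fell below the +15 dB recommendation.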
Article
Full-text available
Caregivers speaking to children often adjust segmental and suprasegmental qualities of their speech relative to adult-directed (AD) speech. The quality and quantity of infant-directed (ID) speech has been shown to support word learning and word segmentation by normal-hearing infants, but the extent to which children with cochlear implants (CIs) benefit linguistically from ID speech is unclear. The present study investigated the extent to which the quantity and quality of ID speech produced in the lab by each of 40 mothers to her child with a CI predicted the child’s speech-language outcome measures at two years post-implantation. Multiple measures of ID and AD speech for each mother were taken, including ID speech quantity in one minute, and several measures of ID speech quality, including fundamental frequency characteristics, speech rate, and the area of the vowel triangle formed by corner vowels in F1-F2 space. Forward stepwise regression showed that both quantity and quality of speech significantly predicted language outcomes measured by the Preschool Language Scales, Peabody Picture Vocabulary Test, and the Reynell Developmental Language Scales. These results support the hypothesis that hearing more ID speech that has acoustic modifications typical of IDS promotes language proficiency in children with CIs.
Article
Full-text available
Children's early language exposure impacts their later linguistic skills, cognitive abilities, and academic achievement, and large disparities in language exposure are associated with family socioeconomic status (SES). However, there is little evidence about the neural mechanism(s) underlying the relation between language experience and linguistic/cognitive development. Here, language experience was measured from home audio recordings of 36 SES-diverse 4-6 year-old children. During a story-listening fMRI task, children who had experienced more conversational turns with adults (independent of SES, IQ, and adult/child utterances alone) exhibited greater left inferior frontal (Broca's area) activation, which significantly explained the relation between children's language exposure and verbal skill. This is the first evidence directly relating children's language environments with neural language processing, specifying both environmental and neural mechanisms underlying SES disparities in children's language skills. Furthermore, results suggest that conversational experience impacts neural language processing over and above SES and/or the sheer quantity of words heard.
Article
Full-text available
The disparity in the amount and quality of language that low-income children hear relative to their more-affluent peers is often referred to as the 30-million-word gap. Here, we expand the literature about this disparity by reporting the relative contributions of the quality of early parent-child communication and the quantity of language input in 60 low-income families. Including both successful and struggling language learners from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, we noted wide variation in the quality of nonverbal and verbal interactions (symbol-infused joint engagement, routines and rituals, fluent and connected communication) at 24 months, which accounted for 27% of the variance in expressive language 1 year later. These indicators of quality were considerably more potent predictors of later language ability than was the quantity of mothers' words during the interaction or sensitive parenting. Bridging the word gap requires attention to how caregivers and children establish a communication foundation within low-income families.
Article
Full-text available
Infants differ substantially in their rates of language growth, and slow growth predicts later academic difficulties. In this study, we explored how the amount of speech directed to infants in Spanish-speaking families low in socioeconomic status influenced the development of children's skill in real-time language processing and vocabulary learning. All-day recordings of parent-infant interactions at home revealed striking variability among families in how much speech caregivers addressed to their child. Infants who experienced more child-directed speech became more efficient in processing familiar words in real time and had larger expressive vocabularies by the age of 24 months, although speech simply overheard by the child was unrelated to vocabulary outcomes. Mediation analyses showed that the effect of child-directed speech on expressive vocabulary was explained by infants' language-processing efficiency, which suggests that richer language experience strengthens processing skills that facilitate language growth.
Article
Full-text available
The present study examines the effect of parents' language input on the linguistic progress of children with cochlear implants. Participants were 21 children with cochlear implants and their mothers. Age at implantation ranged between 14 and 46 months. The study was longitudinal, with data collections every 4½ months for a period of 27 months. Spontaneous speech in a free play situation with a parent was recorded at each data point. Children's grammar was measured in terms of Mean Length of Utterance (MLU) and the use of noun plurals, verb markings, and case and gender markings on articles. Mothers' child-directed speech was analysed in terms of MLU, self-repetitions, and expansions. Time-lagged correlational analyses were performed relating properties of maternal speech at an earlier data point (while controlling for the child's language level at that data point) to child language at subsequent data points. The results showed that maternal MLU and expansions are positively related to child linguistic progress. Higher maternal MLU and more expansions are related to higher child MLU subsequently. More specifically, expansions of specific grammatical structures are related to an increased correct use of these structures by the child subsequently. This was the case particularly for case and gender marking on articles, but also for noun plurals and verb markings. Maternal self-repetitions were negatively related to child progress in grammar. The results demonstrate an effect of mothers' language input on the linguistic progress of young children with cochlear implants. Rich language input leads to better language growth.
Article
Full-text available
PROCODER is a software system for observing and coding events that have been recorded on videotape. The system uses a personal-computer-based tape controller to control a VHS tape while observations are recorded. Frequencies of events, durations of events, and calculations of inter-observer agreement of events or intervals are included. Data can be output in ASCII format for use with other statistical programs. A sample study in which the system is used is described as well.
Article
Full-text available
For generations the study of vocal development and its role in language has been conducted laboriously, with human transcribers and analysts coding and taking measurements from small recorded samples. Our research illustrates a method to obtain measures of early speech development through automated analysis of massive quantities of day-long audio recordings collected naturalistically in children's homes. A primary goal is to provide insights into the development of infant control over infrastructural characteristics of speech through large-scale statistical analysis of strategically selected acoustic parameters. In pursuit of this goal we have discovered that the first automated approach we implemented is not only able to track children's development on acoustic parameters known to play key roles in speech, but also is able to differentiate vocalizations from typically developing children and children with autism or language delay. The method is totally automated, with no human intervention, allowing efficient sampling and analysis at unprecedented scales. The work shows the potential to fundamentally enhance research in vocal development and to add a fully objective measure to the battery used to detect speech-related disorders in early childhood. Thus, automated analysis should soon be able to contribute to screening and diagnosis procedures for early disorders, and more generally, the findings suggest fundamental methods for the study of language in natural environments.
Article
Full-text available
This study examined the head orientation of young children in naturalistic settings and the acoustics of their everyday environments for quantifying the potential effects of directionality. Twenty-seven children (11 with normal hearing, 16 with impaired hearing) between 11 and 78 months of age were video recorded in naturalistic settings for analyses of head orientation. Reports on daily activities were obtained from caregivers. The effect of directionality in different environments was quantified by measuring the Speech Transmission Index (STI; H. J. M. Steeneken & T. Houtgast, 1980). Averaged across 4 scenarios, children looked in the direction of a talker for 40% of the time when speech was present. Head orientation was not affected by age or hearing status. The STI measurements revealed a directional advantage of 3 dB when a child looked at a talker but a deficit of 2.8 dB when the talker was sideways or behind the child. The overall directional effect in real life was between -0.4 and 0.2 dB. The findings suggest that directional microphones in personal hearing devices for young children are not detrimental and have much potential for benefits in real life. The benefits may be enhanced by fitting directionality early and by counseling caregivers on ways to maximize benefits in everyday situations.
Article
Full-text available
Children typically learn in classroom environments that have background noise and reverberation that interfere with accurate speech perception. Amplification technology can enhance the speech perception of students who are hard of hearing. This study used a single-subject alternating treatments design to compare the speech recognition abilities of children who are hard of hearing when they were using hearing aids with each of three frequency modulated (FM) or infrared devices. Eight 9-12-year-olds with mild to severe hearing loss repeated Hearing in Noise Test (HINT) sentence lists under controlled conditions in a typical kindergarten classroom with a background noise level of +10 dB signal-to-noise (S/N) ratio and 1.1 s reverberation time. Participants listened to HINT lists using hearing aids alone and hearing aids in combination with three types of S/N-enhancing devices that are currently used in mainstream classrooms: (a) FM systems linked to personal hearing aids, (b) infrared sound field systems with speakers placed throughout the classroom, and (c) desktop personal sound field FM systems. The infrared ceiling sound field system did not provide benefit beyond that provided by hearing aids alone. Desktop and personal FM systems in combination with personal hearing aids provided substantial improvements in speech recognition. This information can assist in making S/N-enhancing device decisions for students using hearing aids. In a reverberant and noisy classroom setting, classroom sound field devices are not beneficial to speech perception for students with hearing aids, whereas either personal FM or desktop sound field systems provide listening benefits.
Article
Full-text available
To examine speech recognition performance and subjective ratings for directional and omnidirectional microphone modes across a variety of simulated classroom environments. Speech recognition was measured in a group of 26 children age 10-17 years in up to 8 listening environments. Significant directional benefit was found when the sound source(s) of interest was in front, and directional decrement was measured when the sound source of interest was behind the participants. Of considerable interest is that a directional decrement was observed in the absence of directional benefit when sources of interest were both in front of and behind the participants. In addition, limiting directional processing to the low frequencies eliminated both the directional deficit and the directional advantage. Although these data support the use of directional hearing aids in some noisy school environments, they also suggest that use of the directional mode should be limited to situations in which all talkers of interest are located in the front hemisphere. These results highlight the importance of appropriate switching between microphone modes in the school-age population.
Article
Purpose: The purpose of this study was to investigate the effects of home use of a remote microphone system (RMS) on the spoken language production of caregivers with young children who have hearing loss. Method: Language Environment Analysis recorders were used with 10 families during 2 consecutive weekends (RMS weekend and No-RMS weekend). The amount of talk from a single caregiver that could be made accessible to children with hearing loss when using an RMS was estimated using Language Environment Analysis software. The total amount of caregiver talk (close and far talk) was also compared across both weekends. In addition, caregivers' perceptions of RMS use were gathered. Results: Children, with the use of RMSs, could potentially have access to approximately 42% more words per day. In addition, although caregivers produced an equivalent number of words on both weekends, they tended to talk more from a distance when using the RMS than when not. Finally, caregivers reported positive perceived communication benefits of RMS use. Conclusions: Findings from this investigation suggest that children with hearing loss have increased access to caregiver talk when using an RMS in the home environment. Clinical implications and future directions for research are discussed.
Article
This article employs meta-analysis procedures to evaluate whether children with cochlear implants demonstrate lower spoken-language vocabulary knowledge than peers with normal hearing. Of the 754 articles screened and 52 articles coded, 12 articles met predetermined inclusion criteria (with an additional 5 included for one analysis). Effect sizes were calculated for relevant studies and forest plots were used to compare differences between groups of children with normal hearing and children with cochlear implants. Weighted effect size averages for expressive vocabulary measures (g = −11.99; p < .001) and for receptive vocabulary measures (g = −20.33; p < .001) indicated that children with cochlear implants demonstrate lower vocabulary knowledge than children with normal hearing. Additional analyses confirmed the value of comparing vocabulary knowledge of children with hearing loss to a tightly matched (e.g., socioeconomic status-matched) sample. Age of implantation, duration of implantation, and chronological age at testing were not significantly related to magnitude of weighted effect size. Findings from this analysis represent a first step toward resolving discrepancies in the vocabulary knowledge literature.
Article
Objectives: This study examined the language outcomes of children with mild to severe hearing loss during the preschool years. The longitudinal design was leveraged to test whether language growth trajectories were associated with degree of hearing loss and whether aided hearing influenced language growth in a systematic manner. The study also explored the influence of the timing of hearing aid fitting and extent of use on children's language growth. Finally, the study tested the hypothesis that morphosyntax may be at particular risk due to the demands it places on the processing of fine details in the linguistic input. Design: The full cohort of children in this study comprised 290 children who were hard of hearing (CHH) and 112 children with normal hearing who participated in the Outcomes of Children with Hearing Loss (OCHL) study between the ages of 2 and 6 years. CHH had a mean better-ear pure-tone average of 47.66 dB HL (SD = 13.35). All children received a comprehensive battery of language measures at annual intervals, including standardized tests, parent-report measures, and spontaneous and elicited language samples. Principal components analysis supported the use of a single composite language score for each of the age levels (2, 3, 4, 5, and 6 years). Measures of unaided (better-ear pure-tone average, speech intelligibility index) and aided (residualized speech intelligibility index) hearing were collected, along with parent-report measures of daily hearing aid use time. Mixed modeling procedures were applied to examine the rate of change (227 CHH; 94 children with normal hearing) in language ability over time in relation to (1) degree of hearing loss, (2) aided hearing, (3) age of hearing aid fit and duration of use, and (4) daily hearing aid use. Principal components analysis was also employed to examine factor loadings from spontaneous language samples and to test their correspondence with standardized measures. 
Multiple regression analysis was used to test for differential effects of hearing loss on morphosyntax and lexical development. Results: Children with mild to severe hearing loss, on average, showed depressed language levels compared with peers with normal hearing who were matched on age and socioeconomic status. The degree to which CHH fell behind increased with greater severity of hearing loss. The amount of improved audibility with hearing aids was associated with differential rates of language growth; better audibility was associated with faster rates of language growth in the preschool years. Children fit early with hearing aids had better early language achievement than children fit later. However, children who were fit after 18 months of age improved in their language abilities as a function of the duration of hearing aid use. These results suggest that the language learning system remains open to experience provided by improved access to linguistic input. Performance in the domain of morphosyntax was found to be more delayed in CHH than their semantic abilities. Conclusion: The data obtained in this study largely support the predictions, suggesting that mild to severe hearing loss places children at risk for delays in language development. Risks are moderated by the provision of early and consistent access to well-fit hearing aids that provide optimized audibility.
Article
Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions: Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.
Article
Objectives: The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Design: Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. Results: At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. 
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Conclusions: Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.
Article
Objectives: While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. Design: Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each presented audio-only by a single talker either from the loudspeaker at 0 degree azimuth or randomly from the five loudspeakers. Results: Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL.
Conclusions: The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.
Article
This paper reports 2 studies that explore the role of joint attentional processes in the child's acquisition of language. In the first study, 24 children were videotaped at 15 and 21 months of age in naturalistic interaction with their mothers. Episodes of joint attentional focus between mother and child (for example, joint play with an object) were identified. Inside, as opposed to outside, these episodes both mothers and children produced more utterances, mothers used shorter sentences and more comments, and dyads engaged in longer conversations. Inside joint episodes, maternal references to objects that were already the child's focus of attention were positively correlated with the child's vocabulary at 21 months, while object references that attempted to redirect the child's attention were negatively correlated. No measures from outside these episodes related to child language. In an experimental study, an adult attempted to teach novel words to ten 17-month-old children. Words referring to objects on which the child's attention was already focused were learned better than words presented in an attempt to redirect the child's attentional focus.
Article
The overall objective of the present study was to assess the efficacy of FM system use in the home setting for a group of preschool children with mild-to-severe sensorineural hearing loss. Changes in language acquisition were monitored and compared with similar measures from a group of children who used hearing aids. Secondarily, the perceived benefits and practical problems associated with FM system use across a variety of nonacademic situations were documented. Ten children with mild-to-severe sensorineural hearing loss participated in a 2-yr longitudinal study investigating the efficacy of FM system use in the home setting. The subjects were divided into two groups: one group was instructed to use FM systems at home as often as possible while the other used only their personal hearing aids. Changes in language acquisition were monitored in both groups. Subjective benefit and the practical problems associated with use of FM systems outside of traditional academic environments were monitored via daily use logs, a weekly observation inventory, and a situational listening profile. The majority of children in both groups improved in all measures of language development over the study interval. Although there were relatively large individual differences in performance for some measures, no statistically significant differences between the FM and hearing aid users were found. However, some children in the FM group made unusually large gains in some aspects of language development over the study interval. In addition, both parents and children reported benefits of FM system use in specific listening situations. Throughout the 2-yr study, a number of practical problems associated with FM system use outside the classroom were identified. Formal language measures did not yield significant differences between the FM and HA groups, but some subjects had rates of language acquisition which suggested that FM system use may be beneficial in selected cases. 
In addition, subjective reports of FM system benefit suggest that appropriate use of the device may facilitate effective communication in a variety of listening situations. Although recent advances in FM system design may minimize some of the factors that reportedly restricted consistent FM use in this study, the complexities associated with the modes of operation and problems with FM interference remain issues that require consistent audiologic monitoring of FM system use in nonacademic environments.
Article
In this study the performance of a noise reduction strategy applied to cochlear implants is evaluated. The noise reduction strategy is based on a 2-channel adaptive filtering strategy using two microphones in a single behind-the-ear hearing aid. Four adult LAURA cochlear implant users (Peeters et al., 1993) took part in the experiments. The tests included identification of monosyllabic CVC (consonant-vowel-consonant) words and measurements of the speech reception threshold (SRT) of lists of numbers, in background noise presented at 90 degrees relative to the 0 degrees frontal direction of the speech. Percent correct phoneme scores for the CVC words at signal to noise ratios (SNRs) of -5, 0, and +5 dB in steady speech-weighted noise at 60 dB SPL and SRTs for numbers in speech-weighted steady and nonsteady ICRA noise were both obtained in conditions with and without the noise reduction pre-processing. Physical SNR improvements of the noise reduction system are evaluated as well, as a function of the direction of the noise source. Highly significant improvements in speech understanding, corresponding on average to an SNR improvement of about 10 dB, were observed with this 2-channel adaptive filtering noise reduction strategy using both types of speech-noise test materials. These perceptual evaluations agree with physical evaluations and simulations of this noise reduction strategy. Taken together, these data demonstrate that cochlear implantees may increase their speech intelligibility in noisy environments with the use of multimicrophone noise reduction systems.
Flynn, T. S., Flynn, M. C., & Gregory, M. (2005). The FM advantage in the real classroom. Journal of Educational Audiology, 12, 37-44.
Xu, D., Yapanel, U., & Gray, S. (2009). Reliability of the LENA™ language environment analysis system in young children's natural home environment. Retrieved from http://www.lenafoundation.org/TechReport.aspx/Reliability/LTR-05-2
Nittrouer, S. (2010). Early development of children with hearing loss. San Diego, CA: Plural.
Warren, S. F. (2010). Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development. Proceedings of the National Academy of Sciences of the United States of America, 107(30), 13354-13359.
Woynaroski, T. (2014). The stability and validity of automated vocal analysis in preschoolers with autism spectrum disorder in the early stages of language development (Doctoral dissertation). Vanderbilt University, Nashville, TN.
Yoder, P., & Symons, F. (2010). Observational measurement of behavior. New York, NY: Springer.