JSLHR
Research Article
Remote Microphone System Use at Home:
Impact on Caregiver Talk
Carlos R. Benítez-Barrera,ᵃ Gina P. Angley,ᵃ and Anne Marie Tharpeᵃ
Purpose: The purpose of this study was to investigate the
effects of home use of a remote microphone system (RMS)
on the spoken language production of caregivers of young children with hearing loss.
Method: Language Environment Analysis recorders were
used with 10 families during 2 consecutive weekends
(RMS weekend and No-RMS weekend). The amount of
talk from a single caregiver that could be made accessible
to children with hearing loss when using an RMS was
estimated using Language Environment Analysis software.
The total amount of caregiver talk (close and far talk) was
also compared across both weekends. In addition, caregivers’
perceptions of RMS use were gathered.
Results: With the use of RMSs, children could potentially have access to approximately 42% more words per day. In addition, although caregivers
produced an equivalent number of words on both
weekends, they tended to talk more from a distance
when using the RMS than when not. Finally, caregivers
reported positive perceived communication benefits of
RMS use.
Conclusions: Findings from this investigation suggest
that children with hearing loss have increased access
to caregiver talk when using an RMS in the home
environment. Clinical implications and future directions
for research are discussed.
Access to linguistic input and social interactions
are essential to the development of children’s
language. There is a positive relationship between
the number of words to which children are exposed and
their subsequent vocabulary (Hart & Risley, 1995). Moreover, children who are exposed to more words have better
processing efficiency than those with less word exposure
(Hurtado, Marchman, & Fernald, 2008). This word-learning process requires access to the acoustic patterns of words; that access allows children to track the conditional probabilities of phoneme sequences, which in turn supports word identification (Aslin, Saffran, & Newport, 1998).
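As a rough illustration of the statistical-learning mechanism cited above, the sketch below computes transitional (conditional) probabilities over syllable sequences. The toy "utterances," their syllable segmentation, and the function names are invented for illustration only and are not drawn from this study's data or from the cited work.

```python
# Toy sketch of the statistical-learning idea in Aslin, Saffran, & Newport (1998):
# transitional probabilities between speech units tend to be higher within words
# than across word boundaries, offering one cue for word segmentation.
# The "utterances" below are invented syllable strings, for illustration only.
from collections import Counter

utterances = [
    ["ba", "by", "dog", "gy"],   # "baby doggy"
    ["dog", "gy", "ba", "by"],   # "doggy baby"
    ["ba", "by", "kit", "ten"],  # "baby kitten"
]

pair_counts, unit_counts = Counter(), Counter()
for utt in utterances:
    unit_counts.update(utt)                # count each syllable
    pair_counts.update(zip(utt, utt[1:]))  # count adjacent syllable pairs

def transitional_probability(x, y):
    """P(y | x): proportion of x's occurrences that are followed by y."""
    return pair_counts[(x, y)] / unit_counts[x] if unit_counts[x] else 0.0

# A within-word transition is highly predictable; a cross-boundary one is not.
print(transitional_probability("ba", "by"))   # 1.0   ("ba" -> "by" inside "baby")
print(transitional_probability("by", "dog"))  # ~0.33 (spans a word boundary)
```

In this toy example, the within-word transition ("ba" to "by") is far more predictable than the cross-boundary transition ("by" to "dog"), paralleling the cue to word identification described above.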
However, young children with hearing loss might not
have consistent access to this necessary high-quality linguistic input, which is critical for receptive and expressive language development (e.g., Hoff & Naigles, 2002; Quittner
et al., 2013). Although multiple factors can affect a child’s
access to linguistic input, it is reasonable to conclude that
compromised exposure to high-quality speech input contributes, at least in part, to the well-known deficits in speech,
language, and vocabulary development in children with
hearing loss (Ching & Dillon, 2013; Geers, Strube, Tobey,
& Moog, 2011; Stelmachowicz, Pittman, Hoover, Lewis,
& Moeller, 2004; Tomblin, Oleson, Ambrose, Walker, &
Moeller, 2014). Well-fit hearing aids that are worn consistently (> 10 hours a day), thus providing enhanced audibility, have been shown to help ameliorate some of these
language deficits (Tomblin et al., 2015).
It is widely recognized that listening in the presence of background noise is challenging for children with hearing loss (e.g., Crandell, 1993; Stickney, Zeng, Litovsky, & Assmann, 2004), a challenge that can limit their acoustic access to desired speech. Even with well-fit hearing aids, a signal-to-noise ratio (SNR) of at least +15 dB is necessary for optimum speech perception by children with hearing loss (American National Standards Institute/Acoustical Society of America, 2010). Numerous hearing technologies have been recommended to improve children's ability to hear in adverse listening conditions by enhancing the SNR, including different microphone types (e.g., fixed directional, fully adaptive directional) and various signal processing approaches (e.g., digital noise reduction). However,
the most effective technology for this purpose used with
ᵃDepartment of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
Correspondence to Carlos R. Benítez-Barrera: carlos.r.benitez@vanderbilt.edu
Editor-in-Chief: Frederick (Erick) Gallun
Editor: Steve Aiken
Received May 3, 2017
Revision received August 20, 2017
Accepted September 15, 2017
https://doi.org/10.1044/2017_JSLHR-H-17-0168
Disclosure: This work was funded by Phonak, which provided equipment for use in this study.