Performance of Directional Microphones for Hearing Aids: Real-World versus Simulation

Cynthia L. Compton-Conley*†
Arlene C. Neuman*
Mead C. Killion‡
Harry Levitt*

*The Graduate Center of the City University of New York; †Gallaudet University; ‡Etymotic Research, Inc.

J Am Acad Audiol 15:440–455 (2004)

Abstract
The purpose of this study was to assess the accuracy of clinical and labora-
tory measures of directional microphone benefit. Three methods of simulating
a noisy restaurant listening situation ([1] a multimicrophone/multiloudspeaker
simulation, the R-SPACE™, [2] a single noise source behind the listener, and
[3] a single noise source above the listener) were evaluated and compared
to the “live” condition. Performance with three directional microphone sys-
tems differing in polar pattern (omnidirectional, supercardioid, and hypercardioid
array) and directivity indices (0.34, 4.20, and 7.71) was assessed using a
modified version of the Hearing in Noise Test (HINT). The evaluation revealed
that the three microphones could be ordered with regard to the benefit obtained
using any of the simulation techniques. However, the absolute performance
obtained with each microphone type differed among simulations. Only the R-
SPACE simulation yielded accurate estimates of the absolute performance
of all three microphones in the live condition. Performance in the R-SPACE
condition was not significantly different from performance in the “live restau-
rant” condition. Neither of the single noise source simulations provided accurate
predictions of real-world (live) performance for all three microphones.
Key Words: Articulation Index-Directivity Index, benefit, directional micro-
phones, Directivity Index, hearing aids
Abbreviations: AI-DI = Articulation Index-Directivity Index; DI = Directivity
Index; HINT = Hearing in Noise Test; ITE = in the ear; RTS = Reception
Threshold for Sentences; SNR = Signal-to-Noise Ratio

Reprint requests: Cynthia L. Compton-Conley, Ph.D., Department of Hearing, Speech, and Language Sciences, Gallaudet University, 800 Florida Avenue, NE, Washington, DC 20002; Phone: 202-651-5326; Fax: 202-651-5324; E-mail: cynthia.conley@gallaudet.edu

This dissertation work was supported primarily by a student grant from Etymotic Research. Financial incentives for the listeners were provided by the Gallaudet Research Institute.
Improvements in the design and performance of directional microphones for hearing aids have led to an improved ability to recognize speech in some noisy environments and to increased user satisfaction (Kochkin, 1996, 2000). There are
many types of directional microphone systems
available for use in hearing aids. As such,
research and clinical facilities require an
efficient and practical method to (1) document
performance for patients, their families, and
third party providers, and (2) predict how
well directional microphone hearing aids
(DMHAs) will perform in real acoustic
environments such as restaurants, living
rooms, classrooms, churches, and in other
settings, characterized by the presence of
both noise and reverberation. Typically,
performance with DMHAs is compared with
performance of the same hearing aid with an
omnidirectional microphone. The method of
evaluation usually involves simulation of a
noisy environment in a sound-treated room.
The speech signal is introduced through a
loudspeaker placed in front of the hearing aid
user, and noise is introduced into the room
from one or more loudspeakers. Performance
with the hearing aid is measured using an
omnidirectional microphone and with the
directional microphone under study. A
comparison of the two measures gives an
indication of the benefit to be obtained with
the directional microphone.
Studies have shown that the benefit
provided by a DMHA is related to its polar
pattern, as well as variations in the test
environment. Factors such as the intensity
level of the target stimuli, signal-to-noise
ratio (SNR), room reverberation, location of
the listener, and location of the noise source(s)
will all affect directional microphone benefit
(e.g., Nielsen, 1973; Nielsen and Ludvigsen,
1978; Studebaker et al, 1980; Madison and
Hawkins, 1983; Hawkins and Yacullo, 1984;
Ricketts and Dhar, 1999; Ricketts and
Mueller, 1999). A review of previous research
reveals several possible deficiencies in the
methods used for evaluating directional
microphone efficacy. In many studies,
simulation of a noisy listening environment
has been accomplished by placing a single
noise source directly behind the listener, that
is, at 180° azimuth (e.g., Lentz, 1972; Mueller
and Johnson, 1979; Madison et al, 1983;
Hawkins et al, 1984; Valente, et al, 1995;
Lurquin and Rafhay, 1996). While there
might, indeed, be occasions where a listener
would encounter a single noise source directly
behind, this test condition is not typical of the
listening conditions that bother most people.
In addition, an evaluation method that
utilizes a signal in front of the listener and
noise directly behind the listener will show
maximum benefit for microphones with
maximum attenuation (null) at 180° (i.e., a
cardioid pattern of directivity) as compared
to modern day supercardioid and
hypercardioid microphones whose polar
patterns are characterized by rear lobes. In
some early studies, multiple noise sources
were used (e.g., Nielsen, 1973; Compton,
1974; Preves, 1975; Rumoshosky, 1976; Lentz,
1977), but in most of these cases the noises
were correlated (waveforms from each
loudspeaker were similar). Correlated noise
is not typical of most listening situations.
The use of multiple noise sources would
seem to be necessary because modern
directional hearing aids contain microphones
having varying polar patterns and degrees of
directivity. In real-world environments such
as a restaurant or a cocktail party, noises
may arise from all directions. Therefore, in
order to assess the improvement in SNR
achieved by directional hearing aids, it would
be advantageous to have noise arising from
multiple directions in the evaluation
environment.
The effect of room reverberation is
another factor that is typically not assessed in
evaluation procedures. Most studies have been
carried out in anechoic chambers or in sound-
attenuating booths; thus, the reverberation
conditions are unrealistically low.
The findings of recent studies emphasize
the importance of including such factors as
part of the evaluation procedure. For example,
Ricketts (2000) studied the effect of the
configuration of multiple noise source(s) in
two reverberant environments. The Hearing
in Noise Test (HINT) (Nilsson et al, 1994a)
was used to determine the absolute binaural
reception threshold for sentences (RTS) for
three pairs of different directional hearing
aids, as well as the directional benefit
(difference between the RTS for
omnidirectional and directional conditions).
Listeners with sensorineural hearing loss
were tested in two listening environments: (1)
a “living room” with a reverberation time of
0.6 seconds and (2) a “classroom” with a
reverberation time of 1.1 seconds. Four noise
source configurations were studied, including
a signal located in front and noise at (a) 180°
(typical of earlier evaluation methods); (b) 90°,
135°, 180°, 225°, and 270°, (typical of listening
in the front of a class or in the theater); (c)
30°, 105°, 180°, 225°, and 330° (typical of an
environment with more diffuse noise); and (d)
30°, 105°, 180°, 225°, and 330° but with the
30° and 330° loudspeakers turned
perpendicular to the listener (typical of a
situation in which the noise sources in the
front are farther away).
Both reverberation and noise
configuration were found to affect the
directional benefit across hearing aids. In
the living room environment, directional
benefit ranged between 3.6 to 7.9 dB,
depending on the noise source configuration.
This directional benefit decreased to a range
of 2 to 5.1 dB in the classroom setting.
Directional benefit was significantly higher
for the 0°/180° loudspeaker configuration in
comparison with all others. Significantly less
directional benefit was provided to listeners
in the diffuse restaurant configuration
(condition c) than the classroom or restaurant
configuration where the background noise
at 30° and 330° was reduced by 5 dB
(condition d). These results reveal that the
0°/180° test configuration commonly used in
clinical evaluation may overestimate the
benefit that will be obtained in more realistic
environments having multiple noise sources.
Second, an inverse relationship was noted
between directional benefit/performance and
reverberation time across all hearing aid
brands, that is, as reverberation time
increased, directional benefit/performance
decreased. Although smaller in magnitude,
this trend is in agreement with previous
investigations (Studebaker et al, 1980;
Madison and Hawkins, 1983; Hawkins and
Yacullo, 1984).
While Ricketts attempted to simulate
real-world effects in the clinic, Killion and
colleagues (1998) took a very different
approach—they recorded evaluation
materials in real-world environments. Test
recordings were made in several different
environments while subjects wore prototypes
of binaural in-the-ear (ITE) hearing aids
equipped with both omnidirectional and
supercardioid microphones. Several pairs of
ITE hearing aids were equipped with D-Mic™ cartridges whose outputs were
available through subminiature Microtronic
four-pin connectors. One pin was connected
to the omnidirectional microphone output
and another pin to the directional microphone
output. The directional microphone output
was equalized to produce the same frequency
response (flat) as the omnidirectional
microphone. Cables were connected to permit
each of the two stereo microphone outputs—
directional and omnidirectional—to be
connected to a hand-held digital analog tape
(DAT) recorder. The individual, acting as a
"recording dummy," wore two custom ITE
hearing aids attached to the recording
instrumentation described above. Each DAT
recorder was carried in a small belt pack.
Outputs of the omnidirectional and
directional microphones were recorded
simultaneously, thereby permitting later
comparison of the two microphone outputs
under identical conditions.
A sequence of sentence blocks modeled
after the SIN (Speech in Noise) Test (Fikret-
Pasa, 1993; Killion and Villchur, 1993 ) was
recorded in various noisy real-world
environments: a crowded street party (90–95
dBA), two restaurants (70–80 dBA and 60–65
dBA), a museum party (80–85 dBA), and a
classroom party simulation (80–85 dBA).
Because the methodology of the experimental
design was not standardized, it is difficult to
compare the study's results with past and
future investigations of other directional
hearing aids. However, this study attempted
to address the need for a test environment
that approximates common real-world
reverberation and noise conditions. A
comparison of the results measured with the
outdoor (street party) and indoor recordings
showed that individuals with hearing loss
obtained greater benefit (9 dB improvement)
with the directional microphones in the
outdoor situation. This is to be expected
because the street party situation is a free
field situation where the listener is in the
direct sound path of the primary talker. In the
other listening environments, the room
reverberation and talker-listener distance
made listening more difficult.
As demonstrated by these studies, it may
be difficult to predict performance of a
particular DMHA for a specific listening
environment because of the complex
interaction between the characteristics of
the microphone and the characteristics of
the environment in which the hearing aid is
used. Recently a system was developed for the
purpose of accurately recording and then
reproducing/simulating real-world environments
for hearing aid evaluations (Revit et al, 2002a,
2002b). The system, called the R-SPACE,
consists of a circular, horizontal array of eight
interference-tube (shotgun) microphones that
can be placed in a circular configuration in
the environment to be recorded for later
simulation. Once the noisy/reverberant
recordings have been made, the recorded
environment can be recreated by playing
back the recordings through an array of eight
loudspeakers placed in a configuration that
mimics the configuration and placement that
had been used for the microphones at the
time of recording. If this technique were
successful in reproducing specific listening
environments, it would make it possible to
obtain accurate assessments of hearing aids
with directional microphones.
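
As a concrete picture of the configuration just described, the short sketch below computes the eight microphone/loudspeaker positions (45-degree increments on a 24-inch radius around the listener). The coordinate convention is an assumption made here for illustration, not part of the R-SPACE specification.

# Sketch: positions of the eight R-SPACE microphones/loudspeakers,
# spaced 45 degrees apart on a 24-inch radius around the listener.
# Convention (assumed): 0 degrees is straight ahead, angles increase clockwise.
import math

RADIUS_IN = 24.0
for k in range(8):
    azimuth = 45 * k
    x = RADIUS_IN * math.sin(math.radians(azimuth))   # +x toward the listener's right
    y = RADIUS_IN * math.cos(math.radians(azimuth))   # +y straight ahead
    print(f"{azimuth:3d} deg -> x = {x:+6.1f} in, y = {y:+6.1f} in")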
The purpose of the present study was to
assess whether real-world/“live” performance
with DMHAs could be accurately assessed in
a clinical/laboratory environment. The
absolute accuracy of the measured benefit as
well as the ability to correctly order the
relative benefit of three hearing aid
microphones were considered.
In order to be able to assess real-world/
live performance, recordings were made in a
noisy restaurant through three sets of
binaural microphones placed on a Knowles
Electronic Manikin for Acoustic Research
(KEMAR). This condition, which we call “live,”
served as the reference condition, or the gold
standard to which the simulations would be
compared. The R-SPACE simulation
technique was compared to more traditional
methods for simulating a noisy environment:
(1) use of a single loudspeaker for generating
noise from the rear of the listener, a
traditional technique, and (2) use of a single
loudspeaker for generating noise from
overhead. These two single-loudspeaker
competition paradigms have previously been
used for evaluation purposes. The latter was
proposed as an efficient method of creating
a simulated diffuse noise field (Mueller and
Sweetow, 1978). For all of the test conditions,
noise was recorded in the busy neighborhood
restaurant that was to be simulated.
METHOD
Directional Microphones under Test
Three pairs of hearing aid microphones
differing in directionality were used in this
study: (1) an ITE omnidirectional microphone;
(2) an ITE supercardioid microphone
(D-Mic); and (3) a five-element endfire
array microphone with hypercardioid
characteristics (Soede et al, 1993a).
These three microphones were selected
because they represented a wide range of
directivity and had different polar patterns.
The in situ (all microphones in place on
KEMAR) Directivity Index (DI) and
Articulation Index-weighted Directivity Index
(AI-DI) values of these microphones as
measured under anechoic conditions appear
in Table 1. The AI-DI is a method used to
predict the effect of the directivity on speech
recognition performance. Measurements were
made at a distance of 24 inches from the
loudspeaker, the listening distance used in the
study.
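
The study reports AI-DI values but does not spell out the calculation; a minimal sketch follows, under the common assumption that the AI-DI is the articulation-index band-importance-weighted average of the per-band DIs. The band weights and DI values shown are illustrative placeholders, not values from this study.

# Illustrative sketch: AI-weighted Directivity Index (AI-DI).
# Assumption: AI-DI = sum over bands of (AI band-importance weight x band DI).
# The weights and DIs below are placeholders, not the study's measured data.
bands = [500, 1000, 2000, 4000]                # one-third-octave band centres (Hz)
ai_weights = [0.20, 0.25, 0.35, 0.20]          # hypothetical importance weights (sum to 1.0)
band_di_db = [2.8, 4.0, 4.9, 5.4]              # hypothetical per-band DIs (dB)

ai_di = sum(w * di for w, di in zip(ai_weights, band_di_db))
print(f"AI-DI = {ai_di:.2f} dB")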
Instrumentation for Recording
The same KEMAR and microphone setup
was used to make recordings in all four
environments (live, R-SPACE, 0°/180°, and
0°/90°), as well as to measure the
electroacoustic characteristics of the
microphones, including directivity.
Three sets of binaural hearing aid
microphones were simultaneously mounted
to KEMAR. ITE hearing aid cases contained
both omnidirectional and supercardioid
microphones that functioned simultaneously.
The hypercardioid (array) microphone was
taped to the side of KEMAR’s head
approximately an inch above each ear and
facing slightly downward. The output of each
of the hearing aid microphones was pre-
amplified and then amplified before recording.
The outputs from all of the microphones were
recorded without additional hearing aid
circuitry, such as signal processing and
frequency shaping.
An ER-11 half-inch microphone was
suspended six inches above the apex of
KEMAR’s head and was used for calibration,
as well as for recording. The output of this
microphone was also amplified before being
sent to the multitrack recording system.
To make the noise recordings for playback
in the R-SPACE, a circular, horizontal array
of eight interference-tube (shotgun)
microphones was placed in equally
distributed, 45-degree angular increments
around KEMAR (Figure 1).
Table 1. DI and AI-DI Values (AI-DI values in parentheses) for Hearing Aid Microphones as Measured on KEMAR under Anechoic Conditions, 24 Inches from the Loudspeaker

Microphone         Right Ear       Left Ear        Average
Omnidirectional    1.10 (1.30)     -0.42 (0.36)    0.34 (0.83)
Supercardioid      4.35 (4.70)     4.05 (4.32)     4.20 (4.51)
Hypercardioid      8.03 (8.03)     7.39 (7.33)     7.71 (7.68)
24"
0
o
45
o
315
o
270
o
90
o
135
o
180
o
225
o
Figure 1. Multimicrophone array (surrounding KEMAR) used to record restaurant background noise for R-SPACE.
Illustration adapted from Revit et al (2002b).
The acoustic center of each microphone
was positioned at a distance of 24 inches
from the center of KEMAR’s head, facing
outward. A multitrack DTRS, consisting of
two Tascam DA-38 eight-track recorders was
used to record the three pairs of binaural
hearing aid microphone tracks and the ER-
11 omnidirectional microphone signals. A
third recorder (DA-98) was used to record
the signals for the R-SPACE simulation. Each
DTRS unit uses a helical scan head system
to record up to eight tracks of 16-bit, linear
pulse-code modulated digitized audio on a
Hi8-size cassette tape. A sampling rate of 48
kHz was employed. The three DTRS units
were driven in synchrony by the clock of the
DA-98. Thus, 24 synchronized tracks were
available.
Restaurant Noise Recordings
Noise recordings were made in a busy
neighborhood restaurant. The noise stimuli
were recorded during a breakfast party
attended by 42 people. A restaurant
environment was chosen because it is an
environment that causes considerable speech
recognition problems for hearing aid users
due to the presence of uncorrelated noise
sources at all azimuths. Ambient noise levels
measured in the restaurant on several
different occasions revealed the average noise
level to be 75 dB SPL and the average signal-
to-noise ratio (C-scale) to be approximately
+5 to +10 dB. Subjectively, the nature of the
noise was judged to be rather diffuse, that is,
it was difficult to single out any one particular
person’s speech over another’s.
Three sets of noise recordings were made simultaneously: (1)
recordings through the three pairs of hearing
aid microphones mounted on KEMAR (for
use in the live condition); (2) an eight-track
multimicrophone array recording of the
restaurant noise (for use in the R-SPACE
condition); and (3) one ER-11 overhead
omnidirectional microphone track (for
calibration and for preparation of test
materials for the 0°/90° and 0°/180° degree
conditions).
The KEMAR was positioned in the middle
of the main dining room of the restaurant in a
location normally used for a small dining table,
situated among many nearby tables occupied
by other diners. The manikin was oriented at
an angle to the walls of the restaurant and a
foam “hairpiece” was affixed to the top of
KEMAR’s head to reduce the reflections from
the head to the reference microphone.
A Tannoy Arena loudspeaker, equalized for a flat response (±3 dB) in the 1/3-octave bands centered at 160 Hz to 16 kHz, was placed at a distance of 24 inches in front of
KEMAR (24 inches from the pick-up point of
each head-worn microphone). A pink noise
calibration signal was delivered through the
loudspeaker. The calibration signal was 84 dB
SPL at the chosen field reference point (FRP),
six inches above the apex of KEMAR.

Figure 2. Calibration setup for field reference point (FRP). The ER-11 calibration microphone sits 6 inches above the KEMAR reference point (KRP); loudspeaker positions 24 inches from KEMAR are shown for noise calibration (live and R-SPACE), speech calibration (IAC booth only), noise calibration at 180° (IAC booth only), and overhead noise calibration (IAC booth only).

Figure 2 illustrates the arrangement of the
equipment for calibration and recording in the
restaurant, as well as the recording sessions
that took place in the simulator and in the
IAC booth. The calibration signal was
recorded simultaneously on separate tracks
of the DTRS through each hearing aid
microphone, as well as through the ER-11
calibration/recording microphone.
Recording of Noise for R-SPACE
Condition
The recordings obtained in the restaurant
from the array of shotgun microphones were
used to produce the R-SPACE condition. The
R-SPACE recording/playback system was
placed in a large room (dimensions = 19.4’ L
x 17’ W x 7’ H). Each of the eight tracks
recorded in the restaurant through the
multimicrophone array was played back
through each of eight Tannoy Arena
loudspeakers placed in a circular pattern
around KEMAR with the emanating surface
of each loudspeaker 24 inches from the
KEMAR reference point (KRP) (Figure 3).
The same calibration loudspeaker, KEMAR
manikin, head-worn microphones, and ER-11
reference microphone that were used in the
previous recordings were employed in this
condition. As in the real restaurant, the
orientation of the recording/playback system
and KEMAR were such that neither was
directly facing any walls in the simulation
room.
Recording of the Noise for 0°/90° and
0°/180° Conditions
The recording of the output of the ER-11
omnidirectional reference microphone made
in the restaurant was used for the simulations
in the 0°/90° and 0°/180° conditions. These
recordings were made in an IAC booth
(dimensions = 12’ L x 9.4’ W x 7.5’ H). The noise
recording was played back through a single
Tannoy Arena loudspeaker placed either 24
inches above KEMAR (90°) or 24 inches
behind (180°) KEMAR. Again, the same
equipment and procedures used in the
restaurant were used to record these
simulations.
Analysis of the Spectrum of the
Restaurant Noise
The long-term speech spectrum of the
restaurant noise used in the study was
verified to be similar to that of the speech-
shaped HINT noise (Figure 4).
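
The comparison in Figure 4 was made with Sound Forge 4.5; as a rough equivalent, the sketch below estimates and compares long-term spectra of two recordings using Welch's method. The file names are hypothetical.

# Sketch: compare long-term spectra of restaurant noise and HINT noise.
# File names are hypothetical; any two mono/stereo WAV files will do.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def long_term_spectrum(path):
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                       # fold stereo to mono
        x = x.mean(axis=1)
    f, pxx = welch(x, fs=fs, nperseg=8192)
    return f, 10 * np.log10(pxx + 1e-20)

f1, restaurant_db = long_term_spectrum("restaurant_noise.wav")
f2, hint_db = long_term_spectrum("hint_noise.wav")
# A constant level offset is irrelevant here; compare the spectral shapes.
print("max shape difference (dB):",
      np.max(np.abs((restaurant_db - restaurant_db.mean())
                    - (hint_db - hint_db.mean()))))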

Figure 3. R-SPACE playback/recording system (all loudspeakers 24 inches from the KEMAR reference point [KRP]). Illustration adapted from Revit et al (2002b).
Production of Speech Recordings
The HINT sentences were chosen as the
speech material to be used for the evaluation
of the directional microphones for several
reasons. These materials were specifically
designed for measuring the reception
threshold for sentences (RTS) in noise using
an adaptive test procedure. The HINT
material and procedure have been used in
previous studies of directional microphone
hearing aids. A sufficient number of lists
were available for the design requirements
of this study. This material provides a simple
method for determining directional
performance and benefit, without regard for
variation of real-world SNRs. Normative data
are available for listeners with normal
hearing as well as sensorineural hearing loss
(Nilsson et al, 1992; Nilsson et al, 1994b).
Initial plans called for speech materials
to be recorded in each of the environments to
be evaluated. However, ambient traffic noise
from a nearby four-lane truck route and
airport made it impossible to record the
speech materials in the restaurant without
distortion and with an acceptable SNR.
Therefore, it was necessary to record all of the
speech materials in a sound-treated room. A
speaker-to-listener distance of 24 inches was
chosen for evaluation. Although rather close,
this distance was chosen to represent the
distance at which a hearing aid user might
position him- or herself in order to
maximize the signal-to-noise ratio in a very
difficult listening situation. Estimates of the
critical distance were obtained in each
recording environment (restaurant, R-SPACE
room, and IAC room) by measuring the level
of pink noise at several distances from the
same Tannoy Arena loudspeaker used for
delivering the pink noise calibration signal
and the HINT sentences. Measurements were
made under acoustic conditions similar to
those present during recording sessions. In
all rooms, a distance of 24 inches was well
within the critical distance. Because the
distance between KEMAR and the
loudspeaker would have been within the
critical distance in any of the test
environments, recording the speech in the
sound-treated room at this distance would
yield a recorded test material similar to what
would have been recorded in each of the
environments. In addition, the direct-to-
reverberant ratio at the 24-inch distance was
greater than +10 dB for the restaurant and
the IAC booth, and at least +15 dB for the R-
SPACE recording studio. Thus, masking
effects due to room reverberation would not
contaminate measurements of threshold
when using the KEMAR-recorded speech and noise presented to the subjects.

Figure 4. Frequency spectra of HINT sentences, HINT noise, and restaurant noise as measured by Sound Forge 4.5.
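
Critical distances were estimated empirically in this study; for orientation, a common free-field approximation relates critical distance to room volume, reverberation time, and source directivity. The sketch below uses that approximation with an assumed reverberation time and loudspeaker directivity factor, so its output is illustrative only.

# Sketch: approximate critical distance r_c ~ 0.057 * sqrt(Q * V / RT60),
# with V in cubic metres, RT60 in seconds, Q the source directivity factor.
# RT60 and Q below are assumed values, not measurements from the study.
import math

def critical_distance_m(volume_m3, rt60_s, q=1.0):
    return 0.057 * math.sqrt(q * volume_m3 / rt60_s)

room_volume = 19.4 * 17.0 * 7.0 * 0.0283168   # R-SPACE room, ft^3 -> m^3
rt60 = 0.4                                     # assumed reverberation time (s)
q = 2.0                                        # assumed loudspeaker directivity factor
rc = critical_distance_m(room_volume, rt60, q)
print(f"critical distance ~ {rc:.2f} m ({rc / 0.0254:.0f} in)")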
Recordings of the HINT sentences were
played from a single Tannoy Arena
loudspeaker placed 24 inches in front of
KEMAR (0° azimuth) in an IAC booth
(dimensions = 12’ L x 9.4’ W x 7.5’ H) and
recorded through the three pairs of
microphones mounted on KEMAR and
through the ER-11 microphone placed above
KEMAR.
Final Token Preparation
The goal of the final re-recording process
was to equate the live restaurant noise
calibration to that of the HINT sentences so
that the test conditions would reflect what the
various microphones would have provided
to the listener had the sentences and noise
been recorded simultaneously. To accomplish
this, the following procedures were followed:
1. The ER-11 reference IAC booth
recording of the HINT sentences was
re-recorded so that all sentences were
concatenated, deleting any waveform
40 dB or more below the
instantaneous peak level. The rms
level of the sentences was then
measured.
2. The ER-11 reference microphone
recording of the restaurant noise was
divided into 14 approximately 2.5-
minute-long segments, and Sound
Forge 4.5 (2000) was used to
determine the rms level for each noise
sample. Then, the rms level of each
of the 14 ER-11 noise samples (in
each environment) was adjusted to
achieve equal rms of all noise
samples.
3. Sentence and noise samples were
recorded onto new tapes as test
tokens to produce a four-track
recording in which binaural hearing
aid recordings of the HINT sentences
presented at 0° azimuth in the IAC
booth were aligned with binaural
recordings of restaurant noise as
recorded through the same hearing
aid microphones under four test
conditions. Each 20-sentence HINT
list was matched to one of 12 possible
noise segments using a 12 x 12 Hyper
Greco Latin Square Design where
each sentence and noise segment
forms a unique pair. The four test
conditions were:
a. Live: Noise from real restaurant
b. R-SPACE: Noise from simulator
c. 0°/180°: Noise from single speaker
at 180° azimuth, IAC booth
d. 0°/90°: Noise from single speaker
at 90° azimuth (overhead), IAC
booth
This process resulted in a DTRS four-track recording of HINT sentences (tracks 1 and 2; left/right) and noise segments (tracks 3 and 4; left/right). Calibration tones of 1000 Hz at 78 dB SPL were applied so that -3 dB VU corresponded to 75 dB SPL, the average level of the recorded HINT sentences and restaurant noise in each ear at the FRP.
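
Steps 1 and 2 above amount to gated rms measurement and per-segment gain adjustment; a minimal sketch of those operations follows. The gating interpretation (threshold relative to the recording's peak) and the synthetic segments are assumptions made for illustration.

# Sketch: gated rms measurement (cf. step 1) and rms equalization of noise
# segments (cf. step 2). `segments` stands in for the recorded noise excerpts.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def gated_rms(x, floor_db=-40.0):
    """rms of samples within |floor_db| of the peak level (one reading of step 1)."""
    peak = np.max(np.abs(x))
    keep = np.abs(x) >= peak * 10 ** (floor_db / 20)
    return rms(x[keep])

def equalize_rms(segments, target_rms=None):
    """Scale each segment so all share the same rms level (step 2)."""
    levels = [rms(s) for s in segments]
    if target_rms is None:
        target_rms = float(np.mean(levels))
    return [s * (target_rms / lv) for s, lv in zip(segments, levels)]

# Example with synthetic segments standing in for the 14 noise excerpts.
rng = np.random.default_rng(0)
segments = [rng.normal(scale=sc, size=48_000) for sc in (0.05, 0.08, 0.12)]
equal = equalize_rms(segments)
print([round(rms(s), 4) for s in equal])   # all rms values now match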
Prior to data collection, a pilot study
with five normal-hearing young adults (ages
21 to 25) was completed to determine whether
substituting restaurant noise for the noise
provided as part of the HINT test would
affect the reliability of the thresholds
measured with the adaptive test procedure.
The pilot study revealed a 95% critical
difference similar to that obtained by Nilsson
et al (1994a) for testing with either the
original HINT noise or with the restaurant
noise. As indicated previously, the restaurant
noise was very similar in spectrum to the
speech-spectrum shaped noise used in the
standard administration of the HINT test.
EXPERIMENTAL DESIGN
A repeated measures design was used in
order to determine the effect of the four
test environments for the three directional
microphones.
Twelve listeners, ages 22 to 28 years, with
bilaterally symmetrical normal-hearing and
excellent (92-100%) speech recognition ability
(NU-6) served as subjects. The number of
subjects included in the study was determined
based on a statistical power analysis.
Estimates of the error variance obtained in the
pilot study showed that for a statistical power
of 0.8, a repeated measures design with 12
subjects per condition would result in an
expected error probability of 0.034 for not
detecting a difference of up to 1 dB between
two conditions of greatest interest (e.g., live vs.
R-SPACE). Thus, the experimental design
was found to be reasonably powerful.
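
The pilot error variance underlying this power analysis is not reported here; the sketch below shows the general form of such a calculation for a paired comparison, using an assumed standard deviation of the differences rather than the pilot estimate.

# Sketch: power of a two-sided paired t-test to detect a 1 dB difference, n = 12.
# The standard deviation of the paired differences is an assumed value.
from scipy import stats

def paired_power(delta, sd_diff, n, alpha=0.05):
    ncp = delta / (sd_diff / n ** 0.5)          # noncentrality parameter
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # power = P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(f"power ~ {paired_power(delta=1.0, sd_diff=1.0, n=12):.2f}")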
Test Procedure
A Tascam DA-38 Digital Audio Tape
Recorder was used to play the test stimuli.
The output of the DA-38 was connected to two
two-channel audiometers. Tracks 1 and 2 of
the DA-38 DATR were used to deliver the
HINT sentences to channels 1 and 2 of a
Grason-Stadler 16 audiometer. The
audiometer outputs of channels 1 and 2 were
led to line inputs 1 and 2 of a Mackie 1402
VLZ Mixer. Tracks 3 and 4 of the DA-38
DATR were used to deliver the noise
recordings to channels 1 and 2 of a Grason-
Stadler (GSI) 10 audiometer. The outputs of
channels 1 and 2 were then led to line inputs
3 and 4 of the mixer. A pair of ER4B insert
earphones was used to deliver the binaural
sentence and noise tracks to the subjects
who sat inside an IAC test booth.
The settings on the mixer were adjusted
to yield a level of 75 dB SPL through the
insert earphones (as measured in a 2-cc
coupler) when the attenuator dials of both
audiometers were set at 63 dB HTL and the
calibration tone for each track of the test
tape was set to “0” on the VU meter.
The test procedure and method of scoring
recommended by Nilsson et al (1994b) were
followed, except that the initial presentation
level for the speech was started 20 dB below
the noise level, rather than the recommended
-10 dB RTS. This was necessary to avoid
starting at an audible speech level for some
of the directional hearing aid conditions. For
this experiment, two ten-sentence blocks
were used for each of the 12 test conditions.
Test conditions were counterbalanced across
participants. Two ten-sentence practice lists
were presented before data collection.
The listener's task was to repeat the
sentences spoken by the male talker in the
presence of the restaurant noise presented at
a fixed level of 75 dB SPL in each ear. The
level of the speech was adjusted adaptively
to estimate the RTS at which the sentences
could be repeated correctly 50% of the time.
Correct identification of each sentence was
based on proper repetition of all words of
each sentence, with the exception of certain
articles where substitution was allowed (e.g.,
"a" for "the"). The sentences were presented
at the same level bilaterally. An incorrect
response resulted in the speech presentation
level being raised bilaterally, and a correct
response resulted in the speech presentation
level being lowered bilaterally for the next
trial. The level of the sentence stimuli
presented to each ear was varied in 4 dB
steps for trials 1 through 4 and in 2 dB steps
for trials 5 through 20. If a correct response
was noted for the 20th (and last) trial, then
a hypothetical 21st trial would occur 2 dB
lower. If an incorrect response was noted for
the 20th trial, then a hypothetical 21st trial would occur 2 dB higher. To calculate the RTS, the attenuator settings (presentation levels for the sentences) for trials 5 through 21 were averaged, and the audiometer dial setting for the noise was subtracted from this average.

Figure 5. Mean field-referred RTSs (dB) required across three hearing aid microphone conditions (Omni, D-Mic, Array) and four noise delivery environments (Live, R-SPACE, IAC 180°, IAC 90°). Standard deviations are also shown.
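
The adaptive rule and RTS computation described above can be summarized in a short sketch; the response sequence and starting level in the example are hypothetical, and only the stated step sizes and averaging rule are taken from the text.

# Sketch of the adaptive rule described above: 4 dB steps for trials 1-4,
# 2 dB steps thereafter, level raised after an incorrect response and lowered
# after a correct one; RTS = mean of the sentence levels for trials 5-21
# minus the fixed noise level. The response pattern below is hypothetical.
def run_track(responses, start_level, noise_level):
    levels = []                     # presentation level used on each trial
    level = start_level
    for trial, correct in enumerate(responses, start=1):
        levels.append(level)
        step = 4 if trial <= 4 else 2
        level += -step if correct else step
    levels.append(level)            # hypothetical 21st trial level
    rts_levels = levels[4:21]       # trials 5 through 21
    return sum(rts_levels) / len(rts_levels) - noise_level

# 20 hypothetical correct (True) / incorrect (False) responses
responses = [False, False, True, True] + [True, False] * 8
print(f"RTS = {run_track(responses, start_level=55.0, noise_level=75.0):.1f} dB")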
RESULTS AND DISCUSSION
Mean RTS (in dB) and standard
deviations for the three hearing aid
microphones and four noise environments
are shown in Figure 5 and Table 2. Inspection
of the figure reveals that in all of the test
environments, performance is poorest with
the omnidirectional microphone and best
with the hypercardioid microphone.
Examination of the figure also reveals that
the absolute threshold for a given microphone
differs as a function of the test environment.
Thus the three microphones are ranked
similarly in the four environments (i.e.,
omnidirectional = poorest performance,
hypercardioid = best performance,
supercardioid = in between), but the RTS
obtained by each microphone type differs
with the environment.
A decision was made a priori to compare
the mean for each microphone in the three
experimental evaluation conditions to that in
the live condition. The Bonferroni method of
multiple comparisons (Dunn, 1961) was
employed to determine whether significant
differences existed between the mean
performance for each microphone in the live
condition versus that in each of the other
three conditions. For this test, a difference of
1.4 dB was significant. Table 2 shows which
means were found to be significantly different
from the live condition for each microphone,
while Table 3 illustrates the benefit achieved
with each microphone type in the R-SPACE,
0°/180°, and 0°/90° conditions when compared
to the live condition.
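
As an illustration of this comparison scheme (each simulation against the live condition, per microphone, with a Bonferroni-adjusted alpha), a minimal sketch with hypothetical per-subject RTS values follows; it is not a reconstruction of the study's analysis.

# Sketch: Bonferroni-corrected paired comparisons of each simulated environment
# against the live condition for one microphone. The data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
live = rng.normal(-5.7, 1.0, size=12)                 # hypothetical per-subject RTSs
sims = {"R-SPACE": rng.normal(-6.2, 1.0, size=12),
        "0/180":   rng.normal(-4.1, 1.0, size=12),
        "0/90":    rng.normal(-6.1, 1.0, size=12)}

alpha = 0.01 / len(sims)                              # assumed family alpha / comparisons
for name, data in sims.items():
    t, p = stats.ttest_rel(data, live)
    print(f"{name:8s} mean diff = {np.mean(data - live):+.1f} dB, "
          f"p = {p:.3f}, significant = {p < alpha}")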
Results showed performance to be
statistically identical for the live and R-
SPACE conditions. For the live versus 0°/180°
condition, significantly better performance
was seen for the supercardioid (2.4 dB) and
hypercardioid (2.0 dB) microphones in the
0°/180° condition as compared to the live
condition. For the 0°/90° (overhead) condition,
performance was similar to the live condition
for the omnidirectional and supercardioid microphones, but was very different for the hypercardioid (array) microphones. The mean RTS
obtained in the 0°/90° condition with the
hypercardioid microphones was 9.1 dB better
than performance obtained in the live
condition. Omnidirectional performance was
the same in all situations with the exception
of slightly poorer performance (about 1.6 dB)
in the 0°/180° condition.
These results can be explained, in part,
by examining the relationship between the
spatial characteristics of the sound sources
and the directivity of the microphones.
Table 2. Mean Absolute RTS (dB) across Microphones and Environments

Microphone         Live     R-SPACE    0°/180°    0°/90°
Omnidirectional    -5.7     -6.2       -4.1*      -6.1
Supercardioid      -10.3    -9.8       -12.7*     -11.4
Hypercardioid      -11.7    -12.0      -13.7*     -20.8*

Note: Mean values marked with an asterisk indicate significantly different performance (p = 0.01) in that condition versus the live condition for the same microphone.
Table 3. Benefit RTS (dB) Achieved with Each Microphone Type Re: the Live Condition

Microphone         R-SPACE    0°/180°    0°/90°
Omnidirectional    0.5        -1.6*      0.4
Supercardioid      -0.5       -2.4*      1.1
Hypercardioid      0.3        2.0*       9.1*

Note: Asterisks indicate a difference from the live condition of more than 1.4 dB (critical difference).
Figures 6 and 7 show the in situ polar
directional patterns of the super- and
hypercardioid microphones used in this study
(left microphone shown). Both microphones
have noticeable rear lobes. The supercardioid
microphone (Figure 6) has a very large pick-
up pattern in the front hemisphere, and its
nulls are located approximately at 120° and
265°, while the hypercardioid microphone
(Figure 7) has a narrower pick-up pattern in the
front hemisphere and deeper nulls at 90° and
270°.
Figure 6. In situ polar plot at 500, 1000, 2000, and 4000 Hz of the left supercardioid microphone (all microphones on KEMAR); measurement performed 24 inches from the loudspeaker.

Figure 7. In situ polar plot at 500, 1000, 2000, and 4000 Hz of the left hypercardioid (array) microphone (all microphones on KEMAR); measurement performed at 24 inches from the loudspeaker.

According to research by Ricketts, "in an environment without reverberation, and given a particular hearing aid’s polar directivity pattern with a signal of interest
directly in front of the listener, the SNR from
a pair of directional hearing aids will be
dependent on the relative intensity level of
the competing noise integrated over all angles
of the polar pattern” (2000, p.202). Thus, it
seems reasonable to assume that, in the case
of the 0°/180° condition, better performance
for both directional microphones occurred
because noise was not present in the front
hemisphere as it was in the live and R-SPACE
conditions.
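
For orientation, idealized free-field first-order patterns of the form R(theta) = a + (1 - a)cos(theta) can be used to relate polar pattern, null angle, and DI; the sketch below evaluates textbook supercardioid and hypercardioid values of a. These idealized figures are not the in situ patterns or the array characteristics measured in this study.

# Sketch: idealized free-field first-order directional patterns
# R(theta) = a + (1 - a) * cos(theta), their null angles, and their free-field DI.
# Textbook values only; not the in situ responses shown in Figures 6 and 7.
import math

def null_angle_deg(a):
    return math.degrees(math.acos(-a / (1.0 - a)))

def free_field_di_db(a):
    q = 1.0 / (a * a + (1.0 - a) ** 2 / 3.0)     # directivity factor
    return 10.0 * math.log10(q)

for name, a in (("cardioid", 0.5), ("supercardioid", 0.366), ("hypercardioid", 0.25)):
    print(f"{name:14s} null at {null_angle_deg(a):5.1f} deg, DI = {free_field_di_db(a):4.2f} dB")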
The slightly poorer performance (about
2 dB) of the omnidirectional microphones in
the 0°/180° as opposed to the 0°/90° condition
was initially puzzling, since manufacturer's
specifications (in situ AI-weighted polar plot)
for the supercardioid showed that, on the
average, it should provide more noise rejection
from directly behind (1.5 dB) as compared to
90° in the horizontal plane (0 dB). Since
Roberts and Schulein (1997) have shown that
the calculated two-dimensional DI scores
provide a reasonable approximation of true
three-dimensional measures, it was assumed
that the omnidirectional microphone would
show no noise rejection overhead. However,
post hoc spectral analysis of identical
omnidirectional noise tokens recorded from
behind and overhead revealed approximately
1 dB greater amplitude when the tokens for
each channel were presented from behind
versus from above (Figure 8).
As a crosscheck, anechoic AI-weighted
one-third octave band polar responses were
obtained for the left omnidirectional ITE
microphone, using a chirp stimulus and with
the test loudspeaker positioned 24" (same
distance used for calibration and recording)
above (90°) and behind (180°) KEMAR.
Results (Figure 9) revealed about 1 dB more
sensitivity in the behind condition, thus
explaining most of the discrepancy. The
authors suspect that the additional 1 dB
difference is due to test-retest variability or
some other unaccounted-for factor.
A large, divergent result from the live
condition was obtained with the array
microphones in the 0°/90° condition. This 9.1
dB improvement in threshold was due to the
location of the single noise source directly
above a wide, deep null (20 dB) at the midline
of each array microphone (Figure 7). A
listening check verified dramatic signal
attenuation in both the horizontal and
vertical planes of the microphones’ midlines.
For these particular microphones, placing a
single noise source overhead produced a
contrived advantage. This type of test
arrangement would be good to use to
demonstrate array superiority in conditions
where noise arises from a single source
overhead, for example, overhead ventilation
noise in an office. If it were not for the
contrived array performance, it could be said
that the 0°/90° condition produced similar
results to those produced in a diffuse noise field (live condition). Thus, if one wanted to use a single noise source to test the performance of microphones having various degrees of directivity, this might be a viable (and less expensive) option. However, it would be important for the clinician to know ahead of time if the microphones being tested were devoid of midline nulls in the vertical plane.

Figure 8. Comparison of spectra for left omnidirectional ITE noise tokens as recorded through KEMAR at 90 vs. 180 degrees.
In addition to providing information
about the validity of various test
environments, this study also provides
information about the improvement that can
be obtained with three different microphones
of varying directivity. Killion et al (1998)
suggested that the AI-DI value should predict
the improved SNR to be obtained with a
particular microphone. The improvement
seen in the live and R-SPACE conditions
with the supercardioid microphones as
compared to the omnidirectional microphones
was between 3.5 and 4.6 dB, in rough agreement with the 3.68 dB difference between the average anechoic AI-DI values of the supercardioid and omnidirectional microphones. This also
agrees with the results of Killion and his
colleagues (1998) for the same restaurant.
Benefit (over omnidirectional microphones)
with the array microphones was 5.8 dB for the
R-SPACE and 6.0 for the live condition. This
was found to be similar in magnitude to the anechoic AI-DI difference of 6.9 dB between the array and omnidirectional microphones. Listener performance was similar
to that obtained by Soede et al (1993b) with
normal listeners. In that study, monaural
SNRs of normal-hearing listeners improved
approximately 5 dB with the array
microphone compared to unaided
performance. In the current study, the
improvement from the omnidirectional ITE
condition (used to simulate an unaided
condition) was approximately 6 dB in the R-
SPACE and live conditions.
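
The comparisons in this paragraph are simple arithmetic on the values reported in Tables 1 and 2; the sketch below makes that arithmetic explicit, taking predicted benefit as the difference in average anechoic AI-DI relative to the omnidirectional microphone.

# Sketch: predicted benefit (difference in average anechoic AI-DI, Table 1)
# versus measured benefit over the omnidirectional microphone (RTS values, Table 2).
ai_di = {"omni": 0.83, "supercardioid": 4.51, "hypercardioid": 7.68}
rts_live = {"omni": -5.7, "supercardioid": -10.3, "hypercardioid": -11.7}
rts_rspace = {"omni": -6.2, "supercardioid": -9.8, "hypercardioid": -12.0}

for mic in ("supercardioid", "hypercardioid"):
    predicted = ai_di[mic] - ai_di["omni"]
    measured_live = rts_live["omni"] - rts_live[mic]
    measured_rspace = rts_rspace["omni"] - rts_rspace[mic]
    print(f"{mic}: predicted {predicted:.2f} dB, "
          f"measured {measured_live:.1f} dB (live), {measured_rspace:.1f} dB (R-SPACE)")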
In conclusion, the recording/simulation
technique used in this study (R-SPACE)
provided a reasonably accurate simulation of
the live condition yielding equivalent
performance in the real versus the reproduced
restaurant. It is clear from the data that the
R-SPACE technique is superior to traditional
methods of evaluating directional
microphones that use single loudspeakers to
simulate a noisy environment. Of the three
methods evaluated, the R-SPACE recording
technique did the best job of predicting
performance in the real restaurant. While the
overhead loudspeaker simulation of a diffuse
noise field was an adequate predictor of
performance for the supercardioid
microphone, it drastically overestimated the
benefit of the array microphone. As described
above, this was because the array had a large
null in the center of the vertical plane. Thus,
in evaluating directional microphones
clinically, one should use a single overhead
noise source configuration only with full
knowledge of the three-dimensional polar pattern of the microphone in question; that is, only when the microphone does not have a midline null in the vertical plane. The major strength of the simulator system is that a realistic estimate of the in-noise performance of any microphone can be assessed without having detailed information about its polar pattern.

Figure 9. AI-weighted one-third octave band polar frequency responses for the left omnidirectional ITE microphone at 90° and 180° (IAC booth).
This investigation was limited in scope.
Because the simulation was found to yield
RTS performance through directional
microphones consistent with real-world
performance for normal-hearing listeners,
further studies of this approach are
warranted. Important issues not considered
in this study should be investigated.
First, subjects with sensorineural hearing
loss should be tested with hearing aids
containing directional microphones to verify
the validity of the technique on the population
for whom it is intended.
Second, in several recent studies (e.g.,
Ricketts and Dhar, 1999; Ricketts, 2000),
attempts have been made to simulate the
real world using multiple loudspeakers and
uncorrelated noise recordings. The
recording/simulation technique used in the
present study is unique in that
multimicrophone recordings were obtained in
the real world and then were reproduced in
the clinical/laboratory setting. A validation
study comparing the two approaches with
recordings of the live condition would reveal
the accuracy required for predicting benefit
in the real world.
Third, because the noise recordings
employed in the R-SPACE simulation
technique were specific to a particular
restaurant, the findings of this investigation
cannot necessarily be generalized to other
restaurants or other noisy environments.
Future research is needed to catalogue a
range of daily typical listening environments
for listeners with hearing loss. A recorded
“library” of sound using R-SPACE or other
recording techniques could be developed to
represent these environments.
Fourth, in the present study the
recordings were purposely made well within
the critical distance, thus ruling out the
effects of reverberation on speech perception
in noise. Additional study is needed to
determine the pattern of test results obtained
in other environments characterized by
increased reverberation time (e.g., places of
worship, lecture halls).
Acknowledgments. The first author would like to
thank her dissertation committee, Arlene C. Neuman,
chair, and Harry Levitt and Mead Killion, commit-
tee members. Many thanks are also due to Larry
Revit and Robert Schulein who provided the audio
engineering expertise, to David Preves who served
as the outside examiner, to Jonathan Siegel of
Northwestern University who provided access to his
IAC booth for recording purposes, and to those indi-
viduals who served as listeners for several pilots and
the main investigation. Finally, sincere appreciation
is extended to Ruth Bentler and two anonymous peer
reviewers for their helpful suggestions on an earlier
version of this manuscript.
REFERENCES
Compton CL. (1974) The Effect of Conventional and
Directional Microphone Hearing Aids on Speech
Discrimination Scores. Masters thesis, Vanderbilt
University.
Dunn OJ. (1961) Multiple comparisons among means.
J Am Stat Assoc 56:52–64.
Fikret-Pasa S. (1993) The Effects of Compression
Ratio on Speech Intelligibility and Quality. PhD diss,
Northwestern University. Ann Arbor, MI: University
Microfilms.
Hawkins DB, Yacullo WS. (1984) Signal-to-noise ratio
advantage of binaural hearing aids and directional
microphones under different levels of reverberation.
J Speech Hear Disord 49:278–286.
Killion M, Schulein R, Christensen L, Fabry D, Revit
L, Niquette P, Chung K. (1998) Real-world perform-
ance of an ITE directional microphone. Hear J 51:1–6.
Killion M, Villchur E. (1993) Kessler was right—
partly: but SIN test shows some aids improve hearing
in noise. Hear J 46:31–35.
Kochkin S. (1996) Customer satisfaction and subjec-
tive benefit with high-performance hearing
instruments. Hear Rev 3:16–26.
Kochkin S. (2000) MarkeTrak V: “why my hearing
aids are in the drawer”: the consumers’ perspective.
Hear J (53):34–42.
Lentz WE. (1972) Speech discrimination in the pres-
ence of background noise using a hearing aid with a
directionally-sensitive microphone. Maico Audiologic
Library Series 10:1–4.
Lentz WE. (1977) A summary of research using direc-
tional and omnidirectional hearing aids. J Audiol
Tech, 42–65.
Lurquin P, Rafhay S. (1996) Intelligibility in noise
using multi-microphone hearing aids. Acta
Otorhinolaryngol Belg 50:103–109.
Madison TK, Hawkins DB. (1983) The signal-to-noise
ratio advantage of directional microphones. Hear
Instrum 34:18–49.
Mueller H, Johnson R. (1979) The effects of various
front-to-back ratios on the performance of directional
microphone hearing aids. J Am Auditory Soc 5:30–34.
Mueller H, Sweetow R. (1978) Clinical rationale for
using an overhead speaker in the evaluation of hear-
ing aids. Arch Otolaryngol 104:417–418.
Nielsen HB. (1973) A comparison between hearing
aids with a directional microphone and hearing aids
with conventional microphone. Scand Audiol 2:45–48.
Nielsen HB, Ludvigsen C. (1978) Effect of hearing
aids with directional microphones in different acoustic
environments. Scand Audiol 7:217–224.
Nilsson M, Gellnet D, Sullivan JA. (1992) Norms for
the hearing in noise test: the influence of spatial sep-
aration, hearing loss and English language experience
on speech reception thresholds. J Acoust Soc Am 92:
2385.
Nilsson M, Soli SD, Sullivan JA. (1994a) Development
of the Hearing in Noise Test for the measurement of
speech reception thresholds in quiet and in noise. J
Acoust Soc Am 2:1085–1099.
Nilsson M, Soli SD, Sumida A. (1994b) A Definition
of Normal Binaural Sentence Recognition in Quiet
and Noise. Internal document, House Ear Institute,
1–13.
Preves D. (1975) Selecting the best directivity pat-
tern for unidirectional noise suppressing hearing aids.
Hear Instrum 42:18–19.
Revit LJ, Schulein RB, Killion MC, Compton CL,
Julstrom SD. (2002a) Multi-channel sound-field
system for assessing the real-world benefit of hear-
ing aids. Paper presented at the International Hearing
Aid Research Conference, Lake Tahoe, California.
Revit LJ, Schulein RB, Julstrom S. (2002b) Toward
accurate assessment of real-world hearing aid bene-
fit. Hear Rev 9:34–38, 51.
Ricketts T. (2000) Impact of noise source configura-
tion on directional hearing aid benefit and
performance. Ear Hear 21:194–205.
Ricketts T, Dhar S. (1999) Aided benefit across direc-
tional and omni-directional hearing aid microphones
for behind-the-ear hearing aids. J Am Acad Audiol
10:180–189.
Ricketts T, Mueller H. (1999) Making sense of direc-
tional microphone hearing aids. Am J Audiol
8:117–127.
Roberts M, Schulein R. (1997) Measurement and intel-
ligibility optimization of directional microphones for
the use in hearing aid devices. Presented at the 103rd
meeting of the Audio Engineering Society, New York.
Rumoshosky J. (1976) Directional microphones in in-
the-ear aids. Hear Aid J 11:48–50.
Soede W, Berkhout AJ, Bilsen FA. (1993a)
Development of a directional hearing instrument
based on array technology. J Acoust Soc Am
94:785–798.
Soede W, Bilsen FA, Berkhout AJ. (1993b) Assessment
of a directional microphone array for hearing-impaired
listeners. J Acoust Soc Am 94:799–808.
Sound Forge 4.5. (2000) Sonic Foundry Corporate
Headquarters, Madison, WI.
Studebaker G, Cox R, Formby C. (1980) The effect of
environment on the directional performance of head
worn hearing aids. In: Hochberg I, ed. Acoustical
Factors Affecting Hearing Aid Performance. Baltimore:
University Park Press, 81–105.
Valente M, Fabry DA, Potts LG. (1995) Recognition
of speech in noise with hearing aids using dual micro-
phones. J Am Acad Audiol 6:440–449.
... In sound field, some studies have used 2-3 loudspeakers with target speech typically presented directly in front of the listener (0˚azimuth) and maskers presented from 0˚azimuth or from some other location (e.g., ±45˚, ±90˚azimuth, relative to the target location). Others have used more complex, multi-speaker setups such as the R-space TM 8-loudpeaker system to evaluate masked speech understanding in TH and/or hard of hearing listeners [21][22][23][24][25][26][27]. The masking sound used in the R-space TM system replicates a "cocktail party" setting, with multi-talker babble processed to come from the multi-speaker sound sources; the target and masker speech can be assigned to any of the loudspeakers. ...
... The masking sound used in the R-space TM system replicates a "cocktail party" setting, with multi-talker babble processed to come from the multi-speaker sound sources; the target and masker speech can be assigned to any of the loudspeakers. The R-space system has also been used to evaluate the effect of microphone directionality and signal processing on signal-to-noise ratios (SNRs) for hearing aid and cochlear implant systems [22]. ...
... Advantages for speech understanding in diffuse noise (3-loudspeaker array) have been observed in cochlear implant listeners using beamforming microphones and signal-processing [31,32]. Similarly, directional microphones have been evaluated using diffuse noise in hearing aid users [22,33]. ...
Article
Full-text available
Spatial cues can facilitate segregation of target speech from maskers. However, in clinical practice, masked speech understanding is most often evaluated using co-located speech and maskers (i.e., without spatial cues). Many hearing aid centers in France are equipped with five-loudspeaker arrays, allowing masked speech understanding to be measured with spatial cues. It is unclear how hearing status may affect utilization of spatial cues to segregate speech and noise. In this study, speech reception thresholds (SRTs) for target speech in “diffuse noise” (target speech from 1 speaker, noise from the remaining 4 speakers) in 297 adult listeners across 9 Audilab hearing centers. Participants were categorized according to pure-tone-average (PTA) thresholds: typically-hearing (TH; ≤ 20 dB HL), mild hearing loss (Mild; >20 ≤ 40 dB HL), moderate hearing loss 1 (Mod-1; >40 ≤ 55 dB HL), and moderate hearing loss 2 (Mod-2; >55 ≤ 65 dB HL). All participants were tested without aided hearing. SRTs in diffuse noise were significantly correlated with PTA thresholds, age at testing, as well as word and phoneme recognition scores in quiet. Stepwise linear regression analysis showed that SRTs in diffuse noise were significantly predicted by a combination of PTA threshold and word recognition scores in quiet. SRTs were also measured in co-located and diffuse noise in 65 additional participants. SRTs were significantly lower in diffuse noise than in co-located noise only for the TH and Mild groups; masking release with diffuse noise (relative to co-located noise) was significant only for the TH group. The results are consistent with previous studies that found that hard of hearing listeners have greater difficulty using spatial cues to segregate competing speech. The data suggest that speech understanding in diffuse noise provides additional insight into difficulties that hard of hearing individuals experience in complex listening environments.
... The fact that reproduction of VAEs via headphones is not feasible and likely entails uncontrolled HA algorithm behaviour, let alone feedback issues, motivates the use of loudspeaker-based spatial audio reproduction (Minnaar, Favrot, & Buchholz, 2010; Grimm, Ewert, & Hohmann, 2015; Grimm, Kollmeier, & Hohmann, 2016; Oreinos & Buchholz, 2016). Investigations have, for example, assessed the real-world benefit of beamforming algorithms (Compton-Conley et al., 2004; Gnewikow et al., 2009) or aimed at perceptually validating SiN test results obtained in loudspeaker-based VAEs (Cubick & Dau, 2016). The availability of increased computational resources has made it possible to implement interactive low-latency listening scenarios using advanced room acoustic simulations in combination with highly efficient convolution algorithms (Noisternig et al., 2008; Mehra et al., 2015; Wefers, 2015; Schissler, Stirling, & Mehra, 2017). ...
... Nikles & Tschopp, 1996). Objective evaluations of HA algorithms in VAEs help to understand reasons for this discrepancy by being able to increase the complexity of the laboratory condition, which allows to estimate and predict a real-world benefit (Walden et al., 2000;Cord et al., 2002;Compton-Conley et al., 2004). For indoor environments, room acoustics play an important role as additional reflections reduce the effectiveness of binaural cues (Plomp, 1976) and the performance of HA algorithms (Kates, 2001). ...
Thesis
Full-text available
Hearing loss (HL) has multifaceted negative consequences for individuals of all age groups. Despite individual fitting based on clinical assessment, consequent usage of hearing aids (HAs) as a remedy is often discouraged due to unsatisfactory HA performance. Consequently, the methodological complexity in the development of HA algorithms has been increased by employing virtual acoustic environments which enable the simulation of indoor scenarios with plausible room acoustics. Inspired by the research question of how to make such environments accessible to HA users while maintaining complete signal control, a novel concept addressing combined perception via HAs and residual hearing is proposed. The specific system implementations employ a master HA and research HAs for aided signal provision, and loudspeaker-based spatial audio methods for external sound field reproduction. Systematic objective evaluations led to recommendations of configurations for reliable system operation, accounting for perceptual aspects. The results from perceptual evaluations involving adults with normal hearing revealed that the characteristics of the used research HAs primarily affect sound localisation performance, while allowing comparable egocentric auditory distance estimates as observed when using loudspeaker-based reproduction. To demonstrate the applicability of the system, school-age children with HL fitted with research HAs were tested for speech-in-noise perception in a virtual classroom and achieved comparable speech reception thresholds as a comparison group using commercial HAs, which supports the validity of the HA simulation. The inability to perform spatial unmasking of speech compared to their peers with normal hearing implies that reverberation times of 0.4 s already have extensive disruptive effects on spatial processing in children with HL. Collectively, the results from evaluation and application indicate that the proposed systems satisfy core criteria towards their use in HA research.
... 1,2 This may be because established laboratory and clinical tests consider only simplistic sound scenes and static listening conditions, [3][4][5][6][7] despite several studies suggesting that such scenes may be a poor indicator of real-world HAD performance. [8][9][10][11] Therefore, the ability to faithfully reproduce a variety of recorded or ecologically valid simulated sound scenes within these clinical settings may be desirable, since this may help facilitate better fittings or adjustments of devices so that they are better suited to real-world scenarios. Such sound-field reproduction methods may also find application in perceptual studies and HAD research and development, or be utilized for training the hearing abilities of HAD users. ...
Article
Full-text available
A perceptual study was conducted to investigate the perceived accuracy of two sound-field reproduction approaches when experienced by hearing-impaired (HI) and normal-hearing (NH) listeners. The methods under test were traditional signal-independent Ambisonics reproduction and a parametric signal-dependent alternative, which were both rendered at different Ambisonic orders. The experiment was repeated in two different rooms: (1) an anechoic chamber, where the audio was delivered over an array of 44 loudspeakers; (2) an acoustically-treated listening room with a comparable setup, which may be more easily constructed within clinical settings. Ten bilateral hearing aid users, with mild to moderate symmetric hearing loss, wearing their devices, and 15 NH listeners were asked to rate the methods based upon their perceived similarity to simulated reference conditions. In the majority of cases, the results indicate that the parametric reproduction method was rated as being more similar to the reference conditions than the signal-independent alternative. This trend is evident for both groups, although the variation in responses was notably wider for the HI group. Furthermore, generally similar trends were observed between the two listening environments for the parametric method. The signal-independent approach was instead rated as being more similar to the reference in the listening room.
... Previous validation studies had shown that an eight-source system (recording microphones/playback loudspeakers) would allow directional hearing aids and the hearing mechanism to perform in the lab as they do in the real world [6]. Pilot validation studies confirmed the realism of the reproduction and the accurate prediction of real-world speech intelligibility over a wide range of directional devices [7] in lunchroom and restaurant simulations. ...
... Directionality is the only hearing aid technology that has been shown to improve SNR in a way that significantly improves speech recognition in noisy situations where hearing aid users are listening to speech coming from in front of them with competing sound from other directions [9,32]. This has been observed with two-microphone arrays [20,[32][33][34][35][36] and, more recently, with four-microphone arrays. In addition, four-microphone arrays in beamforming mode have shown benefit over omnidirectional conditions in terms of rejecting stimuli from the side [20,[36][37][38][39][40]. ...
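The excerpt above refers to two-microphone directional systems. As a rough illustration of how a first-order, delay-and-subtract pair attenuates sound from the rear, the sketch below evaluates the idealised free-field response of two omnidirectional ports; the port spacing, internal delay, and test frequency are assumed values, not those of any particular hearing aid.

```python
# Hedged sketch of a two-microphone (first-order differential) system: the rear
# microphone is delayed internally and subtracted from the front one, which
# steers a null toward the rear (a cardioid when the delay equals d/c).
import numpy as np

def directional_response(theta_deg, d=0.012, c=343.0, f=1000.0, delay=None):
    """Magnitude response of a delay-and-subtract pair for a plane wave
    arriving from theta_deg (0 deg = front)."""
    if delay is None:
        delay = d / c                       # internal delay = port travel time
    theta = np.deg2rad(theta_deg)
    tau_ext = (d / c) * np.cos(theta)       # external front-to-rear delay
    h = 1.0 - np.exp(-1j * 2 * np.pi * f * (delay + tau_ext))
    return np.abs(h)

angles = np.arange(0, 181, 45)
resp = directional_response(angles)
for a, r in zip(angles, resp / resp[0]):    # normalise to the frontal response
    print(f"{a:3d} deg: {20 * np.log10(max(r, 1e-6)):7.1f} dB")
# Output: 0 dB at the front, about -6 dB at 90 deg, and a deep null at 180 deg.
```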
Chapter
Full-text available
Sensorineural hearing loss is the most common type of permanent hearing loss. Most people with sensorineural hearing loss experience challenges with hearing in noisy situations, and this is the primary reason they seek help for their hearing loss. It also remains an area where hearing aid users often struggle. Directionality is the only hearing aid technology—in addition to amplification—proven to help hearing aid users hear better in noise. It amplifies sounds (sounds of interest) coming from one direction more than sounds (“noise”) coming from other directions, thereby providing a directional benefit. This book chapter describes the hearing-in-noise problem, natural directivity and hearing in noise, directional microphone systems, how directionality is quantified, and its benefits, limitations, and other clinical implications.
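The chapter summarised above discusses how directionality is quantified; the most common single-number metric is the Directivity Index (DI), the on-axis sensitivity relative to the diffuse-field average. As a minimal illustration, the sketch below computes the free-field DI for idealised first-order polar patterns; the pattern family and the printed values are textbook idealisations, not measurements of any product described in this article.

```python
# Sketch of a Directivity Index (DI) computation for an idealised, rotationally
# symmetric, first-order polar pattern p(theta) = a + (1 - a)*cos(theta).
import numpy as np

def directivity_index(a, n=100000):
    """DI in dB: on-axis intensity relative to the diffuse-field average."""
    theta = np.linspace(0.0, np.pi, n)
    p = a + (1.0 - a) * np.cos(theta)
    # Diffuse-field average over the sphere (rotational symmetry assumed).
    diffuse = 0.5 * np.trapz(p ** 2 * np.sin(theta), theta)
    on_axis = 1.0                            # |p(0)|^2 = 1 for these patterns
    return 10.0 * np.log10(on_axis / diffuse)

for name, a in [("omnidirectional", 1.0), ("cardioid", 0.5),
                ("supercardioid", 0.366), ("hypercardioid", 0.25)]:
    print(f"{name:16s} DI = {directivity_index(a):4.1f} dB")
# Expected roughly: 0.0, 4.8, 5.7, and 6.0 dB respectively.
```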
... While all hypothesized effects (e.g., threshold, bandwidth, distorted tonotopy) likely affect the coding of connected speech (both vowels and consonants) in both quiet and noisy backgrounds, distorted tonotopy, in particular, may have especially important implications with natural background noises, such as competing voices, which typically have substantial low-frequency energy (Compton-Conley et al., 2004; Lo and McPherson, 2013). Thus, it is critical to go beyond simple synthetic speech tokens to examine the effects of NIHL on natural speech coding in noise. ...
Article
Listeners with sensorineural hearing loss (SNHL) have substantial perceptual deficits, especially in noisy environments. Unfortunately, speech-intelligibility models have limited success in predicting performance of listeners with hearing loss. A better understanding of the various suprathreshold factors that contribute to neural-coding degradations of speech in noisy conditions will facilitate better modeling and clinical outcomes. Here, we highlight the importance of one physiological factor that has received minimal attention to date, termed distorted tonotopy, which refers to a disruption in the mapping between acoustic frequency and cochlear place that is a hallmark of normal hearing. More so than commonly assumed factors (e.g., threshold elevation, reduced frequency selectivity, diminished temporal coding), distorted tonotopy severely degrades the neural representations of speech (particularly in noise) in single- and across-fiber responses in the auditory nerve following noise-induced hearing loss. Key results include: 1) effects of distorted tonotopy depend on stimulus spectral bandwidth and timbre, 2) distorted tonotopy increases across-fiber correlation and thus reduces information capacity to brain, and 3) its effects vary across etiology, which may contribute to individual differences. These results motivate the development and testing of noninvasive measures that can assess the severity of distorted tonotopy in human listeners. Development of such noninvasive measures of distorted tonotopy would advance precision-audiological approaches to improving diagnostics and rehabilitation for listeners with SNHL.
... The R-SPACE is a testing configuration that simulates an everyday real-life restaurant environment by using a 360-degree array. 18 The purpose of this study was to compare SCAN to different directional options (Zoom, Beam, and Omni-directional) to determine which directional option provides recipients with the best speech recognition. ...
Article
Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer compared with recognition in quiet. Directional processing improves performance in noise and can be automatically activated based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially utilizing the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study utilized CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment to determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and Omni-directional) with participants' everyday use signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) resulted in the best speech-in-noise performance, although not significantly better than Beam. Omni-directional performance was significantly poorer compared with the three other directional options. A varied number of participants performed their best with each of the four directional options, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option. Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. However, based on a CI recipient's motivation to try different programs, automatic directionality is an appropriate everyday processing option.
Article
Full-text available
While the relationships between spectral resolution, temporal resolution, and speech recognition are well defined in adults with cochlear implants (CIs), they are not well defined for prelingually deafened children with CIs, for whom language development is ongoing. This cross-sectional study aimed to better characterize these relationships in a large cohort of prelingually deafened children with CIs (N = 47; mean age = 8.33 years) by comprehensively measuring spectral resolution thresholds (measured via spectral modulation detection), temporal resolution thresholds (measured via sinusoidal amplitude modulation detection), and speech recognition (measured via monosyllabic word recognition, vowel recognition, and sentence recognition in noise via both fixed signal-to-noise ratio (SNR) and adaptively varied SNR). Results indicated that neither spectral nor temporal resolution was significantly correlated with speech recognition in quiet or noise for children with CIs. Both age and CI experience had a moderate effect on spectral resolution, with significant effects for spectral modulation detection at a modulation rate of 0.5 cyc/oct, suggesting spectral resolution may improve with maturation. Thus, it is possible that a relationship between spectral resolution and speech perception will emerge over time for children with CIs. While further investigation into this relationship is warranted, these findings demonstrate the need for new investigations to uncover ways of improving spectral resolution for children with CIs.
Article
Objective: A multisite clinical trial was conducted to obtain cochlear implant (CI) efficacy data in adults with asymmetric hearing loss (AHL) and establish an evidence-based framework for clinical decision-making regarding CI candidacy, counseling, and assessment tools. Study hypotheses were threefold: (1) 6-month postimplant performance in the poor ear (PE) with a CI will be significantly better than preimplant performance with a hearing aid (HA), (2) 6-month postimplant performance with a CI and HA (bimodal) will be significantly better than preimplant performance with bilateral HAs (Bil HAs), and (3) 6-month postimplant bimodal performance will be significantly better than aided, better ear (BE) performance. Design: Forty adults with AHL from four, metropolitan CI centers participated. Hearing criteria for the ear to be implanted included (1) pure-tone average (PTA, 0.5, 1, 2 kHz) of >70 dB HL, (2) aided, monosyllabic word score of ≤30%, (3) duration of severe-to-profound hearing loss of ≥6 months, and (4) onset of hearing loss ≥6 years of age. Hearing criteria for the BE included (1) PTA (0.5, 1, 2, 4 kHz) of 40 to 70 dB HL, (2) currently using a HA, (3) aided, word score of >40%, and (4) stable hearing for the previous 1-year period. Speech perception and localization measures, in quiet and in noise, were administered preimplant and at 3-, 6-, 9-, and 12-months postimplant. Preimplant testing was performed in three listening conditions, PE HA, BE HA, and Bil HAs. Postimplant testing was performed in three conditions, CI, BE HA, and bimodal. Outcome factors included age at implantation and length of deafness (LOD) in the PE. Results: A hierarchical nonlinear analysis predicted significant improvement in the PE by 3 months postimplant versus preimplant for audibility and speech perception with a plateau in performance at approximately 6 months. The model predicted significant improvement in postimplant, bimodal outcomes versus preimplant outcomes (Bil HAs) for all speech perception measures by 3 months. Both age and LOD were predicted to moderate some CI and bimodal outcomes. In contrast with speech perception, localization in quiet and noise was not predicted to improve by 6 months when comparing Bil HAs (preimplant) to bimodal (postimplant) outcomes. However, when participants' preimplant everyday listening condition (BE HA or Bil HAs) was compared with bimodal performance, the model predicted significant improvement by 3 months for localization in quiet and noise. Lastly, BE HA results were stable over time; a generalized linear model analysis revealed bimodal performance was significantly better than performance with a BE HA at all postimplant intervals for most speech perception measures and localization. Conclusions: Results revealed significant CI and bimodal benefit for AHL participants by 3-months postimplant, with a plateau in CI and bimodal performance at approximately 6-months postimplant. Results can be used to inform AHL CI candidates and to monitor postimplant performance. On the basis of this and other AHL research, clinicians should consider a CI for individuals with AHL if the PE has a PTA (0.5, 1, 2 kHz) >70 dB HL and a Consonant-Vowel Nucleus-Consonant word score ≤40%. LOD >10 years should not be a contraindication.
Article
Objectives: Limited evidence exists for the use of rerouting devices in children with severe-to-profound unilateral sensorineural hearing loss. Many laboratory studies to date have evaluated hearing-in-noise performance in specific target-masker spatial configurations within a small group of participants and with only a subset of available hearing devices. In the present study, the efficacy of all major types of nonsurgical devices was evaluated within a larger group of pediatric subjects on a challenging speech-in-noise recognition task. Design: Children (7-18 years) with unaided severe-to-profound unilateral hearing loss (UHL; n = 36) or bilateral normal hearing (NH; n = 36) participated in the present study. The signal-to-noise ratio (SNR) required for 50% speech understanding (SNR-50) was measured using BKB sentences in the presence of proprietary restaurant noise (R-SPACE BSIN-R) in the R-SPACE Sound System. Subjects listened under 2 target/masker spatial configurations. The target signal was directed toward subjects' NH or hearing-impaired ear (45° azimuth), while the interfering restaurant noise masker was presented from the remaining 7 loudspeakers encircling the subject, spaced every 45°. Head position was fixed during testing. The presentation level of target sentences and masking noise varied over time to estimate the SNR-50 (dB). The following devices were tested in all participants with severe-to-profound UHL: air conduction (AC) contralateral routing of signal (CROS), bone conduction (BC) CROS fitted on a headband with and without the use of remote microphone (RM), and an ear-level RM hearing assistance technology (HAT) system. Results: As a group, participants with severe-to-profound UHL performed best when the target signal was directed toward their NH ear. Across listening conditions, there was an average 8.5 dB improvement in SNR-50 by simply orienting the NH ear toward the target signal. When unaided, participants with severe-to-profound UHL performed as well as participants with NH when the target signal was directed toward the NH ear. Performance was negatively affected by AC CROS when the target signal was directed toward the NH ear, whereas no statistically significant change in performance was observed when using BC CROS. When the target signal was directed toward participants' hearing-impaired ear, all tested devices improved SNR-50 compared with the unaided condition, with small improvements (1-2 dB) observed with CROS devices and the largest improvement (9 dB) gained with the personal ear-level RM HAT system. No added benefit or decrement was observed when RM was added to BC CROS using a 50/50 mixing ratio when the target was directed toward the impaired ear. Conclusions: In a challenging listening environment with diffuse restaurant noise, SNR-50 was most improved in the study sample when using a personal ear-level RM HAT system. Although tested rerouting devices offered measurable improvement in performance (1-2 dB in SNR-50) when the target was directed to the impaired ear, benefit may be offset by a detriment in performance in the opposing condition. Findings continue to support use of RM HAT for children with severe-to-profound UHL in adverse listening environments, when there is one primary talker of interest, to ensure advantageous SNRs.
Article
Full-text available
Norms have been developed for the Hearing in Noise Test [Nilsson et al., J. Acoust. Soc. Am. Suppl. 1 88, S175 (1990)]. Speech reception thresholds (SRTs) were measured adaptively in the presence of spectrally matched noise for 150 young male and female adults. Speech was presented at 0-deg azimuth in all conditions, and noise was presented at either 0-, 90-, or 270-deg azimuth at 65 dB(A). Pure-tone thresholds were measured at 0.5, 1, 2, 3, 4, 6, and 8 kHz. Subjects were also characterized according to their early language acquisition experience with English in one of five categories ranging from "English only" to "no English in the home." Average SRTs in noise for normal-hearing, "English only" subjects (pure-tone thresholds at all frequencies tested of 15 dB HL or better) equaled 62.26 dB(A) (−2.74 dB S/N). Several factors significantly influenced thresholds: (1) spatial separation between the speech and noise lowered thresholds an average of 7.42 dB; (2) unilateral, high-frequency hearing loss elevated thresholds in quiet by 3 dB; and (3) thresholds in quiet and noise were elevated by 3.34 dB in subjects with normal hearing but "no English in the home." This elevation of thresholds is especially intriguing because it suggests a cognitive/linguistic factor in the ability to understand speech in noise.
Article
Methods for constructing simultaneous confidence intervals for all possible linear contrasts among several means of normally distributed variables have been given by Scheffé and Tukey. In this paper the possibility is considered of picking in advance a number (say m) of linear contrasts among k means, and then estimating these m linear contrasts by confidence intervals based on a Student t statistic, in such a way that the overall confidence level for the m intervals is greater than or equal to a preassigned value. It is found that for some values of k, and for m not too large, intervals obtained in this way are shorter than those using the F distribution or the Studentized range. When this is so, the experimenter may be willing to select the linear combinations in advance which he wishes to estimate in order to have m shorter intervals instead of an infinite number of longer intervals.
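The abstract above describes constructing Student-t confidence intervals for m contrasts chosen in advance so that the overall confidence level is at least a preassigned value. A minimal sketch of the Bonferroni-type adjustment that achieves this is given below; the estimates, standard errors, and degrees of freedom are invented inputs used only for illustration.

```python
# Sketch of simultaneous confidence intervals for m pre-selected contrasts:
# each interval uses the t critical value at level 1 - alpha/m, so that the
# overall coverage is at least 1 - alpha (Bonferroni inequality).
from scipy import stats

def simultaneous_cis(estimates, std_errors, df, alpha=0.05):
    """Return (lower, upper) bounds for m pre-selected contrasts."""
    m = len(estimates)
    t_crit = stats.t.ppf(1.0 - alpha / (2.0 * m), df)   # per-contrast critical value
    return [(e - t_crit * se, e + t_crit * se)
            for e, se in zip(estimates, std_errors)]

# Example: three pre-planned mean differences (dB) with their standard errors.
for lo, hi in simultaneous_cis([2.1, 4.5, 0.8], [0.9, 1.1, 0.7], df=20):
    print(f"[{lo:5.2f}, {hi:5.2f}]")
```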
Article
In a double-blind test, 22 hearing-impaired persons with slight and moderate hearing losses compared hearing aids with a directional microphone to hearing aids with a conventional microphone. A discrimination test in the sound box showed a significant difference between the two types in favour of the hearing aid with the directional microphone. A group conversation situation set up in the sound box, where the test persons had the opportunity of comparing the two types of hearing aids, also showed a significant difference. A comparison in a field test did not show corresponding results. The research project indicates both pros and cons of the hearing aid with the directional microphone. It was advantageous under critical listening conditions. Disadvantages occurred when the test persons heard sounds from behind. Furthermore, there was a disturbing rubbing noise in this hearing aid.
Article
MarkeTrak 1 research conducted at Knowles Electronics has shown that overall customer satisfaction with hearing instruments declined to 53% (from 58%) while satisfaction with new (less than one year old) instruments improved to 71% (from 66%). Nearly 18% of hearing instrument owners do not use their hearing instruments. New users have declined from 53% of sales in 1989 to 29% of sales in 1994. In addition, the mean age of instruments has increased from 3.2 years in 1991 to 4.1 years in 1994. Clearly, the trends indicate that both the new user and replacement markets have declined. Previous research with hearing-impaired individuals who do not own hearing instruments 2 estimated 11.1 million individuals, or more than half of the market the industry is trying to reach, question the value of hearing instruments. Some of the more common ...
Article
The present investigation examined the effect of different compression ratios (compression ratios of 2:1, 3:1, 8:1 and wide-dynamic-range compression) on speech intelligibility and quality in compression-limiting systems. Speech intelligibility and quality were evaluated for sentences in four-talker babble. Sentences were presented at a signal-to-noise ratio of 7.5 dB at six different input levels, and at two of these levels (80 and 100 dB SPL) at four different signal-to-noise ratios. Speech intelligibility was evaluated in terms of percent correct words, and the quality of the conditions was rated on a scale of 0 to 100%. Across subjects, across input levels, and across signal-to-noise ratios, different compression ratios did not differ in terms of speech intelligibility and quality. Individual subject analysis revealed that for three of the subjects the 8:1 compression ratio conditions gave the best speech intelligibility and quality results. One of the subjects achieved his best speech intelligibility performance with the wide-dynamic-range compression condition. For four of the subjects, different compression ratios did not affect speech intelligibility and quality differently. Overall, results indicated that there were considerable variations in performance with different compression ratios among individuals with similar hearing sensitivity. [Work partly supported by Northwestern Univ. Doctoral Student Research Grant.]
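The abstract above compares compression ratios of 2:1, 3:1, and 8:1. For readers unfamiliar with the term, the sketch below shows the static input/output rule a compression ratio implies above a kneepoint; the gain and kneepoint values are illustrative assumptions, not the settings used in the cited study.

```python
# Minimal sketch of a static compression input/output rule: linear gain below
# the kneepoint, then output grows by only 1/CR dB per dB of input above it.
# Kneepoint and gain values are assumptions made for illustration.
def output_level(input_db, gain_db=20.0, kneepoint_db=65.0, ratio=2.0):
    if input_db <= kneepoint_db:
        return input_db + gain_db
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio

for cr in (1.0, 2.0, 3.0, 8.0):
    print(f"CR {cr}:1 ->",
          [round(output_level(x, ratio=cr), 1) for x in (50, 65, 80, 100)])
# With CR = 8:1, a 35 dB rise in input above the kneepoint yields only about 4.4 dB more output.
```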
Article
Twenty-four sensorineural hearing-impaired adults were evaluated using four directional microphone hearing aids differing only in front-to-back ratios. The speech material utilized was the Synthetic Sentence Identification test presented at message-to-competition ratios of 0, -10, and -20 dB. The primary signal was presented from 0-degree azimuth while the competing message was presented from a direct overhead location. The results revealed a systematic improvement in speech understanding as the size of the front-to-back ratio increased. This relationship was not significantly affected by the difficulty of the listening situation.
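The study above manipulates the front-to-back ratio (FBR). As a rough illustration of the quantity itself, the sketch below computes the FBR of some idealised first-order polar patterns; these textbook patterns are used only to show the definition and are not the experimental hearing aids, whose front-to-back ratios were set independently.

```python
# Sketch of the front-to-back ratio (FBR): on-axis (0 deg) sensitivity relative
# to the rear (180 deg) sensitivity, in dB, for idealised first-order patterns
# p(theta) = a + (1 - a)*cos(theta). Values are illustrative only.
import numpy as np

def front_to_back_ratio(a, floor=1e-6):
    p_front = abs(a + (1.0 - a) * np.cos(0.0))
    p_back = abs(a + (1.0 - a) * np.cos(np.pi))
    return 20.0 * np.log10(p_front / max(p_back, floor))

for name, a in [("omnidirectional", 1.0), ("hypercardioid", 0.25),
                ("supercardioid", 0.366), ("cardioid", 0.5)]:
    print(f"{name:16s} FBR = {front_to_back_ratio(a):6.1f} dB")
# A cardioid has a rear null, so its FBR is limited only by the numerical floor used here.
```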
Article
A variable that has received little attention in the psychoacoustic evaluation of the hearing aid is the position of loudspeakers with respect to the listener, particularly the azimuth of the loudspeaker, which is used for presenting the competing message. In the past, a variety of locations have been used, some of which can bias the outcome of the evaluation. For this reason, this article suggests the use of an overhead speaker to deliver the competing signal. The overhead placement provides a neutral location that is highly desirable for making reliable repeated speech performance comparisons. In addition, the overhead speaker can be easily adapted to the testing environment while it produces the effect of surrounding the listener with the competing signal. (Arch Otolaryngol 104:417-418, 1978)