Virginia Best
Boston University | BU

About

140 Publications
12,555 Reads
2,806 Citations

Publications (140)
Article
The aim of this study was to extend the harmonic-cancellation model proposed by Prud’homme et al. [J. Acoust. Soc. Am. 148, 3246-3254 (2020)] to predict speech intelligibility against a harmonic masker, so that it takes into account binaural hearing, amplitude modulations in the masker and variations in masker fundamental frequency (F0) over time....
Article
Listeners are sensitive to interaural time differences carried in the envelope of high-frequency sounds (ITD_ENV), but the salience of this cue depends on several envelope properties. Making use of the fact that sensitivity to ITD_ENV varies systematically with the depth of modulation of sinusoidally amplitude-modulated tones, we devised a task in...
Article
Full-text available
Laboratory and clinical-based assessments of speech intelligibility must evolve to better predict real-world speech intelligibility. One way of approaching this goal is to develop speech intelligibility tasks that are more representative of everyday speech communication outside the laboratory. Here, we evaluate speech intelligibility using both a s...
Article
Full-text available
While many studies have reported a loss of sensitivity to interaural time differences (ITDs) carried in the fine structure of low-frequency signals for listeners with hearing loss, relatively few data are available on the perception of ITDs carried in the envelope of high-frequency signals in this population. The relevant studies found stronger eff...
Article
Full-text available
It is generally assumed that listeners with normal audiograms have relatively symmetric hearing, and more specifically that diotic stimuli (having zero interaural differences) are heard as centered in the head. While measuring intracranial lateralization with a visual pointing task for tones and 50-Hz-wide narrowband noises from 300 to 700 Hz, exam...
Article
No PDF available
Fluctuations in the amplitude envelope play a critical role in the spatial perception of sounds. For example, listeners place increased perceptual weight on binaural cues occurring at onsets, and the steepness of onset slopes can influence binaural sensitivity. For speech stimuli, we recently showed that the temporal weigh...
Article
No PDF available
A novel speech enhancement scheme is being developed in our lab with the goal of improving speech intelligibility in “cocktail party” listening environments. By manipulating the temporal envelope to increase the salience of acoustic onsets, the algorithm improves access to binaural cues sampled at these onsets. The hope is...
Article
No PDF available
The technical committee on Psychological and Physiological acoustics (P&P) encompasses a wide and multidisciplinary range of topics. It is concerned with questions of what happens to sound once it enters the auditory system, and how sound is processed to facilitate communication and navigation. Topics include the biomechan...
Poster
Full-text available
In listeners with normal audiometric thresholds, it is assumed that stimuli presented over headphones with zero interaural differences are perceived to be centered in the head. In this study, intracranial lateralization was measured by having ten young adults with normal and interaurally symmetric thresholds (within 5 dB) point to perceived sound l...
Article
This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target spe...
Article
Previous studies have shown that for high-rate click trains and low-frequency pure tones, interaural time differences (ITDs) at the onset of stimulus contribute most strongly to the overall lateralization percept (receive the largest perceptual weight). Previous studies have also shown that when these stimuli are modulated, ITDs during the rising p...
Chapter
This chapter summarizes the empirical results from binaural discrimination and lateralization experiments that have addressed aspects of across-frequency processing. These experiments have used a variety of stimuli including pure tones, modulated tones, and noise bursts of various bandwidths. The results are discussed for cases in which binaural cu...
Preprint
Full-text available
Listening in an acoustically cluttered scene remains a difficult task for both machines and hearing-impaired listeners. Normal-hearing listeners accomplish this task with relative ease by segregating the scene into its constituent sound sources, then selecting and attending to a target source. An assistive listening device that mimics the biologica...
Article
This work aims to predict speech intelligibility against harmonic maskers. Unlike noise maskers, harmonic maskers (including speech) have a harmonic structure that may allow for a release from masking based on fundamental frequency (F0). Mechanisms, such as spectral glimpsing and harmonic cancellation, have been proposed to explain F0 segregation,...
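The harmonic-cancellation mechanism named in this entry is often idealized as a comb filter tuned to the masker's F0. As a minimal sketch only (not the model from the paper; the sampling rate, F0, and the integer-period assumption are illustrative), subtracting a copy of the signal delayed by one F0 period nulls every harmonic of F0:

```python
import numpy as np

def cancel_harmonics(x, fs, f0):
    """Comb-filter sketch of harmonic cancellation: subtracting a copy
    of the signal delayed by one F0 period (in samples) nulls any
    component at an integer multiple of F0."""
    period = int(round(fs / f0))  # assumes fs/f0 is close to an integer
    y = x.copy()
    y[period:] -= x[:-period]
    return y

# A 200-Hz harmonic of a 100-Hz masker is cancelled almost exactly
fs, f0 = 8000, 100.0
n = np.arange(1600)
harmonic = np.sin(2 * np.pi * 200 * n / fs)
residual = cancel_harmonics(harmonic, fs, f0)
```

After the first F0 period (80 samples here) the residual sits at numerical-noise level, while components away from multiples of F0 pass through attenuated or boosted depending on frequency, which is the trade-off such models must account for.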
Chapter
This chapter provides an overview of the cues listeners use to segregate sound sources and make sense of the auditory scene. These include spectral, temporal, spatial, and contextual cues. A review is also given of some of the known effects of age on sensitivity to these cues, with consideration given to the possible contributions of peripheral lim...
Article
No PDF available
Perceptual adaptation to a talker allows listeners to efficiently resolve inherent ambiguities present in the speech signal introduced by the lack of a one-to-one mapping between acoustic signals and intended phonemic categories across talkers. In ideal listening environments, preceding speech context enhances perceptual a...
Article
No PDF available
Previous studies have noted an interaction between eye position and auditory spatial attention, including a tendency to look towards the location of an attended sound (even in the absence of useful visual information). There can also be objective improvements in the detection and discrimination of sounds when the eyes are...
Chapter
This chapter reviews binaural models available to predict speech intelligibility for different kinds of interference and in the presence of reverberation. A particular effort is made to quantify their performances and to highlight the a priori knowledge they require in order to make a prediction. In addition, cognitive factors that are not included...
Article
Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal “dominance” regions. It is unclear however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-tempor...
Article
Spatial perception is an important part of a listener's experience and ability to function in everyday environments. However, the current understanding of how well listeners can locate sounds is based on measurements made using relatively simple stimuli and tasks. Here the authors investigated sound localization in a complex and realistic environme...
Article
Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) cont...
Article
Full-text available
To capture the demands of real-world listening, laboratory-based speech-in-noise tasks must better reflect the types of speech and environments listeners encounter in everyday life. This article reports the development of original sentence materials that were produced spontaneously with varying vocal efforts. These sentences were extracted from con...
Article
Ideal time-frequency segregation (ITFS) is a signal processing technique that may be used to estimate the energetic and informational components of speech-on-speech masking. A core assumption of ITFS is that it roughly emulates the effects of energetic masking (EM) in a speech mixture. Thus, when speech identification thresholds are measured for IT...
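Ideal time-frequency segregation, as described in this entry, is commonly implemented with an ideal binary mask: time-frequency cells are kept only where the local target-to-masker ratio exceeds a criterion. A rough illustration (not the authors' processing chain; the toy magnitudes and 0-dB criterion are made up):

```python
import numpy as np

def ideal_binary_mask(target_tf, masker_tf, criterion_db=0.0):
    """Keep time-frequency cells where the local target-to-masker
    ratio (in dB) exceeds the criterion; zero out the rest."""
    eps = 1e-12  # avoid log of zero in silent cells
    local_tmr_db = 10.0 * np.log10((np.abs(target_tf) ** 2 + eps) /
                                   (np.abs(masker_tf) ** 2 + eps))
    return (local_tmr_db > criterion_db).astype(float)

# Toy magnitudes on a 2x3 time-frequency grid
target = np.array([[1.0, 0.1, 0.5],
                   [0.2, 1.0, 0.1]])
masker = np.array([[0.1, 1.0, 0.5],
                   [1.0, 0.1, 0.2]])
mask = ideal_binary_mask(target, masker)
```

Applying such a mask to the mixture retains the target-dominated "glimpses" while discarding masker-dominated cells, which is why ITFS is taken as a rough proxy for removing energetic masking.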
Article
This study tested the hypothesis that adding noise to a speech mixture may cause both energetic masking by obscuring parts of the target message and informational masking by impeding the segregation of competing voices. The stimulus was the combination of two talkers—one target and one masker—presented either in quiet or in noise. Target intelligib...
Article
Full-text available
A study was conducted to examine the benefits afforded by a signal-processing strategy that imposes the binaural cues present in a natural signal, calculated locally in time and frequency, on the output of a beamforming microphone array. Such a strategy has the potential to combine the signal-to-noise ratio advantage of beamforming with the percept...
Article
Full-text available
Sound externalization, or the perception that a sound source is outside of the head, is an intriguing phenomenon that has long interested psychoacousticians. While previous reviews are available, the past few decades have produced a substantial amount of new data. In this review, we aim to synthesize those data and to summarize advances in our under...
Conference Paper
Previous studies have noted an interaction between eye position and auditory spatial attention, including a tendency to look towards the location of an attended sound (even in the absence of useful visual information). There can also be objective improvements in the detection and discrimination of sounds when the eyes are directed to their location...
Article
When a target talker speaks in the presence of competing talkers, the listener must not only segregate the voices but also understand the target message based on a limited set of spectrotemporal regions (“glimpses”) in which the target voice dominates the acoustic mixture. Here, the hypothesis that a broad audible bandwidth is more critical for the...
Article
Sensitivity to interaural time differences (ITDs) was measured in two groups of listeners, one with normal hearing and one with sensorineural hearing loss. ITD detection thresholds were measured for pure tones and for speech (a single word), in quiet and in the presence of noise. It was predicted that effects of hearing loss would be reduced for sp...
Presentation
Full-text available
The interaural time difference (ITD) is the primary sound-localization cue for humans. While realistic sound sources have energy across a wide frequency range, the ear performs a narrowband frequency decomposition, and within each band are ambiguities in the interaural cross-correlation (an index of the signal ITD). These ambiguities are thought to...
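The interaural cross-correlation mentioned in this entry is the standard index of signal ITD: the lag that maximizes the correlation between the two ear signals. A minimal sketch under illustrative assumptions (white-noise test signal, integer-sample delay, ±1-ms physiological search range):

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd_s=0.001):
    """Estimate ITD as the lag (s) maximizing the interaural
    cross-correlation, searched within a physiological range."""
    full = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    keep = np.abs(lags) <= int(round(max_itd_s * fs))
    best = lags[keep][np.argmax(full[keep])]
    return best / fs  # positive: left lags right

# Broadband noise with the left channel delayed by 4 samples
fs = 8000
noise = np.random.default_rng(0).standard_normal(1000)
left, right = noise[:-4], noise[4:]
itd = estimate_itd(left, right, fs)  # 4 / fs = 0.0005 s
```

For a broadband signal the peak is unambiguous; within a single narrow frequency band the cross-correlation is quasi-periodic, which produces exactly the multiple-candidate-lag ambiguities this presentation describes.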
Article
This study aimed at predicting speech intelligibility in the presence of harmonic maskers. Contrary to a noise signal, these maskers have a harmonic structure that allows for a segregation of the competing sounds based on a difference of their fundamental frequency (F0). This F0 segregation could be due to spectral glimpsing or harmonic cancellatio...
Article
The ability to understand a target speech signal against a background of interfering speech signals is typically improved when the interfering signals are spatially separated (spatial release from masking; SRM). Swaminathan et al. (2016) found a significant reduction in SRM when the temporal fine structure (TFS) across the left and right ears was d...
Article
The ability to identify the words spoken by one talker masked by two or four competing talkers was tested in young-adult listeners with sensorineural hearing loss (SNHL). In a reference/baseline condition, masking speech was colocated with target speech, target and masker talkers were female, and the masker was intelligible. Three comparison condit...
Article
Full-text available
Speech perception in complex sound fields can greatly benefit from different unmasking cues to segregate the target from interfering voices. This study investigated the role of three unmasking cues (spatial separation, gender differences, and masker time reversal) on speech intelligibility and perceived listening effort in normal-hearing listeners....
Article
Cubick and Dau [(2016). Acta Acust. Acust. 102, 547-557] showed that speech reception thresholds (SRTs) in noise, obtained with normal-hearing listeners, were significantly higher with hearing aids (HAs) than without. Some listeners reported a change in their spatial perception of the stimuli due to the HA processing, with auditory images often bei...
Article
Full-text available
The perception of simple auditory mixtures is known to evolve over time. For instance, a common example of this is the "buildup" of stream segregation that is observed for sequences of tones alternating in pitch. Yet very little is known about how the perception of more complicated auditory scenes, such as multitalker mixtures, changes over time. P...
Article
Full-text available
Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty de...
Article
Understanding speech in noise involves a complex set of processes including segregation of sources, selection of the target talker, and recognition and comprehension of the ongoing message. Previous studies often have focused on the segregation, selection and recognition processes by measuring the intelligibility of short strings of words presented...
Article
Full-text available
The ability to identify who is talking is an important aspect of communication in social situations and, while empirical data are limited, it is possible that a disruption to this ability contributes to the difficulties experienced by listeners with hearing loss. In this study, talker identification was examined under both quiet and masked conditio...
Article
Objectives: The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for s...
Article
A hearing-aid strategy that combines a beamforming microphone array in the high frequencies with natural binaural signals in the low frequencies was examined. This strategy attempts to balance the benefits of beamforming (improved signal-to-noise ratio) with the benefits of binaural listening (spatial awareness and location-based segregation). The...
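The beamforming half of such a hybrid strategy is often introduced with a simple delay-and-sum array: delay each microphone channel so the target aligns across channels, then average. A minimal sketch (hypothetical array geometry; `np.roll`'s circular shift stands in for true fractional delays):

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Steer an array toward a source by undoing each channel's
    target delay, then averaging across channels."""
    aligned = [np.roll(x, -d) for x, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)

# Noiseless demo: the same source reaches three mics with known delays
source = np.arange(16, dtype=float)
delays = [0, 2, 5]
mics = [np.roll(source, d) for d in delays]
steered = delay_and_sum(mics, delays)  # recovers the source signal
```

Because target components add coherently while diffuse noise adds incoherently, averaging M aligned channels improves SNR, at the cost of the natural binaural cues that the low-frequency path of the hybrid strategy is meant to preserve.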
Article
Full-text available
Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; t...
Article
Objective: The National Acoustic Laboratories Dynamic Conversations Test (NAL-DCT) is a new test of speech comprehension that incorporates a realistic environment and dynamic speech materials that capture certain features of everyday conversations. The goal of this study was to assess the suitability of the test for studying the consequences of he...
Article
Full-text available
The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech i...
Article
While much is known about how well listeners can locate single sound sources under ideal conditions, it remains unclear how this ability relates to the more complex task of spatially analyzing realistic acoustic environments. There are many challenges in measuring spatial perception in realistic environments, including generating simulations that off...
Article
In multitalker mixtures, listeners with hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. However, it is not clear whether this problem reflects an inability to use spatial cues to segregate sounds, or a degraded representation of the target speech itself. In this work, a simple monaural glimps...
Article
This study extends previous work [Roverud et al., Trends Hear, 20, 1-17, 2016] reporting differences between normal-hearing (NH) and hearing-impaired (HI) listeners in their use of low- vs. high-frequency information. In that study, listeners identified well-learned tonal patterns presented simultaneously at two center frequencies (CFs). The CFs co...
Article
Cubick and Dau (2016) showed that speech reception thresholds (SRTs) in noise, obtained with normal-hearing (NH) listeners, can be significantly higher with hearing aids (HAs) than in the corresponding unaided condition. Some of the listeners reported a change in their spatial perception of the sounds due to the HA processing, with auditory images...
Article
Full-text available
Localization of a 2-ms click target was previously shown to be influenced by a preceding identical distractor for inter-click-intervals up to 400 ms [Kopčo, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420–432]. Here, two experiments examined whether perceptual organization plays a role in this effect. In the experiments, the distrac...
Chapter
Most normal-hearing listeners can understand a conversational partner in an everyday setting with an ease that is unmatched by any computational algorithm available today. This ability to reliably extract meaning from a sound source in a mixture of competing sources relies on the fact that natural, meaningful sounds have structure in both time and...
Article
In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduc...
Article
Full-text available
This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locati...
Poster
The ability to identify voices was examined under both quiet and masked conditions. Subjects were grouped by hearing status (normal hearing/sensorineural hearing loss) and age (younger/older adults). Listeners first learned to identify the voices of four same-sex “target” talkers in quiet, with a fixed amount of training. On each trial subjects ide...
Article
While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intellig...
Article
Full-text available
Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from mas...
Article
Background: Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in comple...
Article
Background: Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. Purpose: The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situ...
Article
Hearing-impaired (HI) individuals typically experience greater difficulty listening selectively to a target talker in speech mixtures than do normal-hearing (NH) listeners. To assist HI listeners in these situations, the benefit of a visually guided hearing aid (VGHA)—highly directional amplification created by acoustic beamforming steered by eye g...
Article
Auditory localization research needs to be performed in more realistic testing environments to better capture the real-world abilities of listeners and their hearing devices. However, there are significant challenges involved in controlling the audibility of relevant target signals in realistic environments. To understand the important aspects infl...
Chapter
Full-text available
Hearing loss has been shown to reduce speech understanding in spatialized multitalker listening situations, leading to the common belief that spatial processing is disrupted by hearing loss. This paper describes related studies from three laboratories that explored the contribution of reduced target audibility to this deficit. All studies used a st...
Article
Full-text available
This study investigated to what extent spatial release from masking (SRM) deficits in hearing-impaired adults may be related to reduced audibility of the test stimuli. Sixteen adults with sensorineural hearing loss and 28 adults with normal hearing were assessed on the Listening in Spatialized Noise-Sentences test, which measures SRM using a symmet...
Article
The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe...
Article
Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored....
Article
Full-text available
The benefit provided to listeners with sensorineural hearing loss (SNHL) by an acoustic beamforming microphone array was determined in a speech-on-speech masking experiment. Normal-hearing controls were tested as well. For the SNHL listeners, prescription-determined gain was applied to the stimuli, and performance using the beamformer was compared...
Article
Full-text available
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech s...
Article
Full-text available
In multisource listening environments it is important to be able to attend to a single source (selective listening) while also monitoring unattended sources for potentially useful information (divided listening). Previous studies have indicated that hearing-impaired (HI) listeners have more difficulty with listening in multisource environments than...
Article
Full-text available
It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. U...
Article
There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing...
Article
A previous study of sound localization with a preceding distractor showed that (1) the distractor affects response bias and response variance for distractor-target inter-stimulus-intervals of up to 400 ms, and that (2) localization responses are biased away from the distractor even on interleaved control trials in which the target is presented alon...
Article
There is ample evidence in the literature that hearing loss increases the time taken to process speech (e.g., increased response times for word discrimination, sentence identification, and passage comprehension). This has led to the assumption that providing hearing-impaired listeners with more time to process speech would be beneficial. For senten...
Article
Under certain circumstances, listeners with sensorineural hearing loss demonstrate poorer speech intelligibility in spatially separated speech maskers than those with normal hearing. One important issue in interpreting these results is whether the target speech information is available or audible in the spatialized mixture. Simple energy-based “gli...