FIG 1 - uploaded by Sean R. Anderson
Illustration of the electrode-neuron interface. This illustration shows examples of factors that can affect the spectro-temporal representations of sounds in each ear for listeners with

Source publication
Thesis
Full-text available
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the amount of benefit received by each patient varies considerably. One reason for this variability is a difference between the two ears’ hearing function, i.e., interaural asymmetry. Thus far...

Similar publications

Article
Composite materials have been widely used in spacecraft structures. Due to the harsh environment in space, gas leakage will occur in the structure, so it is necessary to locate the leakage position in time. In this paper, a beamforming localization method based on a U-shaped sensor array is studied. The array can be divided into two subarrays, whic...
Article
In the process of locating mixed far-field and near-field sources, sparse nonlinear arrays (SNAs) can achieve larger array apertures and higher degrees of freedom compared to traditional uniform linear arrays (ULAs) with the same number of sensors. This paper introduces a Modified Symmetric Nested Array (MSNA), which can automatically generate the...
Preprint
Audio-Visual Source Localization (AVSL) aims to localize the source of sound within a video. In this paper, we identify a significant issue in existing benchmarks: the sounding objects are often easily recognized based solely on visual cues, which we refer to as visual bias. Such biases hinder these benchmarks from effectively evaluating AVSL model...
Preprint
Audio-visual semantic segmentation (AVSS) aims to segment and classify sounding objects in videos with acoustic cues. However, most approaches operate on the close-set assumption and only identify pre-defined categories from training data, lacking the generalization ability to detect novel categories in practical applications. In this paper, we int...
Article
Acoustic imaging method is a critical task in various applications since it can locate the sound sources. However, the resolution of the method becomes low at low frequencies. This paper proposes a novel method to realize high-resolution acoustic imaging based on block sparsity constraint. By dividing the focusing area into blocks, the block sparse...

Citations

... These assumptions may not apply to patients with BiCIs, who often show marked interaural asymmetry in various aspects of auditory processing, such as speech understanding and spectro-temporal resolution. These asymmetries are likely to be produced by many different sources (Anderson, 2022). Thus, throughout this manuscript we define interaurally asymmetric hearing outcomes as any undesirable difference between the two ears to which one would answer affirmatively to the question "Does listening with your left compared to your right ear sound different?" ...
... SA would like to thank Drs. Lina Reiss, Emily Burg and Lukas Suveg for their helpful feedback and discussions as this project developed. Portions of this work appear in the dissertation of SRA (Anderson, 2022) and were presented at the 2020 Association for Research in Otolaryngology MidWinter meeting in San Jose, CA. ...
Article
Speech information in the better ear interferes with the poorer ear in patients with bilateral cochlear implants (BiCIs) who have large asymmetries in speech intelligibility between ears. The goal of the present study was to assess how each ear impacts, and whether one dominates, speech perception using simulated CI processing in older and younger normal-hearing (ONH and YNH) listeners. Dynamic range (DR) was manipulated symmetrically or asymmetrically across spectral bands in a vocoder. We hypothesized that if abnormal integration of speech information occurs with asymmetrical speech understanding, listeners would demonstrate an atypical preference in accuracy when reporting speech presented to the better ear and fusion of speech between the ears (i.e., an increased number of one-word responses when two words were presented). Results from three speech conditions showed that: (1) When the same word was presented to both ears, speech identification accuracy decreased if one or both ears decreased in DR, but listeners usually reported hearing one word. (2) When two words with different vowels were presented to both ears, speech identification accuracy and percentage of two-word responses decreased consistently as DR decreased in one or both ears. (3) When two rhyming words (e.g., bed and led) previously shown to phonologically fuse between ears (e.g., bled) were presented, listeners instead demonstrated interference as DR decreased. The word reported in (2) and (3) came from the right (symmetric) or better (asymmetric) ear, especially in (3) and for ONH listeners in (2). These results suggest that the ear with poorer dynamic range is downweighted by the auditory system, resulting in abnormal fusion and interference, especially for older listeners.