About
132 publications · 14,165 reads
6,528 citations

Publications (132)
Navigating complex sensory environments is critical to survival, and brain mechanisms have evolved to cope with the wide range of surroundings we encounter. To determine how listeners learn the statistical properties of acoustic spaces, we assessed their ability to perceive speech in a range of noisy and reverberant rooms. Listeners were also expos...
Autistic people often report a heightened sensitivity to sound. Yet, research into Autistic people’s auditory environments and their impacts on quality of life is limited. We conducted an online survey to understand how auditory environments influence the relationships between Autistic traits and impacts on quality of life (iQoL) due to sound sensi...
Functional near-infrared spectroscopy (fNIRS) is an increasingly popular neuroimaging technique that measures cortical hemodynamic activity in a non-invasive and portable fashion. Although the fNIRS community has been successful in disseminating open-source processing tools and a standard file format (SNIRF), reproducible research and sharing of fN...
Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim with headphone-presented pure tones differing in a related form of interaural configuration—interaural phase differences (ΔIPD)—or/and in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA– repetitions (A and B = 80-ms tones, “–”...
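As a rough illustration of the ABA– streaming paradigm described in that entry, the sketch below generates a diotic five-repetition sequence in Python. Only the 80-ms tone duration and the 5 × ABA– structure come from the abstract; the base frequency, ΔF value, gap duration, sample rate, and ramps are assumed for illustration (the ΔIPD manipulation would additionally require dichotic presentation).

```python
# Minimal sketch of an ABA- tone-sequence generator for auditory streaming
# experiments. Only the 80-ms tone duration and the 5 x ABA- repetitions are
# taken from the abstract above; base frequency, DF in semitones, gap length,
# sample rate, and ramp duration are illustrative assumptions.
import numpy as np

FS = 44100          # sample rate (Hz), assumed
TONE_DUR = 0.080    # 80-ms tones (from the abstract)
GAP_DUR = 0.080     # silent "-" interval, assumed equal to one tone
N_REPS = 5          # 5 x ABA- repetitions (from the abstract)

def tone(freq, dur=TONE_DUR, fs=FS, ramp=0.005):
    """Pure tone with raised-cosine on/off ramps."""
    t = np.arange(int(round(dur * fs))) / fs
    x = np.sin(2 * np.pi * freq * t)
    n_ramp = int(round(ramp * fs))
    window = np.ones_like(x)
    window[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    window[-n_ramp:] = window[:n_ramp][::-1]
    return x * window

def aba_sequence(f_a=500.0, delta_f_semitones=6.0):
    """Concatenate N_REPS of A-B-A-gap, with B shifted by delta_f semitones."""
    f_b = f_a * 2 ** (delta_f_semitones / 12.0)
    gap = np.zeros(int(round(GAP_DUR * FS)))
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), gap])
    return np.tile(triplet, N_REPS)

seq = aba_sequence()
print(f"{len(seq) / FS:.2f} s of ABA- sequence generated")
```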
We need to combine sensory data from various sources to make sense of the world around us. This sensory data helps us understand our surroundings, influencing our experiences and interactions within our everyday environments. Recent interest in sensory‐focused approaches to supporting autistic people has fixed on auditory processing—the sense of he...
Background: In Kreuz et al., J Neurosci Methods 381, 109703 (2022) two methods were proposed that perform latency correction, i.e., optimize the spike time alignment of sparse neuronal spike trains with well defined global spiking events. The first one based on direct shifts is fast but uses only partial latency information, while the other one mak...
Exploring the complex structure of the human brain is crucial for understanding its functionality and diagnosing brain disorders. Thanks to advancements in neuroimaging technology, a novel approach has emerged that involves modeling the human brain as a graph-structured pattern, with different brain regions represented as nodes and the functional r...
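For readers unfamiliar with the graph formulation mentioned above, a minimal sketch of the construction follows: brain regions become nodes and a thresholded functional-connectivity matrix defines the edges. The region count, the synthetic connectivity values, and the threshold are placeholder assumptions, not details from the paper.

```python
# Minimal sketch of the graph-construction step described above: brain regions
# become nodes and a functional-connectivity matrix, thresholded, defines the
# edges. The number of regions, the random data, and the threshold are
# placeholder assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 90                                       # e.g., a 90-ROI atlas (assumed)
connectivity = rng.uniform(-1, 1, (n_regions, n_regions))
connectivity = (connectivity + connectivity.T) / 2   # make it symmetric
np.fill_diagonal(connectivity, 0.0)

threshold = 0.5                                      # keep only strong connections (assumed)
adjacency = (np.abs(connectivity) > threshold).astype(int)

# Each row of `adjacency` lists a node's neighbours; node degree is a simple
# graph feature often fed into downstream models.
degree = adjacency.sum(axis=1)
print("mean node degree:", degree.mean())
```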
Simple Summary
The listening brain must resolve the mix of sounds that reaches our ears into events, sources, and meanings. In this process, noise—sound that interferes with our ability to detect or understand sounds we need or wish to—is the primary challenge when listening. Importantly, noise to one person, or in one moment, might be an important...
Background: Measures of autistic traits have received extensive use in clinical and research settings. As conceptualisations of autism have grown, and language around autism evolved, many established measures are misaligned with the current diagnostic criteria. The recently developed Comprehensive Autistic Trait Inventory (CATI) was designed to mea...
Measures of autistic traits are only useful — for pre-diagnostic screening, exploring individual differences, and gaining personal insight — if they efficiently and accurately assess autism as currently conceptualised whilst maintaining psychometric validity across different demographic groups. We recruited 1,322 autistic and 1,279 non-autistic adu...
Interpreting the world around us requires integrating sensory information across modalities to derive meaning and shape our experiences and interactions with and within everyday environments. Recent interest in sensory-focused approaches to supporting autistic people has fixed on auditory processing—the sense of hearing and the act of listening—and...
Several objective measures for identifying individuals with “hidden hearing loss” (HHL) have been proposed based on cochlear synaptopathy and the resulting central changes in neural gain. While the loss of high-threshold auditory nerve fibres may result in weaker middle-ear muscle reflexes (MEMR) in HHL sufferers, binaural processing is likely dis...
Neural adaptation to sound level statistics has been demonstrated at various levels of the auditory pathway, including the auditory periphery. Adaptation is thought to improve the efficiency of encoding acoustic stimuli using limited neural resources without compromising accuracy. However, the precise mechanisms underlying the statistical learning...
Navigating complex sensory environments is critical to survival, and brain mechanisms have evolved to cope with the wide range of surroundings. In noisy spaces, listeners place more emphasis on early-arriving sound energy; nevertheless, reverberant energy is highly informative about those spaces per se, and human listeners show improved speech under...
Many otherwise normal-hearing individuals experience listening problems not apparent from their hearing thresholds. Carefully controlled animal studies using invasive techniques in a range of species have established a clear link between exposure to loud sounds and a range of pathologies underlying this hidden hearing loss (HHL), including cochlear...
Perceptual anchoring, a process akin to statistical learning, occurs rapidly and without conscious awareness and is integral to our ability to successfully navigate a noisy world. Here, we investigated anchoring abilities in typical hearing and reading participants by implementing an anchoring paradigm (Agus et al., 2014) using rapid pure-tone sequ...
Auditory activity in humans is influenced by feedback networks in the brain. The final leg of this pathway originates in the brainstem and plays an important role in protecting and aiding communication in competing noise. Previous studies have used auditory and/or visual attention tasks to manipulate cortical activity when exploring its influence o...
Learning is crucial for the development of species, enabling them to acquire behaviours, accumulate knowledge, and refine skills. An example of implicit and unsupervised learning is “perceptual anchoring,” where the brain creates an internal representation of the statistical properties of a stimulus that it encounters repeatedly. Behaviourally, bot...
[This corrects the article DOI: 10.3389/fnins.2023.1081295.].
We investigated the cortical representation of emotional prosody in normal-hearing listeners using functional near-infrared spectroscopy (fNIRS) and behavioural assessments. Consistent with previous reports, listeners relied most heavily on F0 cues when recognizing emotion cues; performance was relatively poor-and highly variable between listeners-...
Measurement of brain functional connectivity has become a dominant approach to explore the interaction dynamics between brain regions of subjects under examination. Conventional functional connectivity measures largely originate from deterministic models on empirical analysis, usually demanding application-specific settings (e.g., Pearson's Correla...
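A minimal sketch of the conventional functional-connectivity measure named in that entry, Pearson's correlation between region-wise time series, is shown below; the data dimensions are synthetic and purely illustrative.

```python
# Minimal sketch of a conventional functional-connectivity measure named
# above: Pearson correlation between region-wise time series. The data shape
# (200 time points x 10 regions) is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
timeseries = rng.standard_normal((200, 10))   # time points x brain regions

# np.corrcoef treats rows as variables, so pass the transpose; the result is
# a 10 x 10 matrix of pairwise Pearson correlations between regions.
fc_matrix = np.corrcoef(timeseries.T)
print(fc_matrix.shape)                        # (10, 10)
```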
Analysing complex auditory scenes depends in part on learning the long-term statistical structure of sounds comprising those scenes. One way in which the listening brain achieves this is by analysing the statistical structure of acoustic environments over multiple time courses and separating background from foreground sounds. A critical component o...
Analysis of neuroimaging data (e.g., Magnetic Resonance Imaging, structural and functional MRI) plays an important role in monitoring brain dynamics and probing brain structures. Neuroimaging data are multi-featured and non-linear by nature, and it is a natural way to organise these data as tensors prior to performing automated analyses such as dis...
Functional near-infrared spectroscopy (fNIRS) is an increasingly popular neuroimaging technique that measures cortical hemodynamic activity in a non-invasive and portable fashion. Although the fNIRS community has been successful in disseminating several open-source processing tools and a standard file format (SNIRF), the development of reproducible...
EEG-based tinnitus classification is a valuable tool for tinnitus diagnosis, research, and treatments. Most current works are limited to a single dataset where data patterns are similar. But EEG signals are highly non-stationary, resulting in poor model generalization to new users, sessions, or datasets. Thus, designing a model that can generalize...
We investigated the cortical representation of emotional prosody in normal-hearing listeners using functional near-infrared spectroscopy and behavioural assessments. Consistent with previous reports, listeners relied most heavily on F0 cues when recognizing emotion cues; performance was relatively poor—and highly variable between listeners—when onl...
Many individuals experience hearing problems that are hidden under a normal audiogram. This not only impacts on individual sufferers, but also on clinicians who can offer little in the way of support. Animal studies using invasive methodologies have developed solid evidence for a range of pathologies underlying this hidden hearing loss (HHL), inclu...
Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical function. Enhanced cross-modal responses to visual stimuli observed in auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical langu...
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome. Capturing brain networks’ structural information and hierarchical patterns is essential for understanding brain functions and disease states. Recently, the prom...
Sounds reach the ears as a mixture of energy generated by different sources. Listeners extract cues that distinguish different sources from each other, including how similar sounds are arriving at the two ears, the interaural coherence (IAC). Here, we find listeners cannot reliably distinguish two completely interaurally coherent sounds from a sing...
Presbycusis, or age-related hearing loss, is the most common sensory deficit globally, and the biggest modifiable risk factor for a later dementia diagnosis. Despite its ubiquity, however, the primary pathology contributing to presbycusis is reportedly contentious, particularly the relative role of damage to the sensory outer hair cells compared to...
Electroencephalogram (EEG)-based applications in Brain-Computer Interfaces (BCIs, or Human-Machine Interfaces, HMIs), diagnosis of neurological disease, rehabilitation, etc., rely on supervised techniques such as EEG classification that requires given class labels or markers. Incomplete or incorrectly labeled or unlabeled EEG data are increasing...
For amplitude-modulated sound, the envelope interaural time difference (ITDENV) is a potential cue for sound-source location. ITDENV is encoded in the lateral superior olive (LSO) of the auditory brainstem, by excitatory-inhibitory (EI) neurons receiving ipsilateral excitation and contralateral inhibition. Between human listeners, sensitivity to IT...
The ability to navigate “cocktail party” situations by focusing on sounds of interest over irrelevant, background sounds is often considered in terms of cortical mechanisms. However, subcortical circuits such as the pathway underlying the medial olivocochlear (MOC) reflex modulate the activity of the inner ear itself, supporting the extraction of s...
Objectives:
The purpose of this study is to develop a biophysical model of human spiral ganglion neurons (SGNs) that includes voltage-gated hyperpolarization-activated cation (HCN) channels and voltage-gated, delayed-rectifier, low-threshold potassium (KLT) channels, providing for a more complete simulation of spike-rate a...
Electroencephalogram (EEG)-based neurofeedback has been widely studied for tinnitus therapy in recent years. Most existing research relies on experts’ cognitive prediction, and studies based on machine learning and deep learning are either data-hungry or not well generalizable to new subjects. In this paper, we propose a robust, data-efficient mode...
For abruptly gated sound, interaural time difference (ITD) cues at onset carry greater perceptual weight than those following. This research explored how envelope shape influences such carrier ITD weighting. Experiment 1 assessed the perceived lateralization of a tonal binaural beat that transitioned through ITD (diotic envelope, mean carrier frequ...
Listeners typically perceive a sound as originating from the direction of its source, even as direct sound is followed milliseconds later by reflected sound from multiple different directions. Early-arriving sound is emphasised in the ascending auditory pathway, including the medial superior olive (MSO) where binaural neurons encode the interaural-...
Brain signals refer to the biometric information collected from the human brain. The research on brain signals aims to discover the underlying neurological or physical status of the individuals by signal decoding. The emerging deep learning techniques have improved the study of brain signals significantly in recent years. In this work, we first pre...
A potential auditory spatial cue, the envelope interaural time difference (ITDENV), is encoded in the lateral superior olive (LSO) of the brainstem. Here, we explore computationally modeled LSO neurons in reflecting behavioral sensitivity to ITDENV. Transposed tones (half-wave rectified low-frequency tones, frequency-limited, then multiplying a...
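A hedged sketch of the transposed-tone construction described above (half-wave rectify a low-frequency tone, frequency-limit it, then multiply it onto a high-frequency carrier) follows; the specific frequencies, filter order, and cutoff are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of a transposed tone: a low-frequency sinusoid is half-wave
# rectified, low-pass filtered ("frequency-limited"), and then multiplied with
# a high-frequency carrier. All frequencies, the filter order, and the cutoff
# are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 48000                 # sample rate (Hz), assumed
DUR = 0.5                  # duration (s), assumed
t = np.arange(int(FS * DUR)) / FS

f_mod = 128.0              # modulator frequency (Hz), assumed
f_carrier = 4000.0         # carrier frequency (Hz), assumed

# Half-wave rectified low-frequency tone.
modulator = np.maximum(np.sin(2 * np.pi * f_mod * t), 0.0)

# Frequency-limit the rectified modulator (4th-order Butterworth low-pass,
# cutoff assumed at 0.2 * f_carrier) so its harmonics stay below the carrier.
b, a = butter(4, 0.2 * f_carrier / (FS / 2), btype="low")
modulator = filtfilt(b, a, modulator)

# Multiply with the high-frequency carrier to obtain the transposed tone.
transposed_tone = modulator * np.sin(2 * np.pi * f_carrier * t)
```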
Navigating cocktail party situations by enhancing foreground sounds over irrelevant background information is typically considered from a cortico-centric perspective. However, subcortical circuits, such as the medial olivocochlear reflex (MOCR) that modulates inner ear activity itself, have ample opportunity to extract salient features from the aud...
Listeners perceive sound energy as originating from the direction of its source, even when followed only milliseconds later by reflected energy from multiple different directions. Here, modelling responses of brainstem neurons responsible for encoding auditory spatial cues, we demonstrate that accurate localisation in reverberant environments relie...
Binaural hearing, the ability to detect small differences in the timing and level of sounds at the two ears, underpins the ability to localize sound sources along the horizontal plane, and is important for decoding complex spatial listening environments into separate objects - a critical factor in 'cocktail-party listening'. For human listeners, th...
Interaural time differences (ITDs) conveyed by the modulated envelopes of high-frequency sounds can serve as a cue for localising a sound source. Klein-Hennig et al. (2011) demonstrated the envelope attack, the rate at which stimulus energy in the envelope increases, and the duration of the pause, the interval between successive envelope pulses, as...
Previous studies have shown that normal-hearing (NH) listeners' spatial perception of non-stationary interaural time differences (ITDs) is dominated by the carrier ITD during rising amplitude segments. Here, ITD sensitivity throughout the amplitude-modulation cycle in NH listeners and bilateral cochlear implant (CI) subjects is compared, the latter b...
Neural adaptation is central to sensation. Neurons in auditory midbrain, for example, rapidly adapt their firing rates to enhance coding precision of common sound intensities. However, it remains unknown whether this adaptation is fixed, or dynamic and dependent on experience. Here, using guinea pigs as animal models, we report that adaptation acce...
Supplementary Figures 1 - 5
Humans, and many other species, exploit small differences in the timing of sounds at the two ears (interaural time difference, ITD) to locate their source and to enhance their detection in background noise. Despite their importance in everyday listening tasks, however, the neural representation of ITDs in human listeners remains poorly understood,...
Sound-source localization in the horizontal plane relies on detecting small differences in the timing and level of the sound at the two ears, including differences in the timing of the modulated envelopes of high-frequency sounds (envelope interaural time differences (ITDs)). We investigated responses of single neurons in the inferior colliculus (I...
We assessed neural sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure (TFS) of low-frequency sounds and ITDs conveyed in the temporal envelope of amplitude-modulated (AM'ed) high-frequency sounds. Using electroencephalography (EEG), we recorded brain activity to sounds in which the interaural phase difference...
This special issue contains a collection of 13 papers highlighting the collaborative research and engineering project entitled Advancing Binaural Cochlear Implant Technology-ABCIT-as well as research spin-offs from the project. In this introductory editorial, a brief history of the project is provided, alongside an overview of the studies.
There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs)-the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural - (...
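Written out, the BIC computation described in that entry amounts to subtracting the summed monaural responses from the binaural response (notation chosen here for illustration):

$$\mathrm{BIC}(t) \;=\; \mathrm{ABR}_{\text{binaural}}(t) \;-\; \bigl[\mathrm{ABR}_{\text{left}}(t) + \mathrm{ABR}_{\text{right}}(t)\bigr]$$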
Data for Monaghan, Bleeck, McAlpine, "Sensitivity to Envelope ITDs at High Modulation Rates", Trends in Hearing.
Shown are thresholds for all (anonymous) participants for various frequencies for all conditions, as described in the paper.
A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response...
Significance
Locating the source of a sound is critical to the survival of many species and an important factor in human communication. Auditory spatial cues—differences in the timing and intensity of sounds arriving at the two ears—are processed by specialized neurons in the brainstem. The importance of these cues varies with sound frequency. Thro...
Interaural Correlation (IAC) is related to variance in Interaural Time Difference (ITD) and Interaural Level Difference (ILD). While normalized IAC can account for behavioral performance in discrimination tasks, so can models directly employing this variance as a cue. Attempts at identifying a neural correlate of IAC discrimination have focused on...
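For reference, one common definition of the normalized interaural correlation for left- and right-ear signals (sometimes additionally maximized over an internal delay) is given below; the entry above does not specify which normalization was used, so this is offered only as background.

$$\mathrm{IAC} \;=\; \frac{\int x_{L}(t)\,x_{R}(t)\,dt}{\sqrt{\int x_{L}^{2}(t)\,dt \,\int x_{R}^{2}(t)\,dt}}$$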
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging...
Recently, employing an amplitude-modulated binaural beat (AMBB) in which sound amplitude and interaural phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al., Proc. Natl. Acad. Sci. 110, 2013a, p.15151-15156), we demonstrated that the human auditory system utilizes interaural timing differences in the temporal fine-st...
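A hedged sketch of an amplitude-modulated binaural beat of the kind described above follows: offsetting the right-ear carrier by the modulation rate makes the IPD cycle through 360 degrees once per AM cycle, locking IPD to the envelope. All frequencies and the duration are assumptions for illustration.

```python
# Minimal sketch of an amplitude-modulated binaural beat (AMBB): the carrier
# frequencies at the two ears differ by the modulation rate, so the interaural
# phase difference cycles through a full 360 degrees once per AM cycle,
# locking IPD to the envelope. All values are illustrative assumptions.
import numpy as np

FS = 48000
DUR = 1.0
t = np.arange(int(FS * DUR)) / FS

f_left = 500.0             # carrier at the left ear (Hz), assumed
f_beat = 2.0               # beat / AM rate (Hz), assumed
f_right = f_left + f_beat  # right-ear carrier, offset by the beat rate

# Raised-cosine amplitude modulation at the beat rate, identical at both ears.
envelope = 0.5 * (1 - np.cos(2 * np.pi * f_beat * t))

left = envelope * np.sin(2 * np.pi * f_left * t)
right = envelope * np.sin(2 * np.pi * f_right * t)
ambb = np.stack([left, right], axis=1)   # two-channel (stereo) stimulus
```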
A model is presented that predicts the binaural advantage to speech intelligibility by analyzing the right and left recordings at the two ears containing mixed target and interferer signals. This auditory-inspired model implements an equalization-cancellation stage to predict the binaural unmasking (BU) component, in conjunction with a modulation-f...
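The equalization-cancellation stage mentioned in that entry can be caricatured as follows: delay and scale one ear's signal so that subtracting it from the other ear removes as much of the interferer as possible. The brute-force search over integer-sample delays and gains below is a toy simplification for illustration, not the model's actual implementation.

```python
# Toy sketch of the equalization-cancellation (EC) idea: one ear's signal is
# delayed and scaled ("equalized") so that subtracting it from the other ear
# ("cancellation") removes as much of the interferer as possible. The
# exhaustive search over delays and gains is a simplification.
import numpy as np

def ec_cancel(left, right, max_delay=32, gains=np.linspace(0.5, 2.0, 16)):
    """Return the residual with the lowest energy after EC processing."""
    best, best_energy = None, np.inf
    for delay in range(-max_delay, max_delay + 1):
        shifted = np.roll(right, delay)
        for g in gains:
            residual = left - g * shifted
            energy = np.mean(residual ** 2)
            if energy < best_energy:
                best, best_energy = residual, energy
    return best

# Example: a diotic interferer is cancelled, leaving the target (present only
# in the left ear in this toy case) in the residual.
rng = np.random.default_rng(2)
interferer = rng.standard_normal(4800)
target = 0.2 * rng.standard_normal(4800)
left = interferer + target
right = interferer
residual = ec_cancel(left, right)
print("energy reduced from", round(np.mean(left ** 2), 3),
      "to", round(np.mean(residual ** 2), 3))
```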
Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show...
Recent studies employing speech stimuli to investigate 'cocktail-party' listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpin...
Significance
Sound localization in rooms is challenging, especially for hearing-impaired listeners or technical devices. Reflections from walls and ceilings make it difficult to distinguish sounds arriving direct from a source from the mixture of potentially confounding sounds arriving a few milliseconds later. Nevertheless, normal-hearing listener...
A measure to predict speech intelligibility in unilateral and bilateral cochlear implant (CI) users is proposed that does not need a priori information (i.e. is non-intrusive), such as the room acoustics. Such a measure, termed BiSIMCI, combines an equalization-cancellation stage together with a modulation frequency estimation stage. Simulated and a...
Background:
Current theories of tinnitus assume that the phantom sound is generated either through increased spontaneous activity of neurons in the auditory brain, or through pathological temporal firing patterns of the spontaneous neuronal discharge, or a combination of both factors. With this in mind, Tass and colleagues recently tested a number...
Recently, Klein-Hennig et al. (J Acoust Soc Am 129:3856-3872, 2011) suggested a design for envelope waveforms that allows for independent setting of the duration of the four segments of an envelope cycle - pause, attack, sustain, and decay. These authors conducted psychoacoustic experiments to determine the threshold interaural time differences (IT...
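A minimal sketch of a single envelope cycle with independently adjustable pause, attack, sustain, and decay durations (the four segments named above) follows; the raised-cosine flanks and all duration values are illustrative assumptions.

```python
# Minimal sketch of one envelope cycle with independently adjustable pause,
# attack, sustain, and decay segments. The raised-cosine flank shape and the
# example durations are illustrative assumptions.
import numpy as np

def envelope_cycle(pause, attack, sustain, decay, fs=48000):
    """Concatenate pause -> attack -> sustain -> decay into one cycle."""
    n = lambda d: int(round(d * fs))
    pause_seg = np.zeros(n(pause))
    attack_seg = 0.5 * (1 - np.cos(np.pi * np.linspace(0, 1, n(attack))))   # 0 -> 1
    sustain_seg = np.ones(n(sustain))
    decay_seg = 0.5 * (1 + np.cos(np.pi * np.linspace(0, 1, n(decay))))     # 1 -> 0
    return np.concatenate([pause_seg, attack_seg, sustain_seg, decay_seg])

# Example: long pause, sharp attack, short sustain, slow decay (seconds,
# chosen arbitrarily).
env = envelope_cycle(pause=0.010, attack=0.002, sustain=0.004, decay=0.008)
```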
Many neurons adapt their spike output to accommodate the prevailing sensory environment. Although such adaptation is thought to improve coding of relevant stimulus features, the relationship between adaptation at the neural and behavioral levels remains to be established. Here we describe improved discrimination performance for an auditory spatial...
This study validates a novel approach to predict speech intelligibility for Cochlear Implant users (CIs) in reverberant environments. More specifically, we explore the use of existing objective quality and intelligibility metrics, applied directly to vocoded speech degraded by room reverberation, here assessed at ten different reverberation time...
A: Re-analysis of the data in Fig. 4, base-corrected relative to the pre-transition interval (-50:0 relative to the transition time). Plotted are group-RMS of right hemisphere auditory cortical evoked responses in the low load (blue) and high load (red) conditions. Shaded areas mark time intervals where a significant difference is found between loa...
Ever since Pliny the Elder coined the term tinnitus, the perception of sound in the absence of an external sound source has remained enigmatic. Traditional theories assume that tinnitus is triggered by cochlear damage, but many tinnitus patients present with a normal audiogram, i.e., with no direct signs of cochlear damage. Here, we report that in...
This study investigates how acoustic change-events are represented in a listener's brain when attention is strongly focused elsewhere. Using magneto-encephalography (MEG) we examine whether cortical responses to different kinds of changes in stimulus statistics are similarly influenced by attentional load, and whether the processing of such acousti...
Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound source location in the horizontal plane by extracting interaural time differences (ITD) from the fine structure or envelope of sound stimuli. Both cell types are tuned to frequency (characteristic frequency, CF) and are organized alo...
The ability to determine the location of a sound source is fundamental to hearing. However, auditory space is not represented in any systematic manner on the basilar membrane of the cochlea, the sensory surface of the receptor organ for hearing. Understanding the means by which sensitivity to spatial cues is computed in central neurons can therefor...
In order to investigate whether performance in an auditory spatial discrimination task depends on the prevailing listening conditions, we tested the ability of human listeners to discriminate target sounds with and without presentation of a preceding sound. Target sounds were either lateralized by means of interaural time differences (ITDs) of +400...
Adaptation of sensory neurons to the prevailing environment is thought to underlie improved coding of relevant stimulus features. Here, we report that neurons in the inferior colliculus (IC) of anesthetized guinea pigs adapt to statistical distributions of interaural time differences (ITDs - one of the binaural cues for sound-source localization)...
Is binaural processing in humans different to that of other mammals? While psychophysical data suggest that the range of internal delays necessary for processing interaural time differences is at least +/-3 ms, physiological data from small mammals indicate a more limited range. This study demonstrates that binaural detection is impeded by reduced...
A key function of the auditory system is to provide reliable information about the location of sound sources. Here, we describe how sound location is represented by synaptic input arriving onto pyramidal cells within auditory cortex by combining free-field acoustic stimulation in the frontal azimuthal plane with in vivo whole-cell recordings. We fo...
Belying the apparent ease with which the acoustic world is perceived, the sheer vastness of the range of sounds and sound parameters that must be encoded represents a challenge to traditional models of neural coding in audition. Here, we review recent evidence suggesting that a process of gain control, operating at multiple stages in the auditory p...
Humans use differences in the timing of sounds at the two ears to determine the location of a sound source. Various models have been posited for the neural representation of these interaural time differences (ITDs). These models make opposing predictions about the lateralization of ITD processing in the human brain. The weighted-image model predict...
Neurons in the auditory midbrain are sensitive to differences in the timing of sounds at the two ears--an important sound localization cue. We used broadband noise stimuli to investigate the interaural-delay sensitivity of low-frequency neurons in two midbrain nuclei: the inferior colliculus (IC) and the dorsal nucleus of the lateral lemniscus. Noi...
Auditory neurons must represent accurately a wide range of sound levels using firing rates that vary over a far narrower range of levels. Recently, we demonstrated that this "dynamic range problem" is lessened by neural adaptation, whereby neurons adjust their input-output functions for sound level according to the prevailing distribution of levels...
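The adaptive input-output adjustment described in that entry can be illustrated with a toy model: a sigmoidal rate-level function whose midpoint drifts toward the mean of the prevailing level distribution, so its steep region covers the most common levels. The slope, adaptation rate, and level statistics below are assumptions.

```python
# Toy model of dynamic-range adaptation: a sigmoidal rate-level function whose
# midpoint shifts toward the mean of the prevailing sound-level distribution,
# so the steep (informative) part of the curve covers the most common levels.
# Slope, adaptation rate, and the level statistics are assumptions.
import numpy as np

def firing_rate(level_db, midpoint_db, slope=0.3, max_rate=100.0):
    """Sigmoidal rate-level function (spikes/s)."""
    return max_rate / (1.0 + np.exp(-slope * (level_db - midpoint_db)))

rng = np.random.default_rng(3)
levels = rng.normal(70.0, 6.0, size=2000)    # prevailing distribution: mean 70 dB SPL

midpoint = 40.0          # initial midpoint of the dynamic range (dB SPL)
alpha = 0.01             # adaptation rate per stimulus (assumed)
for level in levels:
    rate = firing_rate(level, midpoint)
    midpoint += alpha * (level - midpoint)   # midpoint drifts toward the mean level

print(f"adapted midpoint: {midpoint:.1f} dB SPL")   # ends near 70 dB
```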
Adaptation of sensory neurons to the statistics of their respective stimulus domain has been observed across many sensory systems and stimulus modalities. Here, we investigate mechanisms of neuronal adaptation to statistical distributions of interaural time differences (ITDs), an important binaural cue for the azimuthal localisation of a sound sourc...
Interaural time differences (ITDs) are the main cues used by humans to determine the horizontal position of low-frequency (<1500 Hz) sound sources. The neural representation of ITDs is presumed to be one in which brain centres in each hemisphere encode the opposite (contralateral) side of space (Jenkins and Merzenich 1984). Assumptions in most huma...
Interaural time difference (ITD) is an important sound localization cue, arising from the different travel time of a sound from its source to the left and right ears for sources located to either side of the head. Neural extraction of ITDs occurs in the superior olivary complex (SOC). SOC neurons receive binaural input and are thought to perform a...