Katharina von Kriegstein’s research while affiliated with TU Dresden and other places


Publications (165)


Functional alterations of the magnocellular subdivision of the visual sensory thalamus in autism
  • Article

November 2024 · 19 Reads · Proceedings of the National Academy of Sciences

[...] · Katharina von Kriegstein

The long-standing hypothesis that autism is linked to changes in the visual magnocellular system of the human brain has never been directly examined due to technological constraints. Here, we used a recently developed 7-Tesla functional MRI (fMRI) approach to investigate this hypothesis within the visual sensory thalamus (lateral geniculate nucleus, LGN). The LGN is a crucial component of the primary visual pathway. It is particularly suited to investigate the magnocellular visual system, because within the LGN, the magnocellular (mLGN) uniquely segregates from the parvocellular (pLGN) system. Our results revealed diminished mLGN blood-oxygenation-level-dependent (BOLD) responses in the autism group compared to controls. pLGN responses were comparable across groups. The mLGN alterations were observed specifically for stimuli optimized for mLGN function, i.e., visual displays with low spatial frequency and high temporal flicker frequency. The results confirm the long-standing hypothesis of magnocellular visual system alterations in autism. They substantiate the emerging perspective that sensory processing variations are part of autism symptomatology.
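The stimuli described as optimized for mLGN function combine low spatial frequency with high temporal flicker frequency. Purely as an illustration, here is a minimal sketch of such a display, assuming a counterphase-flickering sinusoidal grating; all parameter values are illustrative assumptions, not those used in the study.

```python
import numpy as np

def flickering_grating(size_px=256, n_frames=60, frame_rate=60.0,
                       cycles_per_image=2.0, flicker_hz=10.0):
    """Generate a counterphase-flickering sinusoidal grating.

    A low cycles_per_image value approximates low spatial frequency;
    flicker_hz sets the temporal modulation rate. All parameter values
    are illustrative, not those used in the study.
    """
    x = np.linspace(0, 2 * np.pi * cycles_per_image, size_px)
    grating = np.sin(x)[np.newaxis, :].repeat(size_px, axis=0)
    t = np.arange(n_frames) / frame_rate
    # Counterphase modulation: contrast reverses at flicker_hz.
    envelope = np.sin(2 * np.pi * flicker_hz * t)
    return grating[np.newaxis, :, :] * envelope[:, np.newaxis, np.newaxis]

frames = flickering_grating()
print(frames.shape)  # (60, 256, 256): frames x height x width
```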


Experimental design and hypotheses. (A) Example of an FM-sweep with positive FM-rate. (B) The three FM-sweeps used in the experiment (in dark blue) in comparison to a hypothetical family of seven sweeps with increasing modulation rate. All sweeps had the same duration of 50 ms and were characterised by differences in the frequency span Δf. (C) Example trial. Each trial consisted of a sequence of seven repetitions of one FM-sweep (standards; blue) and one other FM-sweep (deviant; red). In each trial, a single deviant was located in position 4, 5, or 6 of the sequence. Participants reported, in each trial, the position of the deviant right after they identified it. Each participant completed up to 540 trials in total, 60 per combination of deviant position and FM-sweep difference Δ = |Δf_deviant − Δf_standard|. Sweeps within a sequence were separated by 700 ms inter-stimulus intervals (ISIs). (D) Schematic view of the expected underlying responses in the auditory pathway for the sequence shown in (C), together with the definition of the experimental variables (std0: first standard; std1: repeated standards preceding the deviant; std2: standards following the deviant; devx: deviant in position x). (E) Schematic view of the six standard (blue) and deviant (red) combinations. Combinations are characterised by whether deviant and standard differ in modulation direction only, modulation rate only, or both. (F) Expected responses in the auditory pathway nuclei corresponding to the hypotheses: (h1) responses reflect adaptation by habituation only; (h2) responses reflect prediction error with respect to the participant's expectations.
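The sweeps in (A–C) are fully specified by their 50 ms duration and frequency span Δf. A minimal sketch of how such a stimulus could be synthesised, assuming a linear sweep, an arbitrary 1 kHz centre frequency, and a 44.1 kHz sampling rate (all three are illustrative assumptions, not the study's values):

```python
import numpy as np

def fm_sweep(delta_f, f_center=1000.0, duration=0.05, fs=44100):
    """Linear FM-sweep with frequency span delta_f (Hz).

    Positive delta_f gives a rising sweep, negative a falling one.
    Centre frequency and sampling rate are illustrative assumptions.
    """
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous frequency ramps from f_center - delta_f/2
    # to f_center + delta_f/2 over the sweep duration.
    f_inst = f_center - delta_f / 2 + (delta_f / duration) * t
    # Phase is the (discrete) integral of instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(f_inst) / fs
    return np.sin(phase)

sweep_up = fm_sweep(delta_f=+400.0)    # rising sweep
sweep_down = fm_sweep(delta_f=-400.0)  # falling sweep
```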
Design of the Bayesian models. The table shows the parametrised expected response to each tone in the sequence (rows) for the two different models (h1/h2) and the three deviant positions. Each model was defined according to the relative amplitudes it predicted for the different sounds in the sequences. H1 assumed asymptotic habituation to consecutive standards and recovered responses to deviants. H2 assumed that responses to the stimuli depended on how predictable they were. Note that the models have free linear parameters: the displayed amplitudes are one of the many possible solutions of the linear fit. See Table 1 for an exact definition of each model.
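The two hypotheses can be made concrete as parametrised amplitude profiles over the seven-sound sequence. Below is an illustrative sketch, not the paper's exact parametrisation: the free linear parameters are replaced by arbitrary fixed values, and the h2 surprise values merely illustrate that late deviants are less surprising (see Table 1 of the original for the exact definitions).

```python
import numpy as np

def h1_habituation(dev_pos, n=7, tau=1.5):
    """h1: responses habituate asymptotically over repeated standards
    and recover fully for the deviant. tau stands in for the model's
    free parameters (illustrative value)."""
    amp = np.exp(-np.arange(n) / tau)  # habituation curve over standards
    amp[dev_pos] = 1.0                 # recovered response to the deviant
    return amp

def h2_prediction_error(dev_pos, n=7):
    """h2: responses scale with how unexpected each sound is. Because
    the deviant must occur in position 4, 5, or 6, later deviants are
    less surprising (hazard rate); all values are illustrative."""
    surprise = {3: 1.0, 4: 0.67, 5: 0.33}  # 0-indexed positions 4-6
    amp = np.full(n, 0.2)                  # predictable standards: low error
    amp[0] = 1.0                           # first sound is unpredicted
    amp[dev_pos] = surprise[dev_pos]
    return amp

print(h1_habituation(dev_pos=4))
print(h2_prediction_error(dev_pos=4))
```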
Performance and reaction times. Mean accuracy (A) and reaction times (B) across deviant positions. Grey circles represent the average value per participant and deviant position. Violin plots show kernel density estimates of the reaction times for each deviant position. **p < 0.005, ****p < 0.00005; all p-values corrected for three comparisons.
Mesoscopic stimulus specific adaptation (SSA) in bilateral IC and MGB. Regions within the MGB and IC ROIs adapted to the repeated standards (adaptation; blue shows adaptation only, purple shows SSA, which includes adaptation) and recovered responses to deviants (deviant detection; red shows deviant detection only, purple shows SSA, which includes deviant detection). Stimulus-specific adaptation (i.e., recovered responses to a deviant in voxels showing adaptation; SSA) occurred in bilateral MGB and IC (purple). Maps were computed by thresholding the contrast p-maps at FDR<0.05. Yellow patches show voxels included in the anatomical masks computed with a functional localiser that showed neither adaptation nor deviant detection.
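The maps in this figure were obtained by thresholding voxelwise contrast p-maps at FDR < 0.05. The paper's implementation is not reproduced here; below is a minimal sketch of one standard procedure, Benjamini-Hochberg FDR thresholding, applied to a stand-in random p-map.

```python
import numpy as np

def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg: return the largest p-value cutoff at which
    the expected false discovery rate is controlled at level q."""
    p = np.sort(np.asarray(p_values).ravel())
    m = p.size
    # BH criterion: p_(k) <= (k/m) * q for the k-th smallest p-value.
    below = p <= (np.arange(1, m + 1) / m) * q
    return p[below].max() if below.any() else 0.0

rng = np.random.default_rng(0)
p_map = rng.uniform(size=1000)       # stand-in for a voxelwise p-map
cutoff = fdr_threshold(p_map, q=0.05)
mask = p_map <= cutoff               # voxels surviving the threshold
```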
BOLD responses for partitions of the data where deviant and standard differed only in direction or rate. Average z-score in each of the four SSA ROIs to the different experimental conditions in trials where the standard and deviant differed only in direction (orange) or rate (yellow). Violin plots are kernel density estimations of the distribution of z-scores, averaged over voxels and runs of each ROI. Each distribution holds 17 samples, one per participant (one participant was excluded from this analysis because there were not enough trials available, see Section 2 for details). Black error bars show the mean and standard error of the distributions.


Fast frequency modulation is encoded according to the listener expectations in the human subcortical auditory pathway
  • Article
  • Full-text available

September 2024 · 16 Reads · 5 Citations

Expectations aid and bias our perception. For instance, expected words are easier to recognise than unexpected words, particularly in noisy environments, and incorrect expectations can make us misunderstand our conversational partner. Expectations are combined with the output from the sensory pathways to form representations of auditory objects in the cerebral cortex. Previous literature has shown that expectations propagate further down to subcortical stations during the encoding of static pure tones. However, it is unclear whether expectations also drive the subcortical encoding of subtle dynamic elements of the acoustic signal that are not represented in the tonotopic axis. Here, we tested the hypothesis that subjective expectations drive the encoding of fast frequency modulation (FM) in the human subcortical auditory pathway. We used fMRI to measure neural responses in the human auditory midbrain (inferior colliculus) and thalamus (medial geniculate body). Participants listened to sequences of FM-sweeps for which they held different expectations based on the task instructions. We found robust evidence that the responses in auditory midbrain and thalamus encode the difference between the acoustic input and the subjective expectations of the listener. The results indicate that FM-sweeps are already encoded at the level of the human auditory midbrain and that encoding is mainly driven by subjective expectations. We conclude that the subcortical auditory pathway is integrated in the cortical network of predictive processing and that expectations are used to optimise the encoding of fast dynamic elements of the acoustic signal.


Figure 1. Schematic overview of auditory-visual processing in human communication. (a) Concurrent audio-visual input. In this listening condition, the listener can hear the speaker's voice and see the speaker's face concurrently. Processing of the auditory signal (auditory speech and voice identity) is supported by interactions (indicated via bidirectional arrows) between visual and auditory systems (Peelle & Sommers, 2015; Young et al., 2020). Here, concurrent visual cues help to predict and enhance the sensory processing of the auditory signal; this is particularly beneficial in noisy listening conditions (Ross et al., 2007; Sumby & Pollack, 1954). (b) Auditory-only input processed in an auditory-only model. In this listening condition, the listener can hear the speaker's voice only, e.g., on the phone. The speaker's face is not available. The speaker is, however, known to the listener audio-visually, for example a familiar person. The auditory system is engaged in the sensory processing of the auditory signal (auditory speech and voice identity) (Ellis et al., 1997; Hickok & Poeppel, 2007). Any engagement of the visual system, including for speakers known by face, is epiphenomenal to the processed auditory signal (indicated via unidirectional arrows) (Bunzeck et al., 2005). Under this model, learned visual mechanisms are not behaviourally relevant for auditory-only processing and, consequently, would not benefit sensory processing in noisy listening conditions. (c) Auditory-only input processed in an audio-visual model. The listening condition is identical to Panel B, i.e., auditory-only input: again, only the speaker's voice is available to the listener, such as on the phone. The speaker is, however, known to the listener audio-visually, for example a familiar person. Both the auditory and visual systems are engaged in the sensory processing of the auditory signal (auditory speech and voice identity) (von Kriegstein, 2012; von Kriegstein et al., 2008). Interactions between the systems (indicated via bidirectional arrows) are behaviourally relevant: in a similar manner to concurrent visual input (see Panel A), learned visual mechanisms (for speakers known by face) assist in auditory processing by generating predictions and providing constraints about what is heard in the auditory signal (von Kriegstein et al., 2008). Such a process should be particularly beneficial in noisy listening conditions. In Panels A, B, and C, yellow denotes the auditory system and blue the visual system.
Figure 2. Schematic illustration of the audio-visual training phase and auditory-only speech recognition test phase in Experiment 1. (a) Audio-visual training. Prior to auditory-only testing, participants learned the voice and name of six speakers in conjunction with their corresponding face, i.e., video (voice-face learning), or with an occupation image (voice-occupation learning). (b) Auditory-only speech recognition. Following audio-visual training, participants listened to auditory-only sentences spoken by the same six learned speakers in different levels of auditory noise (SNR +4, 0, −4, −8 dB). The sentences were presented in blocks (15 trials per block), which were blocked by learning type (voice-face learned speakers or voice-occupation learned speakers) and noise level. In each trial, participants viewed a fixation cross and then listened to a speaker utter a five- or six-word sentence. This was followed immediately by the presentation of a word on screen. The participant's task was to decide whether the presented word was contained within the previously heard sentence or not. Note that the face identities shown in (a) are for illustration purposes, and some differ from those used in the audio-visual training phase. These images are not displayed due to consent restrictions.
Figure 4. Schematic illustration of the auditory-only voice-identity recognition test phase in Experiment 2. Following audio-visual training with six male speakers (procedure identical to Figure 2a), participants listened to blocks of sentences spoken by the same speakers in different levels of auditory noise (SNR +4, 0, −4, −8 dB). Each block contained 15 trials. In each trial, participants viewed a fixation cross and heard the speaker utter a two-word sentence, which was followed immediately by the presentation of a name on screen. The participant's task was to decide whether the presented name matched the identity of the speaker or not. The trials were blocked by learning type (voice-face learned speakers or voice-occupation learned speakers) and noise level.
Auditory-only speech recognition performance. Mean accuracy (% correct, with standard deviations) for voice-face and voice-occupation learned speakers, for each of the four noise levels. Face-benefit scores, that is, (% correct for voice-face-learned speakers) minus (% correct for voice-occupation-learned speakers), are also shown. Scores for all 30 participants are on the left of the table; scores for the 14 participants with a positive overall face-benefit score are displayed on the right.
Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise

August 2024 · 36 Reads · Quarterly Journal of Experimental Psychology (2006)

Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning; an effect termed the “face-benefit.” Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers’ voices together with their dynamic face or control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. For speech recognition, we observed that 14 of 30 participants (47%) showed a face-benefit. 19 of 25 participants (76%) showed a face-benefit for voice-identity recognition. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio–visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
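The face-benefit defined above is plain difference arithmetic over per-condition accuracies. For concreteness, a sketch with made-up values (none of these numbers are from the study):

```python
# Face-benefit = accuracy for voice-face learned speakers minus
# accuracy for voice-occupation learned speakers, per noise level.
# All values below are illustrative, not the study's data.
acc_voice_face = {"+4": 85.0, "0": 78.0, "-4": 70.0, "-8": 61.0}
acc_voice_occ  = {"+4": 83.0, "0": 74.0, "-4": 64.0, "-8": 53.0}

face_benefit = {snr: acc_voice_face[snr] - acc_voice_occ[snr]
                for snr in acc_voice_face}
print(face_benefit)  # {'+4': 2.0, '0': 4.0, '-4': 6.0, '-8': 8.0}
```

In this made-up pattern the benefit grows as SNR falls, mirroring the reported result that the face-benefit increased with auditory noise for participants who showed one.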


Dysfunction of the magnocellular subdivision of the visual thalamus in developmental dyslexia

August 2024 · 65 Reads · 4 Citations · Brain

Developmental dyslexia (DD) is one of the most common learning disorders, affecting millions of children and adults worldwide. To date, scientific research has attempted to explain DD primarily based on pathophysiological alterations in the cerebral cortex. In contrast, several decades ago, pioneering research on five post-mortem human brains suggested that a core characteristic of DD might be morphological alterations in a specific subdivision of the visual thalamus – the magnocellular LGN (M-LGN). However, due to considerable technical challenges in investigating LGN subdivisions non-invasively in humans, this finding was never confirmed in vivo, and its relevance for DD pathology remained highly controversial. Here, we leveraged recent advances in high-resolution magnetic resonance imaging (MRI) at high field strength (7 Tesla) to investigate the M-LGN in DD in vivo. Using a case-control design, we acquired data from a large sample of young adults with DD (n = 26; age 28 ± 7 years; 13 females) and matched control participants (n = 28; age 27 ± 6 years; 15 females). Each participant completed a comprehensive diagnostic behavioral test battery and participated in two MRI sessions, including three functional MRI experiments and one structural MRI acquisition. We measured blood-oxygen-level-dependent responses and longitudinal relaxation rates to compare both groups on LGN subdivision function and myelination. Based on previous research, we hypothesized that the M-LGN is altered in DD and that these alterations are associated with a key DD diagnostic score, i.e., rapid letter and number naming (RANln). The results showed aberrant responses of the M-LGN in DD compared to controls, which were reflected in a different functional lateralization of this subdivision between groups. These alterations were associated with RANln performance, specifically in males with DD. We also found lateralization differences in the longitudinal relaxation rates of the M-LGN in DD relative to controls. Conversely, the other main subdivision of the LGN, the parvocellular LGN (P-LGN), showed comparable blood-oxygen-level-dependent responses and longitudinal relaxation rates between groups. The present study is the first to unequivocally show that M-LGN alterations are a hallmark of DD, affecting both the function and microstructure of this subdivision. It further provides a first functional interpretation of M-LGN alterations and a basis for a better understanding of sex-specific differences in DD, with implications for prospective diagnostic and treatment strategies.
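The abstract reports a different functional lateralization of the M-LGN between groups. The paper's exact index is not given here; a common convention is the lateralization index LI = (L − R)/(L + R), and the sketch below assumes that definition, with made-up amplitudes.

```python
import numpy as np

def lateralization_index(left, right):
    """LI = (L - R) / (L + R): +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = bilateral. This standard
    definition is an assumption; the paper may use another index."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return (left - right) / (left + right)

# Illustrative BOLD response amplitudes (arbitrary units), not data:
li_control = lateralization_index(left=1.2, right=1.0)
li_dd = lateralization_index(left=0.9, right=1.1)
print(li_control, li_dd)  # ~0.09 (left-lateralized) vs ~-0.10
```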


The Role of the Thalamus for Human Auditory and Visual Speech Perception

December 2023 · 8 Reads

The Cerebral Cortex and Thalamus is guided by two central and related tenets: that the thalamus plays an ongoing and essential role in cortical functioning, and that the cortex is essential for thalamic functioning. Accordingly, neither the cortex nor the thalamus can be understood in any meaningful way in the absence of the other. With chapters written by more than 100 leading experts in the field, The Cerebral Cortex and Thalamus provides a comprehensive account of the structure, function, development, and evolution of the circuitry interconnecting the thalamus and cortex, and of the consequences of pathology on these circuits.


Figure 2. Anatomical location of the subcortical ROIs in each participant. Each panel plots the location of each ROI projected from MNI to the structural space of the participant using the coregistration inverse transform.
Figure 3. Anatomical location of the cortical ROIs in each participant (pure tone experiment). Each panel plots the location of each ROI projected from MNI to the structural space of the participant using the coregistration inverse transform.
Figure 4. Anatomical location of the cortical ROIs in each participant (FM-sweeps experiment). Each panel plots the location of each ROI projected from MNI to the structural space of the participant using the coregistration inverse transform.
Figure 5. Schematics of the models used for Bayesian model comparison. Each panel plots a possible linear combination of the regressors used in each of the three models for each of the nine trial types (three deviant positions × three values of D) of the experiments. Panel A shows the stats model, panel B the task model, and panel C the combined model. Each colored line corresponds to one D value (red corresponds to the largest D, yellow to the lowest). The apparent delay between colored lines is a visualization device: there was no such delay in the model. Note that the relative height of the first standard (in comparison to the deviant) and the relative weight that D has in the responses to the deviants are free parameters of the model.
Amplitudes of the models used for Bayesian model comparison
Multiple Concurrent Predictions Inform Prediction Error in the Human Auditory Pathway

November 2023 · 38 Reads · 4 Citations · The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

The key assumption of the predictive coding framework is that internal representations are used to generate predictions about what the sensory input will look like in the immediate future. These predictions are tested against the actual input by so-called prediction error units, which encode the residuals of the predictions. What happens to prediction errors, however, if predictions drawn by different stages of the sensory hierarchy contradict each other? To answer this question, we conducted two fMRI experiments while male and female human participants listened to sequences of sounds: pure tones in the first experiment, frequency-modulated sweeps in the second. In both experiments, we used repetition to induce predictions based on stimulus statistics (stats-informed predictions), and abstract rules disclosed in the task instructions to induce an orthogonal set of predictions (task-informed predictions). We tested three alternative scenarios: neural responses in the auditory sensory pathway encode prediction error with respect to (1) the stats-informed predictions, (2) the task-informed predictions, or (3) a combination of both. Results showed that neural populations in all recorded regions (bilateral inferior colliculus, medial geniculate body, and primary and secondary auditory cortices) encode prediction error with respect to a combination of the two orthogonal sets of predictions. The findings suggest that predictive coding exploits the non-linear architecture of the auditory pathway for the transmission of predictions. Such non-linear transmission of predictions might be crucial for the predictive coding of complex auditory signals like speech. Significance Statement: Sensory systems exploit our subjective expectations to make sense of an overwhelming influx of sensory signals. It is still unclear how expectations at each stage of the processing pipeline are used to predict the representations at the other stages. The current view is that this transmission is hierarchical and linear. Here we measured fMRI responses in auditory cortex, thalamus, and midbrain while we induced two sets of mutually inconsistent expectations on the sensory input, each putatively encoded at a different stage. We show that responses at all stages are concurrently shaped by both sets of expectations. The results challenge the hypothesis that expectations are transmitted linearly and provide a normative explanation of the non-linear physiology of the corticofugal sensory system.
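The winning scenario, prediction error with respect to a combination of both prediction sets, can be sketched as a weighted sum of two residual terms. The absolute-value error form and the weight w below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def combined_prediction_error(input_val, stats_pred, task_pred, w=0.5):
    """Response proportional to a weighted combination of residuals
    against stats-informed and task-informed predictions. The
    absolute-value error form and the weight w are illustrative
    assumptions, not the paper's fitted parametrisation."""
    pe_stats = np.abs(input_val - stats_pred)
    pe_task = np.abs(input_val - task_pred)
    return w * pe_stats + (1 - w) * pe_task

# A sound that repetition predicts well but the task rule does not:
print(combined_prediction_error(input_val=1.0, stats_pred=1.0,
                                task_pred=0.2))  # 0.4 with w = 0.5
```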



Linear mixed-effects model effects of stimulation and task on response times
Linear mixed-effects model effects of stimulation and task on the pre-TMS and post-TMS response time ratios
Linear mixed-effects model effects of stimulation and task on mean accuracies
Inhibitory TMS over Visual Area V5/MT Disrupts Visual Speech Recognition

October 2023 · 93 Reads · 2 Citations · The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

During face-to-face communication, the perception and recognition of facial movements can facilitate an individual's understanding of what is said. Facial movements are a form of complex, biological motion. Separate neural pathways are thought to process (i) simple, non-biological motion, with an obligatory waypoint in the motion-sensitive middle temporal area (V5/MT), and (ii) complex, biological motion. Here, we present findings that challenge this dichotomy. Neuronavigated offline TMS over V5/MT in 24 participants (17 female and 7 male) led to increased response times in the recognition of simple, non-biological motion as well as in visual speech recognition, compared to TMS over the vertex, an active control region. TMS of area V5/MT also reduced the practice effects on response times that are typically observed in both visual speech and motion recognition tasks over time. Our findings provide a first indication that area V5/MT causally influences the recognition of visual speech. Significance Statement: In everyday face-to-face communication, speech comprehension is often facilitated by viewing a speaker's facial movements. Several brain areas contribute to the recognition of visual speech. One area of interest is the motion-sensitive middle temporal area V5/MT, which has been associated with the perception of simple, non-biological motion such as moving dots, as well as of more complex, biological motion such as visual speech. Here, we demonstrate using non-invasive brain stimulation that area V5/MT is causally relevant for recognizing visual speech. This finding provides new insights into the neural mechanisms that support the perception of human communication signals, which will help guide future research in typically developed individuals and populations with communication difficulties.
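The tables accompanying this entry report linear mixed-effects models of stimulation and task on response times and accuracy. A sketch of an analogous analysis with statsmodels, assuming long-format data with one row per trial and a random intercept per participant; the data frame below is synthetic and every column name and value is a stand-in, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format trial table: response time ('rt'),
# stimulation site, task, and participant ID. All values made up.
rng = np.random.default_rng(1)
n = 24 * 40  # 24 participants x 40 trials (illustrative)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(24), 40),
    "stimulation": rng.choice(["V5MT", "vertex"], size=n),
    "task": rng.choice(["visual_speech", "motion"], size=n),
})
# Build in a small simulated slowing after V5/MT stimulation.
df["rt"] = 600 + 30 * (df["stimulation"] == "V5MT") + rng.normal(0, 50, n)

# Fixed effects of stimulation, task, and their interaction;
# random intercept per participant (one common parametrisation).
model = smf.mixedlm("rt ~ stimulation * task", df, groups=df["subject"])
print(model.fit().summary())
```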



Pragmatic competence in native German adults with and without Developmental Dyslexia

January 2023 · 40 Reads · International Review of Pragmatics

Developmental Dyslexia (DD) is a life-long deficit in reading and spelling with unclear causes. DD negatively impacts many language skills. Relatively little is known about whether skills of pragmatic competence are compromised in individuals with DD. Here, we assess DD symptomatology in a group of native German dyslexic adults. We first test for the presence of DD subtypes along the dimensions of phonological awareness and naming speed, two key deficits in DD. We then assess pragmatic competence in adults with DD compared to control participants without DD. We found that a subclassification of DD according to phonological awareness and naming speed only partially applies and that dyslexic participants show a lower pragmatic competence than control participants.


Citations (59)


... Functional magnetic resonance imaging (fMRI) is the most popular noninvasive method for probing macroscopic network-related brain activity. While studies of the human subcortical auditory system are somewhat limited, previous task-based fMRI research has functionally localized the subcortical auditory structures (Sitek et al., 2019), identified the tonotopic frequency mappings within the auditory midbrain and thalamus (De Martino et al., 2013; Moerel et al., 2015; Ress & Chandrasekaran, 2013), separated top-down and bottom-up speech-selective subregions of auditory thalamus (Mihai et al., 2019; Tabas et al., 2021), and recorded level-dependent BOLD signals throughout the auditory pathway (Hawley et al., 2005; Sigalovsky & Melcher, 2006). ...

Reference:

Functional connectivity across the human subcortical auditory system using an autoregressive matrix-Gaussian copula graphical model approach with partial correlations
Fast frequency modulation is encoded according to the listener expectations in the human subcortical auditory pathway

... Such changes had already been shown in dyslexic brains post mortem [23]. Now high-field-strength magnetic resonance imaging (MRI) has confirmed that the LGN M-cell layers are significantly thinner in the left LGN of live dyslexics, particularly in males, although a similar pattern was seen in females [24]. ...

Reference:

Visual Dyslexia
Dysfunction of the magnocellular subdivision of the visual thalamus in developmental dyslexia

Brain

... Moreover, this deficit may be more specific and sensitive than other behavioural indicators, because a deficit in interference inhibition is a typical feature of ADHD and detects ADHD far better than indicators such as working memory, arithmetic, or planning ability [18,19]. In addition, different developmental disorders perform differently in speech recognition tasks: for example, autistic children have greater difficulty processing complex speech signals [20,21], while children with language disorders perform worse on basic auditory processing tasks [22,23]. Figure 1: Difficult speech-on-speech recognition as an objective indicator for ADHD [16]. ...

Responses in left inferior frontal gyrus are altered for speech‐in‐noise processing, but not for clear speech in autism

... Electrical stimulation is widely used for cognitive neurostimulation; it entails applying sustained electrical impulses to alter brain cell activation. This strategy encompasses several techniques, each designed to target specific parts of the brain while offering various cognitive effects [52]. TMS uses rapidly varying magnetic fields to induce electrical impulses in the designated brain regions. ...

Enriched Learning: Behavior, Brain, and Computation

Trends in Cognitive Sciences

... The encoding of prediction error to fast dynamic stimuli has been robustly demonstrated in the auditory cortex (Blank & Davis, 2016; Blank et al., 2018; Hovsepyan et al., 2020; Signoret et al., 2020; Sohoglu & Davis, 2020; Stein et al., 2022; Vidal et al., 2019; Ylinen et al., 2016). However, anatomical and physiological properties make the subcortical auditory pathway very well suited to test hypotheses on fast dynamic sounds (Giraud et al., 2000; Osman et al., 2018; von Kriegstein et al., 2008): neural populations in the auditory midbrain (inferior colliculus; IC) and thalamus (medial geniculate body; MGB) are endowed with much shorter time constants and faster access to acoustic information than neural populations in the cerebral cortex (Steadman & Sumner, 2018). ...

Predictive encoding of pure tones and FM-sweeps in the human auditory cortex

Cerebral Cortex Communications

... Males are diagnosed with dyslexia more frequently than females (Quinn and Wagner, 2015) due to males' lower mean and more variable reading performance (Arnett et al., 2017). Emerging research suggests potential sex differences in the neural basis of dyslexia (Altarelli et al., 2014; Evans et al., 2014; Müller-Axt et al., 2022), which may lead to variations in cognitive deficits. While the origins of these differences remain unknown, some theories implicate female sex hormones in protecting against disruptions in brain development (Geschwind and Galaburda, 1985). ...

Dysfunction of the Visual Sensory Thalamus in Developmental Dyslexia

... While verbal learning ability can be facilitated by teaching within the visual modality condition, adding auditory information can be counter-effective (Constantinidou and Baker 2002). Recruitment of kinaesthetics seems to support cognitive processes when learning new complex tasks (Geary 2008; Paas and Sweller 2012; Damsgaard et al. 2022; Mathias et al. 2022; Andrä et al. 2020), but high bodily engagement has been linked both to learning gains and to a risk of cognitive overload (e.g., Ruiter et al. 2015). ...

Twelve- and Fourteen-Year-Old School Children Differentially Benefit from Sensorimotor- and Multisensory-Enriched Vocabulary Training

Educational Psychology Review

... Furthermore, in the same study within the autistic group, the P1 amplitudes in quiet and in noise were not significantly different from each other, consistent with other published evidence of less efficient speech sound processing in autistic individuals even under optimal listening conditions. Recent neuroimaging speech-in-noise studies in autistic adolescents and adults also noted subtle differences from the neurotypical comparison groups in the activation of several cortical and subcortical regions involved in speech processing (Hernandez et al. 2020; Schelinski et al. 2022; Schelinski & von Kriegstein 2023). Those findings suggested reduced efficiency of acoustic feature processing or increased reliance on top-down compensatory processes to increase sensitivity to speech presented in background noise. ...

Altered processing of communication signals in the subcortical auditory sensory pathway in autism

... In the third experiment, ASD and control participants performed speech recognition tasks on speech that was presented either with or without noise (speech-in-noise recognition experiment; Figure 1c). For the voice-identity recognition and the speech-in-noise recognition experiments, we recently showed dysfunctional processing of voice identity and speech-in-noise in the cerebral cortex (Schelinski et al., 2016; Schelinski & von Kriegstein, 2021), whereas processing in the cerebral cortex while passively listening to vocal sounds was on a neurotypical level (Schelinski et al., 2016). ...

Responses in left inferior frontal gyrus are altered for speech-in-noise processing, but not for clear speech in autism
  • Citing Preprint
  • September 2021

... BOLD fMRI studies that aim to measure activity within the MGN and LGN hinge upon accurate localization of these structures. However, identifying regions of interest (ROIs) within the thalamus that contain BOLD activation patterns specific to auditory and visual perception using standard atlases or segmentation techniques is hindered by the relatively small size of these nuclei (e.g., estimates of approximately 120 mm³ and 60 mm³, respectively, for the LGN (Muller-Axt et al., 2021) and MGN (Garcia-Gomar et al., 2019)), and by a lack of distinct anatomical landmarks in the posterior thalamus identifiable with T1-weighted (T1w) and T2-weighted (T2w) imaging sequences, which causes segmentation algorithms to rely heavily on priors (relative to individual-specific anatomical information). Moreover, individual variability in the location and size of the nuclei (Andrews et al., 1997; Garcia-Gomar et al., 2019; Giraldo-Chica & Schneider, 2018; Kiwitz et al., 2022; Rademacher et al., 2002) is exacerbated by both the difficulty of precise spatial normalization to standardized spaces and the resulting imprecision in the coregistration between anatomical structural images and echo planar images (EPIs) measuring the hemodynamic BOLD signal. ...

Mapping the human lateral geniculate nucleus and its cytoarchitectonic subdivisions using quantitative MRI
  • Citing Article
  • September 2021

NeuroImage