Christian Brodbeck

Verified
Christian verified their affiliation via an institutional email.
  • PhD
  • Assistant Professor at McMaster University

About

50 Publications
27,922 Reads
6,037 Citations
Introduction
I study the neural basis of language and speech processing. I am particularly interested in how listeners construct meaning from continuous speech. The main tools I use are MEG and EEG together with reverse correlation.
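
In practice, "reverse correlation" with continuous speech usually means estimating a temporal response function (TRF): a linear filter mapping a stimulus feature, such as the acoustic envelope, to the recorded M/EEG signal. The following is a minimal sketch with simulated data using MNE-Python's ReceptiveField estimator; the envelope, response kernel, and regularization value are illustrative, not taken from any particular study.

# Estimate a TRF from a simulated stimulus feature to a simulated response.
import numpy as np
from mne.decoding import ReceptiveField

sfreq = 100.0                               # sampling rate in Hz
n_times = 10000                             # 100 s of simulated data
rng = np.random.default_rng(0)

envelope = rng.standard_normal((n_times, 1))        # stand-in for a speech envelope
true_trf = np.hanning(30)                           # hypothetical 300 ms response kernel
response = np.convolve(envelope[:, 0], true_trf)[:n_times]
response = (response + rng.standard_normal(n_times))[:, None]   # add sensor noise

# Fit a ridge-regularized TRF over stimulus-to-response lags of 0-400 ms
rf = ReceptiveField(tmin=0.0, tmax=0.4, sfreq=sfreq,
                    feature_names=['envelope'], estimator=1.0, scoring='corrcoef')
rf.fit(envelope, response)
print(rf.coef_.shape)                       # (n_outputs, n_features, n_lags)

The estimated filter in rf.coef_ approximates the kernel used to generate the data; with real recordings, such models are usually evaluated by cross-validated prediction accuracy rather than by inspecting the kernel directly.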
Current institution: McMaster University
Current position: Assistant Professor

Publications (50)
Article
When we listen to speech, our brain’s neurophysiological responses “track” its acoustic features, but it is less well understood how these auditory responses are enhanced by linguistic content. Here, we recorded magnetoencephalography responses while subjects of both sexes listened to four types of continuous speechlike passages: speech envelope–mo...
Preprint
Full-text available
Human speech recognition transforms a continuous acoustic signal into categorical linguistic units, by aggregating information that is distributed in time. It has been suggested that this kind of information processing may be understood through the computations of a Recurrent Neural Network (RNN) that receives input frame by frame, linearly in time...
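
As a toy illustration of the frame-by-frame computation this abstract alludes to, the sketch below runs a randomly initialized recurrent network over simulated spectrogram frames, one frame at a time; all dimensions, weights, and names are hypothetical and the network is untrained.

import numpy as np

rng = np.random.default_rng(0)
n_mel, n_hidden, n_units = 40, 64, 30           # illustrative sizes
W_in = 0.1 * rng.standard_normal((n_hidden, n_mel))
W_rec = 0.1 * rng.standard_normal((n_hidden, n_hidden))
W_out = 0.1 * rng.standard_normal((n_units, n_hidden))

def step(h, frame):
    # One time step: fold the new acoustic frame into the recurrent state
    return np.tanh(W_in @ frame + W_rec @ h)

frames = rng.standard_normal((100, n_mel))      # 100 simulated spectrogram frames
h = np.zeros(n_hidden)
for frame in frames:                            # input arrives linearly in time
    h = step(h, frame)                          # state aggregates distributed information
logits = W_out @ h                              # read out categorical linguistic units
print(logits.argmax())                          # index of the most active unit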
Preprint
Full-text available
When we listen to speech, our brain's neurophysiological responses "track" its acoustic features, but it is less well understood how these auditory responses are modulated by linguistic content. Here, we recorded magnetoencephalography (MEG) responses while subjects listened to four types of continuous-speech-like passages: speech-envelope modulate...
Article
Full-text available
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdepen...
Article
Learning to process speech in a foreign language involves learning new representations for mapping the auditory signal to linguistic structure. Behavioral experiments suggest that even listeners that are highly proficient in a non-native language experience interference from representations of their native language. However, much of the evidence fo...
Article
Superior temporal and inferior frontal/parietal networks are both well established as contributing to recognition of speech in a background of competing speech, but their relative contributions remain unclear. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information re...
Preprint
Full-text available
Learning to process speech in a foreign language involves learning new representations for mapping the auditory signal to linguistic structure. Behavioral experiments suggest that even listeners that are highly proficient in a non-native language experience interference from representations of their native language. However, much of the evidence fo...
Article
Full-text available
Speech processing often occurs amidst competing inputs from other modalities, e.g., listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. W...
Preprint
Full-text available
Speech processing often occurs amidst competing inputs from other modalities, e.g., listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. W...
Article
Full-text available
Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a les...
Article
Full-text available
Voice pitch carries linguistic and non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams have pitch differing in fundamental f...
Article
Full-text available
Stroke patients with hemiparesis display decreased beta band (13–25 Hz) rolandic activity, correlating to impaired motor function. However, clinically, patients without significant weakness, with small lesions far from sensorimotor cortex, exhibit bilateral decreased motor dexterity and slowed reaction times. We investigate whether these minor stro...
Article
When listening to degraded speech, listeners can use high-level semantic information to support recognition. The literature contains conflicting findings regarding older listeners' ability to benefit from semantic cues in recognizing speech, relative to younger listeners. Electrophysiologic (EEG) measures of lexical access (N400) often show that se...
Article
Full-text available
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic...
Preprint
Full-text available
Voice pitch carries linguistic as well as non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams have pitch differing in fundam...
Article
Full-text available
When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic repre...
Preprint
Full-text available
Objective: Stroke patients with hemiparesis display decreased beta band (13-25 Hz) rolandic activity, correlating to impaired motor function. However, patients without significant weakness, with small lesions far from sensorimotor cortex, nevertheless exhibit bilateral decreased motor dexterity and slowed reaction times. We investigate whether thes...
Preprint
Full-text available
Speech input is often understood to trigger rapid and automatic activation of successively higher-level representations for comprehension of words. Here we show evidence from magnetoencephalography that incremental processing of speech input is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified...
Preprint
Full-text available
HIGHLIGHTS
  • Twenty-five listeners complete a 3-AFC speech recognition task with two simultaneous talkers during fMRI scanning; the two talkers utter identical phrases ("Unison" condition) or phrases that differ in terms of several key words ("Competing" condition)
  • A spectrotemporal modulation filtering procedure is used together with spectrotemp...
Preprint
Full-text available
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have different, interdepende...
Preprint
Full-text available
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic...
Preprint
Full-text available
Agrammatic aphasia is an acquired language disorder characterized by slow, non-fluent speech that includes primarily content words. It is well documented that people with agrammatism (PWA) have difficulty with production of verbs and verb morphology, but it is unknown whether these deficits occur at the single-word level, or are the result of a sent...
Article
Full-text available
Pervasive behavioral and neural evidence for predictive processing has led to claims that language processing depends upon predictive coding. Formally, predictive coding is a computational mechanism where only deviations from top-down expectations are passed between levels of representation. In many cognitive neuroscience studies, a reduction of si...
Preprint
When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic repre...
Article
Full-text available
Significance: Patients with small infarcts often demonstrate a poststroke acute dysexecutive syndrome resulting in failure to successfully re-integrate into society. The mechanism is poorly understood given that lesions are small and do not typically involve areas classically associated with cognitive decline. This knowledge gap makes designing trea...
Article
Full-text available
Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the a...
Article
Full-text available
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-loc...
Article
Speech processing in the human brain is grounded in non-specific auditory processing in the general mammalian brain, but relies on human-specific adaptations for processing speech and language. For this reason, many recent neurophysiological investigations of speech processing have turned to the human brain, with an emphasis on continuous speech. S...
Article
Full-text available
Characterizing the neural dynamics underlying sensory processing is one of the central areas of investigation in systems and cognitive neuroscience. Neuroimaging techniques such as magnetoencephalography (MEG) and electroencephalography (EEG) have provided significant insights into the neural processing of continuous stimuli, such as speech, thanks...
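
A widely used estimator for this kind of continuous-stimulus analysis is the boosting algorithm (David, Mesgarani & Shamma, 2007), which builds a sparse temporal response function through greedy coordinate-wise updates. The sketch below is a deliberately simplified NumPy version of the core idea; it omits the cross-validated early stopping used in full implementations, and all names and parameters are illustrative.

import numpy as np

def boosting_trf(stimulus, response, n_lags, delta=0.01, n_iter=1000):
    # Lagged design matrix: column k is the stimulus delayed by k samples
    n = len(response)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    h = np.zeros(n_lags)                        # the TRF, built up incrementally
    residual = response.astype(float)
    err = np.sum(residual ** 2)
    for _ in range(n_iter):
        # Try +/- delta on every coefficient; keep the single best improvement
        best_err, best_k, best_step = err, None, 0.0
        for k in range(n_lags):
            for s in (delta, -delta):
                e = np.sum((residual - s * X[:, k]) ** 2)
                if e < best_err:
                    best_err, best_k, best_step = e, k, s
        if best_k is None:                      # no step improves the fit: stop
            break
        h[best_k] += best_step
        residual -= best_step * X[:, best_k]
        err = best_err
    return h

# Toy check: recover a known kernel from noisy simulated data
rng = np.random.default_rng(1)
stim = rng.standard_normal(2000)
kernel = 0.5 * np.hanning(20)
resp = np.convolve(stim, kernel)[:2000] + 0.1 * rng.standard_normal(2000)
print(np.round(boosting_trf(stim, resp, n_lags=20), 2))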
Preprint
Full-text available
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-loc...
Preprint
Full-text available
Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of multiple speech sources, even in the absence of binaural cues. Previous research on the neural representations underlying this ability suggests that the auditory cortex primarily represents only the unsegregated acoustic mixture in its early responses, and then...
Preprint
Full-text available
Characterizing the neural dynamics underlying sensory processing is one of the central areas of investigation in systems and cognitive neuroscience. Neuroimaging techniques such as magnetoencephalography (MEG) and electroencephalography (EEG) have provided significant insights into the neural processing of continuous stimuli, such as speech, thanks...
Article
During speech perception, a central task of the auditory cortex is to analyze complex acoustic patterns to allow detection of the words that encode a linguistic message [1]. It is generally thought that this process includes at least one intermediate, phonetic, level of representations [2–6], localized bilaterally in the superior temporal lobe [7–9...
Conference Paper
Full-text available
The magnetoencephalography (MEG) response to continuous auditory stimuli, such as speech, is commonly described using a linear filter, the auditory temporal response function (TRF). Though components of the sensor level TRFs have been well characterized, the underlying neural sources responsible for these components are not well understood. In this...
Article
Full-text available
Humans have a striking capacity to combine words into sentences that express new meanings. Previous research has identified key brain regions involved in this capacity, but little is known about the time course of activity in these regions, as hemodynamic methods such as fMRI provide little insight into temporal dynamics of neural activation. We pe...
Article
Full-text available
Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the envelope of the acoustic signal more robustly. Here we investigate this puzzle by using magnetoencephalography (MEG) source localization to determine the anat...
Preprint
Full-text available
During speech perception, a central task of the auditory cortex is to analyze complex acoustic patterns to allow detection of the words that encode a linguistic message. It is generally thought that this process includes at least one intermediate, phonetic, level of representations, localized bilaterally in the superior temporal lobe. While recent...
Article
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data by comb...
Preprint
Summary: Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the acoustic signal more robustly. Here we investigate this puzzle by using magnetoencephalography (MEG) source localization to determine the anatomica...
Preprint
Speech communication in daily listening environments is complicated by the phenomenon of reverberation, wherein any sound reaching the ear is a mixture of the direct component from the source and multiple reflections off surrounding objects and the environment. The brain plays a central role in comprehending speech accompanied by such distortion, w...
Preprint
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data by comb...
Article
Full-text available
Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract...
Article
A critical component of comprehending language in context is identifying the entities that individual linguistic expressions refer to. While previous research has shown that language comprehenders resolve reference quickly and incrementally, little is currently known about the neural basis of successful reference resolution. Using source localized...
Article
Full-text available
Previous research has shown that language comprehenders resolve reference quickly and incrementally, but not much is known about the neural processes and representations that are involved. Studies of visual short-term memory suggest that access to the representation of an item from a previously seen display is associated with a negative evoked pote...
Article
Full-text available
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE soft...
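
To make such a pipeline concrete, here is a condensed source-localization sketch following MNE-Python's own sample-data tutorial; the file names are those of MNE's bundled sample dataset, and the steps (epoching, covariance estimation, minimum-norm inverse) are heavily simplified relative to a real analysis.

import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()                  # downloads the dataset on first use
meg_dir = data_path / 'MEG' / 'sample'
raw = mne.io.read_raw_fif(meg_dir / 'sample_audvis_raw.fif')
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id=1,    # left-auditory trials
                    tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)
evoked = epochs.average()

# Forward model shipped with the dataset; noise covariance from the baseline period
fwd = mne.read_forward_solution(meg_dir / 'sample_audvis-meg-oct-6-fwd.fif')
cov = mne.compute_covariance(epochs, tmax=0.0)
inv = make_inverse_operator(evoked.info, fwd, cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='dSPM')
print(stc.data.shape)                           # (n_sources, n_times)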
Data
Time course of the skin conductance response for all conditions of Session 1. A) The plot indicates the mean ±2 SEM of a moving window average (1 s Blackman window), and is based on the first half of the trials. The blue vertical lines indicate the time window of 1.5–5 s after cue onset used for the statistical analyses. B) Skin conductance respons...
Article
Full-text available
People show autonomic responses when they empathize with the suffering of another person. However, little is known about how these autonomic changes are related to prosocial behavior. We measured skin conductance responses (SCRs) and affect ratings in participants while either receiving painful stimulation themselves, or observing pain being inflic...
