Lars Meyer
PhD, Linguistics
Max Planck Institute for Human Cognitive and Brain Sciences (CBS) · MPRG Language Cycles
About
72 Publications
13,715 Reads
2,087 Citations
Introduction
I am a cognitive neuroscientist interested in the electrophysiology of language. Specifically, I investigate the role of periodic activity—so-called neural oscillations—in comprehension. I employ neuroimaging (e.g., M/EEG, f/d/sMRI) on developmental, healthy, aging, and clinical populations. More recently, I combine these with corpus analysis and some rudimentary NLP.
Additional affiliations
October 2021 - present
January 2019 - present
Publications (72)
The human brain tracks regularities in the environment and extrapolates these to predict future events. Prior work on music cognition suggests that low‐frequency (1–8 Hz) brain activity encodes melodic predictions beyond the stimulus acoustics. Building on this work, we aimed to disentangle the frequency‐specific neural dynamics linked to melodic p...
Language comprehension involves the grouping of words into larger multiword chunks. This is required to recode information into sparser representations to mitigate memory limitations and counteract forgetting. It has been suggested that electrophysiological processing time windows constrain the formation of these units. Specifically, the period of...
Currently, the field of neurobiology of language is based on data from only a few Indo-European languages. The majority of this data comes from younger adults, neglecting other age groups. Here we present a multimodal database which consists of task-based and resting state fMRI, structural MRI, and EEG data while participants over 65 years old liste...
In this chapter, we discuss research from behavior, event-related brain potentials, and neural oscillations which suggests that cognitive and neural constraints affect the timing of speech processing and language comprehension. Some of these constraints may even manifest as rhythmic patterns in linguistic behavior. We discuss two types of constrain...
Language is rooted in our ability to compose: We link words together, fusing their meanings. Links are not limited to neighboring words but often span intervening words. The ability to process these non-adjacent dependencies (NADs) conflicts with the brain’s sampling of speech: We consume speech in chunks that are limited in time, containing only a...
Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we sh...
Statistical learning is the ability to extract and retain statistical regularities from the environment. In language, extracting statistical regularities—so-called transitional probabilities, TPs—is crucial for segmenting speech and learning new words. To investigate whether neural activity synchronizes with these statistical patterns, so-called ne...
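As a concrete illustration of the transitional probabilities mentioned above, here is a minimal sketch of how forward TPs can be estimated from a syllable stream. The three-syllable "words" and the stream are made up for illustration; they are not the stimuli from the study.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability TP(x -> y) = count(x, y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Toy stream: three made-up trisyllabic words concatenated in random order.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syllable for _ in range(200) for syllable in random.choice(words)]

# Within-word TPs come out near 1.0, while TPs across word boundaries come
# out near 0.33: the statistical cue that supports speech segmentation.
for pair, tp in sorted(transitional_probabilities(stream).items()):
    print(pair, round(tp, 2))
```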
Prediction of upcoming words is thought to be crucial for language comprehension. Here, we are asking whether bilingualism entails changes to the electrophysiological substrates of prediction. Prior findings leave it open whether monolingual and bilingual speakers predict upcoming words to the same extent and in the same manner. We address this iss...
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept sets the root of compositional meaning and hence understanding. One important cue for segmentation in natural speech is pro...
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we co...
The late development of fast brain activity in infancy restricts initial processing abilities to slow information. Nevertheless, infants acquire the short-lived speech sounds of their native language during their first year of life. Here, we trace the early buildup of the infant phoneme inventory with naturalistic electroencephalogram. We apply the...
Memory is fleeting. To avoid information loss, humans need to recode verbal stimuli into chunks of limited duration, each containing multiple words. Chunk duration may also be limited neurally by the wavelength of periodic brain activity, so‑called neural oscillations. While both cognitive and neural constraints predict some degree of behavioral re...
During daily communication, visual cues such as gestures accompany the speech signal and facilitate semantic processing. However, how gestures impact lexical retrieval and semantic prediction, especially in a naturalistic setting, remains unclear. Here, participants watched a naturalistic multimodal narrative, where an actor narrated a story and sp...
Autism is a neurodevelopmental condition that has been related to an overall imbalance between the brain's excitatory (E) and inhibitory (I) systems. Such an EI imbalance can lead to structural and functional cortical deviances and thus alter information processing in the brain, ultimately giving rise to autism traits. However, the developmental tr...
It has long been known that human breathing is altered during listening and speaking compared to rest: during speaking, inhalation depth is adjusted to the air volume required for the upcoming utterance. During listening, inhalation is temporally aligned to inhalation of the speaker. While evidence for the former is relatively strong, it is virtual...
Phonological developmental speech sound disorders (pDSSD) in childhood are often associated with later difficulties in literacy acquisition. The present study is a follow-up of the randomized controlled trial (RCT) on the effectiveness of PhonoSens, a treatment for pDSSD that focuses on improving auditory self-monitoring skills and categorial perce...
Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coher...
Infants master temporal patterns of their native language at a developmental trajectory from slow to fast: Shortly after birth, they recognize the slow acoustic modulations specific to their native language before tuning into faster language-specific patterns between 6 and 12 months of age. We propose here that this trajectory is constrained by neu...
During language acquisition, infant speech perception becomes selective for the speech sounds of their native language. It is unclear how infants can infer native speech sounds given a critical neurobiological limitation: The infant brain is too slow to match the rate of speech sounds, or phonemes, in natural speech. Infant brains are characterized ex...
Infants master temporal patterns of their native language at a developmental trajectory from slow to fast: Shortly after birth, they recognize the slow acoustic modulations specific to their native language before tuning into faster language-specific patterns between 6 and 12 months of age. We here propose that this trajectory is constrained b...
It has long been known that human breathing is altered during listening and speaking compared to rest. Theoretical models of human communication suggest two distinct phenomena during speaking and listening: During speaking, inhalation depth is adjusted to the air volume required for the upcoming utterance. During listening, inhalation is temporally...
Speech processing is subserved by neural oscillations. Through a mechanism termed entrainment, oscillations can maintain speech rhythms beyond speech offset. We here tested whether entrainment affects higher-level language comprehension. We conducted four online experiments on 80 participants each. Our paradigm combined acoustic entrainment to repe...
Infants prefer to be addressed with infant-directed speech (IDS). IDS benefits language acquisition through amplified low-frequency amplitude modulations. It has been reported that this amplification increases electrophysiological tracking of IDS compared to adult-directed speech (ADS). It is still unknown which particular frequency band triggers t...
Speech is transient. To comprehend entire sentences, segments consisting of multiple words need to be memorized for at least a while. However, it has been noted previously that we struggle to memorize segments longer than approximately 2.7 s. We hypothesized that electrophysiological processing cycles within the delta band (<4 Hz) underlie this tim...
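The link between the delta band and the ~2.7 s limit rests on the reciprocal relation between frequency and period, T = 1/f. A quick sanity check on the numbers from the abstract (the 2.7 s figure is from the text; the sampled band edges are conventional):

```python
# The period of an oscillation is the reciprocal of its frequency: T = 1 / f.
# The abstract defines the delta band as < 4 Hz.
for f_hz in (0.5, 1.0, 2.0, 4.0):
    print(f"{f_hz:4.1f} Hz -> period {1 / f_hz:.2f} s")

# Conversely, a ~2.7 s memorization limit corresponds to a delta-range
# cycle frequency of about 1 / 2.7, i.e., roughly 0.37 Hz.
print(f"2.7 s <-> {1 / 2.7:.2f} Hz")
```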
Deficits in language production and comprehension are characteristic of schizophrenia. To date, it remains unclear whether these deficits arise from dysfunctional linguistic knowledge, or dysfunctional predictions derived from the linguistic context. Alternatively, the deficits could be a result of dysfunctional neural tracking of auditory informat...
Could meaning be read from acoustics, or from the refraction rate of pyramidal cells innervated by the cochlea, everyone would be an omniglot. Speech does not contain sufficient acoustic cues to identify linguistic units such as morphemes, words, and phrases without prior knowledge. Our target article (Meyer, L., Sun, Y., & Martin, A. E. (2019). Sy...
At the point of an erroneous chunking decision—about a second before the P600 indicates reanalysis of an ambiguous sentence—a delta-band phase shift occurs. Critically, this happens only when the to-be-chunked word sequence is too long to increase further in verbal working memory. This is a further piece of evidence that delta-band oscillations are...
Prosody can be entrained to affect subsequent sentence processing, such that the duration of an upcoming sentence appears to be anticipated beforehand. After all, intonation may not only help in the bottom-up segmentation of sentences, but also in the prediction of the duration of upcoming segments.
We present the Le Petit Prince Corpus (LPPC), a multilingual resource for research in (computational) psycho- and neurolinguistics. The corpus consists of the children's story The Little Prince in 26 languages. The dataset is in the process of being built using state-of-the-art methods for speech and language processing and electroencephalography (E...
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanism remains contested. While sensory–acoustic models derive predictions based on the short-term auditory input alone, cognitive models assume the use of abstract knowledge of music structure acquired over the long-term. To evaluate these two contrasting mechanism...
Research on speech processing is often focused on a phenomenon termed "entrainment", whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed to a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g. syntactic structures) has been observed. Entrainmen...
Developmental dyslexia (DD) impairs reading and writing acquisition in 5–10% of children, compromising schooling, academic success, and everyday adult life. DD associates with reduced phonological skills, evident from a reduced auditory Mismatch Negativity (MMN) in the electroencephalogram (EEG). It was argued that such phonological deficits are se...
Listening to music often evokes intense emotions [1, 2]. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected [3]. Central to this view is the engagement of the nucleus accumbens, a brain region that processes reward expectations, to pleasurable m...
When sentence processing taxes verbal working memory, comprehension difficulties arise. This is specifically the case when processing resources decline with advancing adult age. Such decline likely affects the encoding of sentences into working memory, which constitutes the basis for successful comprehension. To assess age differences in encoding-r...
Verbal working memory-intensive sentence processing declines with age. This might reflect older adults’ difficulties with reducing the memory load by grouping single words into multiword chunks. Here we used a serial order task emphasizing syntactic and semantic relations. We evaluated the extent to which older compared with younger adults may diff...
Communication is an inferential process. In particular, language comprehension constantly requires top-down efforts, as often multiple interpretations are compatible with a given sentence. To assess top-down processing in the language domain, our experiment employed ambiguous sentences that allow for multiple interpretations (e.g., The client sued...
Sentence comprehension requires the encoding of phrases and their relationships into working memory. To date, despite the importance of neural oscillations in language comprehension, the neural-oscillatory dynamics of sentence encoding are only sparsely understood. Although oscillations in a wide range of frequency bands have been reported both for...
Complex auditory sequences known as music have often been described as hierarchically structured. This permits the existence of non-local dependencies, which relate elements of a sequence beyond their temporal sequential order. Previous studies in music have reported differential activity in the inferior frontal gyrus (IFG) when comparing regular a...
In auditory neuroscience, electrophysiological synchronization to low-level acoustic and high-level linguistic features is well established-but its functional purpose for verbal information transmission is unclear. Based on prior evidence for a dependence of auditory task performance on delta-band oscillatory phase, we hypothesized that the synchro...
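For readers unfamiliar with the notion of delta-band oscillatory phase, below is a minimal sketch of one common way to extract it: band-pass filter the signal, then take the angle of the analytic (Hilbert) signal. The synthetic trace, sampling rate, and band edges are made up for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_phase(signal, fs, band=(0.5, 4.0)):
    """Instantaneous delta-band phase: band-pass filter, then take the
    angle of the analytic (Hilbert) signal."""
    b, a = butter(3, band, btype="band", fs=fs)
    return np.angle(hilbert(filtfilt(b, a, signal)))

# Synthetic EEG-like trace: a 1 Hz delta component buried in noise.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 1.0 * t) + rng.normal(scale=1.0, size=t.size)

phase = delta_phase(eeg, fs)
# One could now bin task performance by the phase at stimulus onset.
print(phase[:5])
```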
The cognitive functionality of neural oscillations is still highly debated, as different functions have been associated with identical frequency ranges. Theta band oscillations, for instance, were proposed to underlie both language comprehension and domain-general cognitive abilities. Here we show that the ageing brain can provide an answer to the o...
Storage and reordering of words are two core processes required for successful sentence comprehension. Storage is necessary whenever the verb and its arguments (i.e., subject and object) are separated by a long distance, while reordering is necessary whenever the argument order is atypical (e.g., object-first order in German, where subject-first or...
Neural oscillations subserve a broad range of functions in speech processing and language comprehension. On the one hand, speech contains—somewhat—repetitive trains of air pressure bursts that occur at three dominant amplitude modulation frequencies, physically marking the linguistically meaningful progressions of phonemes, syllables, and intonatio...
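To make the notion of dominant amplitude-modulation frequencies concrete, here is a minimal sketch of one standard way to estimate a modulation spectrum: extract the amplitude envelope via the Hilbert transform, then take its Fourier spectrum. The synthetic signal stands in for real speech, with modulation rates chosen near intonation-like (1 Hz) and syllable-like (4 Hz) values.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 s of signal

# Synthetic stand-in for speech: a carrier whose amplitude is modulated
# at an intonation-like (1 Hz) and a syllable-like (4 Hz) rate.
envelope_true = 1 + 0.5 * np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
signal = envelope_true * np.sin(2 * np.pi * 500 * t)

# Amplitude envelope via the analytic signal (Hilbert transform).
envelope = np.abs(hilbert(signal))

# Modulation spectrum: Fourier spectrum of the mean-removed envelope.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)

# Report the two dominant modulation frequencies below 20 Hz.
low = freqs < 20
peaks = freqs[low][np.argsort(spectrum[low])[-2:]]
print("dominant modulation frequencies:", np.round(np.sort(peaks), 1), "Hz")
```

On real recordings, one would typically band-limit the envelope and average over utterances, but the essentials are the same.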
Sentences are easier to remember than random word sequences, likely because linguistic regularities facilitate chunking of words into meaningful groups. The present electroencephalography study investigated the neural oscillations modulated by this so-called sentence superiority effect during the encoding and maintenance of sentence fragments versu...
The understanding of neuroplasticity following stroke is predominantly based on neuroimaging measures that cannot address the subsecond neurodynamics of impaired language processing. We combined behavioral and electrophysiological measures and structural-connectivity estimates to characterize neuroplasticity underlying successful compensation of la...
Storage and reordering of incoming information are two core processes required for successful sentence comprehension. Storage is necessary whenever the verb and its arguments (i.e., subject and object) are separated over a long distance, while reordering is necessary whenever the argument order is atypical (e.g., object-first order in German, where...
The functional neuroanatomy of sentence processing is one of the most classical topics of cognitive neuropsychology of speech and language processing. We first outline the cognitive processes involved in the processing of complex sentences with noncanonical and embedded sentence structures, for which cross-linguistic psycholinguistic research has r...
Language comprehension requires that single words be grouped into syntactic phrases, as words in sentences are too many to memorize individually. In speech, acoustic and syntactic grouping patterns mostly align. However, when ambiguous sentences allow for alternative grouping patterns, comprehenders may form phrases that contradict speech prosody....
Our understanding of neuroplasticity following stroke is predominantly based on neuroimaging measures that cannot address the subsecond neurodynamics of impaired language processing. We combined for the first time behavioral and electrophysiological measures and structural-connectivity estimates to characterize neuroplasticity underlying successfu...
Unlike other aspects of language comprehension, the ability to process complex sentences develops rather late in life. Brain maturation as well as verbal working memory (vWM) expansion have been discussed as possible reasons. To determine the factors contributing to this functional development, we assessed three aspects in different age groups (5–6...
Prior structural imaging studies found initial evidence for the link between structural gray matter changes and the development of language performance in children. However, previous studies generally only focused on sentence comprehension. Therefore, little is known about the relationship between structural properties of brain regions relevant to...
Language comes in utterances in which words are bound together according to a simple rule-based syntactic computation (merge), which creates linguistic hierarchies of potentially infinite length—phrases and sentences. In the current functional magnetic resonance imaging study, we compared prepositional phrases and sentences—both involving merge—to...
Successful working-memory retrieval requires that items be retained as distinct units. At the neural level, it has been shown that theta-band oscillatory power increases with the number of to-be-distinguished items during working-memory retrieval. Here we hypothesized that during sentence comprehension, verbal-working-memory retrieval demands lead...
The Arcuate Fasciculus/Superior Longitudinal Fasciculus (AF/SLF) is the white-matter bundle that connects posterior superior temporal and inferior frontal cortex. Its causal functional role in sentence processing and verbal working memory is currently under debate. While impairments of sentence processing and verbal working memory often co-occur in...
Research on language comprehension using event-related potentials (ERPs) reported distinct ERP components reliably related to the processing of semantic (N400) and syntactic information (P600). Recent ERP studies have challenged this well-defined distinction by showing P600 effects for semantic and pragmatic anomalies. So far, it is still unresolve...
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant...
In sentence processing, it is still unclear how the neural language network successfully establishes argument–verb dependencies in its spatiotemporal neuronal dynamics. Previous work has suggested that the establishment of subject–verb and object–verb dependencies requires argument retrieval from working memory, and that dependency establishment in...
In sentence processing, storage and ordering of the verb and its arguments (subject and object) are core tasks. Their cortical representation is a matter of ongoing debate, and it is unclear whether prefrontal activations in neuroimaging studies on sentence processing reflect the storage of arguments or their ordering. Moreover, it is unclear how s...
Both functional magnetic resonance imaging (fMRI) and event-related brain potential (ERP) studies have shown that verbal working memory plays an important role during sentence processing. There is growing evidence from outside of sentence processing that human alpha oscillations (7-13 Hz) play a critical role in working memory. This study aims to l...
Under real-life adverse listening conditions, the interdependence of the brain's analysis of language structure (syntax) and its analysis of the acoustic signal is unclear. In two fMRI experiments, we first tested the functional neural organization when listening to increasingly complex syntax in fMRI. We then tested parametric combinations of synt...