Matthew H Davis

MRC Cognition and Brain Sciences Unit | MRC

PhD

About

Publications: 193
Reads: 47,712
Citations: 15,211
Since 2017: 56 research items, 7,166 citations
[Chart: citations per year, 2017–2023; y-axis 0–1,200]
Additional affiliations
October 1999 – present: MRC Cognition and Brain Sciences Unit, Programme Leader


Publications (193)
Article
Full-text available
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words...
Preprint
We aim to identify and explain neuro-cognitive sources of variability in spoken language comprehension success, focusing here on the challenge of semantic ambiguity (e.g. BALL has multiple meanings) and the role of language-specific function (e.g. quality of lexico-semantic representations) and/or domain-general function (e.g. executive processes)....
Preprint
Full-text available
Recent advances in artificial neural networks have enabled the design of automatic speech recognition systems that perform comparably to human listeners in various speech recognition tasks. Careful analysis of the behavioural characteristics of such systems can reveal similarities and critical divergences between machine and human speech recognitio...
Article
Full-text available
Listening to spoken language engages domain-general Multiple Demand (MD, fronto-parietal) regions of the human brain, in addition to domain-selective (fronto-temporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understand...
Conference Paper
Earables provide a new opportunity to study conversation in the wild. They uniquely allow (i) accurate head motion tracking recorded synchronously with the speech signal and (ii) multiple people to simultaneously receive and stream conversational speech that is unconstrained by body movement. Here, our general aim is to introduce the use of earable...
Article
This study investigates the dynamics of speech envelope tracking during speech production, listening and self listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorde...
Article
Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explore MEG phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) that reported w...
Preprint
Full-text available
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words...
Preprint
Full-text available
Listening to spoken language engages domain-general Multiple Demand (MD, fronto-parietal) regions of the human brain, in addition to domain-selective (fronto-temporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of comprehens...
Article
Full-text available
Low-intensity transcranial electrical stimulation (tES), including alternating or direct current stimulation, applies weak electrical stimulation to modulate the activity of brain circuits. Integration of tES with concurrent functional MRI (fMRI) allows for the mapping of neural activity during neuromodulation, supporting causal studies of both bra...
Preprint
Full-text available
Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explore MEG phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 female) that reported words from...
Article
Full-text available
Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g. TRACE) propose that word recognition is achieved through direct inhibitory connections between u...
Article
Full-text available
There is profound and long-standing debate over the role of explicit instruction in reading acquisition. In this research, we investigated the impact of teaching regularities in the writing system explicitly rather than relying on learners to discover these regularities through text experience alone. Over 10 days, 48 adults learned to read novel wo...
Article
Full-text available
Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or “entrained”) to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rh...
Article
Full-text available
A single encounter with an ambiguous word (e.g. bark, ball) in the context of a less-frequent meaning (e.g. "Sally worried about how crowded the ball would be.") can shift the later interpretation of the word toward the same subordinate meaning. This lexical-semantic retuning functions to improve future comprehension of ambiguous words. The present...
Preprint
Full-text available
Background Low intensity transcranial electrical stimulation (tES), including alternating or direct current stimulation (tACS or tDCS), applies weak electrical stimulation to modulate brain circuits. Integration of tES with concurrent functional magnetic resonance imaging (fMRI) allows neuromodulation of brain regions while mapping network function...
Preprint
Semantically ambiguous words (e.g. "bark") challenge word meaning access. An effective comprehension system can use immediate contextual cues and adapt in response to recent experience. We explored the contributions of the domain-specific Language Network and the domain-general Multiple Demand Networks by analysing behavioural data from volunteers...
Preprint
Poor performance on measures of phonological processing is common to both Developmental Language Disorder (DLD) and dyslexia. Perceptual accounts characterise this phonological dysfunction as the result of auditory deficits at lower levels of the speech processing hierarchy, which may impair discrimination of speech sounds. However, a causal link b...
Preprint
Full-text available
Semantically ambiguous words (e.g. "bark") challenge word meaning access. An effective comprehension system can use immediate contextual cues and adapt in response to recent experience. We explored the contributions of the domain-specific Language Network and the domain-general Multiple Demand Networks by analysing behavioural data from volunteers...
Article
Full-text available
Human speech perception can be described as Bayesian perceptual inference but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in sp...
Preprint
Full-text available
Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g. TRACE) propose that word recognition is achieved through direct inhibitory connections between u...
Preprint
Full-text available
Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the...
Chapter
The sixth edition of the foundational reference on cognitive neuroscience, with entirely new material that covers the latest research, experimental approaches, and measurement methodologies. Each edition of this classic reference has proved to be a benchmark in the developing field of cognitive neuroscience. The sixth edition of The Cognitive Neuro...
Preprint
Full-text available
Human speech perception can be described as Bayesian perceptual inference but how are these Bayesian computations instantiated neurally? We use magnetoencephalographic recordings of brain responses to degraded spoken words as a function of signal quality and prior knowledge to demonstrate that spectrotemporal modulations in speech are more clearly...
Article
Full-text available
Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) in...
Article
Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simulta...
Article
Full-text available
Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hi...
Article
Full-text available
Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different...
Preprint
Full-text available
We tested whether the phase relation between transcranial alternating current stimulation (tACS) and the rhythm of acoustically-degraded (noise-vocoded) words, presented in silence, modulates word report accuracy. We found a significant tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring elec...
Preprint
Full-text available
A single encounter with an ambiguous word (e.g. bark, ball) in a context that supports a less-frequent meaning (e.g. "Sally worried about how crowded the ball would be.") can shift the later interpretation of the word toward that same subordinate meaning. The present paper investigates the impact of varying the position of the disambiguating contex...
Preprint
Spoken language is one of the most important sounds that humans hear, yet also one of the most difficult sounds for non-human listeners or machines to identify. In this chapter we explore different neuro-computational implementations of Bayesian Inference for Speech Perception. We propose, in line with Predictive Coding (PC) principles, that Bayes...
Article
Reading acquisition requires learning the associations between visual symbols and the sounds and meanings they represent. In alphabetic languages, the relationship between visual and spoken forms is relatively systematic, whereas the relationship between visual form and meaning is relatively arbitrary. Reading instruction that emphasises the relati...
Preprint
Full-text available
The role of neurobiologically-constrained critical periods for language learning remains controversial. We provide new evidence for critical periods by examining speech sound processing across the lifespan. We tested perceptual acuity for minimal word-word (e.g. bear-pear), and word-pseudoword (e.g. bag-pag) pairs using trial-unique audio-morphed s...
Preprint
Full-text available
The question whether perception or other processes depend on the phase of neural oscillations is a research topic that is rapidly gaining popularity. Importantly however, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect su...
Article
Full-text available
Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions especially if there is partial ove...
Article
Full-text available
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the re...
Article
Full-text available
Due to their periodic nature, neural oscillations might represent an optimal "tool" for the processing of rhythmic stimulus input [1-3]. Indeed, the alignment of neural oscillations to a rhythmic stimulus, often termed phase entrainment, has been repeatedly demonstrated [4-7]. Phase entrainment is central to current theories of speech processing [8...
Preprint
Research has shown that adults’ lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g. bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect re...
Article
Full-text available
Perception relies on the integration of sensory information and prior expectations. Here we show that selective neurodegeneration of human frontal speech regions results in delayed reconciliation of predictions in temporal cortex. These temporal regions were not atrophic, displayed normal evoked magnetic and electrical power, and preserved neural s...
Article
Full-text available
Research has shown that adults’ lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g. bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect re...
Article
Full-text available
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources, based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure tones H and L presented in the repeating patt...
Article
Full-text available
Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word...
Article
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pict...
Preprint
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word as...
Article
Full-text available
The past 20 years have seen a methodological revolution in spoken language research. A diverse range of neuroscientific techniques are now available that allow researchers to observe the brain’s responses to different types of speech stimuli in both healthy and impaired listeners, and also to observe how individuals’ abilities to process speech cha...
Article
Full-text available
There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the be...
Article
Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/...
Chapter
In summarizing this meta-analysis, we focus on responses in the left hemisphere; however, many of the differential responses reported are similarly observed in homologous regions of the right hemisphere. Some authors—most notably Bozic et al. (2010)—have argued that lexical access processes are supported by the temporal lobe bilaterally. However, th...
Article
Full-text available
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence....
Data
Network architecture and example representations for (A) Sharpened Signal and (B) Prediction Error models. Common components of both models are outlined in black. Differences between the two models are coloured in orange (Sharpened Signal) and blue (Prediction Error). Both models map from a feature-based representation of consonant-vowel-consonant...
Data
Sensitivity analysis. (A) Prediction Error model. (B) Sharpened Signal model. The blue curves illustrate how the sum squared error (SSE, y-axis) for model fit to the behavioural (left column), univariate fMRI (middle columns), and multivariate fMRI (right columns) data changes for a range of parameters (along the x-axis). Each graph therefore shows...
Data
Representation of phonetic form in Inferior Frontal regions (A) Univariate results: Main effect of prior knowledge (Matching versus Neutral Prior) depicted on a rendered brain (p < 0.05 voxelwise FWE, n = 21). White circle marks post-hoc defined clusters of interest in the left Inferior Frontal Gyrus (IFG). (B,C) Fisher-z-transformed Spearman corre...
Data
Comparison of four different, hierarchically organised hypothesis RDMs of speech perception. Left Panel: (A) dissimilarity of the acoustic properties of the speech stimuli used in our study (see Supplementary Methods for details), (B) dissimilarity of feature representation for the canonical forms of the speech provided as the input to our computat...
Data
Univariate Analysis—F-contrast: Main effect Match/Mismatch, p < 0.05 FWE (voxelwise correction) (XLS)
Data
RSA—F-contrast: Prior information (Match/Neutral) x Sensory detail full interaction, p < 0.001 uncorrected, k > 10 voxels (searchlight analysis with a voxel size of 3 x 3 x 3.75 mm) (XLS)
Data
Univariate Analysis—F-contrast: Prior information (Match/Neutral) x Sensory detail full interaction, p < 0.001 uncorrected, k > 10 voxels (XLS)
Data
Cross-subject consistency based on empirical and simulated RDMs. (A) Empirical RDMs were extracted from the independent ROI in the left posterior STS (pSTS, Fig 4B), and the Simulated RDMs based on either (B) the Sharpened Signal or (C) the Prediction Error model were computed for 21 simulated participants. The cross-subject consistencies from the...
Data
Univariate Analysis—F-contrast: Main effect Match/Neutral, p < 0.05 FWE (voxelwise correction) (XLS)
Data
Effect of mismatching prior expectations. (A) Behavioural results. (B) Univariate results: Main effect of prior knowledge (Matching versus Mismatching Prior) depicted on a rendered brain (p < 0.05 voxelwise FWE, n = 21). (C) Mean beta values extracted from the independent region of interest in the posterior STS [57] illustrate reduced BOLD signal d...
Data
Representational similarity searchlight analysis in the whole brain. Interaction of Prior information (Match/Neutral) x Sensory detail (4- versus 12-channel) depicted on rendered brain (F-contrast, p < 0.001 uncorrected, k > 10 voxels; searchlight analysis with a voxel size of 3 x 3 x 3.75 mm; see S4 Table for coordinates). https://osf.io/2ze9n/ (d...
Data
Univariate Analysis—F-contrast: Main effect sensory detail, p < 0.05 FWE (voxelwise correction) (XLS)
Data
Text file describing the supplementary methods and supplementary results and discussion. (DOCX)
Poster
Full-text available
A neural signature of segregation of tone sequences, independent of the attended stream
Article
Full-text available
Transcranial electric stimulation (tES), comprising transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), involves applying weak electrical current to the scalp, which can be used to modulate membrane potentials and thereby modify neural activity. Critically, behavioural or perceptual consequences o...
Article
Full-text available
Understanding the neural processes that underlie learning to read can provide a scientific foundation for literacy education but studying these processes in real-world contexts remains challenging. We present behavioural data from adult participants learning to read artificial words and name artificial objects over two days. Learning profiles and g...
Article
Full-text available
Significance Experience-dependent changes in sensory processing are critical for successful perception in dynamic and noisy environments. However, the neural and computational mechanisms supporting such changes have remained elusive. Using electrical and magnetic recordings of human brain activity, we demonstrate that two different sources of exper...
Article
Full-text available
How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked part...
Article
Full-text available
The extraction of general knowledge from individual episodes is critical if we are to learn new knowledge or abilities. Here we uncover some of the key cognitive mechanisms that characterise this process in the domain of language learning. In five experiments adult participants learned new morphological units embedded in fictitious words created by...
Article
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in...
Article
Sedation has a graded effect on brain responses to auditory stimuli: perceptual processing persists at sedation levels that attenuate more complex processing. We used fMRI in healthy volunteers sedated with propofol to assess changes in neural responses to spoken stimuli. Volunteers were scanned awake, sedated, and during recovery, while making per...