
McCall E. Sarrett · Villanova University · Department of Psychological and Brain Sciences
Doctor of Philosophy
About
Publications: 10 · Reads: 795
Citations: 14
Introduction
I am interested in spoken language processing in the brain, specifically in how the brain integrates high-level information (for example, sentence contexts or lexical status) with low-level acoustics (such as voice onset time or coarticulatory information), and also how these mechanisms change throughout word learning and second language acquisition. More at: mccallesarrett.com
Education
August 2015 - May 2020
August 2009 - May 2013
Publications (10)
The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real time. However, few methods capture the neural basis of this dynamic competition on a msec...
The human brain extracts meaning from the world using an extensive neural system for semantic knowledge. Whether such broadly distributed systems crucially depend on or can compensate for the loss of one of their highly interconnected hubs is controversial. The strongest level of causal evidence for the role of a brain hub is to evaluate its acute...
The acoustics of spoken language are highly variable, and yet most listeners easily extract meaningful information from the speech signal. Psycholinguistic work has revealed which acoustic dimensions are relevant when listeners categorize speech sounds, and how listeners use higher-level expectations to shift their categor...
Second language (L2) learners must not only acquire L2 knowledge (i.e. vocabulary and grammar), but they must also rapidly access this knowledge. In monolinguals, efficient spoken word recognition is accomplished via lexical competition, by which listeners activate a range of candidates that compete for recognition as the signal unfolds. We examine...
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e...
Understanding the impact of surgical disconnection on neural responses in the human brain has the potential to advance models of normal neurophysiology and its disruption by pathology. We present data from four patients who underwent surgical disconnection of the anterior temporal lobe as part of the procedure to treat intractable epilepsy. In two...
A critical debate in speech perception concerns the stages of processing and their interactions. One source of evidence is the timecourse over which different sources of information affect ongoing processing. We used electroencephalography (EEG) to ask when semantic expectations and acoustic cues are integrated neurophysiologically. Participants (N...