McCall E. Sarrett
Villanova University · Department of Psychological and Brain Sciences

Doctor of Philosophy

About

10 Publications · 795 Reads
14 Citations
Introduction
I am interested in spoken language processing in the brain, specifically in how the brain integrates high-level information (for example, sentence contexts or lexical status) with low-level acoustics (such as voice onset time or coarticulatory information), and also how these mechanisms change throughout word learning and second language acquisition. More at: mccallesarrett.com
Additional affiliations
August 2018 - present
University of Iowa
Position
  • Fellow
Description
  • Human Auditory Neuroscience Laboratory (PI: Inyong Choi, PhD)
August 2017 - December 2017
University of Iowa
Position
  • Research Assistant
Description
  • Elementary Psychology, PSY:1001
May 2016 - present
University of Iowa
Position
  • Fellow
Description
  • Human Brain Research Laboratory (PI: Matt Howard, MD)
Education
August 2015 - May 2020
University of Iowa
Field of study
  • Neuroscience
August 2009 - May 2013
University of Tennessee
Field of study
  • Neuroscience & the Perception of Language

Publications (10)
Article
The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real-time. However, few methods capture the neural basis of this dynamic competition on a msec...
Preprint
Full-text available
The human brain extracts meaning from the world using an extensive neural system for semantic knowledge. Whether such broadly distributed systems crucially depend on or can compensate for the loss of one of their highly interconnected hubs is controversial. The strongest level of causal evidence for the role of a brain hub is to evaluate its acute...
Preprint
The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real-time. However, few methods capture the neural basis of this dynamic competition on a msec...
Article
Full-text available
The acoustics of spoken language are highly variable, and yet most listeners easily extract meaningful information from the speech signal. Psycholinguistic work has revealed which acoustic dimensions are relevant when listeners categorize speech sounds, and how listeners use higher-level expectations to shift their categor...
Article
Second language (L2) learners must not only acquire L2 knowledge (i.e. vocabulary and grammar), but they must also rapidly access this knowledge. In monolinguals, efficient spoken word recognition is accomplished via lexical competition, by which listeners activate a range of candidates that compete for recognition as the signal unfolds. We examine...
Article
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e...
Preprint
Full-text available
Second language (L2) learners must not only acquire L2 knowledge (i.e. vocabulary and grammar), but they must also rapidly access this knowledge. In monolinguals, efficient spoken word recognition is accomplished via lexical competition, by which listeners activate a range of candidates that compete for recognition as the signal unfolds. We examine...
Preprint
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N=31) heard sentences in which we manipulated acoustic ambiguity (e.g...
Poster
Full-text available
Understanding the impact of surgical disconnection on neural responses in the human brain has the potential to advance models of normal neurophysiology and its disruption by pathology. We present data from four patients who underwent surgical disconnection of the anterior temporal lobe as part of the procedure to treat intractable epilepsy. In two...
Poster
Full-text available
A critical debate in speech perception concerns the stages of processing and their interactions. One source of evidence is the timecourse over which different sources of information affect ongoing processing. We used electroencephalography (EEG) to ask when semantic expectations and acoustic cues are integrated neurophysiologically. Participants (N...
