About
56 Publications
8,532 Reads
610 Citations
Introduction
I am an Ikerbasque Research Fellow and Associate Leader of the Spoken Language group at the Basque Center on Cognition, Brain and Language.
I study how we process spoken and written language input, with a special interest in speech perception, word recognition, and word learning. A big part of my research focuses on the learning mechanisms that allow us to acquire novel words and non-native phonemic contrasts.
Current institution: Basque Center on Cognition, Brain and Language (BCBL)
Additional affiliations
Education
August 2010 - May 2016
August 2010 - May 2012
September 2007 - September 2009
Publications (56)
During spoken language comprehension, listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While longstanding research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences, in that more gradient categorization has been linked to various communication impairments lik...
The speech signal carries both linguistic and non-linguistic information (e.g., a talker’s voice qualities; referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word’s meaning. A few studies support a...
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in sp...
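To make the gradiency notion concrete, here is a minimal sketch of how one might quantify categorization gradiency from VAS responses: fit a logistic function to mean ratings across a speech continuum and read off its slope. This is an illustration with hypothetical data, not the analysis from Kapnoula et al. (2017, 2021); a shallower fitted slope indicates more gradient responding.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # 0 = fully /b/-like rating, 1 = fully /p/-like rating
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(7, dtype=float)
# Hypothetical mean VAS ratings (0-1) at each of 7 continuum steps
categorical_listener = np.array([0.02, 0.03, 0.05, 0.50, 0.95, 0.97, 0.98])
gradient_listener = np.array([0.10, 0.22, 0.35, 0.50, 0.65, 0.78, 0.90])

for label, ratings in [("categorical", categorical_listener),
                       ("gradient", gradient_listener)]:
    (x0, k), _ = curve_fit(logistic, steps, ratings, p0=[3.0, 1.0])
    # Smaller k (a shallower slope) indicates more gradient responding.
    print(f"{label}: boundary at step {x0:.2f}, slope k = {k:.2f}")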
Recent work by Baese-Berk and Samuel (2022) suggests that immediate (but not delayed) production has a detrimental effect on learning a non-native speech sound contrast. We tested whether this pattern is also found for word learning. Each participant learned 12 new words in one of four training conditions: Perception-Only, Immediate-Production, 2-s...
Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses that hav...
A widely held belief is that speech perception and speech production are tightly linked, with each modality available to help with learning in the other modality. This positive relationship is often summarized as perception and production being “two sides of the same coin.” There are, indeed, many situations that have shown this mutually supportive...
Speech perception gradiency allows listeners to detect and maintain subphonemic information, thereby enhancing speech perception flexibility. However, its nature is not fully understood; it is unclear whether gradiency is a generic trait, or if it depends on language status (L1 vs. L2) and/or language-specific properties (e.g., voice onset time). T...
Some listeners exhibit higher sensitivity to subphonemic acoustic differences (i.e., higher speech gradiency). Here, we asked whether higher gradiency in a listener’s first language (L1) facilitates foreign language learning and explored the possible sources of individual differences in L1 gradiency. To address these questions, we tested 164 native...
Psycholinguists define spoken word recognition (SWR) as, roughly, the processes intervening between speech perception and sentence processing, whereby a sequence of speech elements is mapped to a phonological wordform. After reviewing points of consensus and contention in SWR, we turn to the focus of this review: considering the limitations of theo...
The present study seeks to understand the effects of orthographic and image referents on vocabulary acquisition in early-stage, late second language learners.
This poster showcases preliminary results and was presented at AMLaP and ESCOP, 2023.
In general, listeners are sensitive to acoustic differences within their L1 phonemic categories (Andruski et al., 1994; McMurray et al., 2002; Toscano et al., 2010). However, some listeners appear to be more sensitive to subphonemic information than others (Kapnoula et al., 2017; Kong & Edwards, 2016). Recent electrophysio...
Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum, and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses – assum...
Does saying a novel word help to recognize it later? Previous research on the effect of production on this aspect of word learning is inconclusive, as both facilitatory and detrimental effects of production have been reported. In a set of three experiments, we sought to reconcile these seemingly contradictory findings by disentangling production from oth...
The examination of how words are learned can offer valuable insights into the nature of lexical representations. For example, a common assessment of novel word learning is based on its ability to interfere with other words; given that words are known to compete with each other (Luce and Pisoni, 1998; Dahan et al., 2001), we can use the capacity of...
Previous work shows that producing a new word can have both facilitatory and inhibitory effects on word learning (Leach & Samuel, 2007; Zamuner et al., 2016). A recent study by Kapnoula and Samuel (under review) aimed to reconcile these seemingly contradictory results. The results indicated an early facilitatory effect of production that switched t...
In contrast to the long-running debates around universally categorical vs. gradient speech perception, we find individual differences in listeners' sensitivity to subtle acoustic differences between speech sounds (within and between phoneme categories). Here, we used an EEG measure of listeners' early perceptual encoding of a primary acoustic cue a...
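As an illustration of this kind of measure, the sketch below computes a single-trial mean amplitude in an N1-like window and regresses it on a continuous acoustic cue such as VOT. The data are simulated and the window, channel, and cue values are all assumptions; this is not the published pipeline.

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 200, 300           # 300 samples at 1000 Hz = 300 ms
times = np.arange(n_times) / 1000.0    # seconds from stimulus onset

# Hypothetical VOT values (ms) and simulated EEG at one channel:
# an N1-like deflection whose amplitude scales with VOT, plus noise.
vot = rng.uniform(0, 40, n_trials)
n1_shape = -np.exp(-((times - 0.10) ** 2) / (2 * 0.02 ** 2))
eeg = (1.0 + 0.05 * vot)[:, None] * n1_shape \
      + rng.normal(0, 0.5, (n_trials, n_times))

# Single-trial N1 measure: mean amplitude in a 75-150 ms window.
window = (times >= 0.075) & (times <= 0.150)
n1_amp = eeg[:, window].mean(axis=1)

# Slope of N1 amplitude on VOT (least squares): an index of how
# strongly early perceptual encoding tracks the continuous cue.
slope, intercept = np.polyfit(vot, n1_amp, 1)
print(f"N1-VOT slope: {slope:.3f} (arbitrary units per ms)")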
Listeners activate speech sound categories in a gradient way and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula, Winn, Kong, Edwards, and McMurray (2017) suggest that the degree to which listeners maintain within-category inform...
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e...
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated...
Listeners map continuous acoustic information into distinct phoneme categories. To study this process, we routinely collect binary responses (e.g., in 2AFC tasks) and use categorization slope as a measure of categorization quality. However, a plethora of findings shows that listeners are sensitive to within-category acoustic differences and capturi...
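For concreteness, here is a minimal sketch of the standard categorization-slope estimate the abstract refers to, fit to simulated binary 2AFC responses with logistic regression. All data are hypothetical; steeper slopes are conventionally read as "better" categorization.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_per_step = 40
steps = np.repeat(np.arange(7), n_per_step)       # continuum step per trial
p_true = 1 / (1 + np.exp(-1.5 * (steps - 3)))     # hypothetical true curve
responses = rng.random(steps.size) < p_true       # True = "/p/" response

# Logistic regression of binary responses on continuum step.
X = sm.add_constant(steps.astype(float))
fit = sm.GLM(responses.astype(float), X,
             family=sm.families.Binomial()).fit()
# The coefficient on the continuum step is the categorization slope.
print(fit.params)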
We investigated how listeners use gender-marked adjectives to adjust lexical predictions during sentence comprehension. Participants listened to sentence fragments in Spanish (e.g. “The witch flew to the village on her … ”) that created expectation for a specific noun (broomstick_fem), and were completed by an adjective and a noun. The adjective ei...
Does saying a new word out loud help to learn it better? Previous research on the effect of production on word learning is inconclusive. One issue is that production can be confounded with speaker variability, because when a word is produced it is also unavoidably heard by an additional speaker. To address this issue, we disentangled the effects of...
Introduction: Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an effective treatment for limb motor symptoms in Parkinson’s disease (PD); however, its effect on vocal motor function has yielded conflicting and highly variable results. The present study investigated the effects of STN-DBS on the mechanisms of vocal production and mot...
A critical debate in speech perception concerns the stages of processing and their interactions. One source of evidence is the timecourse over which different sources of information affect ongoing processing. We used electroencephalography (EEG) to ask when semantic expectations and acoustic cues are integrated neurophysiologically. Participants (N...
Lip-reading is crucial to understand speech in challenging conditions. Neuroimaging investigations have revealed that lip-reading activates auditory cortices in individuals covertly repeating absent but known speech. However, in real-life, one usually has no detailed information about the content of upcoming speech. Here we show that during silent...
We evaluated the dual route cascaded (DRC) model of visual word recognition using Greek behavioural data on word and nonword naming and lexical decision, focusing on the effects of syllable and bigram frequency. DRC was modified to process polysyllabic Greek words and nonwords. The Greek DRC and native speakers of Greek were presented with the same...
Invited talk at the Workshop on Conversational Speech and Lexical Representations, Nijmegen, The Netherlands
During spoken language comprehension, listeners transform continuous acoustic cues into categories (e.g. /b/ and /p/). While longstanding research suggests that phoneme categories are activated in a gradient way, there are also clear individual differences, with more gradient categorization being linked to various communication impairments like dys...
A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science...
Word learning is a central issue in both L1 and L2 language learning. Within this literature, the focus has been on how a new word is added to our mental lexicon. This is particularly important when we consider that words must not just be “known”; they must also become integrated as lexical representations that can be “used”—recognized and...
In this study predictions of the dual-route cascaded (DRC) model of word reading were tested using fMRI. Specifically, patterns of co-localization were investigated: (a) between pseudoword length effects and a pseudowords vs. fixation contrast, to reveal the sublexical grapho-phonemic conversion (GPC) system; and (b) between word frequency effects...
Language learning is generally described as a problem of acquiring new information (e.g., new words). However, equally important are changes in how the system processes known information. For example, a wealth of studies has suggested dramatic changes over development in how efficiently children recognize familiar words, but it is unknown what kind...
Introduction: Listeners appear to differ systematically in the way they categorize speech sounds; some are more sensitive to within-category differences showing more gradient responding, while others seem to disregard these differences and are primarily driven by linguistically significant, between-category differences (categorical listeners) (Kong...
Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item...
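One common way to model this kind of participant-level modulation is a mixed-effects regression with by-participant random slopes. The sketch below is a hypothetical illustration (the data frame, variable names, and effect sizes are invented), not the study's actual analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_items = 30, 50
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    "frequency": np.tile(rng.normal(0, 1, n_items), n_subj),
    "length": np.tile(rng.integers(3, 10, n_items), n_subj),
})
# Simulate RTs in which the frequency effect varies across subjects.
subj_slope = rng.normal(-30, 10, n_subj)
df["rt"] = (600 + subj_slope[df["subject"]] * df["frequency"]
            + 15 * df["length"] + rng.normal(0, 50, len(df)))

# Random intercept and random frequency slope by participant.
model = smf.mixedlm("rt ~ frequency + length", df,
                    groups=df["subject"], re_formula="~frequency")
print(model.fit().summary())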
It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003)...
Changes in inhibition are linked to a variety of cognitive impairments, but it is unknown whether inhibition is amenable to learning. We examined this in spoken word recognition (Luce & Pisoni, 1998; McClelland & Elman, 1986). While studies have examined the conditions under which inhibitory links are formed for newly learned words (Gaskell & Dumay...
Listeners use acoustic cues to identify speech sounds (phonemes), but there has been substantial debate about whether this mapping is gradient—preserving fine-grained within-category detail—or discrete. Recent research, however, suggests that there may be individual differences in the degree of gradiency (Kong & Edwards, 2011). W...
During speech perception, listeners use the available acoustic information to activate words; we refer to this information as acoustic cues (e.g., VOT and F0). Critically, while these cues are continuous, our conscious percept and linguistic analyses of language seem to reflect discrete categories. The continuous or discrete nature of these repres...
This chapter discusses two different approaches to the meaning of emotion terms, corpus-based linguistic analysis and feature profiling as carried out within the GRID, in order to compare their results and explore the potential for cross-fertilisation between them. To this end, we present an analysis of five Greek emotion terms (aghonia, ‘anguish’,...
We report a study of naming and lexical decision with 132 adult Greek speakers responding to 150 words and matched pseudowords with decorrelated frequency, length, neighborhood, syllable and bigram frequency, and transparency. This approach allowed us to individuate and accurately estimate the effects of each variable, and to assess their linearity...
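The logic of decorrelated predictors can be illustrated with a quick collinearity check. The sketch below (invented predictor values, not the study's stimuli) computes variance inflation factors, which should stay near 1 when item properties have been successfully decorrelated so that each effect can be estimated independently.

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
# Hypothetical item properties for 150 words; in a decorrelated
# design these columns are built to be (near-)orthogonal.
items = pd.DataFrame(rng.normal(0, 1, (150, 5)),
                     columns=["frequency", "length", "neighborhood",
                              "syll_freq", "bigram_freq"])

X = items.to_numpy()
for i, name in enumerate(items.columns):
    # VIF near 1 means the predictor shares little variance with the
    # others, so its effect can be individuated in the regression.
    print(f"{name}: VIF = {variance_inflation_factor(X, i):.2f}")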
Existing work suggests that sleep-based consolidation (Gaskell & Dumay, 2003) is required for newly learned words to interact with other words and phonology. Some studies report that meaning may also be needed (Leach and Samuel, 2007), making it unclear whether meaningful representations are required for such interactions. We addressed these issues...