Efthymia C Kapnoula

Basque Center on Cognition, Brain and Language

Doctor of Philosophy

About

Publications: 45
Reads: 5,736
Citations: 295
Introduction
I am an Ikerbasque Research Fellow at the Basque Center on Cognition, Brain and Language. I study how we process spoken and written language input, with a special interest in speech perception, word recognition, and word learning. A big part of my research focuses on the learning mechanisms that allow us to acquire novel words and non-native phonemic contrasts.
Additional affiliations
January 2022 - present
Basque Center on Cognition, Brain and Language
Position
  • Researcher
Description
  • Staff Scientist
December 2021 - present
Ikerbasque - Basque Foundation for Science
Position
  • Researcher
Description
  • Research Fellow
October 2016 - December 2021
Basque Center on Cognition, Brain and Language
Position
  • Postdoctoral Researcher
Description
  • Juan de la Cierva (Form.) Fellow (2018-2019), Marie Skłodowska-Curie Fellow (2019-2021)
Education
August 2010 - May 2016
University of Iowa
Field of study
  • Psychology
August 2010 - May 2012
University of Iowa
Field of study
  • Psychology
September 2007 - September 2009
National and Kapodistrian University of Athens
Field of study
  • Basic and Applied Cognitive Science

Publications (45)
Article
Full-text available
It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003)...
Article
Full-text available
During spoken language comprehension listeners transform continuous acoustic cues into categories (e.g. /b/ and /p/). While longstanding research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences in that more gradient categorization has been linked to various communication impairments lik...
Article
Full-text available
The speech signal carries both linguistic and non-linguistic information (e.g., a talker’s voice qualities; referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word’s meaning. A few studies support a...
Article
Full-text available
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in sp...
Article
Full-text available
We evaluated the dual route cascaded (DRC) model of visual word recognition using Greek behavioural data on word and nonword naming and lexical decision, focusing on the effects of syllable and bigram frequency. DRC was modified to process polysyllabic Greek words and nonwords. The Greek DRC and native speakers of Greek were presented with the same...
Preprint
Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum, and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses – assum...
Article
Full-text available
Does saying a novel word help to recognize it later? Previous research on the effect of production on this aspect of word learning is inconclusive, as both facilitatory and detrimental effects of production are reported. In a set of three experiments, we sought to reconcile the seemingly contrasting findings by disentangling the production from oth...
Article
Full-text available
The examination of how words are learned can offer valuable insights into the nature of lexical representations. For example, a common assessment of novel word learning is based on its ability to interfere with other words; given that words are known to compete with each other (Luce and Pisoni, 1998; Dahan et al., 2001), we can use the capacity of...
Presentation
Previous work shows that producing a new word can have both facilitatory and inhibitory effects on word learning (Leach & Samuel, 2007; Zamuner et al., 2016). A recent study by Kapnoula and Samuel (under review) aimed to reconcile these seemingly contradictory results. The results indicated an early facilitatory effect of production that switched t...
Presentation
Full-text available
In contrast to the long-running debates around universally categorical vs. gradient speech perception, we find individual differences in listeners' sensitivity to subtle acoustic differences between speech sounds (within and between phoneme categories). Here, we used an EEG measure of listeners' early perceptual encoding of a primary acoustic cue a...
Article
Full-text available
Listeners activate speech sound categories in a gradient way and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula, Winn, Kong, Edwards, and McMurray (2017) suggest that the degree to which listeners maintain within-category inform...
Preprint
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in sp...
Article
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e...
Preprint
Listeners activate speech sound categories in a gradient way and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula, Winn, Kong, Edwards, and McMurray (2017) suggest that the degree to which listeners maintain within-category inform...
Preprint
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N=31) heard sentences in which we manipulated acoustic ambiguity (e.g...
Article
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from (silent) visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated...
Presentation
Listeners map continuous acoustic information into distinct phoneme categories. To study this process, we routinely collect binary responses (e.g., in 2AFC tasks) and use categorization slope as a measure of categorization quality. However, a plethora of findings shows that listeners are sensitive to within-category acoustic differences and capturi...
Article
We investigated how listeners use gender-marked adjectives to adjust lexical predictions during sentence comprehension. Participants listened to sentence fragments in Spanish (e.g. “The witch flew to the village on her … ”) that created expectation for a specific noun (broomstick, grammatically feminine), and were completed by an adjective and a noun. The adjective ei...
Poster
Full-text available
Does saying a new word out loud help to learn it better? Previous research on the effect of production on word learning is inconclusive. One issue is that production can be confounded with speaker variability, because when a word is produced it is also unavoidably heard by an additional speaker. To address this issue, we disentangled the effects of...
Article
Introduction: Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an effective treatment for limb motor symptoms in Parkinson’s disease (PD); however, its effect on vocal motor function has yielded conflicted and highly variable results. The present study investigated the effects of STN-DBS on the mechanisms of vocal production and mot...
Poster
Full-text available
A critical debate in speech perception concerns the stages of processing and their interactions. One source of evidence is the timecourse over which different sources of information affect ongoing processing. We used electroencephalography (EEG) to ask when semantic expectations and acoustic cues are integrated neurophysiologically. Participants (N...
Preprint
Full-text available
Lip-reading is crucial to understand speech in challenging conditions. Neuroimaging investigations have revealed that lip-reading activates auditory cortices in individuals covertly repeating absent but known speech. However, in real-life, one usually has no detailed information about the content of upcoming speech. Here we show that during silent...
Preprint
We evaluated the dual route cascaded (DRC) model of visual word recognition using Greek behavioural data on word and nonword naming and lexical decision, focusing on the effects of syllable and bigram frequency. DRC was modified to process polysyllabic Greek words and nonwords. The Greek DRC and native speakers of Greek were presented with the same...
Presentation
Full-text available
Invited talk at the Workshop on Conversational Speech and Lexical Representations, Nijmegen, The Netherlands
Thesis
Full-text available
During spoken language comprehension, listeners transform continuous acoustic cues into categories (e.g. /b/ and /p/). While longstanding research suggests that phoneme categories are activated in a gradient way, there are also clear individual differences, with more gradient categorization being linked to various communication impairments like dys...
Article
Full-text available
A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science...
Presentation
Full-text available
Word learning is a central issue in both L1 and L2 language learning. Within this literature, focus has been given on how a new word is added into our mental lexicon. This is particularly important when we consider that words must not just be “known”, but they must also become integrated as lexical representations that can be “used”—recognized and...
Article
Full-text available
In this study predictions of the dual-route cascaded (DRC) model of word reading were tested using fMRI. Specifically, patterns of co-localization were investigated: (a) between pseudoword length effects and a pseudowords vs. fixation contrast, to reveal the sublexical grapho-phonemic conversion (GPC) system; and (b) between word frequency effects...
Article
Full-text available
Language learning is generally described as a problem of acquiring new information (e.g., new words). However, equally important are changes in how the system processes known information. For example, a wealth of studies has suggested dramatic changes over development in how efficiently children recognize familiar words, but it is unknown what kind...
Poster
Full-text available
Introduction: Listeners appear to differ systematically in the way they categorize speech sounds; some are more sensitive to within-category differences showing more gradient responding, while others seem to disregard these differences and are primarily driven by linguistically significant, between-category differences (categorical listeners) (Kong...
Article
Full-text available
Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item...
Poster
Full-text available
Changes in inhibition are linked to a variety of cognitive impairments, but it is unknown whether inhibition is amenable to learning. We examined this in spoken word recognition (Luce & Pisoni, 1998; McClelland & Elman, 1986). While studies have examined the conditions under which inhibitory links are formed for newly learned words (Gaskell & Dumay...
Poster
Full-text available
Listeners use acoustic cues to identify speech sounds (phonemes), but there has been substantial debate about whether this mapping is gradient—preserving fine-grained within-category detail—or discrete. Recent research, however, suggests that there may be individual differences in the degree of gradiency across individuals (Kong & Edwards, 2011). W...
Presentation
Full-text available
During speech perception listeners use the available acoustic information to activate words and we refer to this information as acoustic cues (e.g. VOT and F0). Critically, while these cues are continuous, our conscious percept and linguistic analyses of language seem to reflect discrete categories. The continuous or discrete nature of these repres...
Conference Paper
Full-text available
We report a study of naming and lexical decision with 132 adult Greek speakers responding to 150 words and matched pseudowords with decorrelated frequency, length, neighborhood, syllable and bigram frequency, and transparency. This approach allowed us to individuate and accurately estimate the effects of each variable, and to assess their linearity...
Article
Full-text available
Existing work suggests that sleep-based consolidation (Gaskell & Dumay, 2003) is required for newly learned words to interact with other words and phonology. Some studies report that meaning may also be needed (Leach and Samuel, 2007), making it unclear whether meaningful representations are required for such interactions. We addressed these issues...

Projects (2)
Project
Listeners perceive speech sounds in a gradient way, and this sensitivity to within-category differences is maintained all the way to the level of lexical activation. However, some listeners seem more gradient than others. Gradiency has traditionally been attributed to noise and is thus commonly considered an indicator of poor speech perception. However, gradiency may in fact reflect better perception of fine-grained differences, which may be beneficial for speech perception. This project addresses the following questions:
1. How can we quantify differences in gradiency in phoneme categorization?
2. What makes some listeners more gradient than others?
3. In what way does gradiency affect speech perception?
By examining the sources and consequences of gradiency in phoneme categorization, this work can inform our understanding of the fundamental mechanisms that support speech perception and spoken word recognition. A further goal of this project is to explore the role of gradiency in language processing by bi-/multilinguals, second-language learners, as well as individuals with hearing- and language-related disorders.
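To make the quantification question concrete: one common approach in this literature (not necessarily the exact measure used in this project) is to fit a logistic psychometric function to a listener's identification responses along an acoustic continuum (e.g. VOT) and take the slope of the fitted curve as an index of how categorical versus gradient their responding is. The sketch below uses hypothetical identification data and a simple grid-search fit; the data values and grid ranges are illustrative assumptions.

```python
import math

def logistic(x, midpoint, slope):
    """Psychometric function: probability of a /p/ response at continuum step x."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def fit_psychometric(steps, prop_p, midpoints, slopes):
    """Least-squares grid-search fit of a logistic to identification proportions."""
    best = None
    for m in midpoints:
        for s in slopes:
            err = sum((logistic(x, m, s) - p) ** 2 for x, p in zip(steps, prop_p))
            if best is None or err < best[0]:
                best = (err, m, s)
    return best[1], best[2]  # fitted midpoint and slope

# Hypothetical proportions of /p/ responses at each of 7 VOT steps for a
# more "categorical" listener and a more "gradient" listener.
steps = [1, 2, 3, 4, 5, 6, 7]
categorical = [0.01, 0.02, 0.05, 0.50, 0.95, 0.98, 0.99]
gradient    = [0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.90]

grid_m = [x / 10 for x in range(30, 51)]  # candidate midpoints: 3.0 to 5.0
grid_s = [x / 10 for x in range(1, 61)]   # candidate slopes: 0.1 to 6.0

_, slope_cat = fit_psychometric(steps, categorical, grid_m, grid_s)
_, slope_grad = fit_psychometric(steps, gradient, grid_m, grid_s)

# A steeper fitted slope indicates more categorical (less gradient) responding.
print(slope_cat > slope_grad)
```

Note that, as some of the abstracts above point out, binary identification tasks constrain what slope can reveal; visual analogue scaling (VAS) responses are one alternative for measuring gradiency more directly.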