Article

Abstract

This experiment was concerned with the effects of phonologically correct masking on the electrophysiological responses to terminal words of spoken sentences differing in contextual constraint. Two event-related potential (ERP) components, the N400 and N200, were recorded to the terminal words of high and low constraint sentences in four conditions. In the Control condition, subjects (Ss) simply attended to the sentences with no explicit task instructions. In the Semantic condition, Ss were required to listen to the stimuli in order to make semantic judgements about the terminal word of each sentence. The Control + Masking condition was identical to the Control condition except for the simultaneous presentation of a masking stimulus. The Semantic + Masking condition had Ss listening to sentences in the presence of masking with the task of making semantic judgements about the terminal word of each sentence. ERPs were recorded from Fz, Cz, Pz, T3, and T4 in 10 subjects. Amplitudes of both the N200 and the N400 were sensitive to contextual constraint with larger responses elicited by the terminal words of low constraint sentences. In addition to demonstrating the co-occurrence of the N200 and N400, this experiment highlighted a functional separation between the two components. Masking had no statistically significant effect on N200 latency but N400 latency was delayed in the masked conditions relative to those in the unmasked conditions. It is proposed that the N200 and N400 are manifestations of two different processes; the N200 reflects the acoustic/phonological processing of the terminal word while the N400 reflects the cognitive/linguistic processing. The relationship between the N200 recorded in this experiment and the discrimination N200 is discussed.
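As a rough illustration of how the amplitude and latency measures described in this abstract are commonly derived, the sketch below extracts the most negative peak in fixed N200 and N400 windows from condition-averaged waveforms at a single electrode. It is not the authors' analysis pipeline; the sampling rate, epoch length, window limits, and simulated data are all assumptions.

```python
# Hypothetical sketch: quantify N200/N400 peak amplitude and latency from
# condition-averaged ERPs at one electrode (e.g., Cz). Not the authors' code;
# array shapes, window limits, and sampling rate are illustrative assumptions.
import numpy as np

FS = 250                      # sampling rate in Hz (assumed)
T0 = -0.1                     # epoch start relative to word onset, in seconds
N200_WIN = (0.150, 0.275)     # analysis windows in seconds (assumed)
N400_WIN = (0.300, 0.600)

def peak_measures(avg_erp, window):
    """Return the most negative peak amplitude (µV) and its latency (ms)
    within `window` for a 1-D averaged waveform."""
    times = T0 + np.arange(avg_erp.size) / FS
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(avg_erp[mask])          # most negative point = component peak
    return avg_erp[mask][idx], times[mask][idx] * 1000.0

# epochs_low / epochs_high: (n_trials, n_samples) arrays for low- and
# high-constraint terminal words at one electrode (hypothetical data).
rng = np.random.default_rng(0)
epochs_low = rng.normal(0, 2, (40, int(0.9 * FS)))
epochs_high = rng.normal(0, 2, (40, int(0.9 * FS)))

for label, epochs in [("low constraint", epochs_low), ("high constraint", epochs_high)]:
    avg = epochs.mean(axis=0)               # average across trials
    n2_amp, n2_lat = peak_measures(avg, N200_WIN)
    n4_amp, n4_lat = peak_measures(avg, N400_WIN)
    print(f"{label}: N200 {n2_amp:.2f} µV @ {n2_lat:.0f} ms, "
          f"N400 {n4_amp:.2f} µV @ {n4_lat:.0f} ms")
```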

... The latest one even stated that the first study found an N400 component only due to a side effect caused by the specific choice of linguistic stimuli used for that study. Usually, the N400 component is associated with increased costs of lexical processing (Connolly et al., 1992; Kutas & Hillyard, 1980; Steinhauer & Connolly, 2008). This fact possibly points to the choice of lexically more complex word stimuli in that first study (Henrich et al., 2015). ...
... At first glance, regarding the second time window, the occurrence of an N400 component within the musical conditions appears rather unlikely due to its primary association with increased cognitive costs of lexical processing (Connolly et al., 1992; Kutas & Hillyard, 1980; Steinhauer & Connolly, 2008). There should not be any lexical processing for purely tonal auditory stimuli. ...
... Consequently, the N400 was only detected in this previous study, but was weakened or absent in the following ones (Henrich et al., 2014, 2015). This N400 component is usually associated with increased costs of lexical processing (Connolly et al., 1992; Kutas & Hillyard, 1980; Steinhauer & Connolly, 2008), which would theoretically fit clash and lapse, because the costs of their cognitive processing might be higher with respect to their irregular stress contribution. Any phonological parsing process that leads to lexical retrieval takes the stress of a word's syllables as an additional feature for decoding its lexical meaning (Church, 1987). ...
... Electrophysiologically, studies have examined speech perception in noise using the N400 event-related potential (e.g., Connolly et al., 1992; Aydelott et al., 2006; Obleser and Kotz, 2011; Strauß et al., 2013; Carey et al., 2014; Coulter et al., 2020), a negative-going component that peaks approximately 400 ms following an eliciting stimulus. The N400 is elicited by semantic stimuli and its amplitude is inversely related to the semantic expectancy of the stimulus, such that it is larger when a target is semantically unexpected compared to when it is semantically expected (Kutas and Hillyard, 1980; Kutas and Van Petten, 1994; Kutas, 1997). ...
... The N400 is elicited by semantic stimuli and its amplitude is inversely related to the semantic expectancy of the stimulus, such that it is larger when a target is semantically unexpected compared to when it is semantically expected (Kutas and Hillyard, 1980; Kutas and Van Petten, 1994; Kutas, 1997). Studies have found that N400 amplitude and the N400 effect (the difference in amplitude between unexpected and expected conditions) are attenuated, and the latency of the N400 is delayed, in noise compared to quiet (e.g., Connolly et al., 1992; Aydelott et al., 2006; Obleser and Kotz, 2011; Strauß et al., 2013; Carey et al., 2014), suggesting that despite the beneficial effect of semantic constraint on behavioral performance, a processing cost remains. ...
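For concreteness, the following sketch (not taken from any of the cited studies) computes the N400 effect as the unexpected-minus-expected difference wave and indexes its size by the mean amplitude in a 300-600 ms window and its timing by the 50% fractional-area latency, a measure that is relatively robust when comparing quiet and noise conditions. The sampling rate, window, and simulated grand averages are assumptions.

```python
# Hypothetical sketch of the two measures discussed above: the N400 effect
# (unexpected-minus-expected difference wave) and its latency, here indexed by
# 50% fractional-area latency rather than peak picking.
import numpy as np

FS = 500                          # Hz (assumed)
TIMES = np.arange(-0.1, 0.8, 1 / FS)
WIN = (TIMES >= 0.3) & (TIMES <= 0.6)

def n400_effect(avg_unexpected, avg_expected):
    """Difference wave: mean amplitude and 50% fractional-area latency."""
    diff = avg_unexpected - avg_expected
    mean_amp = diff[WIN].mean()                      # effect size in µV
    neg = np.clip(-diff[WIN], 0, None)               # negative-going area only
    cum = np.cumsum(neg)
    lat_idx = np.searchsorted(cum, 0.5 * cum[-1])    # 50% fractional-area point
    return mean_amp, TIMES[WIN][lat_idx] * 1000.0

# Simulated grand averages for quiet vs. noise (purely illustrative).
rng = np.random.default_rng(1)
quiet = n400_effect(rng.normal(-1.5, 0.3, TIMES.size), rng.normal(0, 0.3, TIMES.size))
noise = n400_effect(rng.normal(-0.8, 0.3, TIMES.size), rng.normal(0, 0.3, TIMES.size))
print("quiet: amp %.2f µV, latency %.0f ms" % quiet)
print("noise: amp %.2f µV, latency %.0f ms" % noise)
```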
... Statistical analyses of the induced time-frequency data consisted of a linear mixed-effects model with random effects for subjects, using the lme4 package (version 1.1-19) of R (version 3.5.1). Based on the typical distribution of the auditory N400 (Connolly et al., 1992; Connolly and Phillips, 1994; D'Arcy et al., 2004; van den Brink et al., 2006) and to reduce our familywise Type I error rate (Luck and Gaspelin, 2017), alpha power was operationalized as the average power in the 7.5-12 Hz frequency range at electrodes CPz and Pz. ...
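The analysis described in this excerpt can be sketched as follows; the example uses Python's statsmodels in place of R's lme4, and the data frame columns, channel set, and trial structure are invented purely for illustration.

```python
# Sketch of the analysis described above, using statsmodels instead of lme4.
# `tfr` is assumed to be induced power with shape (n_trials, n_channels, n_freqs,
# n_times); channel/frequency indices and data frame columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_trials = 200
freqs = np.arange(4, 30, 0.5)
ch_names = ["Cz", "CPz", "Pz"]
tfr = rng.normal(size=(n_trials, len(ch_names), freqs.size, 100))  # fake power

# Operationalize alpha power: average over 7.5-12 Hz at CPz and Pz, then over time.
alpha_band = (freqs >= 7.5) & (freqs <= 12)
roi = [ch_names.index(ch) for ch in ("CPz", "Pz")]
alpha = tfr[:, roi][:, :, alpha_band].mean(axis=(1, 2, 3))

# Trial metadata (hypothetical): subject ID, language, listening condition.
df = pd.DataFrame({
    "alpha": alpha,
    "subject": rng.integers(1, 25, n_trials).astype(str),
    "language": rng.choice(["L1", "L2"], n_trials),
    "condition": rng.choice(["quiet", "noise"], n_trials),
})

# Linear mixed-effects model with a random intercept per subject,
# analogous to lme4's  alpha ~ language * condition + (1 | subject).
model = smf.mixedlm("alpha ~ language * condition", df, groups=df["subject"])
print(model.fit().summary())
```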
Article
Full-text available
Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram (EEG) studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language (L2) proficiency and who varied in terms of age of L2 acquisition (AoA) from 0 (simultaneous bilinguals) to 15 years completed a speech perception in noise task. Participants were required to identify the final word of high and low semantically constrained auditory sentences such as “Stir your coffee with a spoon” vs. “Bob could have known about the spoon” in both of their languages and in both noise (multi-talker babble) and quiet during electrophysiological recording. We examined the effects of language, AoA, semantic constraint, and listening condition on participants’ induced alpha power during speech comprehension. Our results show an increase in alpha power when participants were listening in their L2, suggesting that listening in an L2 requires additional attentional control compared to the first language, particularly early in processing during word identification. Additionally, despite similar proficiency across participants, our results suggest that under difficult processing demands, AoA modulates the amount of attention required to process the second language.
... The earliest component that could be affected by our manipulation is the Phonological Mismatch Negativity (PMN), which is thought to reflect phonological processing sensitive to the expectations raised by the prior semantic context (Connolly and Phillips, 1994; D'Arcy et al., 2000; Newman et al., 2003). This component typically occurs with a frontocentral distribution (Connolly et al., 1990, 1992), although studies have reported a more widespread distribution of the PMN (Connolly and Phillips, 1994) and other work (cf. Lewendon et al., 2020) casts doubt on the existence of a reliable difference between PMN and N400 effects. ...
... Another component relevant to our study is the N400, which typically occurs with a centroparietal distribution in the visual modality (Kutas and Van Petten, 1988) and a frontocentral distribution in the auditory modality (Connolly et al., 1990, 1992). Both these components have also been reported in previous studies with young children (Friedrich and Friederici, 2005; Sheehan et al., 2007; Mani et al., 2012). ...
... In contrast, adults' sensitivity to the scalar term in the prototypical N400 window (300-500 ms) had a frontocentral topographical distribution similar to that reported by previous auditory PMN/N400 studies (Connolly et al., 1990, 1992). Interpretation of this finding is therefore fairly straightforward: negativity to the scalar term increased in contexts where the alternative scalar term might have been more appropriate. ...
Article
Full-text available
How quickly do children and adults interpret scalar lexical items in speech processing? The current study examined interpretation of the scalar terms some vs. all in contexts where either the stronger (some = not all) or the weaker interpretation was permissible (some allows all). Children and adults showed increased negative deflections in brain activity following the word some in some-infelicitous versus some-felicitous contexts. This effect was found as early as 100 ms across central electrode sites (in children), and 300–500 ms across left frontal, fronto-central, and centro-parietal electrode sites (in children and adults). These results strongly suggest that young children (aged between 3 and 4 years) as well as adults quickly have access to the contextually appropriate interpretation of scalar terms.
... On the other hand, cloze probability measures the probability of a word in a sentence. Following this logic, the way associative frequency affected grammaticality, wherein low associative frequency enhanced morphosyntactic detection, was similar to previous studies of spoken language comprehension (Brunellière & Soto-Faraco, 2015; Connolly et al., 1990, 1992), in the sense that sentences with low cloze probability increased the negative amplitude compared with high cloze probability sentences. ...
... With regard to the N400 time window, we found that the associative representations were accessed and that the way they affected the system was similar to previous studies of spoken language comprehension (Brunellière & Soto-Faraco, 2015; Connolly et al., 1990, 1992). ...
... The associative frequency between pronouns and verbal inflections affected the processing of verb targets in the same manner as semantic constraints provided within a sentence. As in previous electrophysiological studies of spoken language comprehension focusing on the influence of semantic constraints of sentence context (e.g., Brunellière & Soto-Faraco, 2015; Connolly et al., 1990, 1992), target words embedded in a low constraining context elicited a response with greater negative amplitude, in a time window from 300 ms after word onset, in comparison to the same targets when embedded in a high constraining (hence, more predictable) context. It can be argued that a high associative frequency between a pronoun and a verbal inflection provides a highly constraining context from the moment the pronoun is heard, which can cause a strong pre-activation of the associated verbal inflection. ...
Thesis
This thesis is an attempt to contribute more information about subject-verb agreement in spoken language processing. Subject-verb agreement carries thematic role information that informs the listener who performs the action and how many people are involved. To understand the meaning of a sentence, it is therefore essential to recognize words sharing subject-verb relationships. By recording brain activity with the electroencephalography (EEG) method, it has been found that abstract morphosyntactic features (number, person, and gender) were accessed and separately used during agreement processing in reading when morphosyntactic violations were introduced and abstract morphosyntactic features were manipulated. Despite this, we know little about the nature of the representations and processes involved in agreement processing. Using brain measures, this thesis investigates the nature of the representations operating in subject-verb agreement processing by examining two levels of representation (abstract and associative), the flexibility in accessing these two levels of representation, and the role of prediction in the computation of subject-verb dependencies. To this end, we examine subject-verb agreement in spoken language processing in French.
To achieve these three aims, we conducted three studies in which we manipulated both the nature of agreement violations in terms of abstract features (single violation of the person feature, single violation of the number feature, and double violation of the person and number features) and associative representations, by contrasting pronouns that had either a high co-occurrence frequency with one verbal inflection in French language use (high associative frequency) or a low co-occurrence frequency (low associative frequency). Our ERP results for spoken verbs preceded by pronouns confirmed access to abstract feature representations in spoken language as soon as the verbal inflection is recognized. Moreover, associative representations were also found to be used in the processing of subject-verb agreement. By using the associative representations after hearing the pronoun, the cognitive system actively makes a prediction about the upcoming verbal inflection, thereby affecting verbal processing at low levels from the initial phoneme onward.
For the second aim, we also manipulated task demands in two EEG experiments by using a lexical decision task (LDT) or a noun categorization task. Our ERP results time-locked to verbs preceded by pronouns showed that there is flexibility in accessing the abstract representations, such that their access was enhanced by the lexical decision task. By contrast, sensitivity to associative representations between the pronominal subject and the verbal inflection was observed regardless of task demands, in both the lexical decision and noun categorization tasks.
Regarding the third aim, we conducted a magnetoencephalographic (MEG) study with the same stimuli as in our previous EEG experiments. In line with our previous findings, MEG data time-locked to verb onset showed an influence of associative frequency at an early stage of verbal processing, at the phonological level, in the primary auditory cortex. This suggests that higher-level representations such as associative representations were used to preactivate information related to the upcoming verbal inflection immediately after the recognition of pronouns, causing low-level processing of new information to be affected.
This prediction in subject-verb agreement was also associated with activation of the inferior frontal cortex and motor areas. Overall, this thesis makes a strong contribution to the understanding of subject-verb agreement by showing flexible access to different representational levels and the role of prediction based on statistical information in language use.
... The N200 indexes phonological processing after the recognition of initial phonemes, in contact with lexical- and sentence-level processes. Therefore, when the initial phonemes of the perceived word do not match the initial phonemes of the word expected from the sentence constraints, a negative shift is elicited (Connolly, Phillips, Stewart, & Brake, 1992). ...
... Several authors have found that the P200 is more associated with the phonological content of speech sounds than the N100, since its amplitude reacted to incongruency between auditory and visual information (Klucharev et al., 2003; Stekelenburg & Vroomen, 2007) and its speeding-up occurred only in speech events. The function of the P200 may be understood as being somewhat similar to that of the N200 in word recognition in sentence contexts, since it is thought to reflect the phonological processing that occurs during the recognition of the initial phonemes in a word (Connolly et al., 1992). In the context of auditory sentence processing, phonological processing however interacts with lexical- and sentence-level processes over the N200 component (Connolly et al., 1992). ...
... The function of the P200 may be understood as being somewhat similar to that of the N200 in word recognition in sentence contexts, since it is thought to reflect the phonological processing that occurs during the recognition of the initial phonemes in a word (Connolly et al., 1992). In the context of auditory sentence processing, phonological processing however interacts with lexical- and sentence-level processes over the N200 component (Connolly et al., 1992). A possible explanation for the increased effect of semantic congruency in audiovisual speech is that the latter acts on lexical activation thanks to phonological processing at the sub-lexical level. ...
Article
In everyday communication, natural spoken sentences are expressed in a multisensory way through auditory signals and speakers' visible articulatory gestures. An important issue is to know whether audiovisual speech plays a main role in the linguistic encoding of an utterance up to access to meaning. To this end, we conducted an event-related potential experiment during which participants listened passively to spoken sentences and then performed a lexical recognition task. The results revealed that N200 and N400 waves had a greater amplitude after semantically incongruous words than after expected words. This effect of semantic congruency was increased over the N200 in the audiovisual trials. Words presented audiovisually also elicited a reduced amplitude of the N400 wave and facilitated recovery in memory. Our findings shed light on the influence of audiovisual speech on the understanding of natural spoken sentences by acting on the early stages of word recognition in order to access a lexical-semantic network.
... According to prior ERP literature on spoken-language comprehension (van den Brink, Brown, & Hagoort, 2001; van den Brink & Hagoort, 2004), the N100 component mostly reflects bottom-up processing of the incoming signal, based on the acoustic properties of word onset. According to this view, the top-down processes driven by sentential context begin to influence word recognition only from about 200 ms after the onset of the incoming word, as revealed by differential ERP responses in amplitude between strongly and weakly constraining contexts (Connolly, Phillips, Stewart, & Brake, 1992; Connolly, Stewart, & Phillips, 1990). Some authors (Connolly & Phillips, 1994; Connolly et al., 1990, 1992; Hagoort & Brown, 2000) proposed that the N200, occurring between 150 and 300 ms, is triggered by mismatches at a phonological level with the lexical expectancies from the sentence context, whereas the N400, peaking around 400 ms after stimulus onset with a more posterior distribution across the scalp, reflects the consequences of contextually based expectations regarding upcoming words at a lexicosemantic level. ...
... According to this view, the top-down processes driven by sentential context begin to influence word recognition only from about 200 ms after the onset of the incoming word, as revealed by differential ERP responses in amplitude between strongly and weakly constraining contexts (Connolly, Phillips, Stewart, & Brake, 1992; Connolly, Stewart, & Phillips, 1990). Some authors (Connolly & Phillips, 1994; Connolly et al., 1990, 1992; Hagoort & Brown, 2000) proposed that the N200, occurring between 150 and 300 ms, is triggered by mismatches at a phonological level with the lexical expectancies from the sentence context, whereas the N400, peaking around 400 ms after stimulus onset with a more posterior distribution across the scalp, reflects the consequences of contextually based expectations regarding upcoming words at a lexicosemantic level. Additionally, ERP studies in written language have shown that the degree of semantic constraint does not modulate the N400 amplitude during the processing of semantically plausible words when they are not the most expected word for the sentence (Kutas & Hillyard, 1984; Thornhill & Van Petten, 2012; Van Petten & Luka, 2012). ...
... This conclusion is in accordance with the study of Van Petten, Coulson, Rubin, Plante, and Parks (1999), which showed a single N400 wave triggered by the top-down predictions due to the sentence context. Regarding the effects of semantic constraint, and based on Connolly and colleagues' results (Connolly et al., 1990, 1992), target words embedded in low semantically constraining sentence frames should evoke a response with greater negative amplitude in comparison to the same targets when embedded in high constraining (hence, more predictable) sentence frames. This shift may begin as early as 250 ms (putatively, the N200 or early N400 effect) and persist over later processing stages around the N400 window. ...
Article
This study addresses how top-down predictions driven by phonological and semantic information interact during spoken-word comprehension. To do so, we measured event-related potentials to words embedded in sentences that varied in the degree of semantic constraint (high or low) and in regional accent (congruent or incongruent) with respect to the target word pronunciation. The data showed a negative amplitude shift following phonological mismatch (target pronunciation incongruent with respect to sentence regional accent). Here, we show that this shift is modulated by sentence-level semantic constraints over latencies encompassing auditory (N100) and lexical (N400) components. These findings suggest a fast influence of top-down predictions and the interplay with bottom-up processes at sublexical and lexical levels of analysis.
... A PMN is generated if a stimulus word onset does not acoustically match the anticipated word form (Connolly & Phillips, 1994). The PMN has been reported consistently at centroanterior electrodes in time windows between ~220-350 ms (Connolly & Phillips, 1994; Connolly, Phillips, Stewart, & Brake, 1992; Connolly, Service, D'Arcy, Kujala, & Alho, 2001; Connolly, Stewart, & Phillips, 1990; Newman & Connolly, 2009; Newman, Connolly, Service, & McIvor, 2003; van den Brink, Brown, & Hagoort, 2001). Before 200 ms, a centroposterior effect has also been described, with onsets at 130-140 ms (D'Arcy, Connolly, & Crocker, 2000; van den Brink et al., 2001). ...
... An early PMN was produced by words whose onset phonemes mismatched the predicted word. Studies with a later PMN increase have investigated word onsets that occurred in semantically less constraining contexts (Connolly et al., 1990, 1992), onsets that did not match that of the highest cloze probability word (Connolly & Phillips, 1994), or unfulfilled expectations formed by instructing participants to alter the onset consonant of a stimulus word (Connolly et al., 2001; Kujala, Alho, Service, Ilmoniemi, & Connolly, 2004; Newman & Connolly, 2009; Newman et al., 2003). The later PMN has been source-localized to the left frontal lobe using ERPs (Connolly et al., 2001) and to the left anterior temporal lobe using magnetoencephalography (MEG) (Kujala et al., 2004). ...
Article
Full-text available
We propose that a recently discovered event-related potential (ERP) component—the pre-activation negativity (PrAN)—indexes the predictive strength of phonological cues, including segments, word tones, and sentence-level tones. Specifically, we argue that PrAN is a reflection of the brain’s anticipation of upcoming speech (segments, morphemes, words, and syntactic structures). Findings from a long series of neurolinguistic studies indicate that the effect can be divided into two time windows with different possible brain sources. Between 136–200 ms from stimulus onset, it indexes activity mainly in the primary and secondary auditory cortices, reflecting disinhibition of neurons sensitive to the expected acoustic signal, as indicated by the brain regions’ response to predictive certainty rather than sound salience. After ~200 ms, PrAN is related to activity in Broca’s area, possibly reflecting inhibition of irrelevant segments, morphemes, words, and syntactic structures.
... First, the current study was specifically designed to investigate the N400 ERP component (although an exploratory analysis found evidence for a late frontal response that was also pupil-mediated), which represents only one aspect of the perceptual and cognitive processing of speech. In fact, most prior work using the ERP technique to investigate the impacts of acoustic challenge has likewise mostly focused on the N400 component (e.g., Strauß, Kotz, & Obleser, 2013; Daltrozzo, Wioland, & Kotchoubey, 2012; Obleser & Kotz, 2011; Aydelott et al., 2006; Connolly, Phillips, Stewart, & Brake, 1992). Importantly, however, the processing represented by the N400 may be relatively effortless (Federmeier, 2022; Kutas & Federmeier, 2011) compared with other stages of integrative processing that take place later than the N400 and that may be more sensitive to the conscious allocation of effort (Aurnhammer, Delogu, Brouwer, & Crocker, 2023; Payne, Stites, & Federmeier, 2019; Batterink & Neville, 2013). ...
Article
Full-text available
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences—presented in quiet or in noise—that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
... There is relatively little empirical evidence demonstrating how background noise affects listeners' ability to generate predictions based on linguistic context. Evidence from electroencephalography (EEG) suggests that background noise disrupts listeners' ability to generate predictions: brain wave responses to unexpected semantic input (i.e., N400 effects) tend to be delayed when speech is presented in background noise (Connolly et al., 1992; Silcox and Payne, 2021; Hsin et al., 2023). On this account, listeners may be less efficient when facing greater perceptual complexity (i.e., acoustic-phonetic similarity) associated with informational masking (e.g., babble noise) as compared to energetic masking (e.g., SSN). ...
Article
Full-text available
Introduction
Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing.
Methods
We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits.
Results
We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition.
Discussion
These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
... Regarding the influence of noise on semantic processing, such as the N400 component (Kutas and Federmeier, 2011; Kutas and Hillyard, 1984), several studies have reported robust or increased amplitude of the N400 under mild degradation, which might be related to additional cognitive effort (Jamison et al., 2016; Romei et al., 2011; Zendel et al., 2015), while other studies reported reduced/delayed N400 for degraded speech, which might be related to damaged signal quality (Aydelott et al., 2006; Connolly et al., 1992; Daltrozzo et al., 2012; Obleser and Kotz, 2011; Strauß et al., 2013). These mixed results provided valuable information on the complex relationship between noise and semantic processing. ...
Article
Despite the distortion of speech signals caused by unavoidable noise in daily life, our ability to comprehend speech in noisy environments is relatively stable. However, the neural mechanisms underlying reliable speech-in-noise comprehension remain to be elucidated. The present study investigated the neural tracking of acoustic and semantic speech information during noisy naturalistic speech comprehension. Participants listened to narrative audio recordings mixed with spectrally matched stationary noise at three signal-to-noise ratio (SNR) levels (no noise, 3 dB, -3 dB), and 60-channel electroencephalography (EEG) signals were recorded. A temporal response function (TRF) method was employed to derive event-related-like responses to the continuous speech stream at both the acoustic and the semantic levels. Whereas the amplitude envelope of the naturalistic speech was taken as the acoustic feature, word entropy and word surprisal were extracted via natural language processing methods as two semantic features. Theta-band frontocentral TRF responses to the acoustic feature were observed at around 400 ms following speech fluctuation onset at all three SNR levels, and the response latencies were more delayed with increasing noise. Delta-band frontal TRF responses to the semantic feature of word entropy were observed at around 200 to 600 ms preceding speech fluctuation onset at all three SNR levels. The response latencies became more leading with increasing noise and were correlated with comprehension performance and perceived speech intelligibility. While the responses following speech acoustics were consistent with previous studies, our study revealed the robustness of the leading responses to speech semantics, which suggests a possible predictive mechanism at the semantic level for maintaining reliable speech comprehension in noisy environments.
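A temporal response function of the kind used in this study is essentially a regularized regression of the EEG on time-lagged copies of a continuous stimulus feature (e.g., the amplitude envelope or word surprisal). The sketch below shows the idea with ridge regression on simulated data; the sampling rate, lag range, regularization strength, and feature are assumptions rather than the authors' settings.

```python
# Illustrative TRF estimation: ridge regression of EEG on time-lagged copies of a
# continuous stimulus feature (e.g., amplitude envelope or word surprisal).
import numpy as np
from sklearn.linear_model import Ridge

FS = 64                                           # feature/EEG sampling rate in Hz (assumed)
LAGS = np.arange(-int(0.2 * FS), int(0.8 * FS))   # lags from -200 ms to +800 ms

def lagged_design(feature, lags):
    """Stack time-shifted copies of a 1-D feature into an (n_samples, n_lags) matrix."""
    X = np.zeros((feature.size, lags.size))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = feature[:feature.size - lag]
        else:
            X[:lag, j] = feature[-lag:]
    return X

rng = np.random.default_rng(3)
n_samples = 5 * 60 * FS                                        # five minutes of fake data
envelope = np.abs(rng.normal(size=n_samples))                  # stand-in stimulus feature
eeg = np.convolve(envelope, rng.normal(size=FS), mode="same")  # fake single EEG channel

X = lagged_design(envelope, LAGS)
trf = Ridge(alpha=1e3).fit(X, eeg).coef_          # TRF weights, one per lag
peak_lag_ms = LAGS[np.argmax(np.abs(trf))] / FS * 1000
print(f"TRF peak at {peak_lag_ms:.0f} ms relative to feature onset")
```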
... Studies on speech comprehension have also demonstrated the effects of predictability and semantic congruency on the N400 and LPC, suggesting that listeners may use contextual information to predict upcoming linguistic input (McCallum et al., 1984; Holcomb and Neville, 1991; Van Petten et al., 1999; Hagoort and Brown, 2000; van den Brink and Hagoort, 2004; Boudewyn et al., 2015). However, only a few N400 studies have examined how degraded speech affects the use of context during listening comprehension (Connolly et al., 1992; Aydelott et al., 2006; Obleser and Kotz, 2011; Strauß et al., 2013; Coulter et al., 2021; Silcox and Payne, 2021). A typical finding across these studies is that the amplitude and latency of the N400 predictability effect were reduced and delayed, respectively, as a function of decreased speech clarity. ...
Article
Full-text available
Introduction
Speech comprehension involves context-based lexical predictions for efficient semantic integration. This study investigated how noise affects the predictability effect on event-related potentials (ERPs) such as the N400 and late positive component (LPC) in speech comprehension.
Methods
Twenty-seven listeners were asked to comprehend sentences in clear and noisy conditions (hereinafter referred to as “clear speech” and “noisy speech,” respectively) that ended with a high- or low-predictability word during electroencephalogram (EEG) recordings.
Results
The study results regarding clear speech showed the predictability effect on the N400, wherein low-predictability words elicited a larger N400 amplitude than did high-predictability words in the centroparietal and frontocentral regions. Noisy speech showed a reduced and delayed predictability effect on the N400 in the centroparietal regions. Additionally, noisy speech showed a predictability effect on the LPC in the centroparietal regions.
Discussion
These findings suggest that listeners achieve comprehension outcomes through different neural mechanisms according to listening conditions. Noisy speech may be comprehended with a second-pass process that possibly functions to recover the phonological form of degraded speech through phonetic reanalysis or repair, thus compensating for decreased predictive efficiency.
... Next, we reiterate that stimulation rate needs to be monitored carefully when testing unresponsive individuals. It has been well established that the N400 is exquisitely sensitive to stimulus presentation characteristics, including accelerated word presentation [72] and stimulus degradation [73][74][75]. As illustrated in Figure 5, absence of a response may reflect an attenuated ability to perceive the stimuli, rather than a lack of consciousness, in unresponsive individuals. ...
Article
Full-text available
A consistent limitation when designing event-related potential paradigms and interpreting results is a lack of consideration of the multivariate factors that affect their elicitation and detection in behaviorally unresponsive individuals. This paper provides a retrospective commentary on three factors that influence the presence and morphology of long-latency event-related potentials—the P3b and N400. We analyze event-related potentials derived from electroencephalographic (EEG) data collected from small groups of healthy youth and healthy elderly to illustrate the effect of paradigm strength and subject age; we analyze ERPs collected from an individual with severe traumatic brain injury to illustrate the effect of stimulus presentation speed. Based on these critical factors, we support that: (1) the strongest paradigms should be used to elicit event-related potentials in unresponsive populations; (2) interpretation of event-related potential results should account for participant age; and (3) speed of stimulus presentation should be slower in unresponsive individuals. The application of these practices when eliciting and recording event-related potentials in unresponsive individuals will help to minimize result interpretation ambiguity, increase confidence in conclusions, and advance the understanding of the relationship between long-latency event-related potentials and states of consciousness.
... For instance, Connolly and Phillips [34] report an early negativity (between 150 and 300 ms) in response to sentence-final words phonologically inconsistent with the semantic context thus far. This component typically occurs with a fronto-central distribution [37,38] (but cf. [34]). ...
Article
Full-text available
Sign languages use the horizontal plane to refer to discourse referents introduced at referential locations. However, the question remains whether the assignment of discourse referents follows a particular default pattern as recently proposed, such that two new discourse referents are respectively assigned to the right (ipsilateral) and left (contralateral) side of (right-handed) signers. The present event-related potential study on German Sign Language investigates the hypothesis that signers assign distinct and contrastive referential locations to discourse referents even in the absence of overt localization. Using a semantic mismatch design, we constructed sentence sets where the second sentence was either consistent or inconsistent with the used pronoun. Semantic mismatch conditions evoked an N400, whereas a contralateral index sign engendered a Phonological Mismatch Negativity. The current study provides supporting evidence that signers are sensitive to the mismatch and make use of a default pattern to assign distinct and contrastive referential locations to discourse referents.
... Consistent with this interpretation, in a written word study, a similar right frontal effect has been associated with orthographic processing during rhyme judgments (Rugg and Barrett, 1987). Although the timing and distribution are not perfectly matched, this early component may also be related to the PMN observed in auditory studies, elicited by mismatch between an expected and presented initial phoneme (e.g., Connolly et al., 1992, 1995; Connolly and Phillips, 1994); however, in this case, one might have expected a similar effect for orthographically mismatched pairs, which was not significant. An effect of orthographic congruence that varied by group was also apparent in the middle (N400/N450) time window, which has consistently been associated with lexical access and lexicosemantic processing at multiple levels of representation (e.g., Coch and Holcomb, 2003; Grainger and Holcomb, 2009; Laszlo and Federmeier, 2011). ...
Article
Full-text available
In an event-related potential (ERP) study using picture stimuli, we explored whether spelling information is co-activated with sound information even when neither type of information is explicitly provided. Pairs of picture stimuli presented in a rhyming paradigm were varied by both phonology (the two images in a pair had either rhyming, e.g., boat and goat, or non-rhyming, e.g., boat and cane, labels) and orthography (rhyming image pairs had labels that were either spelled the same, e.g., boat and goat, or not spelled the same, e.g., brain and cane). Electrophysiological picture rhyming (sound) effects were evident in terms of both N400/N450 and late effect amplitude: Non-rhyming images elicited more negative waves than rhyming images. Remarkably, the magnitude of the late ERP rhyming effect was modulated by spelling – even though words were neither explicitly seen nor heard during the task. Moreover, both the N400/N450 and late rhyming effects in the spelled-the-same (orthographically matched) condition were larger in the group with higher scores (by median split) on a standardized measure of sound awareness. Overall, the findings show concomitant meaning (semantic), sound (phonological), and spelling (orthographic) activation for picture processing in a rhyming paradigm, especially in young adults with better reading skills. Not outwardly lexical but nonetheless modulated by reading skill, electrophysiological picture rhyming effects may be useful for exploring co-activation in children with dyslexia.
... The N200 effect is thought to functionally reflect the perceptual comparison of a target word with an expected word form (van den Brink, Brown, & Hagoort, 2002). This is based on findings showing that participants demonstrate larger N200 responses when they hear a target word that has an initial sound that mismatches the initial sound of the expected word (e.g., participants hear target word 'queen' when the expected word is 'eyes') given the sentence context (e.g., 'Phil put some drops in his ___'), compared to when they hear a target word (e.g., 'icicles') that has an initial sound that matches the initial sound of the expected word (Connolly, Stewart, & Phillips, 1990; Connolly, Phillips, Stewart, & Brake, 1992; Connolly & Phillips, 1994; Hagoort & Brown, 2000; van den Brink et al., 2002). ...
Chapter
Full-text available
Much of the current literature on selective social learning focuses on the external factors that trigger children’s selectivity. In this chapter, we review behavioral, eye-tracking, and electrophysiological evidence for how children selectively learn words—what the internal processes are that enable them to block learning when they doubt the epistemic quality of the source. We propose that young children engage a semantic-blocking mechanism that allows for the initial encoding of words but disrupts the creation of lexico-semantic representations. We offer a framework that can be extended to other selective word learning contexts to investigate whether a similar semantic-gating mechanism is engaged in different contexts. Lastly, we propose several implications for the evidence we review on the standard model of word learning.
... In sentence context, critical words were embedded in continuous speech and did not follow a period of silence, so the early ERP responses to these words were not clearly visible. This is in line with previous studies showing a reduction/disappearance of early ERP peaks due to the lack of silence between successive auditory stimuli (e.g., Connolly, Phillips, Stewart, & Brake, 1992; Hagoort & Brown, 2000; Näätänen & Picton, 1987). Thus, we performed our analysis only over the 400-700 ms time window. ...
Article
Numerous studies suggest that audiovisual speech influences lexical processing. However, it is not clear which stages of lexical processing are modulated by audiovisual speech. In this study, we examined the time course of the access to word representations in long-term memory when they were presented in auditory-only and audiovisual modalities. We exploited the effect of prior access to a word on subsequent access to that word, known as the word repetition effect. Using event-related potentials, we identified an early time window at about 200 milliseconds and a late time window starting at about 400 milliseconds related to the word repetition effect. Our results showed that the word repetition effect over the early time window was modulated by the speech modality, while this influence of speech modality was not found over the late time window. Visual cues thus play a role in the early stages of lexical processing.
Article
Full-text available
The Phonological Mismatch Negativity (PMN) is an ERP component said to index the processing of phonological information, and is known to increase in amplitude when phonological expectations are violated. For example, in a context that generates expectation of a certain phoneme, the PMN will become relatively more negative if the phoneme is switched for an alternative. The response is comparable to other temporally proximate components, insofar as it indicates a neurological response to unexpected auditory input, but it is still regarded as distinct by the field on the basis of its proposed specific sensitivity to phonology. Despite this, reports of the PMN overlap notably, both in temporal and topographic distribution, with the Mismatch Negativity (MMN) and the N400, and limited research to date has been conducted to establish whether these extant distinctions withstand testing. In the present study, we investigate the PMN’s sensitivity to non-linguistic mismatches so as to test the response’s claimed specific sensitivity to language. Participants heard primes—three-syllable words—played simultaneously with three-note tunes, with the instruction to attend exclusively to either the linguistic or the musical content. They were then tasked with removing the first syllable (phoneme manipulation) or note (music manipulation) to form the target. Targets either matched or mismatched primes, thus achieving physically identical note or phoneme mismatches. Results show that a PMN was not elicited in the musical mismatch condition, a finding which supports suggestions that the PMN may be a language-specific response. However, our results also indicate that further research is necessary to determine the relationship between the PMN and N400. Though our paper probes a previously unstudied dimension of the PMN, questions still remain surrounding whether the PMN, although seemingly language-specific, is truly a phonology-specific component.
Article
The application of wearable magnetoencephalography using optically-pumped magnetometers has drawn extensive attention in the field of neuroscience. Electroencephalogram systems can cover the whole head and reflect the overall activity of a large number of neurons. The efficacy of optically-pumped magnetometers in detecting event-related components can be validated through electroencephalogram results. Multivariate pattern analysis is capable of tracking the evolution of neurocognitive processes over time. In this paper, we adopted a classical Chinese semantic congruity paradigm and separately collected electroencephalogram and optically-pumped magnetometer signals. Then, we verified the consistency of the optically-pumped magnetometers and the electroencephalogram in detecting the N400 using a mutual information index. Multivariate pattern analysis revealed a difference in decoding performance between these two modalities, which was further validated by dynamic/stable coding analysis on the temporal generalization matrix. The results from searchlight analysis provided a neural basis for this dissimilarity at the magnetoencephalography source level and the electroencephalogram sensor level. This study opens a new avenue for investigating the brain’s coding patterns using wearable magnetoencephalography and reveals the differences in sensitivity between the two modalities in reflecting neural representation patterns.
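Temporal generalization, mentioned above as the basis of the dynamic/stable coding analysis, trains a decoder at each time point and tests it at every other time point, producing a time-by-time accuracy matrix. The sketch below illustrates the procedure with a simple logistic-regression classifier on simulated sensor data; the data shapes, classifier choice, and single train/test split are assumptions, not the study's actual pipeline.

```python
# Illustrative temporal-generalization decoding: train a classifier at each time
# point and test it at every other time point, yielding a time x time matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 200, 32, 60
X = rng.normal(size=(n_trials, n_channels, n_times))   # fake sensor data
y = rng.integers(0, 2, n_trials)                        # congruent vs. incongruent labels

idx_train, idx_test = train_test_split(np.arange(n_trials), test_size=0.25,
                                       stratify=y, random_state=0)
gen = np.zeros((n_times, n_times))                      # train-time x test-time accuracy
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[idx_train, :, t_train], y[idx_train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[idx_test, :, t_test], y[idx_test])

# The diagonal reflects time-specific ("dynamic") coding; square off-diagonal
# regions of above-chance accuracy reflect sustained ("stable") coding.
print("peak decoding accuracy:", gen.max())
```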
Article
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
Article
The Phonological Mapping Negativity (PMN) is an event-related potential component thought to index pre-lexical phonological processing. The response has long been considered distinct from the temporally-proximate Mismatch Negativity (MMN) – a distinction that primarily rests on the assumption that the PMN, unlike the MMN, cannot be elicited in inattentive contexts, thus implying differing underlying auditory-cortex mechanisms. Despite this, no study to date has established whether elicitation of an inattentive PMN response is possible. Here, we tested this assumption in two experiments during which participants heard phonological mismatches whilst engaging in a distractor task (experiment 1) or watching a film (experiment 2). Our results showed no consistent evidence for an inattentive PMN. Though attention may indeed serve to distinguish the two components, our results highlight consistent discrepancies in the temporal, topographical, and functional characteristics of the PMN that undermine efforts to establish its significance in the electrophysiological timeline of speech processing.
Article
The experimental study of artificial language learning has become a widely used means of investigating the predictions of theories of language learning and representation. Although much is now known about the generalizations that learners make from various kinds of data, relatively little is known about how those representations affect speech processing. This paper presents an event-related potential (ERP) study of brain responses to violations of lab-learned phonotactics. Novel words that violated a learned phonotactic constraint elicited a larger Late Positive Component (LPC) than novel words that satisfied it. Similar LPCs have been found for violations of natively acquired linguistic structure, as well as for violations of other types of abstract generalizations, such as musical structure. We argue that lab-learned phonotactic generalizations are represented abstractly and affect the evaluation of speech in a manner that is similar to natively acquired syntactic and phonological rules.
Article
Cortical auditory evoked potentials (CAEP) are bioelectrical brain responses to sound stimuli, generated in neural centers located at the higher levels of auditory information analysis. Many experimental studies conducted to date show that recording and evaluating these responses offers considerable possibilities in audiological diagnostics and in other fields of science in which it is necessary or useful to assess the functional state of brain centers and of the processes involved in processing sound stimuli. This paper reviews the sensory (exogenous) and event-related (endogenous) components of cortical auditory evoked potentials most frequently described in the literature, along with examples of the clinical application of these components in the assessment and diagnosis of central auditory processes and the related cognitive and language processes.
Article
Objectives: The purpose of this study was to evaluate the effect of signal-to-noise ratio (SNR) and working memory capacity on the auditory processing of the elderly by comparing the average amplitude of event-related potentials (ERPs) between groups in a sentence plausibility judgment task in which SNR was controlled. Methods: A total of 26 elderly people participated in this study and, based on the results of a working memory test, were divided into a high working memory (high WM) group (N = 13) and a low working memory (low WM) group (N = 13). The sentence stimuli consisted of plausible and implausible sentences. Implausible sentences were manipulated to cause a semantic violation in the verb. White noise was added to the predicate portion of each recorded sentence to produce –5 dB, 0 dB, and 5 dB SNR conditions. Results: In the behavioral analysis, differences between the SNRs were significant. Performance was poorest in the –5 dB SNR condition, confirming that noise is a variable that increases cognitive load and difficulty in auditory processing. In the ERP analysis, the mean amplitude of the high WM group was significantly greater than that of the low WM group, and a distinct difference in N400 components between the groups was also observed in the grand average waveform. Conclusion: Intergroup differences were more evident in the SNR conditions requiring more listening effort. This may show that the low WM group had an inferior ability to detect semantic implausibility in real time compared to the high WM group.
Article
The N400 event-related brain potential (ERP) can be used to evaluate language comprehension and may be a particularly powerful tool for the assessment of individuals who are behaviourally unresponsive. This study presents a set of semantic violation sentences developed in Canadian French and characterizes their ability to elicit an N400 effect in healthy adults. A novel set of 100 French sentences was created and normed through two surveys that assessed sentence cloze probability (n = 98) and semantic plausibility (n = 99). The best 80 sentences (40 congruent; 40 incongruent) were selected for the final stimulus set and tested for their ability to elicit N400 effects in 33 French-speaking individuals. The final stimulus set successfully generated an N400 effect in the grand average across all individuals, and in the grand averages within age groups (young, middle-aged, and older adults). At the single-subject level, the final stimulus set elicited N400 effects in 76% of the participants. The feasibility of using this stimulus set to assess semantic processing in behaviourally unresponsive individuals was demonstrated in a case example of a French-speaking individual with a disorder of consciousness. These sentences enable the inclusion of Canadian French speakers in this simple assessment of language comprehension abilities.
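Single-subject N400 effects of the sort reported here (detected in 76% of participants) are typically assessed by comparing single-trial mean amplitudes between incongruent and congruent endings in the N400 window. The following is a minimal sketch of such a test on simulated trials; the electrode, time window, statistical test, and data are assumptions, not the stimulus set's validation procedure.

```python
# Minimal sketch of a single-subject N400 effect test: compare single-trial mean
# amplitudes (300-500 ms, one centro-parietal channel) between incongruent and
# congruent sentence endings.
import numpy as np
from scipy import stats

FS = 250
TIMES = np.arange(-0.2, 1.0, 1 / FS)
WIN = (TIMES >= 0.3) & (TIMES <= 0.5)

def has_n400_effect(congruent, incongruent, alpha=0.05):
    """congruent/incongruent: (n_trials, n_samples) single-trial epochs at Pz."""
    cong_amp = congruent[:, WIN].mean(axis=1)
    incong_amp = incongruent[:, WIN].mean(axis=1)
    # One-tailed test: incongruent endings should be more negative than congruent.
    t, p_two_sided = stats.ttest_ind(incong_amp, cong_amp)
    p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
    return p_one_sided < alpha

rng = np.random.default_rng(4)
congruent = rng.normal(0.0, 3.0, (40, TIMES.size))
incongruent = rng.normal(-2.0, 3.0, (40, TIMES.size))   # more negative on average
print("N400 effect detected:", has_n400_effect(congruent, incongruent))
```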
Article
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words (“The prisoners were planning their escape/party”) or were low-constraint sentences with unexpected sentence-final words (“All day she thought about the party”). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Article
Gender-related characteristics of speech, such as prosody and linguistic style, influence how a person is understood. We investigated electrophysiological correlates of these characteristics. Twenty-four people listened to sentences spoken by a female who manipulated her prosody by using a feminine and an imitated masculine voice, and her linguistic style by using function words differentially associated with males and females. Event-related potentials unexpectedly showed a larger N400 elicited by imitated masculine compared to feminine prosody. A prosody by linguistic style interaction was also found in late positive components and a later window, where sentences congruent with speaker sex and gender (i.e. feminine prosody, linguistic style, and voice) were more negative-going than sentences that were not. Further results showed less upper-alpha (∼10–13 Hz) event-related desynchronisation with imitated masculine compared to feminine prosody in a late time window. These results suggest that gender-atypical speech affects early semantic processing and reduces later semantic processing.
Article
Although bilinguals benefit from semantic context while perceiving speech-in-noise in their native language (L1), the extent to which bilinguals benefit from semantic context in their second language (L2) is unclear. Here, 57 highly proficient English–French/French–English bilinguals, who varied in L2 age of acquisition, performed a speech-perception-in-noise task in both languages while event-related brain potentials were recorded. Participants listened to and repeated the final word of sentences high or low in semantic constraint, in quiet and with a multi-talker babble mask. Overall, our findings indicate that bilinguals do benefit from semantic context while perceiving speech-in-noise in both their languages. Simultaneous bilinguals showed evidence of processing semantic context similarly to monolinguals. Early sequential bilinguals recruited additional neural resources, suggesting more effective use of semantic context in L2, compared to late bilinguals. Semantic context use was not associated with bilingual language experience or working memory.
Article
Current theories of language comprehension posit that readers and listeners routinely try to predict the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
Article
Full-text available
How the language processing system handles formulaic language such as idioms is a matter of debate. We investigated the activation of constituent meanings by means of predictive processing in an eye-tracking experiment and in two ERP experiments (auditory and visual). In the eye-tracking experiment, German-speaking participants listened to idioms in which the final word was excised (Hannes let the cat out of the . . .). Well before the offset of these idiom fragments, participants fixated on the correct idiom completion (bag) more often than on unrelated distractors (stomach). Moreover, there was an early fixation bias towards semantic associates (basket) of the correct completion, which ended shortly after the offset of the fragment. In the ERP experiments, sentences (spoken or written) either contained complete idioms, or the final word of the idiom was replaced with a semantic associate or with an unrelated word. Across both modalities, ERPs reflected facilitated processing of correct completions across several regions of interest (ROIs) and time windows. Facilitation of semantic associates was only reliably evident in early components for auditory idiom processing. The ERP findings for spoken idioms complement the eye-tracking data by pointing to early decompositional processing of idioms. It seems that in spoken idiom processing, holistic representations do not solely determine lexical processing.
Conference Paper
Occurrences of unknown words in a conversation can be challenging and often prevent people from engaging in fluent communication with each other. Even worse, currently very little is known about possible bodily responses when a listener comes across unknown words, especially when context information is not available in the conversation to facilitate understanding. In this work, we look at facial expressions and electroencephalography (EEG) as two potential body signals that may convey whether users are having difficulties understanding the words they hear. We performed an experiment to measure the reaction of users during a vocabulary dictation test using meaningful words and pseudowords. Participants were asked to classify words as they heard them into different categories. We did not find any significant differences in the facial expressions of our participants. However, significant differences were observed in event-related potentials (ERPs) in the 100-300 ms range after stimulus onset, with pseudowords showing significantly stronger negative responses than meaningful words. From about 550 ms to around 750 ms, pseudowords again elicited significantly stronger negative responses, primarily over the parietal and central brain regions. Analyses for single-electrode sites revealed that pseudowords elicited more negative responses than real words in all investigated regions except the left temporal and lateral frontal regions from 500 ms to 700 ms after stimulus onset. These results could pave the way for future work that aims to develop real-time solutions for facilitating communication between users with different language backgrounds.
Article
Speakers use prosodic emphasis to express the content of their message in order to help listeners to infer meaning. By measuring event-related potentials (ERPs) to semantically congruent and incongruent final words embedded in a sentential context that was emphasised or de-emphasised, we investigated whether prosodic emphasis conveyed by a sentential context leads listeners to a finer semantic analysis. The negative shift (N400) triggered by the difficulty of combining the incongruent word with the sentence representation was increased by prosodic emphasis at an early stage. Over the later stages, the amplitude of the N400 wave was increased by prosodic emphasis of the sentential context, regardless of the semantic congruency of the final words. As shown by the N400 wave, emphasising a sentential context affected the lexical-semantic processing of the following word. This study provides clear evidence that prosodic emphasis plays a role in the semantic analysis of sentences by inducing a deeper analysis.
Article
In this study the timing of electromagnetic signals recorded during incongruent and congruent audiovisual (AV) stimulation in 14 Italian healthy volunteers was examined. In a previous study (Proverbio et al., 2016) we investigated the McGurk effect in the Italian language and found out which visual and auditory inputs provided the most compelling illusory effects (e.g., bilabial phonemes presented acoustically and paired with non-labials, especially alveolar-nasal and velar-occlusive phonemes). In this study EEG was recorded from 128 scalp sites while participants observed a female and a male actor uttering 288 syllables selected on the basis of the previous investigation (lasting approximately 600 ms) and responded to rare targets (/re/, /ri/, /ro/, /ru/). In half of the cases the AV information was incongruent, except for targets that were always congruent. A pMMN (phonological Mismatch Negativity) to incongruent AV stimuli was identified 500 ms after voice onset time. This automatic response indexed the detection of an incongruity between the labial and phonetic information. SwLORETA (low-resolution electromagnetic tomography) analysis applied to the difference voltage incongruent - congruent in the same time window revealed that the strongest sources of this activity were the right superior temporal (STG) and superior frontal gyri, which supports their involvement in AV integration.
Article
Full-text available
Background: For nearly four decades, the N400 has been an important brainwave marker of semantic processing. It can be recorded non-invasively from the scalp using electrical and/or magnetic sensors, but largely within the restricted domain of research laboratories specialized to run specific N400 experiments. However, there is increasing evidence of significant clinical utility for the N400 in neurological evaluation, particularly at the individual level. To enable clinical applications, we recently reported a rapid evaluation framework known as “brain vital signs” that successfully incorporated the N400 response as one of the core components for cognitive function evaluation. The current study characterized the rapidly evoked N400 response to demonstrate that it shares consistent features with traditional N400 responses acquired in research laboratory settings, thereby enabling its translation into brain vital signs applications. Methods: Data were collected from 17 healthy individuals using magnetoencephalography (MEG) and electroencephalography (EEG), with analysis of sensor-level effects as well as evaluation of brain sources. Individual-level N400 responses were classified using machine learning to determine the percentage of participants in whom the response was successfully detected. Results: The N400 response was observed in both M/EEG modalities, showing significant differences between the incongruent and congruent conditions in the expected time range.
Article
Full-text available
Based on growing evidence suggesting that professional music training facilitates foreign language perception and learning, we examined the impact of musical expertise on the categorization of syllables including phonemes that did (/p/, /b/) or did not (/ph/) belong to the French repertoire by analyzing both behavior (error rates and reaction times) and event-related brain potentials (N200 and P300 components). Professional musicians and non-musicians categorized syllables either as /ba/ or /pa/ (voicing task), or as /pa/ or /pha/, with /ph/ being a non-native phoneme for French speakers (aspiration task). In line with our hypotheses, results showed that musicians outperformed non-musicians in the aspiration task but not in the voicing task. Moreover, the difference between the native (/p/) and the non-native phoneme (/ph/), as reflected in N200 and P300 amplitudes, was larger in musicians than in non-musicians in the aspiration task but not in the voicing task. These results show that behavior and brain activity associated with non-native phoneme perception are influenced by musical expertise and that these effects are task-dependent. The implications of these findings for current models of phoneme perception and for understanding the qualitative and quantitative differences found on the N200 and P300 components are discussed.
Article
We investigated how struggling adult readers make use of sentence context to facilitate word processing when comprehending spoken language, conditions under which print decoding is not a barrier to comprehension. Stimuli were strongly and weakly constraining sentences (as measured by cloze probability), which ended with the most expected word based on those constraints or an unexpected but plausible word. Community-dwelling adults with varying literacy skills listened to continuous speech while their EEG was recorded. Participants, regardless of literacy level, showed N400 effects yoked to the cloze probability of the targets, with larger N400 amplitudes for less expected than more expected words. However, literacy-related differences emerged in an earlier time window of 170-300 ms: higher-literacy adults produced a reduced negativity for strongly predictable targets over anterior channels, similar to previously reported effects on the Phonological Mapping Negativity (PMN), whereas low-literacy adults did not. Collectively, these findings suggest that in auditory sentence processing literacy may not notably affect the incremental activation of semantic features, but that comprehenders with underdeveloped literacy skills may be less likely to engage predictive processing. Thus, basic mechanisms of comprehension may be recruited differently as a function of literacy development, even in spoken language.
Article
Full-text available
Comprehension impairments in Wernicke's aphasia are thought to result from a combination of impaired phonological and semantic processes. However, the relationship between these cognitive processes and language comprehension has only been inferred through offline neuropsychological tasks. This study used ERPs to investigate phonological and semantic processing during online single word comprehension. EEG was recorded in a group of individuals with Wernicke's aphasia (n = 8) and control participants (n = 10) while they performed a word-picture verification task. The N400 and Phonological Mapping Negativity/Phonological Mismatch Negativity (PMN) event-related potential components were investigated as indices of semantic and phonological processing, respectively. Individuals with Wernicke's aphasia displayed reduced and inconsistent N400 and PMN effects in comparison to control participants. Reduced N400 effects in the Wernicke's aphasia group were simulated in the control group by artificially degrading speech perception. Correlation analyses in the Wernicke's aphasia group found that PMN but not N400 amplitude was associated with behavioural word-picture verification performance. The results confirm impairments at both phonological and semantic stages of comprehension in Wernicke's aphasia. However, reduced N400 responses in Wernicke's aphasia are at least partially attributable to earlier phonological processing impairments. The results provide further support for the traditional model of Wernicke's aphasia, which claims a causative link between phonological processing and language comprehension impairments.
Article
Full-text available
Purpose Speech-in-noise testing relies on a number of factors beyond the auditory system, such as cognitive function, compliance, and motor function. It may be possible to avoid these limitations by using electroencephalography. The present study explored this possibility using the N400. Method Eleven adults with typical hearing heard high-constraint sentences with congruent and incongruent terminal words in the presence of speech-shaped noise. Participants ignored all auditory stimulation and watched a video. The signal-to-noise ratio (SNR) was varied around each participant's behavioral threshold during electroencephalography recording. Speech was also heard in quiet. Results The amplitude of the N400 effect exhibited a nonlinear relationship with SNR. In the presence of background noise, amplitude decreased from high (+4 dB) to low (+1 dB) SNR but increased dramatically at threshold before decreasing again at subthreshold SNR (−2 dB). Conclusions The SNR of speech in noise modulates the amplitude of the N400 effect to semantic anomalies in a nonlinear fashion. These results are the first to demonstrate modulation of the passively evoked N400 by SNR in speech-shaped noise and represent a first step toward the end goal of developing an N400-based physiological metric for speech-in-noise testing.
Chapter
About 40% of patients with cardiac arrest can be resuscitated with restoration of spontaneous circulation and respiration. The overall 1 year survival rate of initially unconscious survivors resuscitated from cardiac arrest has been variably quoted to be between 10 and 25% [1, 2]. Anoxic-ischemic encephalopathy is the principal cause of mortality in at least 30–40% of those who die; only 3–10% return to their previous lifestyle, including employment [3]. Under 1% survive in a persistent vegetative state (PVS) [4]. Since primary cardiac arrest is so common, the problems of assessment and management of anoxic-ischemic encephalopathy are of great importance to the intensivist and emergency physician.
Article
Full-text available
New evidence is accumulating for a deficit in binding visual-orthographic information with the corresponding phonological code in developmental dyslexia. Here, we identify the mechanisms underpinning this deficit using event-related brain potentials (ERPs) in dyslexic and control adult readers performing a letter-matching task. In each trial, a printed letter was presented synchronously with an auditory letter name. Incongruent (mismatched), frequent trials were interleaved with congruent (matched) infrequent target pairs, which participants were asked to report by pressing a button. In critical trials, incongruent letter pairs were mismatched but confusable in terms of their visual or phonological features. Typical readers showed early detection of deviant trials, indicated by larger modulation in the range of the phonological mismatch negativity (PMN) compared with standard trials. This was followed by stronger modulation of the P3b wave for visually confusable deviants and an increased lateralized readiness potential (LRP) for phonological deviants, compared with standards. In contrast, dyslexic readers showed reduced sensitivity to deviancy in the PMN range. Responses to deviants in the P3b range indicated normal letter recognition processes, but the LRP calculation revealed a specific impairment for visual-orthographic information during response selection in dyslexia. In a follow-up experiment using an analogous non-lexical task in the same participants, we found no reading-group differences, indicating a degree of specificity to over-learnt visual-phonological binding. Our findings indicate early insensitivity to visual-phonological binding in developmental dyslexia, coupled with difficulty selecting the correct orthographic code.
Article
This chapter discusses the contributions of functional imaging to our understanding of how action and tool concepts are represented and processed in the human brain. Section 7.1 introduces cognitive models of semantic organization. Section 7.2 provides a brief overview of functional imaging approaches to identify brain regions that have specialized for processing action and tool representations. Section 7.3 discusses the relationship between the visuomotor system and semantic processing of actions. Section 7.4 investigates the effects of action type and visual experience on action-selective responses. Section 7.5 characterizes the neural systems engaged in tool processing and how they are modulated by task and stimulus modality. Section 7.6 delineates future directions that may enable us to characterize the neural mechanisms that mediate tool and action-selective brain responses. Cognitive models of semantic organization: Since the seminal work of Warrington and Shallice (1984), double dissociations of semantic deficits have been established between tools and animals (for review, see Gainotti et al., 1995; Warrington & Shallice, 1984; Capitani et al., 2003; Gainotti & Silveri, 1996; Farah et al., 1996; Hillis & Caramazza, 1991; Sacchett & Humphreys, 1992; Warrington & McCarthy, 1987). These double dissociations persist even when attempts are made to control general processing differences due to confounding variables such as familiarity, visual complexity, or word frequency (Farah et al., 1996; Sartori et al., 1993). They appear, therefore, to reflect some sort of semantic organization at the neuronal level. Many cognitive models have been offered to explain these category-specific deficits. © Cambridge University Press 2007 and Cambridge University Press, 2009.
Article
Full-text available
Most event-related brain potential (ERP) studies that showed the role of anticipation processes during sentence processing focused on reading. However, in everyday conversation speech unfolds at higher speed; the present study examines whether comprehenders anticipate words when processing auditory sentences. In high-constrained Spanish sentences, we time-locked ERPs on the article preceding the critical noun, which was muted to avoid overlapping effects. Articles that mismatched the gender of the expected nouns triggered an early (200-280 ms) and a late negativity (450-900 ms), suggesting that anticipation processes are at play also during speech processing. A subsequent lexical recognition task revealed that (muted) “expected” words were (falsely) recognised significantly more often than (muted) “unexpected” words, and as often as “old” words that were actually presented. These results suggest that anticipation processes allow creating a memory trace of a word prior to presentation. The findings support a top-down view of spoken sentence comprehension.
Article
Event-related potentials (ERPs) may provide a non-invasive index of brain function for a range of clinical applications. However, as a lab-based technique, ERPs are limited by technical challenges that prevent full integration into clinical settings. To translate ERP capabilities from the lab to clinical applications, we have developed methods like the Halifax Consciousness Scanner (HCS). HCS is essentially a rapid, automated ERP evaluation of brain functional status. The present study describes the ERP components evoked from auditory tones and speech stimuli. ERP results were obtained using a 5-minute test in 100 healthy individuals. The HCS sequence was designed to evoke the N100, the mismatch negativity (MMN), the P300, the early negative enhancement (ENE), and the N400. These components reflected sensation, perception, attention, memory, and language perception, respectively. Component detection was examined at group and individual levels, and evaluated across both statistical and classification approaches. All ERP components were robustly detected at the group level. At the individual level, nonparametric statistical analyses showed reduced accuracy relative to support vector machine (SVM) classification, particularly for speech-based ERPs. Optimized SVM results were MMN: 95.6%; P300: 99.0%; ENE: 91.8%; and N400: 92.3%. A spectrum of individual-level ERPs can be obtained in a very short time. Machine learning classification improved detection accuracy across a large healthy control sample. Translating ERPs into clinical applications is increasingly possible at the individual level.
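Individual-level component detection with machine learning, as described here, generally amounts to classifying single-trial features by condition and scoring the classifier with cross-validation. The paper's exact pipeline is not specified in this abstract; the following is a generic sketch with assumed data shapes and parameters.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def detection_accuracy(features, labels, folds=5):
    # `features`: (n_trials, n_features) array, e.g., mean amplitude in the
    # component's time window at each electrode; `labels`: condition codes.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, features, labels, cv=folds).mean()

# Hypothetical usage: one accuracy per participant, compared against chance (0.5)
# to decide whether the component (e.g., the N400 effect) was detected.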
Conference Paper
Much research has been done on analyzing the relationship between event-related potentials (ERPs) and language, but little has addressed Chinese. Here, participants were divided equally into two groups: Group A was tested in a cold environment, and Group B in a fatigued condition. ERP signals were recorded while participants repeated Chinese sentences. We found that the amplitudes of the ERP components (N200, N400, P600) changed markedly when participants repeated Chinese sentences expressing different feelings. A digital filter implemented in MATLAB was used to remove artifacts, and the cleaned EEG signals were then analyzed with EEGLAB.
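The artifact-reduction step described above relied on a digital filter built in MATLAB; an equivalent step in many EEG pipelines is a zero-phase band-pass filter. The Python sketch below shows that general idea only; the cutoff frequencies and filter order are assumptions, not the values used in that study.

from scipy.signal import butter, sosfiltfilt

def bandpass_eeg(data, fs, low=0.1, high=30.0, order=4):
    # Zero-phase band-pass filter for continuous EEG (data: samples x channels).
    # Cutoffs and order are illustrative defaults, not the study's parameters.
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=0)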
Article
Full-text available
Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.
Article
Context: Using natural connected speech, the aim of the present study was to examine the semantic congruity effect (i.e. the difference between semantically incongruous and congruous words) in sentence contexts that generate high or moderate final word expectancies. Methods: We used sentences with two levels of word expectancy in the auditory modality: familiar proverbs (that generate high final word expectancy), and unfamiliar sentences (that generate only moderate final word expectancy). Results: Results revealed an early congruity effect (0-200 ms) that developed across all scalp sites for familiar proverbs but not for unfamiliar sentences. By contrast, typical centro-parietal N400 and Late Positivity Component congruity effects developed later (200-500 ms and 600-900 ms ranges) for both familiar proverbs and unfamiliar sentences. Discussion: We argue that the early congruity effect for proverbs comprises both a Phonological Mismatch Negativity, reflecting the processing of the acoustic/phonological mismatch between the expected (congruous) and unexpected (incongruous) sentence completions and a typical N400 semantic congruity effect with an unusual short latency because final words can be predicted from the unusually high contextual constraints of familiar proverbs. These results are considered in the light of current views of anticipation and prediction processes in sentence contexts.
Chapter
Full-text available
Many patients with Disorders of Consciousness (DOC) are misdiagnosed for a variety of reasons. These patients typically cannot communicate. Because such patients are not provided with the needed tools, one of their basic human needs remains unsatisfied, leaving them truly locked in to their bodies. This chapter first reviews current methods and problems of diagnoses and assistive technology for communication, supporting the view that advances in both respects are needed for patients with DOC. The authors also discuss possible solutions to these problems and introduce emerging developments based on EEG (Electroencephalography), fMRI (Functional Magnetic Resonance Imaging), and fNIRS (Functional Near-Infrared Spectroscopy) that have been validated with patients and healthy volunteers.
Article
Full-text available
All 10 forms of the test of Speech Perception in Noise (SPIN) were presented to 128 listeners who had some degree of sensorineural hearing loss. Presentation of the speech track was at 50 dB above the estimated threshold for the babble track. Signal-to-babble ratio was 8 dB. Half of the subjects listened through headphones and half via loudspeaker. Half were tested in a single session and half in two sessions spaced 2–4 weeks apart. Two markers independently scored every test session. Statistical analyses indicate that transducer, number of visits, and order of test form presentation have little or no effect on test scores, and differences between markers, although significant, are quite small. The subtests consisting of items with strong contextual cues generate an average reliability coefficient of .91, whereas the value for the low-context subtests is .85. The 10 forms do not, however, constitute a set of equivalent forms, and there are large differences in mean performance on the low-context portions.
Article
Full-text available
Three experiments examined the generality of context effects displayed for congruous completions appearing in high- and low-constraint sentences. Exp 1 found an effect of context for a broader range of completions for low-constraint than high-constraint sentences. Lexical decisions for unexpected congruous words that were related in meaning to the most expected completion for the sentence showed a benefit from context in low-constraint sentences only. Unexpected words that were unrelated to the most expected completion never benefited from appearing in either high- or low-constraint sentence contexts. Exp 2 varied the semantic relatedness of the unexpected words within Ss and found that unrelated words still did not benefit from sentence context. Exp 3 included only low-constraint sentences to encourage Ss to develop broader expectations for upcoming words. Unrelated words continued not to display any benefit from context. It is concluded that the scope of facilitation for upcoming words demonstrated in a lexical decision task is wider for low-constraint than high-constraint sentences, but never includes unrelated, although acceptable, completions for the sentence. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The interactions, during word-recognition in continuous speech, between the bottom-up analyses of the input and different forms of internally generated top-down constraint, were investigated using a shadowing task and a mispronunciation detection task (in the detection task the subject saw a text of the original passage as he listened to it). The listener's dependence on bottom-up analyses in the shadowing task, as measured by the number of fluent restorations of mispronounced words, was found to vary as a function of the syllable position of the mispronunciation within the word and of the contextual constraints on the word as a whole. In the detection task only syllable position effects were obtained. The results, discussed in conjunction with earlier research, were found to be inconsistent with either the logogen model of word-recognition or an autonomous search model. Instead, an active direct access model is proposed, in which top-down processing constraints interact directly with bottom-up information to produce the primary lexical interpretation of the acoustic-phonetic input.
Article
Full-text available
The word-by-word time-course of spoken language understanding was investigated in two experiments, focussing simultaneously on word-recognition (local) processes and on structural and interpretative (global) processes. Both experiments used three word-monitoring tasks, which varied the description under which the word-target was monitored for (phonetic, semantic, or both) and three different prose contexts (normal, semantically anomalous, and scrambled), as well as distributing word-targets across nine word-positions in the test-sentences. The presence or absence of a context sentence, varied across the two experiments, allowed an estimate of between-sentence effects on local and global processes. The combined results, presenting a detailed picture of the temporal structuring of these various processes, provided evidence for an on-line interactive language processing theory, in which lexical, structural (syntactic), and interpretative knowledge sources communicate and interact during processing in an optimally efficient and accurate manner.
Article
Full-text available
A high-level cognitive dichotomy ("language and context") is reviewed in relation to empirical findings concerning the functions of the human cerebral hemispheres. We argue that the right hemisphere's involvement in the generation of connotative and contextual information in parallel with the denotative and literal language functions of the left hemisphere provides an important insight into the organization of viable cognitive systems. The role of the corpus callosum in producing the dichotomy is discussed. Finally, the generation of asymmetrical activity in structurally symmetrical, bilateral neural nets is described. The model demonstrates how complementary memory states can be generated in bilateral nets without assuming different modes of information processing, provided that the nets have inhibitory, homotopic connections. Unlike excitatory connections, inhibitory connections are sufficient to generate asymmetric hemispheric activity without postulating intrinsic differences between the cerebral hemispheres.
Article
Full-text available
The process of spoken word-recognition breaks down into three basic functions, of access, selection and integration. Access concerns the mapping of the speech input onto the representations of lexical form, selection concerns the discrimination of the best-fitting match to this input, and integration covers the mapping of syntactic and semantic information at the lexical level onto higher levels of processing. This paper describes two versions of a “cohort”-based model of these processes, showing how it evolves from a partially interactive model, where access is strictly autonomous but selection is subject to top-down control, to a fully bottom-up model, where context plays no role in the processes of form-based access and selection. Context operates instead at the interface between higher-level representations and information generated on-line about the syntactic and semantic properties of members of the cohort. The new model retains intact the fundamental characteristics of a cohort-based word-recognition process. It embodies the concepts of multiple access and multiple assessment, allowing a maximally efficient recognition process, based on the principle of the contingency of perceptual choice.
Article
Full-text available
Investigated how masking varies with origin in 9 experiments with naive and experienced undergraduates (N = 36). Stimuli masked the target forms only monoptically (or binocularly), or both monoptically and dichoptically. Peripheral forward and backward masking were described by a simple relation between target stimulus energy and the minimal interval between target offset and mask onset permitting evasion of masking: The minimal interval multiplied by the target energy equaled a constant. Peripheral forward masking, however, was more sensitive to mask intensity than was peripheral backward masking. Central masking, which was primarily backward, was relatively unaffected by stimulus energy and was determined by the interval elapsing between the onsets of the 2 stimuli. The multiplicative and the onset-onset rules characterized, respectively, peripheral and central visual processes. The peripheral processes are viewed as a set of parallel systems or nets signaling crude features of the stimulus and the central processes as a series of decisions conducted, in part, on these features and resulting in stimulus recognition. It is concluded that the peripheral and central processes are related in a concurrent and contingent fashion, apparently occurring in parallel, with the central decisions contingent on the output of the peripheral systems which signal different features at different rates. (4 p. ref.)
Article
Ten English speaking subjects listened to sentences that varied in sentential constraint (i.e., the degree to which the context of a sentence predicts the final word of that sentence) and event-related potentials (ERPs) were recorded during the presentation of the final word of each sentence. In the Control condition subjects merely listened to the sentences. In the Orthographic processing condition subjects decided, following each sentence, whether a given letter had been present in the final word of the preceding sentence. In the Phonological processing condition the subjects judged whether a given speech sound was contained in the terminal word. In the Semantic processing condition subjects determined whether the final word was a member of a given semantic category. A previous finding in the visual modality that the N400 component was larger in amplitude for low constraint sentence terminations than for high was extended to the auditory modality. It was also found that the amplitude of a N200-like response was similarly responsive to contextual constraint. The hypothesis that N400 amplitude would vary significantly with the depth of processing of the terminal word was not supported by the data. The “N200” recorded in this language processing context showed the classic frontocentral distribution of the N200. The N400 to spoken sentences had a central/centroparietal distribution similar to the N400 in visual modality experiments. It is suggested that the N400 obtained in these sentence contexts reflects an automatic semantic processing of words that occurs even when semantic analysis is not required to complete a given task. The cooccurrence and topographical dissimilarity of the “N200” and N400 suggest that the N400 may not be a delayed or a generic N200.
Article
SPIN test difference scores, which are the numeric difference between items of high predictability and low predictability, were obtained from three groups of subjects to determine whether the subject’s language skills influenced the size of the difference score. For this purpose the relationship between the difference score and the following variables was determined: syntactic skills, semantic skills, IQ, age, hearing loss, and signal‐to‐noise ratio (S/N). Results indicated that the difference scores were significantly related to the subject’s hearing and the S/N ratio used in the administration of the SPIN sentences. Possible reasons for these results are discussed.
Article
Examined the N400 component of the event-related brain potential (ERP) by presenting 12 native English speakers from a university community with stimulus series composed of 7 items. Four stimulus types were used: sentences, semantically related words, numbers, and letters. Half of each stimulus series ended appropriately and the other half ended anomalously, with half of the terminal items for each ending type printed in large letters. Ss were instructed to read each item of the series and press a button after the last item to indicate whether each series ended normally or oddly. Results show that there were N400 components for all odd-ending series. Large-type endings tended to produce a substantial P300 component that followed and mitigated the preceding negativity effects. Although somewhat different ERP patterns were obtained across stimulus and ending types, findings suggest that the N400 can be obtained with a variety of stimuli and is followed by a P300 when an explicit categorization of the eliciting stimulus is required. (22 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The influence of featural and semantic information on word identification was examined in two experiments. Letter features were manipulated within words that occurred in sentence contexts supporting different interpretations to various degrees. In the first experiment, subjects were presented with sentences and asked to choose which interpretation of the word had been in the sentence. In the second experiment, the subject read each sentence aloud, allowing for the determination of how the critical word was identified. The combined results of the two experiments were taken to indicate that semantic and featural information jointly influence word identification, and that the obtained effect of context is unlikely to be due to factors that exert their influence after word identification has taken place. A fuzzy propositional model was used to provide an account of the results. In this model, the featural support for each interpretation of the stimulus is evaluated independently of contextual information, although contextual information influences the selection of the word's identity. This illustrates that contextual information need not influence the sensory analysis of a stimulus in order to influence the identification of that stimulus.
Article
Subjects heard successive fragments of words both in and out of context, and after each fragment they wrote down the word they thought they were hearing. Responses were analyzed for word frequency, length, and compatibility with both contextual constraints and the sensory input. These responses, pooled across subjects, enabled us to evaluate a number of claims concerning the use of top-down and bottom-up information in the process of word recognition. The analyses show that, in the process of recognizing a spoken word, subjects initially produce a large number of responses compatible with the sensory input. This set diminishes in size as more of the word is presented. The rate at which responses drop out of the initial set differs, depending on whether words are heard in isolation or in context. These properties of the elicited responses are compatible with claims made by the cohort model for the integration of top-down and bottom-up information in the process of recognizing a spoken word.
Article
The interaction between orthographic and phonological information was studied in two experiments by requiring subjects to match visually presented word pairs on the basis of their visual or rhyming similarity. Word pairs either rhymed and looked alike, rhymed but did not look alike, looked alike but did not rhyme, or did not rhyme and did not look alike. In Experiment 1 under rhyme matching, reaction time (RT) was markedly increased whenever there was a conflict between orthographic and phonological cues. Under visual matching, overall RT was shorter than rhyme matching, with visually similar rhyming and non-rhyming pairs producing equally rapid and short responses compared to the non-rhyming but visually different word pairs. Most subjects also responded slower to rhyming and visually different stimuli compared to word pairs that did not look alike or rhyme. Experiment 2 sought to specify the processing locus of these effects by recording event-related brain potentials (ERPs) under task conditions similar to the first experiment. The RT data essentially replicated the effects found in Experiment 1 for both matching tasks. The ERP data viewed in the context of these results suggested that the interaction of the orthographic and phonological codes begins at least at the stimulus comparison processing stage, but that the conflict may also contribute to delays in response selection. The results are discussed in terms of several current models of word processing.
Article
Several investigations were performed with normal-hearing subjects to determine the effects of presentation level and signal-to-babble ratio on the Speech Perception in Noise (SPIN) test. The SPIN test contains sentences that simulate a range of contextual situations encountered in everyday speech communication. Findings from several representative patients with sensorineural hearing loss demonstrate the possible clinical utility of the test to measure the effects of context on speech discrimination.
Article
This paper describes a test of everyday speech reception, in which a listener's utilization of the linguistic-situational information of speech is assessed, and is compared with the utilization of acoustic-phonetic information. The test items are sentences which are presented in babble-type noise, and the listener response is the final word in the sentence (the key word) which is always a monosyllabic noun. Two types of sentences are used: high-predictability items for which the key word is somewhat predictable from the context, and low-predictability items for which the final word cannot be predicted from the context. Both types are included in several 50-item forms of the test, which are balanced for intelligibility, key-word familiarity and predictability, phonetic content, and length. Performance of normally hearing listeners for various signal-to-noise ratios shows significantly different functions for low- and high-predictability items. The potential applications of this test, particularly in the assessment of speech reception in the hearing impaired, are discussed.
Article
Analysis of variance (ANOVA) interactions involving electrode location are often used to assess the statistical significance of differences between event-related potential (ERP) scalp distributions for different experimental conditions, subject groups, or ERP components. However, there is a fundamental incompatibility between the additive model upon which ANOVAs are based and the multiplicative effect on ERP voltages produced by differences in source strength. Using potential distributions generated by dipole sources in spherical volume conductor models, we demonstrate that highly significant interactions involving electrode location can be obtained between scalp distributions with identical shapes generated by the same source. Therefore, such interactions cannot be used as unambiguous indications of shape differences between distributions and hence of differences in source configuration. This ambiguity can be circumvented by scaling the data to eliminate overall amplitude differences between experimental conditions before an ANOVA is performed. Such analyses retain sensitivity to genuine differences in distributional shape, but do not confuse amplitude and shape differences.
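The scaling remedy proposed here removes overall amplitude differences between conditions before the ANOVA, so that an electrode-by-condition interaction reflects distributional shape rather than source strength. Below is a minimal sketch of one common variant (vector-length scaling applied per subject); whether scaling is done per subject or on condition means, and whether vector-length or min-max scaling is used, are choices this sketch does not settle.

import numpy as np

def vector_scale(amplitudes):
    # `amplitudes`: (n_conditions, n_electrodes) mean amplitudes for one subject.
    # Each condition's electrode vector is divided by its length, so that only
    # the shape of the scalp distribution remains to drive the interaction term.
    norms = np.sqrt((amplitudes ** 2).sum(axis=1, keepdims=True))
    return amplitudes / norms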
Article
Models of word recognition differ with respect to where the effects of sentential-semantic context are to be located. Using a crossmodal priming technique, this research investigated the availability of lexical entries as a function of stimulus information and contextual constraint. To investigate the exact locus of the effects of sentential contexts, probes that were associatively related to contextually appropriate and inappropriate words were presented at various positions before and concurrent with the spoken word. The results show that sentential contexts do not preselect a set of contextually appropriate words before any sensory information about the spoken word is available. Moreover, during lexical access, defined here as the initial contact with lexical entries and their semantic and syntactic properties, both contextually appropriate and inappropriate words are activated. Contextual effects are located after lexical access, at a point in time during word processing where the sensory input by itself is still insufficiently informative to disambiguate between the activated entries. This suggests that sentential-semantic contexts have their effects during the process of selecting one of the activated candidates for recognition.
Article
In the present study, brain responses were recorded during the presentation of naturally spoken sentences. In two separate experiments, the same set of stimulus sentences was presented with subjects either being asked to pay close attention in order to answer content questions following the run ('memory instruction' - MI) or to press one of two buttons to indicate a normal or unusual ending to a sentence ('response instruction' - RI). Brain event-related potentials were not averaged across the exact same acoustic information but across 49 different words spoken in natural, uninterrupted sentences. There was no attempt to standardize the acoustic features of stimulus words by electronic means. Rather than splicing stimulus words (and the trigger pulse needed for computer averaging) onto sentence stems, consonant-vowel-consonant (CVC) monosyllabic words were selected with voiceless stop consonants in the word-initial position. This not only avoids acoustic overlap with the preceding word of the sentence but also allows the point of stimulus word onset to be precisely located. In the MI group, brain responses to the semantically anomalous endings were distinguished by the presence of a late negative wave (N300) followed by a sustained positive wave (P650). Responses to anomaly in the RI group data were not consistently differentiated from normal in the 650-1000 ms range. Within conditions, the MI and RI waveforms were differentiated by the presence of an augmented positive-going slow wave in the RI condition, which may reflect an augmented CNV release. The feasibility of averaging brain electrical responses across non-isolated words which differed acoustically but were of similar phonemic structure was demonstrated. This paradigm provides a means of studying speech-activated neurolinguistic processes in the stream of speech and may make complex spoken language contexts available for event-related potential investigations of brain and language functions.
Article
This paper reexamines Tyler’s (1984) hypothesis that hearers do not use contextual information during their processing of the early parts of auditorily presented words. In the gating experiment described here, identical word tokens were heard in a no-context condition and two with-context conditions. The data from the no-context condition provided a measure of the likelihood that a response with particular semantic and/or syntactic characteristics would occur by chance (i.e., without the influence of contextual factors). A comparison of these data with those produced in the with-context conditions revealed that hearers made use of contextual information even during the processing of the first 50 msec of test words. These results are discussed in relation to current theories of word recognition.
Article
In general, studies on the effects of a sentence context on word identification have focused on how context affects the efficiency of processing a single target word, presented separately from the context. Such studies probably would be incapable of measuring contextual facilitation resulting from cascaded or parallel processing of neighboring words within a sentence. To measure these and other types of facilitation, we presented entire phrases and sentences for subjects to read as fast as possible and to monitor for nonwords. Subjects read at rates representative of natural reading. Experiment 1 demonstrated a large contextual facilitation effect on decision time. Experiment 2 showed that facilitation is caused by specific semantic information and, perhaps to a greater degree, by nonpredictive syntactic information. Experiment 3 showed that the amount of facilitation is greater than could be accounted for by separate contributions from autonomous word level and sentence level processes. These results present difficulties for an autonomous model of reading, but are consistent with interactive models, in which the results of ongoing sentential analyses are combined with stimulus information to identify words.
Article
The N400 component of the event-related brain potential (ERP) was examined by presenting subjects with a series of words belonging to the same category and a series of declarative sentences. Half of the word series ended with a semantically unrelated word, while half of the sentences ended with a semantically inappropriate word. In the first experiment, subjects were instructed to read the word series and sentences, while in the second experiment they were instructed to indicate whether the word series or sentences ended appropriately or not with a button-press response. Word series and sentences with semantically incongruous endings produced a robust negative component at 400 msec followed by a positive-going wave for both the reading and decision tasks. When the subjects were required to categorize the word series and sentences endings, the negative component was followed by a robust P3 in both conditions. Analysis of scalp amplitude distributions for each task taken in conjunction with previous findings suggests that the semantically induced N400 component is most likely a "generic" N2. The relationship between the N2, N400, and P3 is discussed.
Article
The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and response- and computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.
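The handedness measure computed from such an inventory is conventionally expressed as a laterality quotient ranging from -100 (completely left-handed) to +100 (completely right-handed). The sketch below shows that standard computation; the item scores in the comment are invented for illustration.

def laterality_quotient(right_scores, left_scores):
    # LQ = 100 * (R - L) / (R + L), where R and L are the summed right- and
    # left-hand scores across the inventory items.
    r, l = sum(right_scores), sum(left_scores)
    return 100.0 * (r - l) / (r + l)

# Invented scores for 10 items (each scored 0-2 per hand):
# print(laterality_quotient([2]*8 + [1, 0], [0]*8 + [1, 2]))  # 70.0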
Article
Sets of words can be grouped in terms of their denotation (cold and warm both refer literally to temperature) or in terms of their connotation (cold and warm connote remoteness and intimacy, respectively). To assess whether these two facets of meaning are dissociable, unilaterally left- and right-hemisphere-damaged patients were presented with word triads and asked to group together the two words that were closest in meaning. Right-hemisphere-damaged patients showed a preserved sensitivity to denotation, and a selective insensitivity to connotative facets of meanings. In contrast, left-hemisphere-damaged patients exhibited a preserved sensitivity to connotation as well as a selective insensitivity to denotative aspects of meanings. Inasmuch as normal control subjects displayed a flexible sensitivity to both denotative and connotative aspects of meaning, the results suggest that unilateral brain damage selectively curtails use of one or the other major aspect of word meaning.
Article
Twenty subjects listened to a series of simple sentences, spoken by a male voice, in which the last word was on occasion either semantically incongruous or unexpectedly spoken by a female voice. Averages of ERPs to the last words revealed that a consistent late negative component (N456) was associated with semantic incongruity and a late positive component (P416) with physical (voice) incongruity. The results were consistent with those obtained in the visual modality by Kutas and Hillyard (1980a,b) and are interpreted in terms of the facilitatory and inhibitory effects of contextual priming on the processing of the words concerned. In a subsidiary experiment, 6 subjects were required to repeat the last words of the same set of sentences as rapidly as possible. Verbal response latency increased by 62 msec to physically incongruous words and by 185 msec to semantically incongruous words.
Article
Subjects were shown the terms of simple sentences in sequence (e.g., “A sparrow / is not / a vehicle”) and manually indicated whether the sentence was true or false. When the sentence form was affirmative (i.e., “X is a Y”), false sentences produced scalp potentials that were significantly more negative than those for true sentences, in the region of about 250 to 450 msec following presentation of the sentence object. In contrast, when the sentence form was negative (i.e., “X is not a Y”), it was the true statements that were associated with the ERP negativity. Since both the false-affirmative and the true-negative sentences consist of “mismatched” subject and object terms (e.g., sparrow / vehicle), it was concluded that the negativity in the potentials reflected a semantic mismatch between terms at a preliminary stage of sentence comprehension, rather than the falseness of the sentence taken as a whole. Similarities between the present effects of semantic mismatches and the N400 associated with incongruous sentences (Kutas & Hillyard, 1980) are discussed. The pattern of response latencies and of ERPs taken together supported a model of sentence comprehension in which negatives are dealt with only after the proposition to be negated is understood.
Article
Two experiments investigated Event-Related Potentials (ERPs) in matching tasks. In Experiment 1 subjects judged, in separate conditions, whether two words rhymed or were written in the same case. The CNV developing between the two words was larger in the latter task compared to the former at the right temporal site. In the rhyme judgment task, an increased late negativity differentiated the ERPs to nonrhyming words from those that rhymed with the previously presented word. This difference was maximal at the midline and over the right hemisphere. Experiment 2 further investigated ERPs in the rhyme judgment task, increasing memory demands with an extended interstimulus interval (ISI) and varying the number of items subjects had to hold in memory during this period (one vs. three). Irrespective of memory load, CNVs during the ISI were more negative from the left hemisphere, and the ERPs to the rhyming and nonrhyming words showed the same differences as in Experiment 1. The CNV asymmetries are interpreted as being associated with the engagement of lateralized short-term memory processes. The rhyme/nonrhyme differences are possibly related to the "N400" component elicited by semantically incongruous words. Possible reasons for their scalp distribution are discussed.
Article
Event-related brain potentials (ERPs) were recorded while subjects silently read several prose passages, presented one word at a time. Semantic anomalies and various grammatical errors had been inserted unpredictably at different serial positions within some of the sentences. The semantically inappropriate words elicited a large N400 component in the ERP, whereas the grammatical errors were associated with smaller and less consistent components that had scalp distributions different from that of the N400. This result adds to the evidence that the N400 wave is more closely related to semantic than to grammatical processing. Additional analyses revealed that different ERP configurations were elicited by open-class (“content”) and closed-class (“function”) words in these prose passages.
Article
The neuroelectric activity of the human brain that accompanies linguistic processing can be studied through scalp recordings of event-related potentials (ERPs). The ERP components triggered by verbal stimuli have been related to several different aspects of language processing. For example, the N400 component, peaking around 400 ms post-stimulus, appears to be a sensitive indicator of the semantic relationship between a word and the context in which it occurs. Words that complete sentences in a nonsensical fashion elicit much larger N400 waves than do semantically appropriate words or non-semantic irregularities in a text. In the present study, ERPs were recorded in response to words that completed meaningful sentences. The amplitude of the N400 component was found to be an inverse function of the subject's expectancy for the terminal word as measured by its Cloze probability. In addition, unexpected words that were semantically related to highly expected words elicited lower N400 amplitudes. These findings suggest the N400 may reflect processes of semantic priming or activation.
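As an illustrative sketch only (not the authors' analysis), Cloze probability can be estimated from sentence-completion norms and set against per-item N400 amplitudes to show the inverse relationship described above; every number below is invented for demonstration.

import numpy as np

def cloze_probability(completions, target_word):
    """Proportion of norming participants who completed the sentence frame
    with the target word."""
    completions = [w.lower() for w in completions]
    return completions.count(target_word.lower()) / len(completions)

# Hypothetical data: one Cloze value and one mean N400 amplitude (in microvolts,
# more negative = larger N400) per terminal word.
cloze = np.array([0.90, 0.70, 0.45, 0.20, 0.05])
n400_uV = np.array([-1.0, -2.1, -3.4, -4.8, -6.0])

# A simple correlation illustrates the reported relationship:
# higher expectancy (Cloze) -> smaller (less negative) N400.
r = np.corrcoef(cloze, n400_uV)[0, 1]
print(f"correlation between Cloze probability and N400 amplitude: r = {r:.2f}")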
Article
Event-related potentials were studied while subjects performed physical and semantic discrimination tasks. Two negative components, NA and N2, were observed in both kinds of discriminations. The earlier component, NA, had a constant onset latency, but its peak latency varied as a function of stimulus complexity. N2 latency varied in relation to changes in the peak of NA. RT and P3 followed N2 by similar amounts of time across tasks. The NA and N2 components were interpreted as reflecting partially overlapping sequential stages of processing associated with pattern recognition and stimulus classification, respectively.
Article
The timing of two event-related potential components was differentially affected by two experimental variables. The earlier component (NA) was affected by degradation of the stimuli and the later component (N2) by the nature of a classification task. The results support the hypothesis that NA and N2 reflect sequential stages of information processing, namely, pattern recognition and stimulus classification.
Article
Event-related brain potentials (ERPs) were recorded from adults as they read 160 different sentences, half of which ended with a semantically anomalous word. These deviant words elicited a broad, negative component (N400). Measured in the difference wave between ERPs to incongruous and congruous endings, the N400 was slightly larger and more prolonged over the right than the left hemisphere and diminished in amplitude over the course of the experiment. A left-greater-than-right asymmetry was again observed in the slow, positive ERP elicited by the first six words in the sentences, being most pronounced for subjects having no left-handers in their immediate family.
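A minimal sketch of the difference-wave measure mentioned above, assuming a 250 Hz sampling rate, a 100 ms pre-stimulus baseline, and a 300-500 ms measurement window (none of which are taken from the study): average the single-trial ERPs for each ending type, subtract, and take the mean amplitude in the window.

import numpy as np

def n400_difference_wave(incongruous_trials, congruous_trials,
                         srate=250, epoch_start_s=-0.1,
                         window_s=(0.3, 0.5)):
    """trials: arrays of shape (n_trials, n_samples) for a single electrode."""
    diff = incongruous_trials.mean(axis=0) - congruous_trials.mean(axis=0)
    times = epoch_start_s + np.arange(diff.size) / srate
    mask = (times >= window_s[0]) & (times <= window_s[1])
    # Return the full difference waveform and its mean amplitude in the window
    return diff, diff[mask].mean()

# Example with random data standing in for real EEG epochs (80 trials, 1 s at 250 Hz)
rng = np.random.default_rng(0)
incong = rng.normal(size=(80, 250))
cong = rng.normal(size=(80, 250))
wave, n400_amp = n400_difference_wave(incong, cong)
print(f"mean amplitude in 300-500 ms window: {n400_amp:.2f} (arbitrary units)")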
Article
To secure information on which aspects of linguistic functioning might be mediated by the nondominant hemisphere, a test battery assessing sensitivity to narrational and humorous materials was administered to a population of right-hemisphere-damaged patients, as well as relevant control groups of normal, aging, and aphasic individuals. While elementary linguistic functioning was adequate, the right-hemisphere-injured groups exhibited consistent difficulties in respecting the boundaries of a fictive entity, assessing the plausibility of elements within a story or joke, selecting the appropriate punch line for a joke, and integrating elements of a story into a coherent narrative. Certain elements—specifically emotional content and noncanonical facts injected into a narrative—also posed characteristic difficulties for these patients. The results suggest that, in contrast to the other populations, right-hemisphere patients exhibit special difficulties in processing complex linguistic entities and in utilizing the surrounding context as they assess linguistic messages.
Article
In a sentence reading task, words that occurred out of context were associated with specific types of event-related brain potentials. Words that were physically aberrant (larger than normal) elicited a late positive series of potentials, whereas semantically inappropriate words elicited a late negative wave (N400). The N400 wave may be an electrophysiological sign of the "reprocessing" of semantically anomalous information.
Article
The electrocortical manifestations of levels of perceptual processing and memory performance were investigated by recording event related potentials (ERPs) during a verbal comparison task and a subsequent test of recognition memory. Two words were judged, on each trial, to be the same or different according to an orthographic, phonemic, or semantic criterion. Orthographic processing of words led to poor performance on a test of recognition memory whereas phonemic and semantic processing led to increasingly better performance. “Same” judgments led to better memory performance for phonemically and semantically processed items. The ERP waveforms included a late positive component (LPC) and a slow wave, both of which were responsive to the comparison criterion and the type of judgment. Further, the LPC appeared to index recognition accuracy in the memory test. The ERP data are interpreted as reflecting associative activation in the human memory system.
Brown, C., Hagoort, P., & Swaab, T. (1989). Does N400 reflect automatic lexical processing? Presented at EPIC IX: Proceedings of the Ninth International Conference on Event-Related Potentials of the Brain.
Brownell, H. H., Potter, H. H., & Michelow, D. (1984). Sensitivity to lexical denotation and connotation in brain-damaged patients: A double dissociation? Brain and Language, 22, 253-265.
Kutas, M., & Van Petten, C. (1988). Event-related brain potential studies of language. In P. K. Ackles, J. R. Jennings, & M. G. H. Coles (Eds.), Advances in Psychophysiology. Greenwich, CT: JAI Press.
Marslen-Wilson, W. (1987). Functional parallelism in spoken word-recognition.